markdown | code | path | repo_name | license |
---|---|---|---|---|
And just for the sake of completeness, let's temporarily drop Tony from the table. Temporarily, because the operation is not done in place. | df.drop('Tony', axis = 0)
# Renaming Columns
df.rename(columns={'Jan': 'January'}, inplace=True)
df
df.rename(columns={'Feb': 'February', 'Mar': 'March', 'Apr': 'April'}, inplace=True)
df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Dataframe from a Dictionary
Let's create a new dataframe from a dictionary, and then apply some of the selection techniques we just learnt. | dict1 = {'first_name': ['Erlich', 'Richard', "Dinesh", 'Gilfoyle', 'Nelson'],
'second_name': ['Bachman', 'Hendricks', np.nan, np.nan, 'Bighetti'],
'occupation': ['Investor', 'Entrepreneur', 'Coder', 'Coder', 'Bench Warmer'],
'age': [40, 30, 28, 29, 28]}
df = pd.DataFrame(dict1, columns = ['first_name', 'second_name','occupation', 'age'])
df
# Who is under 30 years of age?
df[df["age"]<30]
# Who are the coders?
df[df["occupation"] == "Coder"]
# Multiple Conditions : Coders, below 30
# Note that conditions are Booleans, as shown below
coders = df["occupation"] == "Coder"
und_30 = df["age"]<30
df[coders & und_30]
df[df["second_name"].notnull()] | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Exercise | np.random.seed(42)
np.random.randn(4,4)
np.random.seed(42)
df = pd.DataFrame(np.random.randn(4,4), index = "Peter,Clarke,Bruce,Tony".split(","), columns = "Jan,Feb,Mar,Apr".split(","))
df
# Who scored greater than 0 in Apr?
df[df>0][["Apr"]]
# Who scored below 0 in March?
# In which month/months did Clarke score above 0?
# Find the highest scores for each month
# Hint: .max()
# Find the lowest scores for each month
# Plot the highest score for each month in a bar graph
| 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Handling Missing Data
Pay special attention to this section. If needed, spend some extra time to cover all the relevant techniques. <br>
Never in my experience have I come across a 100% clean data set "in the wild". Most of the data sets you practise on will be complete, but real-world data is messy and incomplete.
Even high-quality financial data from exchanges will often have missing data points. The less said about unstructured data like text, the better.
TL/DR: If you're going to fight Mike Tyson, don't train to fight Mr Bean.
<img src="images/bean_box.jpg">
What is Missing Data?
Data can be missing because:
* It was never captured
* The data does not exist
* It was captured but got corrupted
In Pandas, missing data will be represented as None or NaN. | df = pd.DataFrame({'NYC':[3,np.nan,7,9,6],
'SF':[4,3,8,7,15],
'CHI':[4,np.nan,np.nan,14,6],
'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])
df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
The first thing we can do is drop rows with missing values using the dropna() function. By default, rows are dropped, but you can change this to columns as well. | df.dropna()
df.dropna(axis = 0)
df.dropna(axis = 1) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
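If you only want to drop rows that are missing too many values, or that are missing values in particular columns, dropna also accepts the thresh and subset arguments (both are standard pandas options; shown here on the same df):
# keep only rows that have at least 3 non-missing values
df.dropna(thresh=3)
# drop only the rows that are missing a value in NYC or CHI
df.dropna(subset=['NYC', 'CHI'])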
While this can be helpful in some ways, if your dataset is small, you are losing a significant portion of your data.
For example, if 100 rows out of 1 million rows have missing data, that's negligible, and can potentially be thrown away. What if you have 10 out of 85 rows with incorrect, unusable or missing data? | df2 = df.copy()
df2
df2.mean()
# Are these really the means though?
df
mean = df2['SF'].mean()
mean | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Imputation
Using the fillna function, we can replace missing values. | df = pd.DataFrame({'NYC':[3,np.nan,7,9,6],
'SF':[4,3,8,7,15],
'CHI':[4,np.nan,np.nan,14,6],
'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])
df
df.mean()
df.fillna(value = df.mean(), inplace = True)
df
df = pd.DataFrame({'NYC':[3,np.nan,7,9,6],
'SF':[4,3,8,7,15],
'CHI':[4,np.nan,np.nan,14,6],
'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])
df
df3 = df.copy()
df3
median = df3['SF'].median()
median
df3.fillna(value = median, inplace = True)
df3
df3.mode() | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
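Note that in the cells above every missing value was filled with the median of the 'SF' column. If you would rather fill each column with its own statistic, fillna also accepts a Series, which is matched on column names (a small illustration on a fresh copy):
# fill each column's missing values with that column's own median
df4 = df.copy()
df4.fillna(value=df4.median(), inplace=True)
df4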
But sometimes, the missing data isn't part of the table at all. Consider the scenario below: we know that the table below contains names of female babies, but gender is missing from our dataset. | baby_names = {
'id': ['101', '102', '103', '104', '105'],
'first_name': ['Emma', 'Madison', 'Hannah', 'Grace', 'Emily']
}
df_baby = pd.DataFrame(baby_names, columns = ['id', 'first_name'])
df_baby
df_baby.columns
df_baby["gender"] = "F"
df_baby
df_baby['gender'] = 0
df_baby | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Interpolation
Read up more on the interpolate function here and here | df = pd.read_csv("data/cafe_sales2015.csv")
df
df["Date"].head()
df["Date"] = pd.to_datetime(df["Date"])
df.set_index(["Date"], inplace = True)
df.head()
df.tail()
df.head(3)
df.describe()
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,5)
df.plot(kind="line")
df["Water"].plot(kind="line")
df.interpolate(method = "linear", inplace = True)
df.head(5)
df.interpolate().count()
df[["Latte", "Water"]].plot(kind="line") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Keep in mind though, that these are at best approximations.
A Quick Detour into some Data Viz
Install Vincent by running the following line in your command line:
Python 2.x: pip install vincent <br>
Python 3.x: pip3 install vincent | import vincent
vincent.core.initialize_notebook()
line = vincent.Line(df)
line.axis_titles(x='Date', y='Amount')
line = vincent.Line(df[["Latte", "Water"]])
line.axis_titles(x='Date', y='Amount')
stacked = vincent.StackedArea(df)
stacked.axis_titles(x='Date', y='Amount')
stacked.legend(title='Cafe Sales')
stacked.colors(brew='Spectral') | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Read about using the Vincent package here.
The latest update to Matplotlib, V 2.0.0 has really improved the quality of the graphics, but it's still not quite production ready, while on the positive side, it is stable and has a large community of people who use it. Niche packages like Vincent can produce some amazing graphics right out of the box with minimal tweaking, but they may not be very mature. Nevertheless, as Data Scientists, it's good to learn about new packages, especially those that help you communicate your results to a non-technical audience. If people don't understand what you do, they won't think what you do is important!
Merge, Join, Concatenate
<img src="images/sql-joins.png">
Image Source: http://www.datapine.com/blog/sql-joins-and-data-analysis-using-sql/
Merge | customers = {
'customer_id': ['101', '102', '103', '104', '105'],
'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'],
'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}
df_1 = pd.DataFrame(customers, columns = ['customer_id', 'first_name', 'last_name'])
df_1
orders = {
'customer_id': ['101', '104', '105', '108', '111'],
'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'],
'order_value': ['10000', '25000', '1100', '5000', '4400']}
df_2 = pd.DataFrame(orders, columns = ['customer_id', 'order_date', 'order_value'])
df_2
pd.merge(df_1, df_2, how = 'inner', on = 'customer_id')
pd.merge(df_1, df_2, how = 'left', on = 'customer_id')
pd.merge(df_1, df_2, how = 'right', on = 'customer_id')
pd.merge(df_1, df_2, how = 'outer', on = 'customer_id') | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
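When exploring an outer merge, it is often useful to see where each row came from. The indicator argument of pd.merge adds a _merge column for exactly this purpose:
# tag each row as 'left_only', 'right_only' or 'both'
pd.merge(df_1, df_2, how = 'outer', on = 'customer_id', indicator = True)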
Join | customers = {
'customer_id': ['101', '102', '103', '104', '105'],
'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'],
'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}
customers
orders = {
'customer_id': ['101', '104', '105', '108', '111'],
'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'],
'order_value': ['10000', '25000', '1100', '5000', '4400']}
orders
df1_new = pd.DataFrame.from_dict(customers, orient='columns', dtype=None)
df1_new
df1_new = df1_new.set_index('customer_id')
df1_new
df2_new = pd.DataFrame.from_dict(orders, orient='columns', dtype=None)
df2_new
df2_new = df2_new.set_index('customer_id')
df2_new
df1_new.join(df2_new,how = "inner")
df1_new.join(df2_new,how = "outer")
df1_new.join(df2_new,how = "left")
df1_new.join(df2_new,how = "right")
# Alternate Way : I don't recommend this
df_1.join(df_2, on = "customer_id", lsuffix='_l', rsuffix='_r') | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Concatenate | customers = {
'customer_id': ['101', '102', '103', '104', '105'],
'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'],
'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}
df_1 = pd.DataFrame(customers, columns = ['customer_id', 'first_name', 'last_name'])
df_1
orders = {
'customer_id': ['101', '104', '105', '108', '111'],
'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'],
'order_value': ['10000', '25000', '1100', '5000', '4400']}
df_2 = pd.DataFrame(orders, columns = ['customer_id', 'order_date', 'order_value'])
df_2
pd.concat([df_1,df_2])
pd.concat([df_1,df_2],axis=0)
pd.concat([df_1,df_2],axis=1) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
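Two further concat options worth knowing about: ignore_index rebuilds a clean 0..n-1 index, while keys labels which original frame each row came from:
pd.concat([df_1, df_2], ignore_index = True)
pd.concat([df_1, df_2], keys = ['customers', 'orders'])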
One final resource on why you would want to perform these operations in Pandas - and evidence on how fast it really is! http://wesmckinney.com/blog/high-performance-database-joins-with-pandas-dataframe-more-benchmarks/
Grouping, a.k.a. split-apply-combine
While analysing data, a Data Scientist very often has to perform aggregations, apply transformations such as standardising data, and filter the dataset down to only the relevant samples.
This is what the groupby function is primarily used for.
Read more here. | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams["figure.figsize"] = (15,7)
paintball = {'Team': ['Super Ducks','Super Ducks', 'Super Ducks', 'Super Ducks', 'Super Ducks', 'Bobcats', 'Bobcats', 'Bobcats', 'Bobcats', 'Tigers', 'Tigers', 'Tigers', 'Tigers','Tigers','Tigers'],
'Name': ['Tony', 'Antonio', 'Felipe', 'Ryan', 'Mario', 'Sergio', 'Tanaka', 'Anderson', 'Joe', 'Floyd', 'Manny', 'Chris', 'Junior', 'George','Brock'],
'Kills': ['1', '1', '1', '4', '3', '2', '2', '2','5', '1', '1', '7', '4','8','5'],
'Shots Fired Before': [17, 19, 22, 8, 13, 85, 64, 49, 74, 14, 20, 24,13,31,37],
'Shots Fired After': [41, 73, 57, 30, 74, 37, 28, 40, 43, 18, 19, 21,13,32,39]}
df = pd.DataFrame(paintball, columns = ['Team', 'Name', 'Shots Fired Before', 'Shots Fired After','Kills'])
df
df.groupby('Team').mean()
byteam = df.groupby('Team')
byteam.count()
byteam.describe()
byteam.describe().transpose()['Bobcats']
Team_Before = df[['Shots Fired Before']].groupby(df['Team']).mean()
Team_After = df[['Shots Fired After']].groupby(df['Team']).mean()
Team_Before
Team_After
Team_Before.join(Team_After)
plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (15,7)
Team_Before.join(Team_After).plot(kind="Bar") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Cool graph, but can we improve it, visually speaking? Yes of course we can! Let's look at some of the styles available within Matplotlib. | plt.style.available | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Personally I am quite partial to ggplot and seaborn, but not so much to fivethirtyeight. Let's try these. | plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (15,7)
Team_Before.join(Team_After).plot(kind="Bar") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
What about fivethirtyeight? | plt.style.use('fivethirtyeight')
plt.rcParams["figure.figsize"] = (15,7)
Team_Before.join(Team_After).plot(kind="Bar") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
And seaborn. Note that seaborn is a visualisation library that works with Matplotlib. You can mimic the style without actually using it. | plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (15,7)
Team_Before.join(Team_After).plot(kind="Bar")
plt.rcParams.update(plt.rcParamsDefault)
plt.style.use('seaborn-poster')
plt.rcParams["figure.figsize"] = (15,7)
Team_Before.join(Team_After).plot(kind="Bar")
pd.crosstab(df["Team"], df["Kills"], margins = True)
plt.rcParams.update(plt.rcParamsDefault)
%matplotlib inline
plt.rcParams["figure.figsize"] = (15,7)
plt.style.use('seaborn-deep')
df.groupby('Kills').mean().plot(kind="bar") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
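Before moving on, note that a groupby can also aggregate several statistics at once through agg, which takes a mapping from column names to lists of functions:
# mean and maximum of the shot counts, per team, in one call
df.groupby('Team').agg({'Shots Fired Before': ['mean', 'max'], 'Shots Fired After': ['mean', 'max']})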
Apply
We can use the apply function to perform an operation over an axis in a dataframe. | import pandas as pd
import numpy as np
df = pd.read_csv("data/cafe_sales2015.csv")
df.head()
df["Date"] = pd.to_datetime(df["Date"])
df.set_index(["Date"], inplace = True)
df.interpolate(method = "linear", inplace = True)
df.head()
#print(df.apply(np.cumsum))
df.apply(np.average)
df.apply(lambda x: x.max() - x.min())
# What columns have missing values?
df.apply(lambda x: sum(x.isnull()),axis=0)
# Using Apply to find missing values
# Obviously don't do this for datasets with thousands or millions of rows!
empty = df.apply(lambda col: pd.isnull(col))
empty | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Map
The map function iterates over each element of a series. | import pandas as pd
import numpy as np
df = pd.read_csv("data/cafe_sales2015.csv")
df.head()
df["Latte"] = df["Latte"].map(lambda x: x+2)
df.head()
df.interpolate(method = "linear", inplace = True)
df["Water"] = df["Water"].map(lambda x: x-1 if (x>0) else 0)
df.head() | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
ApplyMap | import pandas as pd
import numpy as np
df = pd.read_csv("data/cafe_sales2015.csv")
df.head()
def to_int(x):
if type(x) is float:
x = int(x)
return x
else:
return x
df.interpolate(method = "linear", inplace = True)
df.applymap(to_int).head() | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Further Reading<br>
Wes McKinney's amazing book covers this issue. Refer to Page 132.
Pivot Tables
Pivot tables are summarisation tables that help the user sort, count, total or average the data available in a dataset. If you have used Excel, you will be very familiar with them. If not, let's look at it from a fresh Pandas perspective.
Typically, there are four parameters, but you don't always have to specify every one of them, as we will see in the examples below.
index: An array of the dataset that will be used as the index of our new reshaped and aggregated DataFrame
columns: An array of the dataset that will provide columns to the new DataFrame
values: These are the values we wish to aggregate in each cell.
aggfunc: The function we will use to perform the aggregation
Sales Reports | import pandas as pd
import numpy as np
# The 'xlrd' module gets imported automatically, if not, install it with 'pip install xlrd'
df = pd.read_excel("Data/bev-sales.xlsx")
df.head()
df.tail()
df.describe()
help(pd.pivot_table)
df.head()
pd.pivot_table(df,index=["Sales Exec"],values=["Revenue"],aggfunc="sum")
%matplotlib inline
import matplotlib.pyplot as plt
pd.pivot_table(df, index=["Sales Exec"],values=["Revenue"],aggfunc="sum").plot(kind="bar")
pd.pivot_table(df,index=["Sales Exec"],values=["Revenue"],aggfunc="mean")
pd.pivot_table(df, index=["Sales Exec", "Item"], values=["Revenue"], aggfunc="sum")
pd.pivot_table(df,index=["Sales Exec"],values=["Revenue"],aggfunc=[np.sum])
pd.pivot_table(df,index=["Sales Exec"],values=["Units sold", "Revenue"],aggfunc=[np.sum])
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (15,7)
pd.pivot_table(df,index=["Sales Exec", "Item"],values=["Revenue"],aggfunc=[np.sum]).plot(kind="bar")
plt.title('January Sales Report')
pd.pivot_table(df,index=["Sales Exec", "Item"],values=["Units sold", "Revenue"],
columns=["Price per Unit"], aggfunc="sum", margins = True) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Tips | df = pd.read_csv("Data/tips.csv")
df.head()
df["tip_pc"] = df["tip"] / df["total_bill"]
df.head()
pd.pivot_table(df,index=["sex"], values = ["tip_pc"], aggfunc="mean")
pd.pivot_table(df, index = ["smoker", "sex"], values = ["tip_pc"], aggfunc = "mean")
pd.pivot_table(df,index=["sex"], values = ["total_bill","tip"], aggfunc="sum") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Bada Bing! | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
df = pd.read_excel("Data/Sopranos/sopranos-killings.xlsx")
df.head()
pd.pivot_table(df,index=["Cause of Death"],values = ["Season"], aggfunc="first")
pd.pivot_table(df,index=["Cause of Death"],values = ["Season"], aggfunc="count", margins=True)
whacked = pd.pivot_table(df,index=["Cause of Death"],values = ["Season"], aggfunc="count")
whacked
plt.style.available
plt.rcParams.update(plt.rcParamsDefault)
%matplotlib inline
plt.style.use('seaborn-deep')
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,7)
whacked.plot(kind = "bar", legend=None)
plt.title('How People Died on The Sopranos')
with plt.style.context('ggplot', after_reset=True):
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,7)
whacked.plot(kind = "bar", legend=None)
plt.title('How People Died on The Sopranos')
killer = pd.pivot_table(df,index=["Killer"],values = ["Season"], aggfunc="count")
killer = killer.sort_values(by=["Season"], ascending = False)
killer
plt.rcParams.update(plt.rcParamsDefault)
plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (15,7)
killer[:10].plot(kind = "bar", legend=None)
plt.title('Top 10 Killers')
| 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Basic Statistical Operations/Explorations | import pandas as pd
import numpy as np
df = pd.read_csv("data/cafe_sales2015.csv")
df["Date"] = pd.to_datetime(df["Date"])
df.set_index(["Date"], inplace = True)
df.interpolate(method = "linear", inplace = True)
df.head()
df.tail()
df.describe()
print("Mean\n", df.mean())
print("\n\nMedian\n", df.median())
print("\n\nMode\n", df.mode())
print("The Maximum value is:\n",df.max())
print("\n\nThe Minimum value is:\n",df.min())
print("\n\nKurtosis:\n",df.kurtosis()) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
The backbone of the decision tree algorithms is a criterion (e.g. entropy, Gini, error) with which we can choose the best (in a greedy sense) attribute to add to the tree. ID3 and C4.5 use information gain (entropy) and normalized information gain, respectively. | def weighted_entropy(data, col_num):
entropies = []
n_s = []
entropy_of_attribute = entropy(data[:,col_num])
for value in columns[col_num]:
candidate_child = data[data[:,col_num] == value]
n_s.append(len(candidate_child))
entropies.append(entropy(candidate_child[:,6]))
n_s = np.array(n_s)
n_s = n_s / np.sum(n_s)
weighted_entropy = n_s.dot(entropies)
return weighted_entropy, entropy_of_attribute
def entropy(data):
classes = np.unique(data)
n = len(data)
n_s = []
for class_ in classes:
n_s.append(len(data[data==class_]))
n_s = np.array(n_s)
n_s = n_s/n
n_s = n_s * np.log2(n_s)
return max(0,-np.sum(n_s)) | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
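As a quick sanity check of the entropy function above (this check is an addition, not part of the original notebook): a perfectly pure set of labels should give 0 bits, and a 50/50 split should give exactly 1 bit.
# expected output: 0.0 and 1.0
print(entropy(np.array(['acc', 'acc', 'acc', 'acc'])))
print(entropy(np.array(['acc', 'unacc', 'acc', 'unacc'])))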
To store our tree, we will use dictionaries. Each node of the tree is a Python dict. | def build_node(data, entropy, label, depth, class_="TBD", parent=None):
new_node = dict()
new_node['data'] = data
new_node['entropy'] = entropy
new_node['label'] = label
new_node['depth'] = depth
new_node['class'] = class_
new_node['parent'] = parent
new_node['children'] = []
return new_node
root = build_node(data, entropy(data[:,6]), "all data", 0)
classes = np.unique(root['data'][:,6])
print(classes) | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
Functions that help us build our tree and classify its leaves. find_best_split acts on a node, and returns the attribute that leads to the best (possibly normalized) information gain. | def find_best_split(node, c45 = False):
data = node['data']
entropy = node['entropy']
gains = []
for col_num in range(len(columns) - 1):
new_entropy, entropy_of_attribute = weighted_entropy(data, col_num)
if c45:
if entropy_of_attribute==0:
gains.append(0)
else:
gains.append((entropy - new_entropy) / (entropy_of_attribute))
else:
gains.append(entropy - new_entropy)
if np.max(gains) > 10**-3 :
best_attribute = np.argmax(gains)
return best_attribute
else:
return -1
def classify(node_data):
data = node_data[:, 6]
n_s = []
for class_ in classes:
n_s.append(len(data[data==class_]))
return columns[-1][np.argmax(n_s)]
labels[find_best_split(root)], classify(root['data']) | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
This function is recursive and will construct a decision tree out of a root node that contains your training data. | def build_tree(node, c45 = False, max_depth = 999, noisy=False):
next_split_attribute = find_best_split(node, c45)
if next_split_attribute == -1 or node['depth'] == max_depth:
node['class'] = classify(node['data'])
#this if statement just handles some printing of the tree (rudimentary visualization)
if noisy:
label = []
label.append(node['label'])
temp_parent = node
while temp_parent['parent']:
temp_parent = temp_parent['parent']
label.append(temp_parent['label'])
depth = node['depth']
for i, layer_label in enumerate(reversed(label)):
for _ in range(i):
print("\t", end="")
if i==depth:
print("{} -> class {}".format(layer_label, node['class']))
else:
print("{}".format(layer_label))
else:
for value in columns[next_split_attribute]:
data = node['data'][ node['data'][:, next_split_attribute] == value ]
entropy_ = entropy(data[:, 6])
new_node = build_node(data, entropy_, "{} == {}".format(
labels[next_split_attribute],value),
node['depth'] + 1, parent=node)
build_tree(new_node, c45, max_depth, noisy)
node['children'].append(new_node) | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
Lastly, before building the tree, we need a function to check the tree's accuracy. | def correct(decision_tree):
if not decision_tree['children']:
return np.sum(classify(decision_tree['data'])==decision_tree['data'][:,6])
else:
n_correct = 0
for child in decision_tree['children']:
n_correct += correct(child)
return n_correct
correct(root)/1728 | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
Let's make a tree!
But first, a quick look at the class distribution after splitting on safety, an important attribute according to our algorithm | for safety in columns[5]:
plt.hist(data[data[:,5]==safety, 6])
plt.title(safety + " safety")
plt.show()
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=1, noisy=True)
print("\nTree Accuracy: {}".format(correct(root)/1728))
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=2, noisy=True)
print("\nTree Accuracy: {}".format(correct(root)/1728))
for persons in columns[3]:
indices1 = data[:,5]=="high"
indices2 = data[:,3]==persons
indices = np.alltrue([indices1,indices2], axis=0)
plt.hist(data[indices, 6])
plt.title("high safety and {} persons".format(persons))
plt.show() | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
On this dataset, C4.5 and ID3 get similar accuracies... | print("Training Accuracy Comparison")
print("---------")
print(" ID3 C4.5")
for depth in range(7):
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=depth, c45=False)
id3=correct(root)/1728
root = build_node(data, entropy(data[:,6]), "all data", 0)
build_tree(root, max_depth=depth, c45=True)
c45=correct(root)/1728
print('{:.3f} '.format(round(id3,3)), ' {:.3f}'.format(round(c45,3))) | decision trees/Decision Trees.ipynb | bbartoldson/examples | mit |
I just typed out whatever came to mind and had sylbreak run syllable segmentation on it. :)
နောက်ထပ် ဥပမာအနေနဲ့ Wikipedia Myanmar မှာရေးထားတဲ့ အာခီမီးဒီးစ် ရဲ့ အတ္ထုပ္ပတ္တိအကျဉ်း ထဲမှာရေးထားတဲ့
စာကြောင်းတွေကို sylbreak နဲ့ ဖြတ်ကြည့်ရအောင်။ | sylbreak("""အာခီမီးဒီးစ်ကို ဘီစီ ၂၈၇ ခန့်က ရှေးဟောင်း မဂ္ဂနာဂရေစီယာပြည်လက်အောက်ခံ စစ္စလီပြည် ဆိုင်ရာကျူးစ် မြို့ တွင် မွေးဖွားခဲ့သည်။ ဘိုင်ဇန်တိုင်းဂရိခေတ် က သမိုင်းပညာရှင် ဂျွန်ဇီဇီ ၏ မှတ်တမ်းအရ အာခီမီးဒီးစ်သည် အသက် ၇၅ နှစ်အထိ နေထိုင်သွားရကြောင်း သိရသည်။ အာခီမီးဒီးစ်သည် သူ၏ တီထွင်မှု တစ်ခုဖြစ်သော သဲနာရီ နှင့် ပတ်သက်၍ ရေးသားထားသော Sand Reckoners အမည်ရှိ စာတမ်းများတွင် သူ၏ ဖခင်အမည်ကို နက္ခတ္တဗေဒပညာရှင် ဖီးဒီးယပ်စ် ဟု ဖော်ပြထားသည်။ သမိုင်းပညာရှင် ပလူးတပ် ရေးသားသော ခေတ်ပြိုင်ပုဂ္ဂိုလ်ထူးကြီးများ စာအုပ်တွင် အာခီမီးဒီးစ်သည် ဆိုင်ရာကျူးစ်ဘုရင် ဒုတိယမြောက်ဟီရိုးနှင့် ဆွေမျိုး တော်စပ်ကြောင်း ဖော်ပြထားသည်။ သူငယ်ရွယ်စဉ်က အီဂျစ်ပြည် အလက်ဇန္ဒြီးယားမြို့ တွင် ပညာဆည်းပူး ခဲ့သည်ဟု ယူဆရသည်။ ဘီစီ ၂၁၂ တွင် အာခီမီးဒီးစ် သေဆုံးခဲ့သည်။ ရောမစစ်ဗိုလ်ချုပ် မားကပ်စ် ကလောဒီးယပ်စ် မာဆဲလပ်စ် က နှစ်နှစ်ကြာဝိုင်းရံ ပိတ်ဆို့ပြီးနောက် ဆိုင်ရာကျူးစ် မြို့ကို သိမ်းပိုက်လိုက်သည်။ ထိုအချိန်တွင် အာခီမီးဒီးသည် ဂျော်မက်ထရီ ပုစ္ဆာတစ်ပုဒ်ကို စဉ်းစား အဖြေရှာနေခိုက် ဖြစ်သည်။ ရောမစစ်သားက သူ့အား ဖမ်းဆီးလိုက်ပြီး ဗိုလ်ချုပ် မာဆဲလပ်စ် နှင့် တွေ့ဆုံရန် ပြောဆိုရာ သူက သူ၏ပုစ္ဆာစဉ်းစားနေဆဲဖြစ်၍ မတွေ့လိုကြောင်း ငြင်းဆိုသည်တွင် ရောမစစ်သားက ဒေါသထွက်ကာ ဓားဖြင့် ထိုးသတ်လိုက်သည်ဟု ပလူးတပ် က ရေးသားခဲ့သည်။ ဗိုလ်ချုပ် မာဆဲလပ်စ်သည် အာခီမီးဒီးစ် သေဆုံးသွားသည့် အတွက် များစွာ နှမြောတသဖြစ်ရသည်။ အာခီမီးဒီးစ်အား ပညာရှင် တစ်ယောက်အဖြစ် သိရှိထားသောကြောင့် မသတ်ရန် ကြိုတင် အမိန့်ပေးထားခဲ့သည်။ “ငါ့စက်ဝိုင်းတွေပေါ် တက်မနင်းပါနဲ့”ဟူသော စကားကို အာခီမီးဒီးစ် နောက်ဆုံး ပြောဆိုခဲ့သည်ဟု အချို့က ယူဆကြသော်လည်း သမိုင်းပညာရှင် ပလူးတပ် ရေးသော စာအုပ်တွင်မူ မပါရှိပေ။ အာခီမီးဒီးစ်၏ ဂူဗိမ္မာန်တွင် ထုလုံးရှည်မှန်တစ်ခုအတွင်း စက်လုံးတစ်ခုကို ထည့်သွင်းထားသည့် ရုပ်တုတစ်ခုကို စိုက်ထူထားသည်။ အာခီမီးဒီးစ် သေဆုံးပြီး နှစ်ပေါင်း ၁၃၇နှစ်အကြာ ဘီစီ ၇၅တွင် ရောမခေတ် နိုင်ငံရေးသုခမိန် ဆီဇာရိုက အာခီမီးဒီးစ် အကြောင်းကြားသိရ၍ သူ၏ အုတ်ဂူအား ရှာဖွေခဲ့သည်။ ခြုံနွယ်ပိတ်ပေါင်းများ ဖုံးအုပ်နေသော အာခီမီးဒီးစ်၏ အုတ်ဂူကို ဆိုင်ရာကျူးစ်မြို့အနီးတွင် ရှာဖွေ တွေ့ရှိခဲ့ပြီး သန့်ရှင်းရေးပြုလုပ်ကာ အုတ်ဂူပေါ်မှ စာသားများကို ဖတ်ရှုသွားသည်။ ဆိုင်ရာကျူးစ်စစ်ပွဲ အပြီး နှစ်ပေါင်း ၇၀ အကြာတွင် ပိုလီးဘီးယပ်စ် ရေးသားသော ဆိုင်ရာကျူးစ်စစ်ပွဲ အကြောင်း စာအုပ်တွင် အာခီမီးဒီးစ်နှင့် ပတ်သက်သော အကြောင်းများ ပါရှိ၍ သမိုင်းပညာရှင် ပလူးတပ် က ထပ်မံ ရေးသားနိုင်ခဲ့ခြင်း ဖြစ်ပါသည်။ ဆိုင်ရာကျူးစ်မြို့ ကာကွယ်ရေးအတွက် စစ်ပွဲဝင် စက်ကိရိယာ လက်နက်ဆန်းများကိုလည်း အာခီမီးဒီးစ်က တီထွင်ပေးခဲ့ကြောင်း အဆိုပါ စာအုပ်တွင် ဖော်ပြပါရှိပါသည်။
""") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
Typing order
If you want to do syllable segmentation for any Myanmar-language NLP (Natural Language Processing) task, you really need to clean the sentences first, fixing typing-order problems and the other errors that commonly occur; otherwise sylbreak will not be able to split the text correctly into the Myanmar syllable units I have roughly defined. There are a great many kinds of errors in Myanmar text, and some of them cannot be told apart just by looking at the rendered text. Here I will explain one or two kinds of typing-order error as examples, and we will also look at the erroneous output that sylbreak produces in those situations.
အောက်မှာ သုံးပြထားတဲ့ "ခန့်" က "ခ န ့ ်" (ခခွေး နငယ် အောက်မြစ် အသတ်) ဆိုတဲ့ အစီအစဉ် အမှားနဲ့ ရိုက်ထားတာဖြစ်ပါတယ်။ အဲဒါကြောင့် sylbreak က ထွက်လာတဲ့အခါမှာ "ခခွေး" နဲ့ "နငယ် အသတ် အောက်မြစ်" က ကွဲနေတာဖြစ်ပါတယ်။ | sylbreak("ဘီစီ ၂၈၇ ခန့်") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
တကယ်တန်း မှန်ကန်တဲ့ "ခန့်" ရဲ့ typing order က "ခ န ် ့" (ခခွေး နငယ် အသတ် အောက်မြစ်) ပါ။
အမြင်အားဖြင့်ကတော့ မခွဲနိုင်ပေမဲ့၊ မှန်ကန်တဲ့ typing order နဲ့ ရိုက်ထားရင်တော့ "ခန့်" ဆိုပြီး syllable တစ်ခုအနေနဲ့ ရိုက်ထုတ်ပြပေးပါလိမ့်မယ်။ | sylbreak("ဘီစီ ၂၈၇ ခန့်") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
နောက်ထပ် typing order အမှားတစ်ခုကို ကြည့်ကြရအောင်။ | sylbreak("ထည့်သွင်းထားသည့်ရုပ်တု") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
"ညကြီး အောက်မြစ် အသတ်" ဆိုတဲ့ မှားနေတဲ့ အစီအစဉ်ကို "ညကြီး အသတ် အောက်မြစ်" ဆိုပြီး
ပြောင်းရိုက်ပြီးတော့ sylbreak လုပ်ကြည့်ရင်တော့ အောက်ပါအတိုင်း "ထ" နဲ့ "ည့်", "သ" နဲ့ "ည့်" တွေက ကွဲမနေတော့ပဲ မှန်မှန်ကန်ကန်ဖြတ်ပေးပါလိမ့်မယ်။ | sylbreak("ထည့်သွင်းထားသည့်ရုပ်တု") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
Some errors, however, can be spotted by eye if you pay attention.
For example, the case of mistyping "ဥ" (the letter u) and "ဉ" (nya-lay, the little nya).
Yet whenever I work with large collections of Myanmar sentences, this kind of error is always present.
If the font distinguishes them correctly, a true nya-lay has a longer tail.
စာရိုက်သူအများစုက သတိမပြုမိတဲ့ အကြောင်းအရင်း တစ်ခုကလည်း တချို့ text editor တွေမှာ "အက္ခရာ ဥ" နှင့် ညကလေး "ဉ" ကို ကွဲပြားအောင် မပြသပေးနိုင်လို့ပါ။ | sylbreak("ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဉီးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
ဝီကီပီးဒီးယားက မှားနေတဲ့ "ညကလေး" ကို "အက္ခရာ ဥ" နဲ့ပြန်ပြင်ရိုက်ထားတဲ့ စာကြောင်းနဲ့ နောက်တစ်ခေါက် syllable ဖြတ်ထားတာက အောက်ပါအတိုင်းဖြစ်ပါတယ်။ "ညကလေး" နဲ့ "အက္ခရာ ဥ" အမှားကိစ္စမှာတော့ syllable segmentation ဖြတ်တဲ့အပိုင်းမှာတော့ ထူးထူးခြားခြား အပြောင်းအလဲ မရှိပါဘူး။ | sylbreak("ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဦးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။") | jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb | ye-kyaw-thu/sylbreak | apache-2.0 |
Neural Network
<img style="float: left" src="images/neural_network.png"/>
For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers. | # Save the shapes of weights for each layer
print(mnist.train.images.shape[1])
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1]) | tutorials/weight-initialization/weight_initialization.ipynb | liumengjun/cn-deep-learning | mit |
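The rest of the notebook compares different ways of filling these weight matrices. As a rough sketch of how the 3-layer ReLU/Adam network described above might be wired up in TensorFlow 1.x, assuming some initializer function weight_init (this is an illustrative outline, not the notebook's exact helper code):
import tensorflow as tf
def build_network(weight_init):
    # placeholders for the MNIST images and one-hot labels
    features = tf.placeholder(tf.float32, [None, layer_1_weight_shape[0]])
    labels = tf.placeholder(tf.float32, [None, layer_3_weight_shape[1]])
    # three fully connected layers with ReLU on the hidden ones
    w1 = tf.Variable(weight_init(layer_1_weight_shape))
    b1 = tf.Variable(tf.zeros([layer_1_weight_shape[1]]))
    h1 = tf.nn.relu(tf.matmul(features, w1) + b1)
    w2 = tf.Variable(weight_init(layer_2_weight_shape))
    b2 = tf.Variable(tf.zeros([layer_2_weight_shape[1]]))
    h2 = tf.nn.relu(tf.matmul(h1, w2) + b2)
    w3 = tf.Variable(weight_init(layer_3_weight_shape))
    b3 = tf.Variable(tf.zeros([layer_3_weight_shape[1]]))
    logits = tf.matmul(h2, w3) + b3
    # softmax cross-entropy loss trained with the Adam optimizer
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    train_op = tf.train.AdamOptimizer().minimize(loss)
    return features, labels, loss, train_op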
Small scale example | def func(a, b, c):
res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1
res = tf.einsum('iab,kb->iak', res, c)
return res
a = tf.random_normal((10, 11, 12))
b = tf.random_normal((11, 13))
c = tf.random_normal((12, 14))
# res = func(a, b, c)
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c)
res1 = func(a, b, c)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c)
%timeit sess.run(res2)
# Check that the results of optimized and the original function are the same.
np.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5) | example.ipynb | Bihaqo/tf_einsum_opt | mit |
Example with more savings, but slower to optimize | def func(a, b, c, d):
res = tf.einsum('si,sj,sk,ij->s', a, b, d, c)
res += tf.einsum('s,si->s', res, a)
return res
a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))
d = tf.random_normal((100, 30))
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d)
res1 = func(a, b, c, d)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c, d)
%timeit sess.run(res2) | example.ipynb | Bihaqo/tf_einsum_opt | mit |
Look at the recommendations: | orders | example.ipynb | Bihaqo/tf_einsum_opt | mit |
Original Voce-Chaboche model
First we will use RESSPyLab to generate a formatted table of parameters including the relative error metric, $\bar{\varphi}$.
The inputs to this function are:
1. Information about the name of the data set and the load protocols used in the optimization.
2. The file containing the history of parameters (generated from the optimization).
3. The data used in the optimization.
Two tables are returned (as pandas DataFrames) and are printed to screen in LaTeX format.
If you want the tables in some other format it is best to operate on the DataFrames directly (e.g., use to_csv()). | # Identify the material
material_def = {'material_id': ['Example 1'], 'load_protocols': ['1,5']}
# Set the path to the x log file
x_log_file_1 = './output/x_log.txt'
x_logs_all = [x_log_file_1]
# Load the data
data_files_1 = ['example_1.csv']
data_1 = rpl.load_data_set(data_files_1)
data_all = [data_1]
# Make the tables
param_table, metric_table = rpl.summary_tables_maker_vc(material_def, x_logs_all, data_all) | examples/Post_Processing_Example_1.ipynb | AlbanoCastroSousa/RESSPyLab | mit |
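For example, since param_table and metric_table come back as pandas DataFrames, exporting them to another format is a one-liner (the file names below are arbitrary):
param_table.to_csv('./output/vc_parameter_table.csv')
metric_table.to_csv('./output/vc_metric_table.csv')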
Tables can be easily generated following a standard format for several data sets by appending additional entries to the lists of values in material_def and to x_logs_all and data_all.
Now we will generate the consistency metric, $\xi_2$.
The input arguments are:
1. The parameters of the base case.
2. The parameters of the case that you would like to compare with.
3. The set of data to compute this metric over.
The metric is returned (the raw value, NOT as a percent) directly from this function. | # Load the base parameters, we want the last entry in the file
x_base = np.loadtxt(x_log_file_1, delimiter=' ')
x_base = x_base[-1]
# Load (or set) the sample parameters
x_sample = np.array([179750., 318.47, 100.72, 8.00, 11608.17, 145.22, 1026.33, 4.68])
# Calculate the metric
consistency_metric = rpl.vc_consistency_metric(x_base, x_sample, data_1)
print consistency_metric | examples/Post_Processing_Example_1.ipynb | AlbanoCastroSousa/RESSPyLab | mit |
The value of $\xi_2 = 65$ %, indicating that the two sets of parameters are inconsistent for this data set.
Updated Voce-Chaboche model
The inputs to generate the tables are the same as for the original model; however, the input parameters have to come from an optimization using the updated model. | # Identify the material
material_def = {'material_id': ['Example 1'], 'load_protocols': ['1']}
# Set the path to the x log file
x_log_file_2 = './output/x_log_upd.txt'
x_logs_all = [x_log_file_2]
# Load the data
data_files_2 = ['example_1.csv']
data_2 = rpl.load_data_set(data_files_2)
data_all = [data_2]
# Make the tables
param_table, metric_table = rpl.summary_tables_maker_uvc(material_def, x_logs_all, data_all) | examples/Post_Processing_Example_1.ipynb | AlbanoCastroSousa/RESSPyLab | mit |
I. Loading Labeling Matrices
First we'll load our label matrices from notebook 2 | from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator()
L_train = labeler.load_matrix(session, split=0)
L_dev = labeler.load_matrix(session, split=1) | tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb | jasontlam/snorkel | apache-2.0 |
Now we set up and run the hyperparameter search, training our model with different hyperparameters and picking the best model configuration to keep. We'll set the random seed to maintain reproducibility.
Note that we are fitting our model's parameters to the training set generated by our labeling functions, while we are picking hyperparameters with respect to the score over the development set labels which we created by hand.
II: Unifying supervision
A. Majority Vote
The simplest way to unify the output of all your LFs is to compute the unweighted majority vote. | from lib.scoring import *
majority_vote_score(L_dev, L_gold_dev) | tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb | jasontlam/snorkel | apache-2.0 |
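Conceptually, the unweighted majority vote is just the sign of the row-wise sum of the label matrix, since each LF votes +1, -1 or 0 (abstain). A rough sketch of the idea (not the implementation inside lib.scoring):
import numpy as np
# sum each candidate's labels across all LFs and take the sign; 0 means a tie or all abstentions
votes = np.ravel(L_dev.sum(axis=1))
majority_labels = np.sign(votes)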
B. Generative Model
In data programming, we use a more sophisticated model to unify our labeling functions. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.
This will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.
1. Training the Model
When training the generative model, we'll tune our hyperparameters using a simple grid search.
Parameter Definitions
* epochs: A single pass through all the data in your training set
* step_size: The factor by which we update model weights after computing the gradient
* decay: The rate at which our update factor diminishes (decays) over time. | from snorkel.learning import GenerativeModel
from snorkel.learning import RandomSearch, ListParameter, RangeParameter
# use grid search to optimize the generative model
step_size_param = ListParameter('step_size', [0.1 / L_train.shape[0], 1e-5])
decay_param = ListParameter('decay', [0.9, 0.95])
epochs_param = ListParameter('epochs', [10, 50])
reg_param = ListParameter('reg_param', [1e-3, 1e-6])
prior_param = ListParameter('LF_acc_prior_weight_default', [1.0, 0.9, 0.8])
# search for the best model
param_grid = [step_size_param, decay_param, epochs_param, reg_param, prior_param]
searcher = RandomSearch(GenerativeModel, param_grid, L_train, n=10, lf_propensity=False)
%time gen_model, run_stats = searcher.fit(L_dev, L_gold_dev, deps=set())
run_stats | tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb | jasontlam/snorkel | apache-2.0 |
2. Model Accuracies
These are the weights learned for each LF | L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
train_marginals = gen_model.marginals(L_train) | tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb | jasontlam/snorkel | apache-2.0 |
III. Advanced Generative Model Features
A. Structure Learning
We may also want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies. | from snorkel.learning.structure import DependencySelector
MAX_DEPS = 5
ds = DependencySelector()
deps = ds.select(L_train, threshold=0.1)
deps = set(list(deps)[0:min(len(deps), MAX_DEPS)])
print "Using {} dependencies".format(len(deps)) | tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb | jasontlam/snorkel | apache-2.0 |
Initializing eyDNA object with free_dna.h5 file
eyDNA object is initialized by using the total number of base-pairs and HDF5 file.
This class contains all the required functions to calculate the elastic properties and deformation free energy. | eyDNA = dnaMD.dnaEY(27, 'BST', filename='elasticity_DNA/free_dna.h5') | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Determining modulus matrix - bending, stretching and twisting
The modulus matrix for all three major motions (bending, stretching and twisting) can be obtained with the getStretchTwistBendModulus method.
In the following example, the modulus matrix and the elastic constant matrix are calculated for all frames. | # All frames
avg, mod_matrix = eyDNA.getStretchTwistBendModulus([4,20], paxis='X')
print('Average values for all frames: ', avg)
print('Modulus matrix for all frames: \n', mod_matrix )
print(' ')
# Elastic matrix
avg, mod_matrix = eyDNA.getStretchTwistBendModulus([4,20], paxis='X', matrix=True)
print('Average values for all frames: ', avg)
print('Elastic constant matrix for all frames: \n', mod_matrix )
print(' ') | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
The elastic matrix is in this form:
$$\text{Elastic matrix} = \begin{bmatrix}
K_{Bx} & K_{Bx,By} & K_{Bx,S} & K_{Bx,T} \\
K_{Bx,By} & K_{By} & K_{By,S} & K_{By,T} \\
K_{Bx,S} & K_{By,S} & K_{S} & K_{S,T} \\
K_{Bx,T} & K_{By,T} & K_{S,T} & K_{T}
\end{bmatrix}
$$
Where:
$Bx$ - Bending motion in one plane
$By$ - Bending motion in another orthogonal plane
$S$ - Stretching motion
$T$ - Twisting motion
$$\text{modulus matrix} =
\begin{bmatrix}
M_{Bx} & M_{Bx,By} & M_{Bx,S} & M_{Bx,T} \\
M_{Bx,By} & M_{By} & M_{By,S} & M_{By,T} \\
M_{Bx,S} & M_{By,S} & M_{S} & M_{S,T} \\
M_{Bx,T} & M_{By,T} & M_{S,T} & M_{T}
\end{bmatrix}
$$
$$
= 4.1419464 \times \begin{bmatrix}
K_{Bx} & K_{Bx,By} & K_{Bx,S} & K_{Bx,T} \\
K_{Bx,By} & K_{By} & K_{By,S} & K_{By,T} \\
K_{Bx,S} & K_{By,S} & K_{S} & K_{S,T} \\
K_{Bx,T} & K_{By,T} & K_{S,T} & K_{T}
\end{bmatrix} \times L_0
$$
Where:
$M_{Bx}$ - Bending-1 stiffness in one plane
$M_{By}$ - Bending-2 stiffness in another orthogonal plane
$M_{S}$ - Stretch Modulus
$M_{T}$ - Twist rigidity
$M_{Bx,By}$ - Bending-1 and Bending-2 coupling
$M_{By,S}$ - Bending-2 and stretching coupling
$M_{S,T}$ - Stretching Twisting coupling
$M_{Bx,S}$ - Bending-1 Stretching coupling
$M_{By,T}$ - Bending-2 Twisting coupling
$M_{Bx,T}$ - Bending-1 Twisting coupling
Convergence in bending, stretching and twisting with their couplings
Elasticities cannot be calcualted from an individual snapshot or frame. However, these properties can be calculated as a function of time by considering all the frames up to that time. For example, 0-50 ns, 0-100 ns, 0-150 ns etc. By this method, we can analyze the convergence and also further we can calculate error using block average method.
Elasticities over the time can be calculated using getElasticityByTime method.
If esType='BST', an ordered dictionary of 1D arrays of shape (nframes) is returned. The keys in the dictionary are the names of the elastic properties, in the same order as listed above.
$M_{Bx}$ - bend-1 - Bending-1 stiffness in one plane
$M_{By}$ - bend-2 - Bending-2 stiffness in another orthogonal plane
$M_{S}$ - stretch - Stretch Modulus
$M_{T}$ - twist - Twist rigidity
$M_{Bx,By}$ - bend-1-bend-2 - Bending-1 and Bending-2 coupling
$M_{By,S}$ - bend-2-stretch - Bending-2 and stretching coupling
$M_{S,T}$ - stretch-twist - Stretching Twisting coupling
$M_{Bx,S}$ - bend-1-stretch - Bending-1 Stretching coupling
$M_{By,T}$ - bend-2-twist - Bending-2 Twisting coupling
$M_{Bx,T}$ - bend-1-twist - Bending-1 Twisting coupling
If esType='ST', a 2D array of shape (3, nframes) containing the following three properties will be returned.
$M_{S}$ - stretch - Stretch Modulus
$M_{T}$ - twist - Twist rigidity
$M_{S,T}$ - stretch-twist - Stretching Twisting coupling
In the following example, the moduli as a function of time are calculated by adding 500 frames at each step (frameGap=500). | time, modulus = eyDNA.getModulusByTime([4,20], frameGap=500, masked=True)
print('Keys in returned dictionary:\n', '\n'.join(list(modulus.keys())), '\n-----------')
# Stretching modulus
plt.plot(time, modulus['stretch'])
plt.scatter(time, modulus['stretch'])
plt.xlabel('Time (ps)')
plt.ylabel(r'Stretching Modulus (pN)')
plt.show()
# Twist rigidity
plt.plot(time, modulus['twist'])
plt.scatter(time, modulus['twist'])
plt.xlabel('Time (ps)')
plt.ylabel(r'Rigidity (pN nm$^2$)')
plt.show()
# Stretch twist coupling
plt.plot(time, modulus['stretch-twist'])
plt.scatter(time, modulus['stretch-twist'])
plt.xlabel('Time (ps)')
plt.ylabel(r'Stretch-Twist Coupling (pN nm)',)
plt.show() | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Deformation free energy of bound DNA
Deformation energy of a probe DNA (bound DNA) can be calculated with reference to the DNA present in the current object.
The deformation free energy is calculated using elastic matrix as follows
$$G = \frac{1}{2L_0}\mathbf{xKx^T}$$
$$\mathbf{x} = \begin{bmatrix}
(\theta^{x} - \theta^{x}_0) & (\theta^{y} - \theta^{y}_0) & (L - L_0) & (\phi - \phi_0)
\end{bmatrix}$$
Where $\mathbf{K}$, $\theta^{x}_0$, $\theta^{y}_0$, $L_0$ and $\phi_0$ are calculated from the reference DNA, while $\theta^{x}$, $\theta^{y}$, $L$ and $\phi$ are calculated for the probe DNA from each frame.
We already loaded the data for reference DNA above. Here, we will load data for probe DNA. | # Load parameters of bound DNA
boundDNA = dnaMD.DNA(27, filename='elasticity_DNA/bound_dna.h5') | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
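To make the quadratic form above concrete, here is a small NumPy illustration of how one frame's energy would be assembled from an elastic matrix and the deviations of the four parameters from their reference values. The numbers are made up purely for illustration; the real bookkeeping is done internally by the method used below.
# x = [theta_x - theta_x0, theta_y - theta_y0, L - L0, phi - phi0] for one frame (hypothetical values)
x = np.array([0.05, -0.02, 0.3, 2.0])
K = np.ones((4, 4))    # placeholder elastic matrix; in practice use the matrix computed above
L0 = 52.0              # placeholder contour length of the reference DNA
G = 0.5 * x.dot(K).dot(x) / L0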
The deformation free energy can be calculated for the following motions, which can be selected with the which option.
'full' : Use entire elastic matrix -- all motions with their coupling
'diag' : Use diagonal of elastic matrix -- all motions but no coupling
'b1' : Only bending-1 motion
'b2' : Only bending-2 motion
'stretch' : Only stretching motion
'twist' : Only Twisting motions
'st_coupling' : Only stretch-twist coupling motion
'bs_coupling' : Only Bending and stretching coupling
'bt_coupling' : Only Bending and Twisting coupling
'bb_coupling' : Only bending-1 and bending-2 coupling
'bend' : Both bending motions with their coupling
'st' : Stretching and twisting motions with their coupling
'bs' : Bending (b1, b2) and stretching motions with their coupling
'bt' : Bending (b1, b2) and twisting motions with their coupling
which can be either 'all' or a list of energy terms given above. | # Deformation free energy of bound DNA and calculate all above listed terms
time, energy = eyDNA.getGlobalDeformationEnergy([4,20], boundDNA, paxis='X', which='all', masked=True)
energyTerms=list(energy.keys())
print('Keys in returned dictionary:\n', '\n'.join(energyTerms), '\n-----------')
# Plot two energy terms
fig = plt.figure(figsize=(8,8))
fig.subplots_adjust(hspace=0.3)
ax1 = fig.add_subplot(211)
ax1.set_title('Bound DNA, entire elastic matrix')
ax1.plot(time, energy['full'])
ax1.set_xlabel('Time (ps)')
ax1.set_ylabel(r'Deformation Free Energy (kJ/mol)',)
ax2 = fig.add_subplot(212)
ax2.set_title('Bound DNA, only diagonal of elastic matrix')
ax2.plot(time, energy['diag'])
ax2.set_xlabel('Time (ps)')
ax2.set_ylabel(r'Deformation Free Energy (kJ/mol)',)
plt.show()
# Calculate average and error for each energy terms
error = dnaMD.get_error(time, list(energy.values()), len(energyTerms), err_type='block', tool='gmx analyze')
print("==============================================")
print('{0:<16}{1:>14}{2:>14}'.format('Energy(kJ/mol)', 'Average', 'Error'))
print("----------------------------------------------")
for i in range(len(energyTerms)):
print('{0:<16}{1:>14.3f}{2:>14.3f}'.format(energyTerms[i], np.mean(energy[energyTerms[i]]),error[i]))
print("==============================================\n") | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Local elastic properties or stiffness
Local elastic properties can be calculated using either local base-step parameters or local helical base-step parameters.
In the case of base-step parameters: Shift ($Dx$), Slide ($Dy$), Rise ($Dz$), Tilt ($\tau$), Roll ($\rho$) and Twist ($\omega$), the following elastic matrix is calculated.
$$
\mathbf{K}_{\text{base-step}} = \begin{bmatrix}
K_{Dx} & K_{Dx,Dy} & K_{Dx,Dz} & K_{Dx,\tau} & K_{Dx,\rho} & K_{Dx,\omega} \\
K_{Dx,Dy} & K_{Dy} & K_{Dy,Dz} & K_{Dy,\tau} & K_{Dy,\rho} & K_{Dy,\omega} \\
K_{Dx,Dz} & K_{Dy,Dz} & K_{Dz} & K_{Dz,\tau} & K_{Dz,\rho} & K_{Dz,\omega} \\
K_{Dx,\tau} & K_{Dy,\tau} & K_{Dz,\tau} & K_{\tau} & K_{\tau,\rho} & K_{\tau,\omega} \\
K_{Dx,\rho} & K_{Dy,\rho} & K_{Dz,\rho} & K_{\tau,\rho} & K_{\rho} & K_{\rho,\omega} \\
K_{Dx,\omega} & K_{Dy,\omega} & K_{Dz,\omega} & K_{\tau,\omega} & K_{\rho,\omega} & K_{\omega}
\end{bmatrix}
$$
In the case of helical base-step parameters: x-displacement ($dx$), y-displacement ($dy$), h-rise ($h$), inclination ($\eta$), tip ($\theta$) and twist ($\Omega$), the following elastic matrix is calculated.
$$
\mathbf{K}_{\text{helical-base-step}} = \begin{bmatrix}
K_{dx} & K_{dx,dy} & K_{dx,h} & K_{dx,\eta} & K_{dx,\theta} & K_{dx,\Omega} \\
K_{dx,dy} & K_{dy} & K_{dy,h} & K_{dy,\eta} & K_{dy,\theta} & K_{dy,\Omega} \\
K_{dx,h} & K_{dy,h} & K_{h} & K_{h,\eta} & K_{h,\theta} & K_{h,\Omega} \\
K_{dx,\eta} & K_{dy,\eta} & K_{h,\eta} & K_{\eta} & K_{\eta,\theta} & K_{\eta,\Omega} \\
K_{dx,\theta} & K_{dy,\theta} & K_{h,\theta} & K_{\eta,\theta} & K_{\theta} & K_{\theta,\Omega} \\
K_{dx,\Omega} & K_{dy,\Omega} & K_{h,\Omega} & K_{\eta,\Omega} & K_{\theta,\Omega} & K_{\Omega}
\end{bmatrix}
$$ | # base-step
avg, matrix = eyDNA.calculateLocalElasticity([10,13], helical=False)
# Print matrix in nice format
out = ''
mean_out = ''
for i in range(matrix.shape[0]):
for j in range(matrix.shape[0]):
if j != matrix.shape[0]-1:
out += '{0:>10.5f} '.format(matrix[i][j])
else:
out += '{0:>10.5f}\n'.format(matrix[i][j])
mean_out += '{0:>15.3f} '.format(avg[i])
print('Average values for all frames: ', mean_out)
print('=========== ============== Elastic Matrix =============== ===========\n')
print(out)
print('=========== ====================== ====================== ===========')
# helical base-step
avg, matrix = eyDNA.calculateLocalElasticity([10,13], helical=True)
# Print matrix in nice format
out = ''
mean_out = ''
for i in range(matrix.shape[0]):
for j in range(matrix.shape[0]):
if j != matrix.shape[0]-1:
out += '{0:>10.5f} '.format(matrix[i][j])
else:
out += '{0:>10.5f}\n'.format(matrix[i][j])
mean_out += '{0:>15.3f} '.format(avg[i])
print('\n\nAverage values for all frames: ', mean_out)
print('=========== ============== Elastic Matrix =============== ===========\n')
print(out)
print('=========== ====================== ====================== ===========')
| docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Local deformation energy of a local small segment
Using the above elastic matrix, the deformation energy of this base-step in the bound DNA can be calculated. | # Here calculate energy for one base-step
time, energy = eyDNA.getLocalDeformationEnergy([10,13], boundDNA, helical=False, which='all')
energyTerms=list(energy.keys())
print('Keys in returned dictionary:\n', '\n'.join(energyTerms), '\n-----------')
# Plot two energy terms
fig = plt.figure(figsize=(8,8))
fig.subplots_adjust(hspace=0.3)
ax1 = fig.add_subplot(211)
ax1.set_title('Bound DNA, entire elastic matrix')
ax1.plot(time, energy['full'])
ax1.set_xlabel('Time (ps)')
ax1.set_ylabel(r'Local Deformation Energy (kJ/mol)',)
ax2 = fig.add_subplot(212)
ax2.set_title('Bound DNA, only diagonal of elastic matrix')
ax2.plot(time, energy['diag'])
ax2.set_xlabel('Time (ps)')
ax2.set_ylabel(r'Local Deformation Energy (kJ/mol)',)
plt.show()
# Calculate average and error for each energy terms
error = dnaMD.get_error(time, list(energy.values()), len(energyTerms), err_type='block', tool='gmx analyze')
print("==============================================")
print('{0:<16}{1:>14}{2:>14}'.format('Energy(kJ/mol)', 'Average', 'Error'))
print("----------------------------------------------")
for i in range(len(energyTerms)):
print('{0:<16}{1:>14.3f}{2:>14.3f}'.format(energyTerms[i], np.mean(energy[energyTerms[i]]),error[i]))
print("==============================================\n")
| docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Deformation energy of the consecutive overlapped DNA segments
The above method gives the energy of a small local segment of the DNA. However, we are usually interested in a larger segment of the DNA. This larger segment can be further divided into smaller local segments, and the local deformation energy can be calculated for each of them. Here, these segments overlap with each other. | # First calculation for local base-step parameters
segments, energies, error = eyDNA.getLocalDeformationEnergySegments([4,20], boundDNA, span=4,
helical=False, which='all',
err_type='block',
tool='gmx analyze')
energyTerms=list(energies.keys())
print('Keys in returned dictionary:\n', '\n'.join(energyTerms), '\n-----------')
# Now plot the data
fig = plt.figure(figsize=(14,8))
fig.subplots_adjust(hspace=0.3)
mpl.rcParams.update({'font.size': 16})
xticks = range(len(segments))
ax1 = fig.add_subplot(111)
ax1.set_title('Local base-step parameters')
for term in energyTerms:
ax1.errorbar(xticks, energies[term], yerr=error[term], ms=10, elinewidth=3, fmt='-o', label=term)
ax1.set_xticks(xticks)
ax1.set_xticklabels(segments, rotation='vertical')
ax1.set_xlabel('base-step number')
ax1.set_ylabel(r'Deformation Energy (kJ/mol)',)
plt.legend()
plt.show() | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Same as the above but energy is calculated using helical base-step parameters | # Secind calculation for local base-step parameters
segments, energies, error = eyDNA.getLocalDeformationEnergySegments([4,20], boundDNA, span=4,
helical=True, which='all',
err_type='block',
tool='gmx analyze')
energyTerms=list(energies.keys())
print('Keys in returned dictionary:\n', '\n'.join(energyTerms), '\n-----------')
# Now plot the data
fig = plt.figure(figsize=(14,8))
fig.subplots_adjust(hspace=0.3)
mpl.rcParams.update({'font.size': 16})
xticks = range(len(segments))
ax1 = fig.add_subplot(111)
ax1.set_title('Local base-step parameters')
for term in energyTerms:
ax1.errorbar(xticks, energies[term], yerr=error[term], ms=10, elinewidth=3, fmt='-o', label=term)
ax1.set_xticks(xticks)
ax1.set_xticklabels(segments, rotation='vertical')
ax1.set_xlabel('base-step number')
ax1.set_ylabel(r'Deformation Energy (kJ/mol)',)
plt.legend()
plt.show() | docs/notebooks/calculate_elasticity_tutorial.ipynb | rjdkmr/do_x3dna | gpl-3.0 |
Creating the training set
In order to learn the relationship between ACSFs and the energy of the system, we need a database of ACSFs for several atomic configurations, and the corresponding energy.
The sample configurations consist of the dimer, stretched and compressed. In reality the energy is calculated with quantum methods (DFT, CC, ...) but here we will use a simple Lennard-Jones function. | # array of meaningful distances
dists = numpy.arange(1.95, Rcut, Rcut/30)
# LJ energy at those distances
energy = numpy.power(dists/2,-12)-numpy.power(dists/2,-6) - 2
plt.plot(dists, energy,'.' )
plt.xlabel('Pair distance')
plt.ylabel('Energy')
plt.show() | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
Then we calculate the ACSFs for each dimer configuration. The results are formatted as a matrix: one row for each configuration, one column for each ACSF. | # ACSFs G1 parameter pairs: this is a list of eta/Rs values
params = [(0.4, 0.2),(0.4, 0.5)]
# initialise a matrix that will store the ACSFs of the first atom in all dimer configurations
nConfs = dists.shape[0]
acsf = numpy.zeros((nConfs, 1+len(params)))
print("Number of configurations: " + str(nConfs))
print("Number of ACSfs: " + str(acsf.shape[1]))
for k in range(nConfs): # for each configuration
r = dists[k] # distance between atoms
# compute G0 - sum of cutoffs
acsf[k,0] = fcut(r)
# compute all the G1
for p in range(len(params)):
# extract parameters
eta,rs = params[p]
# compute G1
acsf[k,1+p] = G1f(r, eta, rs)
# plot the Gs as a function of distance
for a in range(acsf.shape[1]):
plt.plot(dists, acsf[:,a])
plt.xlabel('Pair distance')
plt.ylabel('ACSFs')
plt.show() | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
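The helper functions fcut and G1f used above are defined in an earlier cell of the notebook. For reference, a minimal sketch that is consistent with the standard ACSF definitions (a cosine cutoff plus a Gaussian radial term; this is an assumption about those earlier cells, not a copy of them) would be:
# smooth cosine cutoff that goes to zero at Rcut
def fcut(r):
    return 0.5 * (numpy.cos(numpy.pi * r / Rcut) + 1) if r < Rcut else 0.0
# Gaussian radial symmetry function with width eta and center Rs, damped by the cutoff
def G1f(r, eta, rs):
    return numpy.exp(-eta * (r - rs)**2) * fcut(r)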
OPTIONAL TRICK
We can center the ACSFs around their mean and rescale them so that their standard deviation is 1. This is a common trick in ML with neural networks, to make the learning easier. | acsf_mean = numpy.mean(acsf, axis=0)
for a in range(acsf.shape[1]):
acsf[:,a] -= acsf_mean[a]
acsf_std = numpy.std(acsf, axis=0)
for a in range(acsf.shape[1]):
acsf[:,a] /= acsf_std[a]
# plot the Gs as a function of distance
for a in range(acsf.shape[1]):
plt.plot(dists, acsf[:,a])
plt.xlabel('Pair distance')
plt.ylabel('ACSFs - scaled and shifted')
plt.show() | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
Training
We create a neural network and train it on the ACSF database we just constructed. | # setup the neural network
# the network uses tanh function on all hidden neurons
nn = MLPRegressor(hidden_layer_sizes=(5,), activation='tanh') | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
The fitting may not be trivial since our database is small... the next instruction can be executed multiple times to let the NN train more and hopefully improve. | # change some training parameters
nn.set_params(solver='lbfgs', alpha=0.001, tol=1.0e-10, learning_rate='constant', learning_rate_init=0.01)
# do some training steps
nn.fit(acsf, energy);
# evaluate the training error
energyML = nn.predict(acsf)
print ("Mean Abs Error (training) : ", (numpy.abs(energyML-energy)).mean())
# energy curve
plt.plot(dists, energy,'-.' )
plt.plot(dists, energyML,'o' )
plt.xlabel('Pair distance')
plt.ylabel('Energy')
plt.show()
# regression plot
plt.plot(energy,energyML,'o')
plt.plot([-2.3,-1.7],[-2.3,-1.7]) # perfect fit line
plt.xlabel('correct energy')
plt.ylabel('NN energy')
plt.show() | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
Remarks
Do not be fooled! Real systems are much more difficult to model, requiring more ACSFs, larger NNs, and much larger datasets for training.
Exercises
1. Create a validation set and test the NN performance
For simplicity we just checked the error on training data, but it is better to check performance on a validation set not included in the training.
Create different dimer configurations and test NN performance on those.
2. Craft your own energy
Make the dimer energy expression more complex and attempt to machine-learn it.
3. Add/edit the ACSFs parameters
Try to change the ACSFs parameters to get better model performance.
4. A real molecule
Here is a real organic molecule... try to compute the ACSFs for its atoms using the DScribe package.
Documentation can be found here: https://singroup.github.io/dscribe/tutorials/acsf.html | # atomic positions as matrix
molxyz = numpy.load("./data/molecule.coords.npy")
# atom types
moltyp = numpy.load("./data/molecule.types.npy")
atoms_sys = Atoms(positions=molxyz, numbers=moltyp)
view(atoms_sys)
from dscribe.descriptors import ACSF
# Setting up the ACSF descriptor
acsf = ACSF(
species=["H", "C", "N", "O"],
rcut=6.0,
# configure parameters for desired ACSFs
g2_params=[[1, 1], [1, 2], [1, 3]],
g4_params=[[1, 1, 1], [1, 2, 1], [1, 1, -1], [1, 2, -1]],
)
# calculate the descriptor | ACSF-Dimer.ipynb | fullmetalfelix/ML-CSC-tutorial | gpl-3.0 |
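A possible way to finish this last step (a sketch, not part of the original notebook; it assumes the acsf descriptor and the ASE atoms_sys object defined above, and uses DScribe's create method):

# one row per atom, one column per symmetry function
acsf_molecule = acsf.create(atoms_sys)
print(acsf_molecule.shape)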
Scikit-Learn
Scikit-Learn (http://scikit-learn.org) is a Python package that builds on NumPy & SciPy to make popular machine learning algorithms easy to apply to small and medium-sized datasets.
Referring back to the machine learning models, every model in scikit-learn is a Python class with a uniform interface. Every instance of such a class is an object, and the general pattern of use is the same.
a. Import class from module. (Here "abc" is an arbitrary algorithm.)
* from sklearn.ABC import abc
b. Instantiate estimator object
* abc_model=abc(arguments)
c. Fit model to training data
* abc_model.fit(data)
d. Use fitted model to predict
* abc_model.predict(new_data)
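For instance, the same four steps with a concrete (arbitrarily chosen) estimator look like this; a minimal, self-contained sketch that is not part of the original notebook:

import numpy as np
from sklearn.linear_model import LinearRegression   # a. import the class ("abc")

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])     # toy training data
y_train = np.array([0.0, 2.0, 4.0, 6.0])

model = LinearRegression()                           # b. instantiate the estimator object
model.fit(X_train, y_train)                          # c. fit the model to the training data
print(model.predict(np.array([[4.0]])))              # d. use the fitted model to predict new data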
Now, we'll move from this (seemingly) abstract overview to actual application.
To motivate this discussion, let's start with a concrete problem...that of the infinite scroll.
The goal of clustering is to find an arrangement of the data such that items in the same group (or cluster) are more similar to each other than to items from different clusters.
The prototype-based K-Means algorithm is quite popular. In prototype-based clustering, each group is represented/exemplified by a prototype. In K-Means, the prototype is the mean (or centroid).
Exercise 1
Name another parameter that we could have chosen as a prototype.
When would this parameter be more suited than the centroid? | %matplotlib inline
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np
X, y = make_blobs(n_samples=200,n_features=2,centers=6,cluster_std=0.8, shuffle=True,random_state=0)
plt.scatter(X[:,0],X[:,1]) | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
Steps in the K-means algorithm:
Choose k centroids from the sample points as initial cluster centers.
Assign each data point to the nearest centroid (based on Euclidean distance).
Update each centroid location to the mean of the samples that were assigned to it.
Repeat steps 2 and 3 till the cluster assignments do not change, or, a pre-defined tolerance, or, a maximum number of iterations is reached. | #import Kmeans class for the cluster module
from sklearn.cluster import KMeans
#instantiate the model
km = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0) | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
The arguments to the algorithm:
* n_clusters: The number of clusters to divide the data into.
* n_init: The number of times the algorithm is run with different random initial centroids (the best run is kept).
* max_iter: The maximum number of iterations for each single run.
* tol: Cut-off for the changes in the within-cluster sum-squared-error. | #fitting the model to the data
y_km = km.fit_predict(X)
plt.scatter(X[y_km==0,0], X[y_km ==0,1], s=50, c='lightgreen', marker='o', label='Group A')
plt.scatter(X[y_km ==1,0], X[y_km ==1,1], s=50, c='orange', marker='o', label='Group B')
plt.scatter(X[y_km ==2,0], X[y_km ==2,1], s=50, c='white', marker='o', label='Group C')
plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1], s=50, marker='o', c='black', label='Centers')
plt.legend() | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
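For intuition about what KMeans does internally, here is a minimal NumPy sketch of the four steps listed earlier (illustrative only, not part of the original notebook; empty clusters are not handled):

import numpy as np

def kmeans_sketch(X, k, max_iter=300, tol=1e-4, seed=0):
    rng = np.random.RandomState(seed)
    # step 1: choose k sample points as the initial centroids
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(max_iter):
        # step 2: assign each point to the nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: move each centroid to the mean of the points assigned to it
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        shift = np.linalg.norm(new_centroids - centroids)
        centroids = new_centroids
        # step 4: stop when the centroids no longer move (within tol)
        if shift < tol:
            break
    return labels, centroids

sketch_labels, sketch_centers = kmeans_sketch(X, 3)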
Exercise 2
Clustering the iris dataset based on sepal and petal lengths and widths. | from IPython.display import display, Image  # import needed to show the figure below
display(Image(filename='1.png'))
from sklearn.datasets import load_iris
iris = load_iris()
n_samples, n_features = iris.data.shape
X, y = iris.data, iris.target
f, axarr = plt.subplots(2, 2)
axarr[0, 0].scatter(iris.data[:, 0], iris.data[:, 1],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[0, 0].set_title('Sepal length versus width')
axarr[0, 1].scatter(iris.data[:, 1], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[0, 1].set_title('Sepal width versus Petal Length')
axarr[1, 0].scatter(iris.data[:, 2], iris.data[:, 3],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[1, 0].set_title('Petal length versus width')
axarr[1, 1].scatter(iris.data[:, 0], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[1, 1].set_title('Sepal length versus Petal length')
plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False);
#Instantiate and fit the model here | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
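One possible way to complete the exercise cell above (a sketch; it reuses the iris feature matrix X defined earlier and sets n_clusters=3 to match the three species):

km_iris = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0)
iris_clusters = km_iris.fit_predict(X)
print(iris_clusters[:10])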
Regression | x=np.arange(100)
eps=50*np.random.randn(100)
y=2*x+eps
plt.scatter(x,y)
plt.xlabel("X")
plt.ylabel("Y")
from sklearn.linear_model import LinearRegression
model=LinearRegression(normalize=True)
X=x[:,np.newaxis]
model.fit(X,y)
X_fit=x[:,np.newaxis]
y_pred=model.predict(X_fit)
plt.scatter(x,y)
plt.plot(X_fit,y_pred,linewidth=2)
plt.xlabel("X")
plt.ylabel("Y")
print model.coef_
print model.intercept_
#So a unit change in X is associated with a ___ change in Y. | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
Exercise 3
Linear Regression over a multi-dimensional data set. The data shows the advertising expenditure on TV, radio, and print media, versus the change in sales of the product. | import pandas as pd
data=pd.read_csv('addata.csv', index_col=0)
data.head(5)
#from sklearn.linear_model import LinearRegression
from sklearn import linear_model
clf=linear_model.LinearRegression()
feature_cols=["TV","Radio","Newspaper"]
X=data[feature_cols]
y=data["Sales"]
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
#Fit the model and print the coefficients here
#Make predictions for the test dataset here
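# One possible completion of the two placeholder steps above (an illustrative sketch):
clf.fit(X_train, y_train)
print(clf.intercept_)
print(clf.coef_)
y_pred = clf.predict(X_test)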
from sklearn import metrics
print np.sqrt(metrics.mean_squared_error(y_test,y_pred)) #RMSE | session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb | stanfordhpccenter/datatutorial | mit |
Define the backend (implemented here: caffe and torch) | backend = 'caffe'
Load a caffe model | if backend == 'caffe':
# make sure pycaffe is in your system path
caffe_root = os.getenv("HOME") + '/caffe/'
sys.path.insert(0, caffe_root + 'python')
# Load CaffeAdapter class
from emu.caffe import CaffeAdapter
# Define the path to .caffemodel, deploy.prototxt and mean.npy
# Here we use the pretrained CaffeNet from the Caffe model zoo
model_fp = caffe_root + 'models/bvlc_reference_caffenet/'
weights_fp = model_fp + 'bvlc_reference_caffenet.caffemodel'
prototxt_fp = model_fp + 'deploy.prototxt'
mean_fp = caffe_root + 'data/ilsvrc12/ilsvrc_2012_mean.npy'
# Alternatively, we could also define the mean as a numpy array:
# mean = np.array([104.00698793, 116.66876762, 122.67891434])
adapter = CaffeAdapter(prototxt_fp, weights_fp, mean_fp) | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
Load a torch model | if backend == 'torch':
# Load TorchAdapter class
from emu.torch import TorchAdapter
# Define the path to the model file where the file can be a torch7 or pytorch model.
# Torch7 models are supported but not well tested.
model_fp = 'models/resnet-18.t7'
# Alternatively, we can use pretrained torchvision models (see README).
# model_fp = 'resnet18'
# Define mean and std
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
# Alternatively, we could also pass a .t7 file path to the constructor
# Define the image input size to the model with order:
# Channels x Height x Width
input_size = (3, 224, 224)
adapter = TorchAdapter(model_fp, mean, std, input_size) | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
Load available layers and their types | layer_types = adapter.get_layers()
for lname, ltype in layer_types.items():
print('%s:\t%s' % (lname, ltype)) | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
Select convolutional layers | conv_layers = [lname for lname, ltype in layer_types.items() if 'conv' in ltype.lower()] | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
2. Forward images through network
Define path to a directory containing images and run them through the network | images_dp = 'images/'
files = os.listdir(images_dp)
# Filter for jpeg extension
image_files = [os.path.join(images_dp, f) for f in files if f.endswith('.jpg')]
# Run in batched fashion
batch_size = 32
# As we run in batch mode, we have to store the intermediate layer outputs
layer_outputs = OrderedDict()
for layer in conv_layers:
layer_outputs[layer] = []
for i in range(0, len(image_files), batch_size):
image_list = image_files[i:(i+batch_size)]
# Forward batch through network
# The adapter takes care of loading images and transforming them to the right format.
# Alternatively, we could load and transform the images manually and pass a list of numpy arrays.
batch = adapter.preprocess(image_list)
adapter.forward(batch)
# Save a copy of the outputs of the convolutional layers.
for layer in conv_layers:
output = adapter.get_layeroutput(layer).copy()
layer_outputs[layer].append(output)
# Concatenate batch arrays to single outputs
for name, layer_output in layer_outputs.items():
layer_outputs[name] = np.concatenate(layer_output, axis=0) | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
3. Calculate summary statistics
Estimate mean and standard deviation per layer | means = [output.mean() for output in layer_outputs.values()]
stds = [output.std() for output in layer_outputs.values()]
plt.plot(means)
plt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)
plt.title('Convolution output mean over network depth');
plt.xlabel('Layer');
plt.plot(stds)
plt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)
plt.title('Convolution output std over network depth');
plt.xlabel('Layer'); | examples/summary_statistics.ipynb | mlosch/nnadapter | mit |
Create an example dataframe | raw_data = {'geo': ['40.0024, -105.4102', '40.0068, -105.266', '39.9318, -105.2813', np.nan]}
df = pd.DataFrame(raw_data, columns = ['geo'])
df | python/pandas_split_lat_and_long_into_variables.ipynb | tpin3694/tpin3694.github.io | mit |
Split the geo variable into separate lat and lon variables | # Create two lists for the loop results to be placed
lat = []
lon = []
# For each row in a varible,
for row in df['geo']:
# Try to,
try:
# Split the row by comma and append
# everything before the comma to lat
lat.append(row.split(',')[0])
# Split the row by comma and append
# everything after the comma to lon
lon.append(row.split(',')[1])
# But if you get an error
except:
# append a missing value to lat
lat.append(np.NaN)
# append a missing value to lon
lon.append(np.NaN)
# Create two new columns from lat and lon
df['latitude'] = lat
df['longitude'] = lon | python/pandas_split_lat_and_long_into_variables.ipynb | tpin3694/tpin3694.github.io | mit |
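As a side note, the same split can be done without the explicit loop by using pandas' vectorized string methods (a sketch, not part of the original recipe; missing values simply stay NaN):

latlon = df['geo'].str.split(',', expand=True)
df['latitude'] = latlon[0].str.strip()
df['longitude'] = latlon[1].str.strip()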
View the dataframe | df | python/pandas_split_lat_and_long_into_variables.ipynb | tpin3694/tpin3694.github.io | mit |
3 DOF System
<img src="bending.svg" style="width:100%">
In the figure above
<ol type='a'>
<li> the system under investigation, with the two supported masses and
the dynamical degrees of freedom that describe the system deformation
(top left);
<li> the three diagrams of bending moment (in red positive bending moments,
in blue negative ones) that derive from application of external unit
forces, corresponding to each of the three degrees of freedom.
</ol>
The same bending moments are represented in the following data structure as first-degree polynomials p((linear_coefficient, constant_coefficient)). Each row corresponds to a load condition; within each row, the first four terms correspond to the segments of length L on the horizontal part, from left to right (1, 2, 3) and from right to left (4), while the fifth corresponds to the vertical part, from top to bottom. | bm = [[p(( 1, 0)), p(( 1, 1)), p(( 1, 2)), p(( 3, 0)), p(( 0, 0))],
[p(( 0, 0)), p(( 0, 0)), p(( 1, 0)), p(( 1, 0)), p(( 0, 0))],
[p(( 0, 0)), p(( 0,-1)), p(( 0,-1)), p((-1, 0)), p((-1, 0))]] | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
To compute the flexibilities we sum the integrals of the products of bending moments on each of the five spans of unit length that we are using and place the results in a 2D data structure that is eventually converted to a matrix by np.mat. | F = np.mat([[sum(polyint(bm0[i]*bm1[i])(1) for i in range(5))
for bm1 in bm] for bm0 in bm])
print('F = 1/6 * L^3/EJ *')
print(F*6) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
we invert the flexibility matrix to obtain the stiffness matrix | K = F.I
print('K = 3/136 * EJ/L^3 *')
print(K*136/3) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
and eventually we define the mass matrix | M = np.mat(np.eye(3)) ; M[2,2]=2
print('M = m *')
print (M)
evals, evecs = eigh(K,M)
print("Eigenvalues, w_0^2 *", evals)
for i in range(3):
if evecs[0,i]<0: evecs[:,i]*=-1
print("Matrix of mass normalized eigenvectors,")
print(evecs) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
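As a quick sanity check (not in the original notebook), the mass-normalized eigenvectors returned by the generalized eigensolver should diagonalize both matrices, giving unit modal masses and modal stiffnesses equal to the eigenvalues:

print(np.allclose(np.dot(evecs.T, np.dot(M, evecs)), np.eye(3)))
print(np.allclose(np.dot(evecs.T, np.dot(K, evecs)), np.diag(evals)))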
The Load
The load is $F_0\,\boldsymbol{r}\,f(t)$ with $F_0 = \delta EJ/L^3$, $\boldsymbol{r}=\begin{Bmatrix}1&0&0\end{Bmatrix}^T$ and
$f(t) = 2\sin^2(\omega_0t/2)=1-\cos(\omega_0t)$ for $0\le \omega_0 t\le 2\pi$ while $f(t)=0$ otherwise. | pi = np.pi
t1 = np.linspace(0,2*pi,601)
plt.plot(t1,1-np.cos(t1))
plt.xlabel(r'$\omega_0t$', size=20)
plt.ylabel(r'$p(t)\,\frac{L^3}{\delta\,EJ}$', size=20)
plt.xlim((0,2*pi))
plt.ylim((-0.05,2.05))
plt.xticks((0,pi/2,pi,pi*1.5,2*pi),
(r'$0$', r'$\pi/2$', r'$\pi$', r'$3\pi/2$', r'$2\pi$'), fontsize=20)
plt.title('The normalized load')
plt.show() | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
The Particular Integrals
For our load, each modal equation of motion can be written as
\begin{align}
m \ddot q_i + m \Lambda_i^2\omega_0^2 q_i &=
\delta\frac{EJ}{L^3}\boldsymbol\psi_i^T\boldsymbol{r}\,
(1-\cos(\omega_0t))\Rightarrow\\
\ddot q_i + \Lambda_i^2\omega_0^2 q_i &= G_i \delta\omega_0^2 \,
(1-\cos(\omega_0t))
\end{align}
with $G_i = \boldsymbol\psi_i^T\boldsymbol{r}.$
With $\xi_i = C_i + D_i \cos(\omega_0 t)$, substituting in the equation of motion and considering separately the constant terms and the cosine terms, with appropriate simplifications we have
\begin{align}
\Lambda_i^2\,C_i &= +G_i \, \delta\\
(\Lambda_i^2-1) \, D_i &= -G_i\,\delta
\end{align}
and consequently
$$ C_i = +\delta\,\frac{\boldsymbol\psi_i^T\boldsymbol{r}}{\Lambda^2_i},\qquad
D_i = -\delta\,\frac{\boldsymbol\psi_i^T\boldsymbol{r}}{\Lambda^2_i-1}.$$ | r = np.array((1,0,0))
w = np.sqrt(evals)
C = np.dot(evecs.T,r)/evals
D = np.dot(evecs.T,r)/(1-evals)
display(Latex(r'\begin{align}' +
r'\\'.join(r"""
\frac{\xi_%d(t)}\delta &= %+g %+g \cos(\omega_0 t),
&& \text{for } 0 \le \omega_0 t \le 2\pi.
""" % (i+1,C[i],D[i]) for i in range(3)) +
r'\end{align}'))
for i in 0, 1, 2:
plt.plot(t1, C[i]+D[i]*np.cos(t1), label=r'$\xi_%d(t)$'%(i+1))
plt.xlabel(r'$\omega_0t$', size=20)
plt.ylabel(r'$\xi/\delta$', size=20)
plt.legend(loc=0, ncol=3)
plt.xlim((0,2*pi))
plt.xticks((0,pi/2,pi,pi*1.5,2*pi),
(r'$0$', r'$\pi/2$', r'$\pi$', r'$3\pi/2$', r'$2\pi$'))
plt.title('The particular integrals, mode by mode')
plt.show() | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
Modal Responses
With respect to the forced phase, the modal responses have the generic expression
\begin{align}
q_i(t) & = A_i\cos(\Lambda_i\omega_0t)
+ B_i\sin(\Lambda_i\omega_0t) + C_i + D_i\cos(\omega_0t),\\
\dot q_i(t) & = \Lambda_i\omega_0 \left(
B_i\cos(\Lambda_i\omega_0t) - A_i\sin(\Lambda_i\omega_0t) \right) -
\omega_0 D_i \sin(\omega_0t),
\end{align}
and we can write, for the specified initial rest conditions, that
$$ A_i + C_i + D_i = 0, \qquad B_i = 0$$
hence
\begin{align}
q_i(t) & = (1-\cos(\Lambda_i\omega_0t)) C_i
+ (\cos(\omega_0t) - \cos(\Lambda_i\omega_0t)) D_i,\
{\dot q}_i(t) & = \Lambda_i\omega_0 (C_i+D_i) \sin(\Lambda_i\omega_0t) -
\omega_0 D_i \sin(\omega_0t).
\end{align} | A = -C - D
L = np.sqrt(evals)
t1 = np.linspace(0,2*pi,601)
q1 = [A[i]*np.cos(L[i]*t1) + C[i] + D[i]*np.cos(t1) for i in (0,1,2)]
display(Latex(r'\begin{align}' +
r'\\'.join(r"""
\frac{q_%d(t)}\delta &= %+g %+g \cos(\omega_0 t) %+g \cos(%g\omega_0t), &&
\text{for } 0 \le \omega_0 t \le 2\pi.
""" % (i+1,C[i],D[i],A[i],L[i]) for i in range(3)) +
r'\end{align}')) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
With respect to the free response phase, $2\pi \le \omega_0t$, writing
$$
q^*_i(t) = A^*_i \cos(\Lambda_i\omega_0t) + B^*_i \sin(\Lambda_i\omega_0t)
$$
imposing the continuity of modal displacements and modal velocities we have
\begin{align}
q_i(t_1) &= A^*_i \cos(\Lambda_i\omega_0t_1) + B^*_i \sin(\Lambda_i\omega_0t_1)\\
\dot q_i(t_1) &= \big(
B^*_i \cos(\Lambda_i\omega_0t_1) - A^*_i \sin(\Lambda_i\omega_0t_1)
\big) \Lambda_i\omega_0
\end{align}
that gives
\begin{align}
A^*_i &= \frac{q_i(t_1)\Lambda_i\omega_0\cos(\Lambda_i\omega_0t_1) - \dot q_i(t_1)\sin(\Lambda_i\omega_0t_1)}{\Lambda_i\omega_0} \\
B^*_i &= \frac{q_i(t_1)\Lambda_i\omega_0\sin(\Lambda_i\omega_0t_1) + \dot q_i(t_1)\cos(\Lambda_i\omega_0t_1)}{\Lambda_i\omega_0}
\end{align} | ct1 = np.cos(L*2*pi)
st1 = np.sin(L*2*pi)
q0t1 = C + D*np.cos(2*pi) + A*ct1
q1t1 = - D*np.sin(2*pi) - A*st1*L
print(q0t1, q1t1)
As = (q0t1*L*ct1 - q1t1*st1)/L
Bs = (q0t1*L*st1 + q1t1*ct1)/L
print(As*ct1+Bs*st1, L*(Bs*ct1-As*st1))
t2 = np.linspace(2*pi, 4*pi, 601)
q2 = [As[i]*np.cos(L[i]*t2) + Bs[i]*np.sin(L[i]*t2) for i in (0,1,2)]
display(Latex(r'\begin{align}' +
r'\\'.join(r"""
\frac{q^*_%d(t)}\delta &= %+g \cos(%g\omega_0 t) %+g \sin(%g\omega_0t), &&
\text{for } 2\pi \le \omega_0 t.
""" % (i+1, As[i], L[i], Bs[i], L[i]) for i in range(3)) +
r'\end{align}')) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
Plotting the modal responses
Let's plot the modal responses, first one by one, to appreciate the details of the single modal response | for i in (0,1,2):
plt.plot(t1/pi,q1[i], color=l_colors[i],
label='$q_{%d}(t)$'%(i+1))
plt.plot(t2/pi,q2[i], color=l_colors[i])
plt.xlabel(r'$\omega_0t/\pi$', fontsize=18)
plt.ylabel(r'$q/\delta$', fontsize=18)
plt.legend(loc=0, fontsize=18)
plt.show() | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
then all of them in a single plot, to appreciate the relative magnitudes of the different modal responses | for i in (0,1,2):
plt.plot(t1/pi,q1[i], color=l_colors[i],
label='$q_{%d}(t)$'%(i+1))
plt.plot(t2/pi,q2[i], color=l_colors[i])
plt.xlabel(r'$\omega_0t/\pi$', fontsize=18)
plt.ylabel(r'$q/\delta$', fontsize=18)
plt.legend(loc=0, fontsize=18)
plt.show() | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
System Response in Natural Coordinates
We stack together the times and the modal responses for the forced and the free phases in two single vectors, then we compute the nodal response by premultiplying the modal response by the eigenvectors matrix | t = np.hstack((t1, t2))
q = np.hstack((q1, q2))
x = np.dot(evecs, q) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
Plotting of the natural coordinate responses
All of them in a single plot, as they have the same order of magnitude | for i in (0,1,2): plt.plot(t/pi,x[i],
label='$x_{%d}(t)$'%(i+1))
plt.xlabel(r'$\omega_0t/\pi$', fontsize=18)
plt.ylabel(r'$x/\delta$', fontsize=18)
plt.legend(loc=0, fontsize=18)
plt.show() | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
Final Displacements and Final Velocities
Say that $t_2=4\pi/\omega_0$, we compute the vectors of sines and cosines with different frequencies at $t_2$, then we compute the modal displacements and velocities (note that the dimensional velocities are these adimensional velocities multiplied by $\omega_0\,\delta$) and eventually we compute the nodal quantities by premultiplication by the eigenvectors matrix. | ct2 = np.cos(L*4*pi)
st2 = np.sin(L*4*pi)
q0t2 = As*ct2+Bs*st2 ; q1t2 = L*(Bs*ct2-As*st2)
display(Latex(r"$\boldsymbol x(t_2) = \{"+
",".join("%10.6f"%x for x in np.dot(evecs,q0t2))+
"\}\,\delta$"))
display(Latex(r"$\boldsymbol v(t_2) = \{"+
",".join("%10.6f"%x for x in np.dot(evecs,q1t2))+
"\}\,\omega_0\,\delta$")) | dati_2015/ha03/06_3_DOF_System.ipynb | boffi/boffi.github.io | mit |
Mission Fire Exploration
At the time that this was created, there is a lot of press going on right now about Mission district fires, and gossip that maybe it's the cause of landlords or some arsonist trying to get more money for older properties. This notebook captures some
initial thoughts about this.
This exploration troubles me, because I don't see an upside to producing this; however, I see quite a few downsides if I get this wrong.
This seems to be a very politically charged topic at the moment, and there are a lot of people who are claiming things and getting carried away with facts that may or may not be true.
I'm not saying that one side or another is more right or wrong, but I'm confident that in the end, the data will prevail.
In the meantime, just as part of this exploration, I was curious to see if I could verify some of the claims that are being put forth, and figure out whether there are some other explanations, and just wrap my head around the problem and the data being used. | query_url = 'https://data.sfgov.org/resource/wbb6-uh78.json?$order=close_dttm%20DESC&$offset={}&$limit={}'
# query_url = "https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'&$order=close_dttm%20DESC"
# query_url = "https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'"
offset = 0
limit = 1000000
df = pd.read_json(query_url.format(offset, limit))
# df = pd.read_json(query_url)
cols_to_drop = ["automatic_extinguishing_sytem_failure_reason",
"automatic_extinguishing_sytem_type",
"battalion",
"box",
"call_number",
"detector_effectiveness",
"detector_failure_reason",
"ems_personnel",
"ems_units",
"exposure_number",
"first_unit_on_scene",
"ignition_factor_secondary",
"mutual_aid",
"no_flame_spead",
"other_personnel",
"other_units",
"station_area",
"supervisor_district"]
df = df.drop(cols_to_drop, axis=1)
for col in df.columns:
if 'dttm' in col:
df[col] = pd.to_datetime(df[col])
df.alarm_dttm.min() # The earliest timestamp of this dataset is 2013-02-12 04:52:17
df.estimated_property_loss.value_counts(dropna=False)
df.shape
# So we have 100,000 rows of data, going all the way back to February 12, 2013
# There are thoughts that there's a correlation between year and cost, especially in the Mission
df[df.estimated_property_loss.isnull()].__len__()
# of the 100,000 rows, 96,335 are null
96335 / float(df.shape[0])
# wow, so where are these companies getting their data about the costs associated with fires?
# it's not from the sfgov website. we'll need to table that and come back later.
df['year'] = df.alarm_dttm.apply(lambda x: x.year)
temp_df = df[df.estimated_property_loss.notnull()]
temp_df.shape
temp_df.groupby('year').sum()['estimated_property_loss'] | notebooks/exploratory/0.8-mission-fire-exploration-revisited.ipynb | mikezawitkowski/fireRiskSF | mit |
Ch-Ch-Ch-Changes
Data which can be modified in place is called mutable, while data which cannot be modified is called immutable. Strings and numbers are immutable. This does not mean that variables with string or number values are constants, but when we want to change the value of a string or number variable, we can only replace the old value with a completely new value.
Lists and arrays, on the other hand, are mutable: we can modify them after they have been created. We can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in place or a function that returns a modified copy and leaves the original unchanged.
Be careful when modifying data in place. If two variables refer to the same list, and you modify the list value, it will change for both variables! If you want variables with mutable values to be independent, you must make a copy of the value when you assign it.
Because of pitfalls like this, code which modifies data in place can be more difficult to understand. However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code.
There are many ways to change the contents of lists besides assigning new values to individual elements: | odds.append(11)
print('odds after adding a value:', odds)
del odds[0]
print('odds after removing the first element:', odds)
odds.reverse()
print('odds after reversing:', odds) | 02-Python1/02-Python-1-Lists_Instructor.ipynb | OpenAstronomy/workshop_sunpy_astropy | mit |
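The pitfall mentioned above can be demonstrated directly (a short illustration, not part of the original lesson): two names that refer to the same list see every in-place change.

odds = [1, 3, 5, 7]
primes = odds              # both names now refer to the same list object
primes += [2]              # modifies that shared list in place
print('primes:', primes)   # primes: [1, 3, 5, 7, 2]
print('odds:', odds)       # odds: [1, 3, 5, 7, 2] -- odds changed too!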
This is because python stores a list in memory, and then can use multiple names to refer to the same list. If all we want to do is copy a (simple) list, we can use the list() command, so we do not modify a list we did not mean to: | odds = [1, 3, 5, 7]
primes = list(odds)
primes += [2]
print('primes:', primes)
print('odds:', odds) | 02-Python1/02-Python-1-Lists_Instructor.ipynb | OpenAstronomy/workshop_sunpy_astropy | mit |
First, create a set of views to limit the individual indicators to one record per county. The Ambry SQL parser is
very simplistic, and can't handle anything more than very simple joins. | w = b.warehouse('hci_counties')
w.clean()
print w.dsn
w.query("""
-- Get only counties in California
CREATE VIEW geo AS SELECT gvid, name AS county_name, geometry FROM census.gov-tiger-2015-counties
WHERE statefp = 6;
-- Get only records for all race/ethinicities
CREATE VIEW hf_total AS SELECT gvid, mrfei FROM cdph.ca.gov-hci-healthy_food-county
WHERE race_eth_name = 'Total';
-- Get only records for all race/ethinicities
CREATE VIEW aq_total AS SELECT gvid, pm25_concentration FROM cdph.ca.gov-hci-air_quality-county
WHERE race_eth_name = 'Total';
-- The poverty table has a lot of other categories, for report year and type of poverty
CREATE VIEW pr_total AS SELECT gvid, percent FROM cdph.ca.gov-hci-poverty_rate-county
WHERE race_eth_name = 'Total' AND reportyear='2008-2010' AND poverty='Overall';
""").close() | test/bundle_tests/build.example.com/classification/Using SQL JOINS.ipynb | CivicKnowledge/ambry | bsd-2-clause |
Now we can run a query to join the indicators. | sql="""
SELECT county_name, mrfei, pm25_concentration, percent as percent_poverty FROM geo as counties
JOIN hf_total ON hf_total.gvid = counties.gvid
JOIN aq_total ON aq_total.gvid = counties.gvid
JOIN pr_total ON pr_total.gvid = counties.gvid;
"""
df = w.dataframe(sql)
df.head()
df.corr() | test/bundle_tests/build.example.com/classification/Using SQL JOINS.ipynb | CivicKnowledge/ambry | bsd-2-clause |