markdown | code | path | repo_name | license |
---|---|---|---|---|
Returning indexes of selected values | from datetime import datetime as py_dtime
dt_x_index = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))
lin_y2 = LinearScale()
lc2_index = Lines(x=dates_actual, y=prices, scales={"x": dt_x_index, "y": lin_y2})
x_ax1 = Axis(label="Date", scale=dt_x_index)
x_ay2 = Axis(label=(symbol + " Price"), scale=lin_y2, orientation="vertical")
intsel_date = FastIntervalSelector(scale=dt_x_index, marks=[lc2_index])
db_date = HTML()
db_date.value = str(intsel_date.selected)
## Now, we define a function that will be called when the selectors are interacted with - a callback
def date_interval_change_callback(change):
db_date.value = str(change.new)
## Notice here that we call the observe on the Mark lc2_index rather than on the selector intsel_date
lc2_index.observe(date_interval_change_callback, names=["selected"])
fig_date_mark = Figure(
marks=[lc2_index],
axes=[x_ax1, x_ay2],
title="Fast Interval Selector Selected Indices Example",
interaction=intsel_date,
)
VBox([db_date, fig_date_mark]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Brush Selector
We can do the same with any type of selector | ## Defining a new Figure
dt_x_brush = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))
lin_y2_brush = LinearScale()
lc3_brush = Lines(x=dates_actual, y=prices, scales={"x": dt_x_brush, "y": lin_y2_brush})
x_ax_brush = Axis(label="Date", scale=dt_x_brush)
x_ay_brush = Axis(label=(symbol + " Price"), scale=lin_y2_brush, orientation="vertical")
db_brush = HTML(value="[]")
brushsel_date = BrushIntervalSelector(
scale=dt_x_brush, marks=[lc3_brush], color="FireBrick"
)
## Now, we define a function that will be called when the selectors are interacted with - a callback
def date_brush_change_callback(change):
db_brush.value = str(change.new)
lc3_brush.observe(date_brush_change_callback, names=["selected"])
fig_brush_sel = Figure(
marks=[lc3_brush],
axes=[x_ax_brush, x_ay_brush],
title="Brush Selector Selected Indices Example",
interaction=brushsel_date,
)
VBox([db_brush, fig_brush_sel]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Scatter Chart Selectors
Brush Selector | date_fmt = "%m-%d-%Y"
sec2_data = price_data[symbol2].values
dates = price_data.index.values
sc_x = LinearScale()
sc_y = LinearScale()
scatt = Scatter(x=prices, y=sec2_data, scales={"x": sc_x, "y": sc_y})
sc_xax = Axis(label=(symbol), scale=sc_x)
sc_yax = Axis(label=(symbol2), scale=sc_y, orientation="vertical")
br_sel = BrushSelector(x_scale=sc_x, y_scale=sc_y, marks=[scatt], color="red")
db_scat_brush = HTML(value="[]")
## call back for the selector
def brush_callback(change):
db_scat_brush.value = str(br_sel.selected)
br_sel.observe(brush_callback, names=["brushing"])
fig_scat_brush = Figure(
marks=[scatt],
axes=[sc_xax, sc_yax],
title="Scatter Chart Brush Selector Example",
interaction=br_sel,
)
VBox([db_scat_brush, fig_scat_brush]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Brush Selector with Date Values | sc_brush_dt_x = DateScale(date_format=date_fmt)
sc_brush_dt_y = LinearScale()
scatt2 = Scatter(
x=dates_actual, y=sec2_data, scales={"x": sc_brush_dt_x, "y": sc_brush_dt_y}
)
br_sel_dt = BrushSelector(x_scale=sc_brush_dt_x, y_scale=sc_brush_dt_y, marks=[scatt2])
db_brush_dt = HTML(value=str(br_sel_dt.selected))
## call back for the selector
def brush_dt_callback(change):
db_brush_dt.value = str(br_sel_dt.selected)
br_sel_dt.observe(brush_dt_callback, names=["brushing"])
sc_xax = Axis(label=(symbol), scale=sc_brush_dt_x)
sc_yax = Axis(label=(symbol2), scale=sc_brush_dt_y, orientation="vertical")
fig_brush_dt = Figure(
marks=[scatt2],
axes=[sc_xax, sc_yax],
title="Brush Selector with Dates Example",
interaction=br_sel_dt,
)
VBox([db_brush_dt, fig_brush_dt]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Histogram Selectors | ## call back for selectors
def interval_change_callback(name, value):
db3.value = str(value)
## call back for the selector
def brush_callback(change):
if not br_intsel.brushing:
db3.value = str(br_intsel.selected)
returns = np.log(prices[1:]) - np.log(prices[:-1])
hist_x = LinearScale()
hist_y = LinearScale()
hist = Hist(sample=returns, scales={"sample": hist_x, "count": hist_y})
br_intsel = BrushIntervalSelector(scale=hist_x, marks=[hist])
br_intsel.observe(brush_callback, names=["selected"])
br_intsel.observe(brush_callback, names=["brushing"])
db3 = HTML()
db3.value = str(br_intsel.selected)
h_xax = Axis(
scale=hist_x, label="Returns", grid_lines="none", set_ticks=True, tick_format="0.2%"
)
h_yax = Axis(scale=hist_y, label="Freq", orientation="vertical", grid_lines="none")
fig_hist = Figure(
marks=[hist],
axes=[h_xax, h_yax],
title="Histogram Selection Example",
interaction=br_intsel,
)
VBox([db3, fig_hist]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Multi Selector
This selector provides the ability to have multiple brush selectors on the same graph.
The first brush works like a regular brush.
Ctrl + click creates a new brush, which works like the regular brush.
The active brush has a Green border while all the inactive brushes have a Red border.
Shift + click deactivates the current active brush. Now, click on any inactive brush to make it active.
Ctrl + Alt + Shift + click clears and resets all the brushes. | def multi_sel_callback(change):
if not multi_sel.brushing:
db4.value = str(multi_sel.selected)
line_x = LinearScale()
line_y = LinearScale()
line = Lines(
x=np.arange(100), y=np.random.randn(100), scales={"x": line_x, "y": line_y}
)
multi_sel = MultiSelector(scale=line_x, marks=[line])
multi_sel.observe(multi_sel_callback, names=["selected"])
multi_sel.observe(multi_sel_callback, names=["brushing"])
db4 = HTML()
db4.value = str(multi_sel.selected)
h_xax = Axis(scale=line_x, label="Returns", grid_lines="none")
h_yax = Axis(scale=line_y, label="Freq", orientation="vertical", grid_lines="none")
fig_multi = Figure(
marks=[line],
axes=[h_xax, h_yax],
title="Multi-Selector Example",
interaction=multi_sel,
)
VBox([db4, fig_multi])
# changing the names of the intervals.
multi_sel.names = ["int1", "int2", "int3"] | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Multi Selector with Date X | def multi_sel_dt_callback(change):
if not multi_sel_dt.brushing:
db_multi_dt.value = str(multi_sel_dt.selected)
line_dt_x = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
line_dt_y = LinearScale()
line_dt = Lines(
x=dates_actual, y=sec2_data, scales={"x": line_dt_x, "y": line_dt_y}, colors=["red"]
)
multi_sel_dt = MultiSelector(scale=line_dt_x, marks=[line_dt])
multi_sel_dt.observe(multi_sel_dt_callback, names=["selected"])
multi_sel_dt.observe(multi_sel_dt_callback, names=["brushing"])
db_multi_dt = HTML()
db_multi_dt.value = str(multi_sel_dt.selected)
h_xax_dt = Axis(scale=line_dt_x, label="Date", grid_lines="none")
h_yax_dt = Axis(
scale=line_dt_y, label="Price", orientation="vertical", grid_lines="none"
)
fig_multi_dt = Figure(
marks=[line_dt],
axes=[h_xax_dt, h_yax_dt],
title="Multi-Selector with Date Example",
interaction=multi_sel_dt,
)
VBox([db_multi_dt, fig_multi_dt]) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Lasso Selector | lasso_sel = LassoSelector()
xs, ys = LinearScale(), LinearScale()
data = np.arange(20)
line_lasso = Lines(x=data, y=data, scales={"x": xs, "y": ys})
scatter_lasso = Scatter(x=data, y=data, scales={"x": xs, "y": ys}, colors=["skyblue"])
bar_lasso = Bars(x=data, y=data / 2.0, scales={"x": xs, "y": ys})
xax_lasso, yax_lasso = Axis(scale=xs, label="X"), Axis(
scale=ys, label="Y", orientation="vertical"
)
fig_lasso = Figure(
marks=[scatter_lasso, line_lasso, bar_lasso],
axes=[xax_lasso, yax_lasso],
title="Lasso Selector Example",
interaction=lasso_sel,
)
lasso_sel.marks = [scatter_lasso, line_lasso]
fig_lasso
scatter_lasso.selected, line_lasso.selected | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Pan Zoom | xs_pz = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
ys_pz = LinearScale()
line_pz = Lines(
x=dates_actual, y=sec2_data, scales={"x": xs_pz, "y": ys_pz}, colors=["red"]
)
panzoom = PanZoom(scales={"x": [xs_pz], "y": [ys_pz]})
xax = Axis(scale=xs_pz, label="Date", grid_lines="none")
yax = Axis(scale=ys_pz, label="Price", orientation="vertical", grid_lines="none")
Figure(marks=[line_pz], axes=[xax, yax], interaction=panzoom) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Hand Draw | xs_hd = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
ys_hd = LinearScale()
line_hd = Lines(
x=dates_actual, y=sec2_data, scales={"x": xs_hd, "y": ys_hd}, colors=["red"]
)
handdraw = HandDraw(lines=line_hd)
xax = Axis(scale=xs_hd, label="Date", grid_lines="none")
yax = Axis(scale=ys_hd, label="Price", orientation="vertical", grid_lines="none")
Figure(marks=[line_hd], axes=[xax, yax], interaction=handdraw) | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
Unified Figure with All Interactions | dt_x = DateScale(date_format=date_fmt, min=py_dtime(2007, 1, 1))
lc1_x = LinearScale()
lc2_y = LinearScale()
lc2 = Lines(
x=np.linspace(0.0, 10.0, len(prices)),
y=prices * 0.25,
scales={"x": lc1_x, "y": lc2_y},
display_legend=True,
labels=["Security 1"],
)
lc3 = Lines(
x=dates_actual,
y=sec2_data,
scales={"x": dt_x, "y": lc2_y},
colors=["red"],
display_legend=True,
labels=["Security 2"],
)
lc4 = Lines(
x=np.linspace(0.0, 10.0, len(prices)),
y=sec2_data * 0.75,
scales={"x": LinearScale(min=5, max=10), "y": lc2_y},
colors=["green"],
display_legend=True,
labels=["Security 2 squared"],
)
x_ax1 = Axis(label="Date", scale=dt_x)
x_ax2 = Axis(label="Time", scale=lc1_x, side="top", grid_lines="none")
x_ay2 = Axis(label=(symbol + " Price"), scale=lc2_y, orientation="vertical")
fig = Figure(marks=[lc2, lc3, lc4], axes=[x_ax1, x_ax2, x_ay2])
## declaring the interactions
multi_sel = MultiSelector(scale=dt_x, marks=[lc2, lc3])
br_intsel = BrushIntervalSelector(scale=lc1_x, marks=[lc2, lc3])
index_sel = IndexSelector(scale=dt_x, marks=[lc2, lc3])
int_sel = FastIntervalSelector(scale=dt_x, marks=[lc3, lc2])
hd = HandDraw(lines=lc2)
hd2 = HandDraw(lines=lc3)
pz = PanZoom(scales={"x": [dt_x], "y": [lc2_y]})
deb = HTML()
deb.value = "[]"
## Call back handler for the interactions
def test_callback(change):
deb.value = str(change.new)
multi_sel.observe(test_callback, names=["selected"])
br_intsel.observe(test_callback, names=["selected"])
index_sel.observe(test_callback, names=["selected"])
int_sel.observe(test_callback, names=["selected"])
from collections import OrderedDict
selection_interacts = ToggleButtons(
options=OrderedDict(
[
("HandDraw1", hd),
("HandDraw2", hd2),
("PanZoom", pz),
("FastIntervalSelector", int_sel),
("IndexSelector", index_sel),
("BrushIntervalSelector", br_intsel),
("MultiSelector", multi_sel),
("None", None),
]
)
)
link((selection_interacts, "value"), (fig, "interaction"))
VBox([deb, fig, selection_interacts], align_self="stretch")
# Set the scales of lc4 to the ones of lc2 and check if panzoom pans the two.
lc4.scales = lc2.scales | examples/Interactions/Interaction Layer.ipynb | bloomberg/bqplot | apache-2.0 |
An Enum stands for an enumeration; it's a convenient way to define a fixed set of named values. Typing: | AccountType.SAVINGS | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
returns a Python representation of an enumeration. You can compare these account types: | AccountType.SAVINGS == AccountType.SAVINGS
AccountType.SAVINGS == AccountType.CHECKING | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
To get a string representation of an Enum, you can use: | AccountType.SAVINGS.name | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
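For reference, here is the definition of the AccountType enum used above (it is the same one written to the module file in Part 4):

```python
from enum import Enum

class AccountType(Enum):
    SAVINGS = 1
    CHECKING = 2
```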
Part 1: Create a BankAccount class with the following specification:
Constructor is BankAccount(self, owner, accountType) where owner is a string representing the name of the account owner and accountType is one of the AccountType enums
Methods withdraw(self, amount) and deposit(self, amount) to modify the account balance of the account
Override methods __str__ to write an informative string of the account owner and the type of account, and __len__ to return the balance of the account | class BankAccount():
def __init__(self,owner,accountType):
self.owner=owner
self.accountType=accountType
self.balance=0
def withdraw(self,amount):
if amount<0:
raise ValueError("amount<0")
if self.balance<amount:
raise ValueError("withdraw more than balance")
self.balance-=amount
def deposit(self,amount):
if amount<0:
raise ValueError("amount<0")
self.balance+=amount
def __str__(self):
return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name)
def __len__(self):
return self.balance
myaccount=BankAccount("zhaizhai",AccountType.CHECKING)
print(myaccount.balance)
| homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 2: Write a class BankUser with the following specification:
Constructor BankUser(self, owner) where owner is the name of the account.
Method addAccount(self, accountType) - to start, a user will have no accounts when the BankUser object is created. addAccount will add a new account to the user of the accountType specified. Only one savings/checking account per user, return appropriate error otherwise
Methods getBalance(self, accountType), deposit(self, accountType, amount), and withdraw(self, accountType, amount) for a specific AccountType.
Override __str__ to have an informative summary of user's accounts. | class BankUser():
def __init__(self,owner):
self.owner=owner
self.SavingAccount=None
self.CheckingAccount=None
def addAccount(self,accountType):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
self.SavingAccount=BankAccount(self.owner,accountType)
else:
print("more than one saving account!")
raise AttributeError("more than one saving account!")
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
self.CheckingAccount=BankAccount(self.owner,accountType)
else:
print("more than one checking account!")
raise AttributeError("more than one checking account!")
else:
print("no such account type!")
raise ValueError("no such account type!")
def getBalance(self,accountType):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.balance
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.balance
else:
print("no such account type!")
raise AttributeError("no such account type!")
def deposit(self,accountType,amount):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.deposit(amount)
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.deposit(amount)
else:
print("no such account type!")
raise AttributeError("no such account type!")
def withdraw(self,accountType,amount):
if accountType==AccountType.SAVINGS:
if self.SavingAccount==None:
print("saving account not exist")
raise AttributeError("saving account not exist")
else:
return self.SavingAccount.withdraw(amount)
elif accountType==AccountType.CHECKING:
if self.CheckingAccount==None:
print("checking account not exist")
raise AttributeError("checking account not exist")
else:
return self.CheckingAccount.withdraw(amount)
else:
print("no such account type!")
raise AttributeError("no such account type!")
def __str__(self):
s="owner:{!s}".format(self.owner)
if self.SavingAccount!=None:
s=s+"account type: Saving balance:{:.2f}".format(self.SavingAccount.balance)
if self.CheckingAccount!=None:
s=s+"account type: Checking balance:{:.2f}".format(self.CheckingAccount.balance)
return s
newuser=BankUser("zhaizhai")
print(newuser)
newuser.addAccount(AccountType.SAVINGS)
print(newuser)
newuser.deposit(AccountType.SAVINGS,2)
newuser.withdraw(AccountType.SAVINGS,1)
print(newuser)
newuser.withdraw(AccountType.CHECKING,1) | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Write some simple tests to make sure this is working. Think of edge scenarios a user might try to do.
Part 3: ATM Closure
Finally, we are going to rewrite a closure to use our bank account. We will make use of the input function which takes user input to decide what actions to take.
Write a closure called ATMSession(bankUser) which takes in a BankUser object. Return a method called Interface that when called, would provide the following interface:
First screen for user will look like:
Enter Option:
1)Exit
2)Create Account
3)Check Balance
4)Deposit
5)Withdraw
Pressing 1 will exit, any other option will show the options:
Enter Option:
1)Checking
2)Savings
If a deposit or withdraw was chosen, then there must be a third screen:
Enter Integer Amount, Cannot Be Negative:
This is to keep the code relatively simple. If you'd like, you can also curate the options depending on the BankUser object (for example, if the user has no accounts, then only show the Create Account option), but this is up to you. In any case, you must handle any input from the user in a reasonable way that an actual bank would be okay with, and give the user a proper response to the action specified.
Upon finishing a transaction or viewing balance, it should go back to the original screen | def ATMSession(bankUser):
def Interface():
option1=input("Enter Options:\
1)Exit\
2)Create Account\
3)Check Balance\
4)Deposit\
5)Withdraw")
if option1=="1":
Interface()
return
option2=input("Enter Options:\
1)Checking\
2)Saving")
if option1=="2":
if option2=="1":
bankUser.addAccount(AccountType.CHECKING)
Interface()
return
elif option2=="2":
bankUser.addAccount(AccountType.SAVINGS)
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="3":
if option2=="1":
print(bankUser.getBalance(AccountType.CHECKING))
Interface()
return
elif option2=="2":
print(bankUser.getBalance(AccountType.SAVINGS))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="4":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.deposit(AccountType.CHECKING,int(option3))
Interface()
return
elif option2=="2":
bankUser.deposit(AccountType.SAVINGS,int(option3))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="5":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.withdraw(AccountType.CHECKING,int(option3))
Interface()
return
elif option2=="2":
bankUser.withdraw(AccountType.SAVINGS,int(option3))
Interface()
return
else:
print("no such account type")
raise AttributeError("no such account type")
print("no such operation")
raise AttributeError("no such operation")
return Interface
myATM=ATMSession(newuser)
myATM()
print(newuser) | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 4: Put everything in a module Bank.py
We will be grading this problem with a test suite. Put the enum, classes, and closure in a single file named Bank.py. It is very important that the class and method specifications we provided are used (with the same capitalization), otherwise you will receive no credit. | %%file bank.py
from enum import Enum
class AccountType(Enum):
SAVINGS = 1
CHECKING = 2
class BankAccount():
def __init__(self,owner,accountType):
self.owner=owner
self.accountType=accountType
self.balance=0
def withdraw(self,amount):
if type(amount)!=int:
raise ValueError("not integer amount")
if amount<0:
raise ValueError("amount<0")
if self.balance<amount:
raise ValueError("withdraw more than balance")
self.balance-=amount
def deposit(self,amount):
if type(amount)!=int:
raise ValueError("not integer amount")
if amount<0:
raise ValueError("amount<0")
self.balance+=amount
def __str__(self):
return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name)
def __len__(self):
return self.balance
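# NOTE: ATMSession below takes a BankUser object, and Part 4 asks for the enum,
# classes, and closure in one file, so the module must also export BankUser.
# This is a compact re-implementation of the Part 2 class with the same public
# methods (addAccount, getBalance, deposit, withdraw).
class BankUser():
    def __init__(self, owner):
        self.owner = owner
        self.accounts = {}
    def addAccount(self, accountType):
        if accountType in self.accounts:
            raise AttributeError("account of this type already exists")
        self.accounts[accountType] = BankAccount(self.owner, accountType)
    def _account(self, accountType):
        if accountType not in self.accounts:
            raise AttributeError("account of this type does not exist")
        return self.accounts[accountType]
    def getBalance(self, accountType):
        return self._account(accountType).balance
    def deposit(self, accountType, amount):
        self._account(accountType).deposit(amount)
    def withdraw(self, accountType, amount):
        self._account(accountType).withdraw(amount)
    def __str__(self):
        s = "owner:{}".format(self.owner)
        for t, acc in self.accounts.items():
            s += " account type:{} balance:{:.2f}".format(t.name, acc.balance)
        return s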
def ATMSession(bankUser):
def Interface():
option1=input("Enter Options:\
1)Exit\
2)Create Account\
3)Check Balance\
4)Deposit\
5)Withdraw")
if option1=="1":
return
option2=input("Enter Options:\
1)Checking\
2)Saving")
if option1=="2":
if option2=="1":
bankUser.addAccount(AccountType.CHECKING)
return
elif option2=="2":
bankUser.addAccount(AccountType.SAVINGS)
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="3":
if option2=="1":
print(bankUser.getBalance(AccountType.CHECKING))
return
elif option2=="2":
print(bankUser.getBalance(AccountType.SAVINGS))
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="4":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.deposit(AccountType.CHECKING,int(option3))
return
elif option2=="2":
bankUser.deposit(AccountType.SAVINGS,int(option3))
return
else:
print("no such account type")
raise AttributeError("no such account type")
if option1=="5":
option3=input("Enter Interger Amount, Cannot be Negative:")
if option2=="1":
bankUser.withdraw(AccountType.CHECKING,int(option3))
return
elif option2=="2":
bankUser.withdraw(AccountType.SAVINGS,int(option3))
return
else:
print("no such account type")
raise AttributeError("no such account type")
print("no such operation")
raise AttributeError("no such operation")
return Interface | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Problem 2: Linear Regression Class
Let's say you want to create Python classes for three related types of linear regression: Ordinary Least Squares Linear Regression, Ridge Regression, and Lasso Regression.
Consider the multivariate linear model:
$$y = X\beta + \epsilon$$
where $y$ is a length $m$ vector, $X$ is an $m \times p$ matrix, and $\beta$
is a $p$ length vector of coefficients.
Ordinary Least Squares Linear Regression
OLS Regression seeks to minimize the following cost function:
$$\|y - X\beta\|^{2}$$
The best fit coefficients can be obtained by:
$$\hat{\beta} = (X^T X)^{-1}X^Ty$$
where $X^T$ is the transpose of the matrix $X$ and $(X^T X)^{-1}$ is the inverse of the matrix $X^T X$.
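As a quick sanity check of this closed form, here is a small hand-worked example (an illustrative sketch, not part of the assignment):

```python
import numpy as np

# three points lying exactly on y = 1 + 2x, with an intercept column in X
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta_hat)  # [1. 2.] -> intercept 1, slope 2
```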
Ridge Regression
Ridge Regression introduces an L2 regularization term to the cost function:
$$\|y - X\beta\|^{2}+\|\Gamma \beta\|^{2}$$
Where $\Gamma = \alpha I$ for some constant $\alpha$ and the identity matrix $I$.
The best fit coefficients can be obtained by:
$$\hat{\beta} = (X^T X+\Gamma^T\Gamma)^{-1}X^Ty$$
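Continuing the small sketch above, the ridge solution only adds $\Gamma^T\Gamma = \alpha^2 I$ inside the inverse (with an arbitrarily chosen $\alpha$):

```python
alpha = 0.1
beta_ridge = np.linalg.inv(X.T @ X + alpha**2 * np.eye(X.shape[1])) @ X.T @ y
print(beta_ridge)  # slightly shrunk relative to the OLS beta_hat
```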
Lasso Regression
Lasso Regression introduces an L1 regularization term and restricts the total number of predictor variables in the model.
The following cost function:
$$\min_{\beta_0,\, \beta} \left\{ \frac{1}{m} \left\| y - \beta_0 - X\beta \right\|_2^2 \right\} \text{ subject to } \|\beta\|_1 \leq \alpha.$$
does not have a nice closed form solution. For the sake of this exercise, you may use the sklearn.linear_model.Lasso class, which uses a coordinate descent algorithm to find the best fit. You should only use the class in the fit() method of this exercise (ie. do not re-use the sklearn for other methods in your class).
$R^2$ score
The $R^2$ score is defined as:
$${R^{2} = {1-{SS_E \over SS_T}}}$$
Where:
$$SS_T=\sum_i (y_i-\bar{y})^2, SS_R=\sum_i (\hat{y_i}-\bar{y})^2, SS_E=\sum_i (y_i - \hat{y_i})^2$$
where $y_i$ are the original data values, $\hat{y_i}$ are the predicted values, and $\bar{y}$ is the mean of the original data values.
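A direct translation of these definitions (a sketch assuming numpy arrays):

```python
import numpy as np

def r_squared(y, y_hat):
    ss_e = np.sum((y - y_hat) ** 2)       # sum of squared errors
    ss_t = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1 - ss_e / ss_t
```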
Part 1: Base Class
Write a class called Regression with the following methods:
$fit(X, y)$: Fits linear model to $X$ and $y$.
$get_params()$: Returns $\hat{\beta}$ for the fitted model. The parameters should be stored in a dictionary.
$predict(X)$: Predict new values with the fitted model given $X$.
$score(X, y)$: Returns $R^2$ value of the fitted model.
$set_params()$: Manually set the parameters of the linear model.
This parent class should throw a NotImplementedError for methods that are intended to be implemented by subclasses. | class Regression():
def __init__(self,X,y):
self.X=X
self.y=y
self.alpha=0.1
def fit(self,X,y):
raise NotImplementedError
def get_params(self):
return self.beta
def predict(self,X):
import numpy as np
return np.dot(X,self.beta)
def score(self,X,y):
import numpy as np
return 1-np.sum((y-self.predict(X))**2)/np.sum((y-np.mean(y))**2)
def set_params(self,alpha):
self.alpha=alpha | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 2: OLS Linear Regression
Write a class called OLSRegression that implements the OLS Regression model described above and inherits the Regression class. | class OLSRegression(Regression):
def fit(self):
import numpy as np
X=self.X
y=self.y
self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)
ols1=OLSRegression([[2],[3]],[[1],[2]])
ols1.fit()
ols1.predict([[2],[3]])
X=[[2],[3]]
y=[[1],[2]]
beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)
| homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 3: Ridge Regression
Write a class called RidgeRegression that implements Ridge Regression and inherits the OLSRegression class. | class RidgeRegression(OLSRegression):
def fit(self):
import numpy as np
X=np.asarray(self.X)
y=self.y
# Gamma = alpha * I, so Gamma^T Gamma = alpha^2 * I (an identity matrix, not a scalar added elementwise)
self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)+self.alpha**2*np.eye(X.shape[1])),np.transpose(X)),y)
return
ridge1=RidgeRegression([[2],[3]],[[1],[2]])
ridge1.fit()
ridge1.predict([[2],[3]])
ridge1.score([[2],[3]],[[1],[2]])
| homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 3: Lasso Regression
Write a class called LassoRegression that implements Lasso Regression and inherits the OLSRegression class. You should only use Lasso(), Lasso.fit(), Lasso.coef_, and Lasso.intercept_ from the sklearn.linear_model.Lasso class. | class LassoRegression(OLSRegression):
def fit(self):
from sklearn.linear_model import Lasso
myLs=Lasso(self.alpha)
myLs.fit(self.X,self.y)
self.beta=myLs.coef_.reshape((-1,1))
self.beta0=myLs.intercept_
return
def predict(self,X):
import numpy as np
return np.dot(X,self.beta)+self.beta0
lasso1=LassoRegression([[2],[3]],[[1],[2]])
lasso1.fit()
lasso1.predict([[2],[3]])
lasso1.score([[2],[3]],[[1],[2]])
from sklearn.linear_model import Lasso
myLs=Lasso(alpha=0.1)
myLs.fit([[2],[3]],[[1],[1]])
beta=np.array(myLs.coef_)
print(beta.reshape((-1,1)))
beta0=myLs.intercept_
print(beta0) | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Part 4: Model Scoring
You will use the Boston dataset for this part.
Instantiate each of the three models above. Using a for loop, fit (on the training data) and score (on the testing data) each model on the Boston dataset.
Print out the $R^2$ value for each model and the parameters for the best model using the get_params() method. Use an $\alpha$ value of 0.1.
Hint: You can consider using the sklearn.model_selection.train_test_split method to create the training and test datasets. | from sklearn.datasets import load_boston
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
import statsmodels.api as sm
import numpy as np
boston=load_boston()
boston_x=boston.data
boston_y=boston.target
kf=KFold(n_splits=2)
kf.get_n_splits(boston)
ols1_m=0
ridge1_m=0
lasso1_m=0
for train_index, test_index in kf.split(boston_x):
X_train, X_test = boston_x[train_index], boston_x[test_index]
y_train, y_test = boston_y[train_index], boston_y[test_index]
y_train=y_train.reshape(-1,1)
y_test=y_test.reshape(-1,1)
ols1=OLSRegression(sm.add_constant(X_train),y_train)
ols1.fit()
ols1_m+=ols1.score(sm.add_constant(X_test),y_test)
print("OLS score:",ols1.score(sm.add_constant(X_test),y_test))
ridge1=RidgeRegression(sm.add_constant(X_train),y_train)
ridge1.fit()
ridge1_m+=ridge1.score(sm.add_constant(X_test),y_test)
print("ridge score:",ridge1.score(sm.add_constant(X_test),y_test))
lasso1=LassoRegression(X_train,y_train)
lasso1.fit()
lasso1_m+=lasso1.score(X_test,y_test)
print("lasso score:",lasso1.score(X_test,y_test))
break
print(ols1_m,ridge1_m,lasso1_m)
ols1.get_params()
| homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
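As the hint above suggests, a single split via train_test_split could replace the KFold loop (a sketch with an assumed 50/50 split):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    boston_x, boston_y, test_size=0.5, random_state=0)
```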
Part 5: Visualize Model Performance
We can evaluate how the models perform for various values of $\alpha$. Calculate the $R^2$ scores for each model for $\alpha \in [0.05, 1]$ and plot the three lines on the same graph. To change the parameters, use the set_params() method. Be sure to label each line and add axis labels. | ols_r=[]
ridge_r=[]
lasso_r=[]
alpha_l=[]
for alpha_100 in range(5,100,5):
alpha=alpha_100/100
alpha_l.append(alpha)
for train_index, test_index in kf.split(boston_x):
X_train, X_test = boston_x[train_index], boston_x[test_index]
y_train, y_test = boston_y[train_index], boston_y[test_index]
y_train=y_train.reshape(-1,1)
y_test=y_test.reshape(-1,1)
ols1=OLSRegression(sm.add_constant(X_train),y_train)
ols1.set_params(alpha)
ols1.fit()
ols_r.append(ols1.score(sm.add_constant(X_test),y_test))
ridge1=RidgeRegression(sm.add_constant(X_train),y_train)
ridge1.set_params(alpha)
ridge1.fit()
ridge_r.append(ridge1.score(sm.add_constant(X_test),y_test))
lasso1=LassoRegression(X_train,y_train)
lasso1.set_params(alpha)
lasso1.fit()
lasso_r.append(lasso1.score(X_test,y_test))
break
import matplotlib.pyplot as plt
plt.plot(alpha_l,ols_r,label="linear regression")
plt.plot(alpha_l,ridge_r,label="ridge")
plt.plot(alpha_l,lasso_r,label="lasso")
plt.xlabel("alpha")
plt.ylabel("$R^{2}$")
plt.title("the relation of R squared with alpha")
plt.legend()
plt.show() | homeworks/HW6/HW6_finished.ipynb | crystalzhaizhai/cs207_yi_zhai | mit |
Co-authorship network
We start by building a mapping from authors to the set of identifiers of papers they authored. We'll be using Python's sets again for that purpose. | papers_of_author = defaultdict(set)
for (id, p) in Summaries.items():
for a in p.authors:
papers_of_author[a].add(id) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Let's try it out: | papers_of_author['Clauset A']
for id in papers_of_author['Clauset A']:
display_summary(id) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
We can now build a co-authorship network, that is a graph linking authors to the set of co-authors they have published with: | coauthors = defaultdict(set)
for p in Summaries.values():
for a in p.authors:
coauthors[a].update(p.authors)
# The code above results in each author being listed as having co-authored with himself/herself.
# We remove these self-references here:
for (a, ca) in coauthors.items():
ca.remove(a) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
And let's try it out again: | print(', '.join( coauthors['Clauset A'] )) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Now we can have a look at some basic statistics about our graph: | print('Number of nodes (authors): ', len(coauthors))
coauthor_rel_count = sum( len(c) for c in coauthors.values() )
print('Number of links (co-authorship relations): ', coauthor_rel_count) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
With this data at hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with: | plt.hist( x=[ len(ca) for ca in coauthors.values() ], bins=range(60) )
plt.xlabel('number of co-authors')
plt.ylabel('number of researchers')
plt.xlim(0,51); | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Citation network
Next, we can look at the citation network. We'll start by expanding our data about citations into two mappings:
papers_citing[id]: papers citing a given paper
cited_by[id]: papers cited by a given paper (in other words: its list of references)
papers_citing will give us the list of a node's incoming links, whereas cited_by will give us the list of its outgoing links. | papers_citing = Citations # no changes needed, this is what we are storing already in the Citations dataset
cited_by = defaultdict(list)
for ref, papers_citing_ref in papers_citing.items():
for id in papers_citing_ref:
cited_by[ id ].append( ref )
display_summary(24130474) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
As we are dealing with a subset of the data, papers_citing can contain references to papers outside of our subset. On the other hand, the way we created cited_by, it will only contain backward references from within our dataset, meaning that it is incomplete with respect to the whole dataset. Nevertheless, we can use this citation network on our subset of malaria-related papers to implement link analysis techniques.
Let us now look at an exemplary paper, let's say the one with identifier 24130474. We can use the cited_by mapping to retrieve its (incomplete) list of references: | paper_id = 24130474
refs = { id : Summaries[id].title for id in cited_by[paper_id] }
print(len(refs), 'references found for paper', paper_id)
refs | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
If we lookup the same paper in papers_citing, we now see that some of the cited papers are themselves in our dataset, but others are not (shown below as '??'): | { id : Summaries.get(id,['??'])[0] for id in papers_citing[paper_id] } | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Paper 25122340, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct some of its references. Below is the list of papers in our dataset cited by that paper: | paper_id2 = 25122340
refs2 = { id : Summaries[id].title for id in cited_by[paper_id2] }
print(len(refs2), 'references identified for the paper with id', paper_id2)
refs2 | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Now that we have a better understanding about the data we're dealing with, let us obtain again some basic statistics about our graph. | n = len(Ids)
print('Number of papers in our subset: %d (%.2f %%)' % (n, 100.0) )
with_citation = [ id for id in Ids if papers_citing[id] != [] ]
with_citation_rel = 100. * len(with_citation) / n
print('Number of papers cited at least once: %d (%.2f %%)' % (len(with_citation), with_citation_rel) )
isolated = set( id for id in Ids if papers_citing[id] == [] and id not in cited_by )
isolated_rel = 100. * len(isolated) / n
print('Number of isolated nodes: %d (%.2f %%)' % (len(isolated), isolated_rel) )
id_set = set( Ids )
citing_set = set( cited_by.keys() )
outsiders = citing_set - id_set # set difference
nodes = citing_set | id_set # set union
non_isolated = nodes - isolated # set difference
print('Overall number of nodes: %d (%.2f %%)' % (len(nodes), 100.0) )
non_isolated_rel = 100. * len(non_isolated) / len(nodes)
print('Number of non-isolated nodes: %d (%.2f %%)' % (len(non_isolated), non_isolated_rel) )
outsiders_rel = 100. * len(outsiders) / len(nodes)
print('Number of nodes outside our subset: %d (%.2f %%)' % ( len(outsiders), outsiders_rel ) )
all_citations = [ c for citing in papers_citing.values() for c in citing ]
outsider_citations = [ c for citing in papers_citing.values() for c in citing if c in outsiders ]
print('Overal number of links (citations): %d (%.2f %%)' % (len(all_citations), 100.0) )
outsider_citations_rel = 100. * len(outsider_citations) / len(all_citations)
print('Citations from outside the subset: %d (%.2f %%)' % (len(outsider_citations), outsider_citations_rel) ) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Let us now find which 10 papers are the most cited in our dataset. | citation_count_per_paper = [ (id, len(citations)) for (id,citations) in papers_citing.items() ]
sorted_by_citation_count = sorted(citation_count_per_paper, key=lambda i:i[1], reverse=True)
for (id, c) in sorted_by_citation_count[:10]:
display_summary(id, extra_text = 'Citation count: ' + str(c)) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Link Analysis for Search Engines
In order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will use NetworkX, a Python package for dealing with complex networks. You might have to install the NetworkX package first. | import networkx as nx
G = nx.DiGraph(cited_by) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph: | print(nx.info(G))
print('Directed graph:', nx.is_directed(G))
print('Density of graph:', nx.density(G)) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well: | G.add_nodes_from(isolated) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
And now we get slightly different values: | print(nx.info(G))
print('Directed graph:', nx.is_directed(G))
print('Density of graph:', nx.density(G)) | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Assignments
Your name: ...
Task 1
Plot the in-degree distribution (the distribution of the number of incoming links) for the citation network. What can you tell about the shape of this distribution, and what does this tell us about the network? | # Add your code here | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
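One possible starting point (a sketch reusing the papers_citing mapping and the plotting conventions from above):

```python
in_degrees = [ len(papers_citing[id]) for id in Ids ]
plt.hist(in_degrees, bins=range(50))
plt.xlabel('number of incoming citations (in-degree)')
plt.ylabel('number of papers')
```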
Answer: [Write your answer text here]
Task 2
Using the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below.
You can also use the pagerank_scipy implementation, which tends to be considerably faster than its regular pagerank counterpart (but you have to install the SciPy package for that). To print and compare PageRank values, you might want to use commands like print('%.6f' % var) to use regular decimal notation with a fixed number of decimal places. | # Add your code here
# print PageRank for paper 10399593
# print PageRank for paper 23863622 | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
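A possible sketch using the pagerank_scipy implementation mentioned above:

```python
pr = nx.pagerank_scipy(G)
print('%.6f' % pr[10399593])
print('%.6f' % pr[23863622])
```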
Task 3
Why do the two papers above have such different PageRank values? Write code below to investigate and show the cause of this, and then explain the cause of this difference based on the results generated by your code. | # Add your code here | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Answer: [Write your answer text here]
Task 4
Copy the scoring function score_ntn from Task 4 of mini-assignment 3. Rename it to score_ntn_pagerank and change its code to incorporate a paper's PageRank score in its final score, in addition to tf-idf. In other words, the new function should return a single value that is calculated based on both scores (PageRank and tf-idf). Explain your decision on how to combine the two scores. | # Add your code here
Answer: [Write your answer text here]
Task 5
Copy the query function query_ntn from Task 4 of mini-assignment 3. Rename it to query_ntn_pagerank and change the code to use our new scoring function score_ntn_pagerank from task 4 above. Demonstrate these functions with an example query that returns our paper 10399593 from above as the top result. | # Add your code here | 04_analysis.ipynb | VUInformationRetrieval/IR2016_2017 | gpl-2.0 |
Split this string:
s = "Hi there Sam!"
into a list. | s = "Hi there Sam!"
s.split() | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers. | planet = "Earth"
diameter = 12742
print("The diameter of {} is {} kilometers.".format(planet,diameter)) | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Given this nested list, use indexing to grab the word "hello" | lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0] | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky | d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3]
What is the main difference between a tuple and a list? | # Tuple is immutable
na = "[email protected]"
na.split("@")[1] | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Create a function that grabs the email website domain from a string in the form:
[email protected]
So for example, passing "[email protected]" would return: domain.com | def domainGet(name):
return name.split("@")[1]
domainGet('[email protected]') | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization. | def findDog(sentence):
    # lowercase first so 'Dog' also counts; returns False when absent
    return "dog" in sentence.lower().split()
findDog('Is there a dog here?') | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases. | def countDog(sentence):
    # count exact word matches, ignoring case
    return sentence.lower().split().count("dog")

countDog('This dog runs faster than the other dog dude!') | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad'] | seq = ['soup','dog','salad','cat','great']
list(filter(lambda word: word.startswith('s'), seq)) | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases. | def caught_speeding(speed, is_birthday):
    if is_birthday:
        speed -= 5  # on your birthday, 5 mph of extra allowance
    if speed <= 60:
        return "No Ticket"
    elif speed <= 80:
        return "Small Ticket"
    else:
        return "Big Ticket"
caught_speeding(81,False)
caught_speeding(81,True)
lst = ["7:00","7:30"]
| 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Great job! | lst
type(lst)
type(lst[1])
| 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | shashank14/Asterix | apache-2.0 |
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
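For comparison, a single transpose-convolution upsampling step could look like this (a sketch in the TF1-style layers API used in this notebook, assuming `encoded` is the 4x4x8 encoder output described above; the network below uses resize-then-convolve instead):

```python
# a stride of 2 doubles height and width: a 4x4x8 input becomes 8x8x8
upsampled = tf.layers.conv2d_transpose(encoded, filters=8, kernel_size=(3, 3),
                                       strides=(2, 2), padding='same',
                                       activation=tf.nn.relu)
```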
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. | learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'inputs')
targets_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding = 'same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding = 'same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding = 'same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) | autoencoder/Convolutional_Autoencoder.ipynb | chusine/dlnd | mit |
Training
As before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. | sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
if ii % 100 == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close() | autoencoder/Convolutional_Autoencoder.ipynb | chusine/dlnd | mit |
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. | learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding = 'same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding = 'same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding = 'same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 10
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
if ii % 100 == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost)) | autoencoder/Convolutional_Autoencoder.ipynb | chusine/dlnd | mit |
To test the different clustering methods we need sample data. The scikit-learn module has built-in functions to create it. We will use make_classification() to create a dataset of 1000 points with 2 clusters. | # define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, random_state=4)
# create scatter plot for samples from each class
for class_value in range(2):
# get row indexes for samples with this class
row_ix = where(y == class_value)
# create scatter of these samples
pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('The generated dataset')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show() | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
Now let's apply the different clustering algorithms on the dataset!
Affinity propagation
The method takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. | from sklearn.cluster import AffinityPropagation
from numpy import unique
# define the model
model = AffinityPropagation(damping=0.9)
# fit the model
model.fit(X)
# assign a cluster to each example
yhat = model.predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
# get row indexes for samples with this cluster
row_ix = where(yhat == cluster)
# create scatter of these samples
pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('Affinity propagation clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show()
| english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
Agglomerative clustering
It is a type of hierarchical clustering, which is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree. The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample.
Agglomerative clustering uses a bottom-up approach: each observation starts in its own cluster, and clusters are successively merged together. The merging continues until the desired number of clusters is achieved.
The merge strategy can follow one of several linkage criteria (see the sketch after this list); the chosen criterion:
- minimizes the sum of squared differences within all clusters
- minimizes the maximum distance between observations of pairs of clusters
- minimizes the average of the distances between all observations of pairs of clusters
- minimizes the distance between the closest observations of pairs of clusters
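These criteria correspond to scikit-learn's linkage parameter, roughly in the order listed ('ward', 'complete', 'average', 'single'). A sketch using the dataset X from before:

```python
from sklearn.cluster import AgglomerativeClustering

for linkage in ('ward', 'complete', 'average', 'single'):
    model = AgglomerativeClustering(n_clusters=2, linkage=linkage)
    yhat = model.fit_predict(X)
```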
To use agglomerative clustering the number of clusters has to be defined. | from sklearn.cluster import AgglomerativeClustering
# define the model
model = AgglomerativeClustering(n_clusters=2)
# fit model and predict clusters
yhat = model.fit_predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('Agglomerative clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show() | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
BIRCH
BIRCH clustering (Balanced Iterative Reducing and Clustering using
Hierarchies) involves constructing a tree structure from which cluster centroids are extracted.
BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best-quality clustering with the available resources. It was the first clustering algorithm to handle noise effectively. It is also effective on large datasets such as point clouds.
To use this method, the threshold and the number of clusters have to be defined. | from sklearn.cluster import Birch
model = Birch(threshold=0.01, n_clusters=2)
# fit the model
model.fit(X)
# assign a cluster to each example
yhat = model.predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('BIRCH clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show() | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
DBSCAN
DBSCAN clustering (Density-Based Spatial Clustering of Applications with Noise) involves finding high-density areas in the domain and expanding those areas of the feature space around them as clusters.
It can be used on large databases with good efficiency. Using DBSCAN is not complicated: it only requires the neighborhood radius (eps) and the minimum number of samples per neighborhood (min_samples), and the number of clusters is determined by the algorithm itself. | from sklearn.cluster import DBSCAN
from matplotlib import pyplot
# define the model
model = DBSCAN(eps=0.30, min_samples=9)
# fit model and predict clusters
yhat = model.fit_predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('DBSCAN clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show() | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
k-Means clustering
It is perhaps the most widely known clustering method. While creating the clusters, the algorithm tries to minimize the variance within each cluster.
To use it we have to define the number of clusters. | from sklearn.cluster import KMeans
# define the model
model = KMeans(n_clusters=2)
# fit the model
model.fit(X)
# assign a cluster to each example
yhat = model.predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('k-Means clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show()
| english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
There is a modified version of k-Means called Mini-Batch K-Means clustering. The difference between the two is that the modified version uses mini-batches of samples rather than the entire dataset, which makes it faster for large datasets and more robust to statistical noise; a sketch is shown below.
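A minimal sketch of the mini-batch variant (the batch_size value here is an assumption):

```python
from sklearn.cluster import MiniBatchKMeans

# define the model; batch_size controls how many samples are used per update
model = MiniBatchKMeans(n_clusters=2, batch_size=100)
model.fit(X)
yhat = model.predict(X)
```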
Mean shift clustering
The algorithm finds and adapts centroids based on the density of examples in the feature space.
To apply it, we don't have to define any parameters (the bandwidth is estimated automatically by default). | from sklearn.cluster import MeanShift
# define the model
model = MeanShift()
# fit model and predict clusters
yhat = model.fit_predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.title('Mean shift clustering')
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.show() | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
The main characteristics of the clustering algorithms
Task
- Test the different clustering algorithms on different datasets!
- Check and use scikit-learn's documentation to compare the algorithms!
Applying an ML-based clustering algorithm on a point cloud
The presented clustering methods can be useful when we would like to separate groups of points in a point cloud.
In most cases when we apply clustering to a point cloud the number of clusters is unknown, but as we have seen above there are several algorithms (like DBSCAN, OPTICS, mean shift) where the number of clusters doesn't have to be defined.
Therefore, in the following section we are going to apply one of these, the DBSCAN clustering algorithm, to separate the roof points of buildings.
First, let's download the point cloud! | !wget -q https://github.com/OSGeoLabBp/tutorials/raw/master/english/data_processing/lessons/code/barnag_roofs.ply | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
Let's install Open3D! | !pip install open3d -q | english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
After the installation, import the modules and display the point cloud! | import open3d as o3d
import numpy as np
from numpy import unique
from numpy import where
from sklearn.datasets import make_classification
from sklearn.cluster import DBSCAN
from matplotlib import pyplot
pc = o3d.io.read_point_cloud('barnag_roofs.ply',format='ply')
xyz = np.asarray(pc.points)
# display the point cloud
pyplot.scatter(xyz[:, 0], xyz[:, 1])
pyplot.title('The point cloud of the roofs')
pyplot.xlabel('y_EOV [m]')
pyplot.ylabel('x_EOV [m]')
pyplot.axis('equal')
pyplot.show()
'''
Optional 3D display of the point cloud:
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2])
ax.view_init(30, 70)
pyplot.show()
'''
# define the model
model = DBSCAN(eps=0.30, min_samples=100)
# fit model and predict clusters
yhat = model.fit_predict(xyz)
#print(yhat)
# retrieve unique clusters
clusters = unique(yhat)
print('Cluster labels: ' + str(clusters))
| english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
Let's plot the clusters found by DBSCAN and export each of them as a separate point cloud. | # Save each cluster as a separate point cloud
for cluster in clusters:
    # get row indexes for samples with this cluster
    row_ix = where(yhat == cluster)
    # create scatter of these samples
    pyplot.scatter(xyz[row_ix, 0], xyz[row_ix, 1], label=str(cluster)+' cluster')
    # export the clusters as a point cloud
    xyz_cluster = xyz[row_ix]
    pc_cluster = o3d.geometry.PointCloud()
    pc_cluster.points = o3d.utility.Vector3dVector(xyz_cluster)
    if cluster >= 0:
        o3d.io.write_point_cloud('cluster_' + str(cluster) + '.ply', pc_cluster)  # export .ply format
    else:
        o3d.io.write_point_cloud('noise.ply', pc_cluster)  # export noise
# show the plot
pyplot.title('Point cloud clusters')
pyplot.xlabel('y_EOV [m]')
pyplot.ylabel('x_EOV [m]')
pyplot.axis('equal')
pyplot.show()
| english/data_processing/lessons/ml_clustering.ipynb | OSGeoLabBp/tutorials | cc0-1.0 |
The autograd module has to be installed first: pip install autograd | import autograd.numpy as np  # Thinly-wrapped version of Numpy
import time  # needed for the timing calls below
from autograd import grad | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | eds-uga/csci4360-fa17 | mit
EX1, Normal Numpy | def tanh(x):
y = np.exp(-x)
return (1.0 - y) / (1.0 + y)
start = time.time()
grad_tanh = grad(tanh)
print ("Gradient at x = 1.0\n", grad_tanh(1.0))
end = time.time()
print("Operation time:\n", end-start) | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | eds-uga/csci4360-fa17 | mit |
EX2-1, Taylor approximation to sine function | def taylor_sine(x):
ans = currterm = x
i = 0
while np.abs(currterm) > 0.001:
currterm = -currterm * x**2 / ((2 * i + 3) * (2 * i + 2))
ans = ans + currterm
i += 1
return ans
start = time.time()
grad_sine = grad(taylor_sine)
print ("Gradient of sin(pi):\n", grad_sine(np.pi))
end = time.time()
print("Operation time:\n", end-start) | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | eds-uga/csci4360-fa17 | mit |
EX2-2, Second-order gradient | start = time.time()
#second-order
ggrad_sine = grad(grad_sine)
print ("Gradient of second-order:\n", ggrad_sine(np.pi))
end = time.time()
print("Operation time:\n", end-start) | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | eds-uga/csci4360-fa17 | mit |
EX3, Logistic Regression
A common use case for automatic differentiation is to train a probabilistic model.
A simple (but complete) example of specifying and training a logistic regression model for binary classification: | def sigmoid(x):
    return 0.5*(np.tanh(x) + 1)

def logistic_predictions(weights, inputs):
    # Outputs probability of a label being true according to logistic model.
    return sigmoid(np.dot(inputs, weights))

def training_loss(weights):
    # Training loss is the negative log-likelihood of the training labels.
    preds = logistic_predictions(weights, inputs)
    label_probabilities = preds * targets + (1 - preds) * (1 - targets)
    return -np.sum(np.log(label_probabilities))
# Build a toy dataset.
inputs = np.array([[0.52, 1.12, 0.77],
[0.88, -1.08, 0.15],
[0.52, 0.06, -1.30],
[0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])
# Define a function that returns gradients of training loss using autograd.
training_gradient_fun = grad(training_loss)
# Optimize weights using gradient descent.
weights = np.array([0.0, 0.0, 0.0])
print ("Initial loss:", training_loss(weights))
for i in range(100):
    weights -= training_gradient_fun(weights) * 0.01
print ("Trained loss:", training_loss(weights)) | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | eds-uga/csci4360-fa17 | mit |
Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.
Step 1 is new: Instead of loading the bokeh extension using hv.extension('bokeh'), we get a direct handle on a bokeh renderer using the hv.renderer function. This has to be done at the top of the script, to be sure that options declared are passed to the Bokeh renderer.
Step 3 is also new: instead of typing app to see the visualization as we would in the notebook, here we create a Bokeh document from it by passing the HoloViews object to the renderer.server_doc method.
Steps 1 and 3 are essentially boilerplate, so you can now use this simple skeleton to turn any HoloViews object into a fully functional, deployable Bokeh app!
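For reference, such a skeleton might look roughly like the sketch below (the toy curve is only a placeholder for your actual HoloViews object):

```python
import numpy as np
import holoviews as hv

renderer = hv.renderer('bokeh')                 # step 1: get a Bokeh renderer

# step 2: declare any HoloViews object (a toy curve as a stand-in)
app = hv.Curve((np.arange(10), np.random.randn(10)))

doc = renderer.server_doc(app)                  # step 3: create a Bokeh document
doc.title = 'HoloViews Bokeh app'
```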
Deploying the app
Assuming that you have a terminal window open with the hvtutorial environment activated, in the notebooks/ directory, you can launch this app using Bokeh Server:
bokeh serve --show apps/server_app.py
If you don't already have a favorite way to get a terminal, one way is to open it from within Jupyter, then make sure you are in the notebooks directory, and activate the environment using source activate hvtutorial (or activate tutorial on Windows). You can also open the app script file in the inbuilt text editor, or you can use your own preferred editor. | # Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve
# Tip: Refer to the previous notebook
| notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Iteratively building a bokeh app in the notebook
The above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it.
To illustrate this process, let's quickly go through such a workflow. As before we will set up our imports, load the extension, and load the taxi dataset: | import holoviews as hv
import geoviews as gv
import dask.dataframe as dd
from holoviews.operation.datashader import datashade, aggregate, shade
from bokeh.models import WMTSTileSource
hv.extension('bokeh', logo=False)
usecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'], usecols=usecols)
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
ddf = ddf.persist() | notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Next we define a Counter stream which we will use to select taxi trips by hour. | stream = hv.streams.Counter()
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
dmap = hv.DynamicMap(lambda counter: points.select(hour=counter%24).relabel('Hour: %s' % (counter % 24)),
streams=[stream])
shaded = datashade(dmap)
hv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]')
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = gv.WMTS(WMTSTileSource(url=url))
overlay = wmts * shaded | notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would in any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change "8888" to whatever port your Jupyter session is using, in case your URL bar doesn't say "localhost:8888/".
Then run this cell to launch the Bokeh app within this notebook: | renderer = hv.renderer('bokeh')
server = renderer.app(overlay, show=True, websocket_origin='localhost:8888') | notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input: | dmap.periodic(1) | notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
You can stop this ongoing process by clearing the cell displaying the app.
Now let's open the text editor again and turn this example into a separate app, which we can then launch using Bokeh Server separately from this notebook.
# Hint: Use hv.renderer and renderer.server_doc
# Note that you have to run periodic **after** creating the bokeh document
| notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Combining HoloViews with bokeh models
Now for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something like this:
<img src="./assets/tutorial_app.gif"></img>
This more complex app consists of several components:
A datashaded plot of points for the indicated hour of the day (selected with the slider widget)
A linked PointerX stream, to compute a cross-section
A set of custom bokeh widgets linked to the hour-of-day stream
We have already covered 1. and 2., so we will focus on 3., which shows how easily we can combine a HoloViews plot with custom Bokeh models. We will not look at the precise widgets in too much detail; instead, let's have a quick look at the callback defined for slider widget updates:
python
def slider_update(attrname, old, new):
    stream.event(hour=new)
Whenever the slider value changes this will trigger a stream event updating our plots. The second part is how we combine HoloViews objects and Bokeh models into a single layout we can display. Once again we can use the renderer to convert the HoloViews object into something we can display with Bokeh:
python
renderer = hv.renderer('bokeh')
plot = renderer.get_plot(hvobj, doc=curdoc())
The plot instance here has a state attribute that represents the actual Bokeh model, which means we can combine it into a Bokeh layout just like any other Bokeh model:
python
layout = layout([[plot.state], [slider, button]], sizing_mode='fixed')
curdoc().add_root(layout) | # Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot
# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))
# then use renderer.get_plot and hist_plot.state and add it to the layout
| notebooks/08-deploying-bokeh-apps.ipynb | ioam/scipy-2017-holoviews-tutorial | bsd-3-clause |
Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. | def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32 , [None, real_dim] )
inputs_z = tf.placeholder(tf.float32 , [None, z_dim] )
return inputs_real, inputs_z | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit |
Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
    # code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
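For reference, a one-line helper might look like this (a sketch; the default alpha value is an assumption):

```python
def leaky_relu(x, alpha=0.01):
    # output is x for positive inputs and alpha*x for negative inputs
    return tf.maximum(alpha * x, x)
```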
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. | def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out:
    '''
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)

        return out | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. | def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits:
    '''
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Single logit and sigmoid output
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)

        return out, logits | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier. | tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha) | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit |
Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. | # Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. | # Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
Training | batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples//batch_size):
            batch = mnist.train.next_batch(batch_size)

            # Get images, reshape and rescale to pass to D
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images*2 - 1

            # Sample random noise for G
            batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))

            # Run optimizers
            _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
            _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})

        # At the end of each epoch, get the losses and print them out
        train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
        train_loss_g = g_loss.eval({input_z: batch_z})

        print("Epoch {}/{}...".format(e+1, epochs),
              "Discriminator Loss: {:.4f}...".format(train_loss_d),
              "Generator Loss: {:.4f}".format(train_loss_g))
        # Save losses to view after training
        losses.append((train_loss_d, train_loss_g))

        # Sample from generator as we're training for viewing afterwards
        sample_z = np.random.uniform(-1, 1, size=(16, z_size))
        gen_samples = sess.run(
            generator(input_z, input_size, reuse=True),
            feed_dict={input_z: sample_z})
        samples.append(gen_samples)
        saver.save(sess, './checkpoints/generator.ckpt')

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f) | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. It looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    sample_z = np.random.uniform(-1, 1, size=(16, z_size))
    gen_samples = sess.run(
        generator(input_z, input_size, reuse=True),
        feed_dict={input_z: sample_z})
view_samples(0, [gen_samples]) | gan_mnist/Intro_to_GANs_Exercises.ipynb | tanmay987/deepLearning | mit
Generate distorted image
First, we generate a distorted image from an example line. | image = 1-ocrolib.read_image_gray("../tests/010030.bin.png")
image = interpolation.affine_transform(image,array([[0.5,0.015],[-0.015,0.5]]),offset=(-30,0),output_shape=(200,1400),order=0)
imshow(image,cmap=cm.gray)
print image.shape | doc/line-normalization.ipynb | zuphilip/ocropy | apache-2.0 |
Load Normalizer and measure the image | #reload(lineest)
mv = ocrolib.lineest.CenterNormalizer()
mv.measure(image)
print mv.r
plot(mv.center)
plot(mv.center+mv.r)
plot(mv.center-mv.r)
imshow(image,cmap=cm.gray) | doc/line-normalization.ipynb | zuphilip/ocropy | apache-2.0 |
Dewarp
The dewarping of the text line (first image) tries to find the center (blue curve) and then cuts out slices with a fixed radius around the center. See this illustration: <img width="50%" src="https://cloud.githubusercontent.com/assets/5199995/25406275/6905c7ce-2a06-11e7-89e0-ca740cd8a21c.png"/> | dewarped = mv.dewarp(image)
print dewarped.shape
imshow(dewarped,cmap=cm.gray)
imshow(dewarped[:,:320],cmap=cm.gray,interpolation='nearest') | doc/line-normalization.ipynb | zuphilip/ocropy | apache-2.0 |
Normalize
This will also dewarp the image but additionally normalize the image size (default x_height is 48). | normalized = mv.normalize(image,order=0)
print normalized.shape
imshow(normalized,cmap=cm.gray) | doc/line-normalization.ipynb | zuphilip/ocropy | apache-2.0 |
Objects
In Python, everything is an object! | # Creating a list
lst_num = ["Data", "Science", "Academy", "Nota", 10, 10]
# The list lst_num is an object, an instance of Python's list class
type(lst_num)
lst_num.count(10)
# We use the type function to check an object's type
print(type(10))
print(type([]))
print(type(()))
print(type({}))
print(type('a'))
# Creating a new type of object called Carro
class Carro(object):
    pass
# An instance of Carro
palio = Carro()
print(type(palio))
# Creating a class
class Estudantes:
    def __init__(self, nome, idade, nota):
        self.nome = nome
        self.idade = idade
        self.nota = nota
# Creating an object called Estudante1 from the Estudantes class
Estudante1 = Estudantes("Pele", 12, 9.5)
# Attribute of the Estudantes class, used by each object created from this class
Estudante1.nome
# Attribute of the Estudantes class, used by each object created from this class
Estudante1.idade
# Attribute of the Estudantes class, used by each object created from this class
Estudante1.nota
# Creating a class
class Funcionarios:
    def __init__(self, nome, salario):
        self.nome = nome
        self.salario = salario

    def listFunc(self):
        print("The employee's name is " + self.nome + " and the salary is R$" + str(self.salario))
# Creating an object called Func1 from the Funcionarios class
Func1 = Funcionarios("Obama", 20000)
# Using the class method
Func1.listFunc()
print("**** Usando atributos *****")
hasattr(Func1, "nome")
hasattr(Func1, "salario")
setattr(Func1, "salario", 4500)
hasattr(Func1, "salario")
getattr(Func1, "salario")
delattr(Func1, "salario")
hasattr(Func1, "salario") | Cap05/Notebooks/DSA-Python-Cap05-02-Objetos.ipynb | dsacademybr/PythonFundamentos | gpl-3.0 |
Step two
This time we will not send the values of $x$ to the kernel; instead, we will compute them on the fly from the formula:
$$ x = x_0 + i \frac{\Delta x}{N}$$ | import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
__global__ void sin1da(float *y)
{
int idx = threadIdx.x + blockDim.x*blockIdx.x;
float x = -3.0f+6.0f*float(idx)/blockDim.x;
y[idx] = sinf(powf(x,2.0f));
}
""")
Nx = 128
x = np.linspace(-3,3,Nx).astype(np.float32)
y = np.empty_like(x)
func = mod.get_function("sin1da")
func(cuda.Out(y),block=(Nx,1,1),grid=(1,1,1))
plt.plot(x,y,'r') | CUDA/iCSE_PR_map2d.ipynb | marcinofulus/ProgramowanieRownolegle | gpl-3.0 |
Step three
We will sample a function of two variables using a kernel launch that contains $N_x$ threads per block and $N_y$ blocks.
Pay particular attention to the lines:
int idx = threadIdx.x;
int idy = blockIdx.x;
which use the appropriate CUDA indices, and to the way the global index into the array of values, which is stored in row-major order, is computed:
int gid = idx + blockDim.x*idy; | import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
__global__ void sin2d(float *z)
{
int idx = threadIdx.x;
int idy = blockIdx.x;
int gid = idx + blockDim.x*idy;
float x = -4.0f+6.0f*float(idx)/blockDim.x;
float y = -3.0f+6.0f*float(idy)/gridDim.x;
z[gid] = sinf(powf(x,2.0f)+powf(y,2.0f));
}
""")
Nx = 128
Ny = 64
x = np.linspace(-4,2,Nx).astype(np.float32)
y = np.linspace(-3,3,Ny).astype(np.float32)
XX,YY = np.meshgrid(x,y)
z = np.zeros(Nx*Ny).astype(np.float32)
func = mod.get_function("sin2d")
func(cuda.Out(z),block=(Nx,1,1),grid=(Ny,1,1))
| CUDA/iCSE_PR_map2d.ipynb | marcinofulus/ProgramowanieRownolegle | gpl-3.0 |
Let's compare the results: | plt.contourf(XX,YY,z.reshape(Ny,Nx) )
plt.contourf(XX,YY,np.sin(XX**2+YY**2)) | CUDA/iCSE_PR_map2d.ipynb | marcinofulus/ProgramowanieRownolegle | gpl-3.0 |
Step four
This algorithm is not ideal, because the block size determines the size of the grid on which we sample the function.
Optimally, we would perform the operations in blocks of a fixed size, independent of the number of samples in a given region.
The example below uses a two-dimensional structure for both the block and the grid. We divide the threads so that each block covers a 4x4 square of samples. | import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
__global__ void sin2da(float *z)
{
int ix = threadIdx.x + blockIdx.x * blockDim.x;
int iy = threadIdx.y + blockIdx.y * blockDim.y;
int gid = ix + iy * blockDim.x * gridDim.x;
float x = -4.0f+6.0f*float(ix)/(blockDim.x*gridDim.x);
float y = -3.0f+6.0f*float(iy)/(blockDim.y*gridDim.y);
z[gid] = sinf(powf(x,2.0f)+powf(y,2.0f));
}
""")
block_size = 4
Nx = 32*block_size
Ny = 32*block_size
x = np.linspace(-4,2,Nx).astype(np.float32)
y = np.linspace(-3,3,Ny).astype(np.float32)
XX,YY = np.meshgrid(x,y)
z = np.zeros(Nx*Ny).astype(np.float32)
func = mod.get_function("sin2da")
func(cuda.Out(z),\
block=(block_size,block_size,1),\
grid=(Nx//block_size,Ny//block_size,1) )
plt.contourf(XX,YY,z.reshape(Ny,Nx) ) | CUDA/iCSE_PR_map2d.ipynb | marcinofulus/ProgramowanieRownolegle | gpl-3.0 |
Boat race
Given a river (say, a sinusoid), find the total length actually rowed over a given interval:
$$f(x) = A \sin x$$ | x = numpy.linspace(0, 4 * numpy.pi)
plt.plot(x, 2.0 * numpy.sin(x))
plt.title("River Sine")
plt.xlabel("x")
plt.ylabel("y")
plt.axis([0, 4*numpy.pi, -2, 2])
plt.show() | 0_intro_numerical_methods.ipynb | btw2111/intro-numerical-methods | mit |
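The length rowed is the arc length $L = \int_0^{4\pi} \sqrt{1 + f'(x)^2}\,dx$; a minimal numerical sketch (assuming $A = 2$ as in the plot, and the trapezoidal rule):

```python
# Numerical arc length of f(x) = A sin(x) over [0, 4*pi]
A = 2.0
x = numpy.linspace(0, 4 * numpy.pi, 1000)
f_prime = A * numpy.cos(x)                       # exact derivative of A*sin(x)
L = numpy.trapz(numpy.sqrt(1.0 + f_prime**2), x)
print("Approximate length rowed:", L)
```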