(2b) Transforming the data matrix into quantized vectors
The next step is to transform our RDD of sentences into an RDD of (id, quantized vector) pairs. To do so, we will create a function quantizador that takes as parameters the data point, the k-means model, the value of k, and the word2vec dictionary.
For each point, we separate the id and apply the tokenize function to the string. Next, we turn the list of tokens into a word2vec matrix. Finally, we apply each vector of that matrix to the k-means model, producing a vector of length $k$ in which each position $i$ counts how many tokens belong to cluster $i$.
# EXERCISE
def quantizador(point, model, k, w2v):
    key = <COMPLETAR>
    words = <COMPLETAR>
    matrix = np.array( <COMPLETAR> )
    features = np.zeros(k)
    for v in matrix:
        c = <COMPLETAR>
        features[c] += 1
    return (key, features)
quantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 500, w2v))
print(quantRDD.take(1))
# TEST Transforming the data matrix into quantized vectors (2b)
assert quantRDD.take(1)[0][1].sum() == 5, 'incorrect values'
Spark/Lab04.ipynb | folivetti/BIGDATA | mit
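The quantization idea can be sketched offline in plain NumPy, with a hypothetical two-centroid "model" standing in for the trained k-means predictor (all names, vectors, and centroids below are illustrative, not from the lab data):

```python
import numpy as np

# Plain-NumPy sketch of the quantization step; a mock two-centroid
# nearest-centroid function stands in for KMeansModel.predict.
def quantize(tokens, w2v, assign_cluster, k):
    matrix = np.array([w2v[t] for t in tokens if t in w2v])
    features = np.zeros(k)                 # one counter per cluster
    for v in matrix:
        features[assign_cluster(v)] += 1   # count tokens falling in each cluster
    return features

w2v = {'big': np.array([1.0, 0.0]),
       'data': np.array([0.9, 0.1]),
       'spark': np.array([0.0, 1.0])}
centroids = np.array([[1.0, 0.0], [0.0, 1.0]])
assign = lambda v: int(np.argmin(((centroids - v) ** 2).sum(axis=1)))

features = quantize(['big', 'data', 'spark'], w2v, assign, k=2)
print(features)  # [2. 1.]
```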
Basic Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [
go.Histogram(
x=x
)
]
py.iplot(data)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb | vadim-ivlev/STUDY | mit
Normalized Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [
go.Histogram(
x=x,
histnorm='probability'
)
]
py.iplot(data)
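For reference, `histnorm='probability'` simply reports each bin's count divided by the total number of samples; a NumPy equivalent of that normalization:

```python
import numpy as np

x = np.random.randn(500)
counts, edges = np.histogram(x, bins=20)
probs = counts / counts.sum()   # what histnorm='probability' reports per bin
print(probs.sum())              # 1.0 by construction
```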
Horizontal Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
y = np.random.randn(500)
data = [
go.Histogram(
y=y
)
]
py.iplot(data)
Overlaid Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
barmode='overlay'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Stacked Histograms
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0
)
trace2 = go.Histogram(
x=x1
)
data = [trace1, trace2]
layout = go.Layout(
barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Colored and Styled Histograms
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
histnorm='count',
name='control',
autobinx=False,
xbins=dict(
start=-3.2,
end=2.8,
size=0.2
),
marker=dict(
color='fuchsia',
line=dict(
color='grey',
width=0
)
),
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
name='experimental',
autobinx=False,
xbins=dict(
start=-1.8,
end=4.2,
size=0.2
),
marker=dict(
color='rgb(255, 217, 102)'
),
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
barmode='overlay',
bargap=0.25,
bargroupgap=0.3
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Import section specific modules:
pass
2_Mathematical_Groundwork/2_y_exercises.ipynb | griffinfoster/fundamentals_of_interferometry | gpl-2.0
2.y. Exercises<a id='math:sec:exercises'></a><!--\label{math:sec:exercises}-->
We provide a small set of exercises suitable for an interferometry course.
2.y.1. Fourier transforms and convolution: Fourier transform of the triangle function<a id='math:sec:exercises_fourier_triangle'></a><!--\label{math:sec:exercises_fourier_triangle}-->
Consider the triangle function given below. 
def plotviewgraph(fig, ax, xmin = 0, xmax = 1., ymin = 0., ymax = 1.):
    """
    Prepare a viewgraph for plotting a function
    Parameters:
    fig: Matplotlib figure
    ax: Matplotlib subplot
    xmin (float): Minimum of range
    xmax (float): Maximum of range
    ymin (float): Minimum of function
    ymax (float): Maximum of function
    return: axis and vertical and horizontal tick length
    """
    # Axis ranges
    ax.axis([xmin-0.1*(xmax-xmin), xmax+0.1*(xmax-xmin), -0.2*(ymax-ymin), ymax])
    ax.axis('off')
    # get width and height of axes object to compute, see https://3diagramsperpage.wordpress.com/2014/05/25/arrowheads-for-axis-in-matplotlib/
    # matching arrowhead length and width
    dps = fig.dpi_scale_trans.inverted()
    bbox = ax.get_window_extent().transformed(dps)
    width, height = bbox.width, bbox.height
    # manual arrowhead width and length
    hw = 1./15.*(ymax-ymin)
    hl = 1./30.*(xmax-xmin)
    lw = 1. # axis line width
    ohg = 0.3 # arrow overhang
    # compute matching arrowhead length and width
    yhw = hw/(ymax-ymin)*(xmax-xmin)* height/width
    yhl = hl/(xmax-xmin)*(ymax-ymin)* width/height
    # Draw arrows
    ax.arrow(xmin-0.1*(xmax-xmin),0, 1.2*(xmax-xmin),0, fc='k', ec='k', lw = lw,
             head_width=hw, head_length=hl, overhang = ohg,
             length_includes_head= True, clip_on = False)
    ax.arrow(0,ymin-0.1*(ymax-ymin), 0., 1.4*(ymax-ymin), fc='k', ec='k', lw = lw,
             head_width=yhw, head_length=yhl, overhang = ohg,
             length_includes_head= True, clip_on = False)
    # Draw ticks for A, -A, and B
    twv = 0.01*height # vertical tick width
    twh = twv*(xmax-xmin)/(ymax-ymin)/ width*height
    return twv, twh
def plottriangle():
    A = 1.
    B = 1.
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(111)
    twv, twh = plotviewgraph(fig, ax, xmin = -A, xmax = A, ymin = 0., ymax = B)
    ticx = [[-A,'-A'],[A,'A']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0], 0.-twh, tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ticy = [[B,'B']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.+twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'bottom', color = 'black')
    # Plot the function
    ax.plot([-A,0.,A],[0., B, 0.], 'r-', lw = 2)
    # Annotate axes
    ax.text(0.-twh, 1.2*(B), r'$f(x)$', fontsize = 24, horizontalalignment = 'right', verticalalignment = 'bottom', color = 'black')
    ax.text(1.2*B, 0., r'$x$', fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    # Show amplitude
    # plt.annotate(s='', xy=(mu+2*sigma,0.), xytext=(mu+2*sigma,a), \
    #              arrowprops=dict(color = 'magenta', arrowstyle='<->'))
    # ax.text(mu+2*sigma+sigma/10., a/2, '$a$', fontsize = 12, horizontalalignment = 'left', \
    #         verticalalignment = 'center', color = 'magenta')
plottriangle()
# <a id='math:fig:triangle'></a><!--\label{math:fig:triangle}-->
Figure 2.y.1: Triangle function with width $2A$ and amplitude $B$.<a id='math:fig:triangle'></a><!--\label{math:fig:triangle}-->
<b>Assignments:</b>
<ol type="A">
<li>What can you tell about the complex part of the Fourier transform of $f$ using the symmetry of the function?</li>
<li>Write down the function $f$ in two ways, once as a piece-wise defined function, once as a convolution of the rectangle function with itself.</li>
<li>Calculate the Fourier transform, making use of expressing f as a convolution of a boxcar function with itself and using the convolution theorem.</li>
</ol>
2.y.1.1 Fourier transform of the triangle function: example answer to assignment 1.<a id='math:sec:exercises_fourier_triangle_a'></a><!--\label{math:sec:exercises_fourier_triangle_a}-->
<b>What can you tell about the complex part and the symmetry of the Fourier transform of $f$ using the symmetry of the function?</b>
The function is real-valued ($f^*(x)\,=\,f(x)$) and even ($f(x)\,=\,f(-x)$), so it is Hermitian ($f^*(x)\,=\,f(-x)$, see definition here ➞ <!--\ref{math:sec:fourier_transforms_of_real_valued_and_hermetian_functions}-->). According to Sect. 2.4.6 ➞<!--\ref{math:sec:fourier_transforms_of_real_valued_and_hermetian_functions}-->, this means that the Fourier transform is a <b>real-valued</b> function (because it is the Fourier transform of a Hermitian function) and also Hermitian (because it is the Fourier transform of a real-valued function). Hence it is also <b>even</b> ($f^*(x)\,=\,f(x) \,\land\, f^*(x)\,=\,f(-x)\,\Rightarrow\,f(x)\,=\,f(-x)$). Real-valued means that the complex part of the Fourier transform of $f$ is $0$.
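This symmetry argument is easy to check numerically with a discrete Fourier transform; a quick sketch (not part of the original exercise, using an arbitrary real even test function):

```python
import numpy as np

# Sample a real-valued, even function (a Gaussian bump) on a periodic grid.
# Under the DFT convention, "even" means f[k] == f[(n - k) % n].
n = 256
k = np.arange(n)
f = np.exp(-0.01 * np.minimum(k, n - k) ** 2)

F = np.fft.fft(f)
print(np.abs(F.imag).max())      # ~1e-13: the spectrum is (numerically) real
F_flip = np.roll(F[::-1], 1)     # F[(n - k) % n]
print(np.abs(F - F_flip).max())  # ~0: the spectrum is even, too
```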
2.y.1.2 Fourier transform of the triangle function: example answer to assignment 2.<a id='math:sec:exercises_fourier_triangle_b'></a><!--\label{math:sec:exercises_fourier_triangle_b}-->
<b>Write down the function $f$ in two ways, once as a piece-wise defined function, once as a convolution of the rectangle function with itself.</b>
Part one is straightforward:
<a id='math:eq:y_001'></a><!--\label{math:eq:y_001}-->$$
\begin{align}
f(x) &= \left\{
\begin{array}{lll}
B-\frac{B}{A}|x| & {\rm for} & |x| \leq A\\
0 & {\rm for} & |x| > A
\end{array}\right.
\end{align}
$$
The solution to part two, using the definition as given in Sect. 2.4.6 ➞<!--\ref{math:sec:boxcar_and_rectangle_function}-->
<a id='math:eq:y_002'></a><!--\label{math:eq:y_002}-->
$$
\begin{align}
f(x) \,&=\,\frac{B}{A}\cdot \Pi_{-\frac{A}{2},\frac{A}{2}}\circ \Pi_{-\frac{A}{2},\frac{A}{2}}(x)\\
&=\,\frac{B}{A}\cdot\Pi_A\circ \Pi_A\,\,\, {\rm , where} \,\,\,\Pi_A(x) \,=\,\Pi\left(\frac{x}{A}\right)
\end{align}
$$
requires a little calculation, but is straightforward. Using the definition of the boxcar function ➞ <!--\ref{math:sec:boxcar_and_rectangle_function}--> and the definition of the convolution ➞ <!--\ref{math:sec:definition_of_the_convolution}-->, one can see:
<a id='math:eq:y_003'></a><!--\label{math:eq:y_003}-->
$$
\begin{align}
\Pi_{-\frac{A}{2},\frac{A}{2}}\circ \Pi_{-\frac{A}{2},\frac{A}{2}}(x)\,& =\, \int_{-\infty}^{\infty}\Pi_{-\frac{A}{2},\frac{A}{2}}(t)\,\Pi_{-\frac{A}{2},\frac{A}{2}}(x-t)\,dt\\
& =\, \int_{-\frac{A}{2}}^{\frac{A}{2}}\Pi_{-\frac{A}{2},\frac{A}{2}}(x-t)\,dt\\
& \underset{u\,=\,x-t}{=} \, \int_{u(-\frac{A}{2})}^{u(\frac{A}{2})}\Pi_{-\frac{A}{2},\frac{A}{2}}(u)\,\frac{dt}{du}\,du\\
& =\, \int_{x+\frac{A}{2}}^{x-\frac{A}{2}}\Pi_{-\frac{A}{2},\frac{A}{2}}(u)\cdot(-1)\,du\\
& =\, \int_{x-\frac{A}{2}}^{x+\frac{A}{2}}\Pi_{-\frac{A}{2},\frac{A}{2}}(u)\,du
\end{align}
$$
and, accordingly
<a id='math:eq:y_004'></a><!--\label{math:eq:y_004}-->
\begin{align}
|x| \,>\, A \,&\Rightarrow\,\Pi_{-\frac{A}{2},\frac{A}{2}}\circ \Pi_{-\frac{A}{2},\frac{A}{2}}(x)\, =\, 0\\
-A\,\leq\,x\,\leq 0\,&\Rightarrow \,\Pi_{-\frac{A}{2},\frac{A}{2}}\circ \Pi_{-\frac{A}{2},\frac{A}{2}}(x)\,=\,\int_{-\frac{A}{2}}^{x+\frac{A}{2}}du\,=\,A+x\\
0\,\leq\,x\,\leq A\,&\Rightarrow \,\Pi_{-\frac{A}{2},\frac{A}{2}}\circ \Pi_{-\frac{A}{2},\frac{A}{2}}(x)\,=\,\int_{x-\frac{A}{2}}^{\frac{A}{2}}du\,=\,A-x
\end{align}
Multiplied by the prefactor $\frac{B}{A}$, this is identical to the piece-wise definition above ⤵.
2.y.1.3 Fourier transform of the triangle function: example answer to assignment 3.<a id='math:sec:exercises_fourier_triangle_c'></a><!--\label{math:sec:exercises_fourier_triangle_c}-->
We know that (convolution theorem ➞<!--\ref{math:sec:convolution_theorem}-->, similarity theorem ➞<!--\ref{math:sec:similarity_theorem}-->, definition of the triangle function ⤵<!--\ref{math:eq:y_002}-->, Fourier transform of the boxcar function ➞<!--\ref{math:sec:convolution_theorem}-->):
<a id='math:eq:y_005'></a><!--\label{math:eq:y_005}-->$$
\begin{align}
\mathscr{F}\{h\circ g\}\,&=\,\mathscr{F}\{h\}\cdot\mathscr{F}\{g\}\\
g\,=\,h(ax) \,&\Rightarrow\, \mathscr{F}\{g\}(s) = \frac{1}{|a|}\mathscr{F}\{h\}\left(\frac{s}{a}\right)\\
f(x) \,&=\, \frac{B}{A}\Pi_A\circ\Pi_A(x)\\
\Pi_A(x)\,&=\,\Pi\left(\frac{x}{A}\right)\\
\mathscr{F}\{\Pi\}(s) \,&=\,{\rm sinc}(s)
\end{align}
$$
This makes our calculations a lot shorter.
<a id='math:eq:y_006'></a><!--\label{math:eq:y_006}-->$$
\begin{align}
\mathscr{F}\{f\}(s)\,&=\,\mathscr{F}\left\{\frac{B}{A}\Pi_A\circ\Pi_A\right\}(s)\\
&=\,\frac{B}{A}\mathscr{F}\{\Pi_A\}(s)\cdot\mathscr{F}\{\Pi_A\}(s)\\
&=\,\frac{B}{A}\,A\,\mathscr{F}\{\Pi\}(As)\cdot A\,\mathscr{F}\{\Pi\}(As)\\
&=\,AB\,\mathscr{F}\{\Pi\}(As)\cdot\mathscr{F}\{\Pi\}(As)\\
&=\,AB\,{\rm sinc}(As)\cdot{\rm sinc}(As)\\
&=\,AB\,{\rm sinc}^2(As)\\
&=\,AB\,\frac{\sin^2 (\pi A s)}{\pi^2 A^2 s^2}
\end{align}$$
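The closed-form result can be spot-checked against a direct numerical evaluation of the Fourier integral (illustrative values for A and B; np.sinc(x) = sin(pi x)/(pi x), the same normalized convention as above):

```python
import numpy as np

# Numerically check F{f}(s) = A*B*sinc^2(A*s) for the triangle function.
A, B = 1.5, 2.0
x = np.linspace(-A, A, 200001)
dx = x[1] - x[0]
f = B * (1 - np.abs(x) / A)   # the triangle function on its support

errs = []
for s in [0.0, 0.3, 1.0, 2.5]:
    F = np.sum(f * np.exp(-2j * np.pi * x * s)) * dx   # direct Fourier integral
    errs.append(abs(F - A * B * np.sinc(A * s) ** 2))
print(max(errs))  # tiny: the two expressions agree
```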
So the solution looks like this: 
def plotfftriangle():
    A = 1.
    B = 1.
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(111)
    twv, twh = plotviewgraph(fig, ax, xmin = -3./A, xmax = 3./A, ymin = -0.3, ymax = B)
    ticx = [[-3.*A, r'$\frac{-3}{A}$'], [-2.*A, r'$\frac{-2}{A}$'], [-1./A, r'$\frac{-1}{A}$'], [1./A, r'$\frac{1}{A}$'], [2./A, r'$\frac{2}{A}$'], [3./A, r'$\frac{3}{A}$']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0], 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'center', verticalalignment = 'top', color = 'black')
    ticx = [[0.,r'$0$']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0]+twh, 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ticy = [[B, r'$AB$']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.+twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'bottom', color = 'black')
    # Plot the function
    x = np.linspace(-4.*A, 4.*A, 900)
    y = np.power(np.sinc(x),2)
    # Annotate axes
    ax.text(0.-A/20, 1.2*(B), r'$\hat{f}(s)$', fontsize = 24, horizontalalignment = 'right', verticalalignment = 'bottom', color = 'black')
    ax.text(1.2*3.*A, 0., r'$s$', fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ax.plot(x, y, 'r-', lw = 2)
plotfftriangle()
# <a id='math:fig:fftriangle'></a><!--\label{math:fig:fftriangle}-->
Figure 2.y.2: Fourier transform of the triangle function, $AB\,{\rm sinc}^2(As)$.<a id='math:fig:ft_of_triangle'></a><!--\label{math:fig:ft_of_triangle}-->
2.y.2. Fourier transforms and convolution: Convolution of two functions with finite support<a id='math:sec:exercises_convolution_of_two_functions_with_finite_support'></a><!--\label{math:sec:exercises_convolution_of_two_functions_with_finite_support}-->
Consider the two functions given below: 
def plotrectntria():
    A = 1.
    B = 1.4
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(121)
    twv, twh = plotviewgraph(fig, ax, xmin = 0., xmax = 3.*A, ymin = 0., ymax = 3.)
    ticx = [[1.*A, r'$A$'], [2.*A, r'$2A$'], [3.*A, r'$3A$']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0], 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'center', verticalalignment = 'top', color = 'black')
    ticx = [[0.,r'$0$']]
    for tupel in ticx:
        ax.plot([-tupel[0],-tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0]+twh, 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ticy = [[1,r'$1$'], [2.,r'$2$'], [3.,r'$3$']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.-twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'right', verticalalignment = 'center', color = 'black')
    ticy = [[B, r'$B$']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.+twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'bottom', color = 'black')
    # Plot the function
    x = [A, A, 2*A, 2*A]
    y = [0., B, B, 0.]
    ax.plot(x, y, 'r-', lw = 2)
    x = [0., A]
    y = [B, B]
    ax.plot(x, y, 'k--', lw = 1)
    # Annotate axes
    ax.text(0.-3.*twh, 1.2*3., r'$g(x)$', fontsize = 24, horizontalalignment = 'right', verticalalignment = 'bottom', color = 'black')
    ax.text(1.1*3.*A, 0., r'$x$', fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ###################
    ax = fig.add_subplot(122)
    twv, twh = plotviewgraph(fig, ax, xmin = 0., xmax = 3.*A, ymin = 0., ymax = 3.)
    ticx = [[1.*A, r'$A$'], [2.*A, r'$2A$'], [3.*A, r'$3A$']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0], 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'center', verticalalignment = 'top', color = 'black')
    ticx = [[0.,r'$0$']]
    for tupel in ticx:
        ax.plot([-tupel[0],-tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0]+twh, 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ticy = [[1,r'$1$'], [2.,r'$2$'], [3.,r'$3$']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.-twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'right', verticalalignment = 'center', color = 'black')
    # Plot the function
    x = [A, A, 2*A, 3*A, 3*A]
    y = [0., 1., 3., 1., 0.]
    ax.plot(x, y, 'r-', lw = 2)
    x = [0., A]
    y = [1., 1.]
    ax.plot(x, y, 'k--', lw = 1)
    x = [0., 2*A]
    y = [3., 3.]
    ax.plot(x, y, 'k--', lw = 1)
    # Annotate axes
    ax.text(0.-3.*twh, 1.2*3., r'$h(x)$', fontsize = 24, horizontalalignment = 'right', verticalalignment = 'bottom', color = 'black')
    ax.text(1.1*3.*A, 0., r'$x$', fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
plotrectntria()
# <a id='math:fig:two_fs_with_finite_support'></a><!--\label{math:fig:two_fs_with_finite_support}-->
Figure 2.y.3: The two functions $g$ (left, a boxcar with amplitude $B$) and $h$ (right), both with finite support.<a id='math:fig:two_fs_with_finite_support'></a><!--\label{math:fig:two_fs_with_finite_support}-->
<b>Assignments:</b>
<ol type="A">
<li>Write down the functions g and h.</li>
<li>Calculate their convolution.</li>
</ol>
2.y.2.1 Convolution of two functions with finite support: example answer to assignment 1.<a id='math:sec:exercises_convolution_of_two_functions_with_finite_support_a'></a><!--\label{math:sec:exercises_convolution_of_two_functions_with_finite_support_a}-->
<b>Write down the functions g and h.</b>
<a id='math:eq:y_007'></a><!--\label{math:eq:y_007}-->$$
\begin{align}
g(x) &= \left\{
\begin{array}{lll}
B & {\rm for} & A \leq x \leq 2A\\
0 & {\rm else}
\end{array}\right.\\
h(x) &= \left\{
\begin{array}{lll}
h_1(x)\,=\,\frac{2}{A}\left(x-\frac{A}{2}\right) & {\rm for} & A \leq x \leq 2A\\
h_2(x)\,=\,-\frac{2}{A}\left(x-\frac{7A}{2}\right) & {\rm for} & 2A \leq x \leq 3A\\
0 & {\rm else}
\end{array}\right.
\end{align}
$$
2.y.2.2 Convolution of two functions with finite support: example answer to assignment 2.<a id='math:sec:exercises_convolution_of_two_functions_with_finite_support_b'></a><!--\label{math:sec:exercises_convolution_of_two_functions_with_finite_support_b}-->
We have to evaluate the integral (see definition of the convolution ➞ <!--\ref{math:sec:definition_of_the_convolution}-->):
<a id='math:eq:y_008'></a><!--\label{math:eq:y_008}-->$$
g\circ h(x) \, = \, \int_{-\infty}^{\infty}g(x-t)h(t)\,dt
$$
To do so, we calculate the integral over ranges of $x$, depending on the supports (the ranges where the functions are non-zero) of $g(x-t)$ and $h(t)$, i.e. of $h_1(t)$ and $h_2(t)$ respectively.
As an aid, rewrite the above functions ⤵<!--\ref{math:eq:y_007}-->:
<a id='math:eq:y_009'></a><!--\label{math:eq:y_009}-->$$
\begin{align}
g(x-t) &= \left\{
\begin{array}{lll}
B & {\rm for} & -2A+x \leq t \leq -A+x\\
0 & {\rm else}
\end{array}\right.\\
h(t) &= \left\{
\begin{array}{lll}
h_1(t)\,=\,\frac{2}{A}\left(t-\frac{A}{2}\right) & {\rm for} & A \leq t \leq 2A\\
h_2(t)\,=\,-\frac{2}{A}\left(t-\frac{7A}{2}\right) & {\rm for} & 2A \leq t \leq 3A\\
0 & {\rm else}
\end{array}\right.
\end{align}
$$
Case 1:
<a id='math:eq:y_010'></a><!--\label{math:eq:y_010}-->$$
\begin{align}
x \,&<\, 2A\qquad\,\Rightarrow\\
g\circ h(x) \, &= \, \int_{-\infty}^{\infty}g(x-t)h(t)\,dt\\
&=\, 0
\end{align}
$$
Case 2:
<a id='math:eq:y_011'></a><!--\label{math:eq:y_011}-->$$
\begin{align}
2A \,&\leq x \,<\, 3A\qquad\Rightarrow\\
g\circ h(x) \, &= \, \int_{-\infty}^{\infty}g(x-t)h(t)\,dt\\
&=\, \int_{A}^{x-A}B\,h_1(t)\,dt\\
&=\,\int_{A}^{x-A}\frac{2B}{A}\left(t-\frac{A}{2}\right)\,dt\\
&=\,\frac{B}{A}\left(x^2-3Ax+2A^2\right)
\end{align}$$
Case 3:
<a id='math:eq:y_012'></a><!--\label{math:eq:y_012}-->$$
\begin{align}
3A \,&\leq\, x \,<\, 4A\qquad\Rightarrow\\
g\circ h(x) \, &=\, \int_{x-2A}^{2A}B\,h_1(t)\,dt+ \int_{2A}^{x-A}B\,h_2(t)\,dt\\
&=\,\int_{x-2A}^{2A}\frac{2B}{A}\left(t-\frac{A}{2}\right)\,dt- \int_{2A}^{x-A}\frac{2B}{A}\left(t-\frac{7A}{2}\right)\,dt\\
&=\,\frac{B}{A}\left(-2x^2+14Ax-22A^2\right)
\end{align}
$$
Case 4:
<a id='math:eq:y_013'></a><!--\label{math:eq:y_013}-->$$
\begin{align}
4A \,&\leq x \,<\, 5A\qquad\Rightarrow\\
g\circ h(x) \, &=\, \int_{x-2A}^{3A}B\,h_2(t)\,dt\,=\,\int_{x-2A}^{3A}-\frac{2B}{A}\left(t-\frac{7A}{2}\right)\,dt\\
&=\,\frac{B}{A}\left(x^2-11Ax+30A^2\right)
\end{align}
$$
Case 5:
<a id='math:eq:y_014'></a><!--\label{math:eq:y_014}-->$$
\begin{align}
5A&\,\leq\,x\qquad\,\Rightarrow\\
g\circ h(x) \, &= \, \int_{-\infty}^{\infty}g(x-t)h(t)\,dt\\
&=\, 0
\end{align}
$$
Summarising, the convolution of g and h results in the following composite function:
<a id='math:eq:y_015'></a><!--\label{math:eq:y_015}-->$$
\begin{align}
g\circ h(x) \, &=
\frac{B}{A}\left\{\begin{array}{lll}
0 & {\rm for} & x < 2A \\
x^2-3Ax+2A^2 & {\rm for} & 2A \leq x < 3A\\
-2x^2+14Ax-22A^2 & {\rm for} & 3A \leq x < 4A\\
x^2-11Ax+30A^2 & {\rm for} & 4A \leq x < 5A\\
0 & {\rm for} & 5A \leq x
\end{array}\right.
\end{align}$$
def rectntriaconv(A,B,x):
    xn = x[x < 2*A]
    yn = xn*0.
    y = yn
    xn = x[(x >= 2*A) & (x < 3*A)]
    yn = (B/A)*(np.power(xn,2)-3*A*xn+2*np.power(A,2))
    y = np.append(y,yn)
    xn = x[(x >= 3*A) & (x < 4*A)]
    yn = (B/A)*((-2*np.power(xn,2))+14*A*xn-22*np.power(A,2))
    y = np.append(y,yn)
    xn = x[(x >= 4*A) & (x < 5*A)]
    yn = (B/A)*(np.power(xn,2)-11*A*xn+30*np.power(A,2))
    y = np.append(y,yn)
    xn = x[x >= 5*A]
    yn = xn*0.
    y = np.append(y,yn)
    return y
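As a sanity check (not part of the original notebook), the piece-wise result can be compared against a brute-force numerical convolution of g and h on a fine grid:

```python
import numpy as np

# Numerically convolve g (boxcar) and h and compare with the closed-form
# piece-wise result, using the same A = 1, B = 1.4 as the plots.
A, B = 1.0, 1.4
dx = 0.001
x = np.arange(0.0, 4.0, dx)
g = np.where((x >= A) & (x <= 2*A), B, 0.0)
h = np.where((x >= A) & (x <= 2*A), (2/A)*(x - A/2), 0.0) \
  + np.where((x > 2*A) & (x <= 3*A), -(2/A)*(x - 3.5*A), 0.0)

conv = np.convolve(g, h) * dx        # sampled on x[i] + x[j], i.e. step dx from 0
print(conv.max())                    # ~2.5*A*B = 3.5, the peak at x = 3.5*A
print(conv[int(3*A/dx)])             # ~2*A*B = 2.8, the value at x = 3*A
```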
def plotrectntriaconv():
    A = 1.
    B = 1.4
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(121)
    twv, twh = plotviewgraph(fig, ax, xmin = 0., xmax = 6.*A, ymin = 0., ymax = 2.5*A*B)
    ticx = [[1.*A, r'$A$'], [2.*A, r'$2A$'], [3.*A, r'$3A$'], [4.*A, r'$4A$'], [5.*A, r'$5A$'], [6.*A, r'$6A$']]
    for tupel in ticx:
        ax.plot([tupel[0],tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0], 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'center', verticalalignment = 'top', color = 'black')
    ticx = [[0.,r'$0$']]
    for tupel in ticx:
        ax.plot([-tupel[0],-tupel[0]],[-twv, twv], 'k-')
        ax.text(tupel[0]+twh, 0.-2.*twh, tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
    ticy = [[2*A*B, r'$2AB$'], [2.5*A*B, r'$\frac{5}{2}AB$']]
    for tupel in ticy:
        ax.plot([-twh, twh], [tupel[0], tupel[0]], 'k-')
        ax.text(0.+5*twv, tupel[0], tupel[1], fontsize = 24, horizontalalignment = 'left', verticalalignment = 'bottom', color = 'black')
    # Plot the function
    x = np.linspace(0., 7.*A, 900)
    y = rectntriaconv(A,B,x)
    ax.plot(x, y, 'r-', lw = 2)
    # Plot a few lines
    x = [0., 4*A]
    y = [2.*A*B, 2.*A*B]
    ax.plot(x, y, 'k--', lw = 1)
    x = [0., 3.5*A]
    y = [2.5*A*B, 2.5*A*B]
    ax.plot(x, y, 'k--', lw = 1)
    x = [3.*A, 3.*A]
    y = [0., 2.*A*B]
    ax.plot(x, y, 'k--', lw = 1)
    x = [4.*A, 4.*A]
    y = [0., 2.*A*B]
    ax.plot(x, y, 'k--', lw = 1)
    # Annotate axes
    ax.text(0.-3.*twh, 1.25*2.5*A*B, r'$g\circ h(x)$', fontsize = 24, horizontalalignment = 'right', verticalalignment = 'bottom', color = 'black')
    ax.text(1.1*6.*A, 0., r'$x$', fontsize = 24, horizontalalignment = 'left', verticalalignment = 'top', color = 'black')
plotrectntriaconv()
# <a id='math:fig:two_fs_wfs'></a><!--\label{math:fig:two_fs_wfs}-->
Adding Spots and Compute Options
b.add_spot(component='primary', relteff=0.8, radius=20, colat=45, long=90, feature='spot01')
b.add_dataset('lc', times=np.linspace(0,1,101))
b.add_compute('phoebe', irrad_method='none', compute='phoebe2')
b.add_compute('legacy', irrad_method='none', compute='phoebe1')
2.2/examples/legacy_spots.ipynb | phoebe-project/phoebe2-docs | gpl-3.0
Let's use the external atmospheres available for both phoebe1 and phoebe2
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.run_compute('phoebe2', model='phoebe2model')
b.run_compute('phoebe1', model='phoebe1model')
Plotting
afig, mplfig = b.plot(legend=True, ylim=(1.95, 2.05), show=True)
Getting help
# information about functions with Python's help() ...
help(nest.Models)
# ... or IPython's question mark
nest.Models?
# list neuron models
nest.Models()
# choose LIF neuron with exponential synaptic currents: 'iaf_psc_exp'
# look in documentation for model description
# or (if not compiled with MPI)
nest.help('iaf_psc_exp')
session20_NEST/jupyter_notebooks/1_first_steps.ipynb | INM-6/Python-Module-of-the-Week | mit
Creating a neuron
# before creating a new network,
# reset the simulation kernel / remove all nodes
nest.ResetKernel()
# create the neuron
neuron = nest.Create('iaf_psc_exp')
# investigate the neuron
# Create() just returns a list (tuple) with handles to the new nodes
# (handles = integer numbers called ids)
neuron
# current dynamical state/parameters of the neuron
# note that the membrane voltage is at -70 mV
nest.GetStatus(neuron)
Creating a spike generator
# create a spike generator
spikegenerator = nest.Create('spike_generator')
# check out 'spike_times' in its parameters
nest.GetStatus(spikegenerator)
# set the spike times at 10 and 50 ms
nest.SetStatus(spikegenerator, {'spike_times': [10., 50.]})
Creating a voltmeter
# create a voltmeter for recording
voltmeter = nest.Create('voltmeter')
# investigate the voltmeter
voltmeter
# see that it records membrane voltage, senders, times
nest.GetStatus(voltmeter)
Connecting
# investigate Connect() function
nest.Connect?
# connect spike generator and voltmeter to the neuron
nest.Connect(spikegenerator, neuron, syn_spec={'weight': 1e3})
nest.Connect(voltmeter, neuron)
Simulating
# run simulation for 100 ms
nest.Simulate(100.)
# look at nest's KernelStatus:
# network_size (root node, neuron, spike generator, voltmeter)
# num_connections
# time (simulation duration)
nest.GetKernelStatus()
# note that voltmeter has recorded 99 events
nest.GetStatus(voltmeter)
# read out recording time and voltage from voltmeter
times = nest.GetStatus(voltmeter)[0]['events']['times']
voltages = nest.GetStatus(voltmeter)[0]['events']['V_m']
Plotting
# plot results
# units can be found in documentation
pylab.plot(times, voltages, label='Neuron 1')
pylab.xlabel('Time (ms)')
pylab.ylabel('Membrane potential (mV)')
pylab.title('Membrane potential')
pylab.legend()
# create the same plot with NEST's build-in plotting function
import nest.voltage_trace
nest.voltage_trace.from_device(voltmeter)
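For intuition, the subthreshold dynamics that 'iaf_psc_exp' integrates can be sketched with a hand-rolled Euler step; parameter names and values below are illustrative, not NEST's exact defaults:

```python
import numpy as np

# Euler sketch of the leaky integrate-and-fire membrane equation:
#   dV/dt = (-(V - E_L) + R * I(t)) / tau_m
E_L = -70.0    # resting potential (mV)
tau_m = 10.0   # membrane time constant (ms)
R = 1.0        # input resistance (illustrative units)
dt, T = 0.1, 100.0

V = -55.0      # start depolarized, with no input current
for _ in range(int(T / dt)):
    V += dt * (-(V - E_L) + R * 0.0) / tau_m

print(round(V, 3))  # has decayed back to ~E_L = -70
```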
Representational Similarity Analysis
Representational Similarity Analysis is used to perform summary statistics
on supervised classifications where the number of classes is relatively high.
It consists in characterizing the structure of the confusion matrix to infer
the similarity between brain responses and serves as a proxy for characterizing
the space of mental representations
:footcite:`Shepard1980,LaaksoCottrell2000,KriegeskorteEtAl2008`.
In this example, we perform RSA on responses to 24 object images (among
a list of 92 images). Subjects were presented with images of human, animal
and inanimate objects :footcite:`CichyEtAl2014`. Here we use the 24 unique
images of faces and body parts.
<div class="alert alert-info"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not
build the images below.</p></div>
# Authors: Jean-Remi King <[email protected]>
# Jaakko Leppakangas <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause
Let's restrict the number of conditions to speed up computation
max_trigger = 24
conds = conds[:max_trigger]  # take only the first 24 rows
Define stimulus - trigger mapping
conditions = []
for c in conds.values:
    cond_tags = list(c[:2])
    cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
                  for k, i in enumerate(c[2:], 2)]
    conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
Let's make the event_id dictionary
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
Read MEG data
n_runs = 4  # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
        for block in range(n_runs)]  # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
Epoch data
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
                    picks=picks, tmin=-.1, tmax=.500, preload=True)
Let's plot some conditions | epochs['face'].average().plot()
epochs['not-face'].average().plot() | 0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Representational Similarity Analysis (RSA) is a neuroimaging-specific
appellation for statistics applied to the confusion matrix,
also referred to as the representational dissimilarity matrix (RDM).
Compared to the approach of Cichy et al., we'll use a multiclass
classifier (multinomial logistic regression) while the paper uses
all pairwise binary classification tasks to build the RDM.
Also, we use ROC-AUC as the performance metric here while the
paper uses accuracy. Finally, for the sake of time, we apply
RSA to a window of data while Cichy et al. did it for each time
instant separately. | # Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='liblinear',
multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test]) | 0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Compute confusion matrix using ROC-AUC | confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj] | 0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Plot | labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show() | 0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Confusion matrices related to mental representations have historically
been summarized with dimensionality reduction using multi-dimensional scaling [1].
See how the face samples cluster together. | fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show() | 0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Make a striplog | from striplog import Striplog, Component
s = Striplog.from_csv(text=text)
s.plot(aspect=5)
s[0] | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Make a sand flag log
We'll make a log version of the striplog: | start, stop, step = 0, 25, 0.01
L = s.to_log(start=start, stop=stop, step=step)
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 2))
plt.plot(L) | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Convolve with running window
Convolution with a boxcar filter computes the mean in a window. | import numpy as np
window_length = 2.5 # metres.
N = int(window_length / step)
boxcar = 100 * np.ones(N) / N
z = np.linspace(start, stop, L.size)
prop = np.convolve(L, boxcar, mode='same')
plt.plot(z, prop)
plt.grid(c='k', alpha=0.2)
plt.ylim(-5, 105) | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
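As a quick sanity check (not part of the original tutorial; the sand-flag values below are synthetic), the boxcar convolution at an interior sample should equal a directly computed windowed mean:

```python
import numpy as np

# Synthetic 0/1 sand flag, same step and window as above (values invented for the check).
step, window_length = 0.01, 2.5
N = int(window_length / step)                      # 250 samples per window
L = (np.sin(np.linspace(0, 10, 2500)) > 0).astype(float)
boxcar = 100 * np.ones(N) / N                      # averaging kernel, scaled to percent
prop = np.convolve(L, boxcar, mode='same')
i = 1200                                           # any interior index
direct = 100 * L[i - N // 2 : i + N // 2].mean()   # windowed mean computed by hand
assert abs(prop[i] - direct) < 1e-9
```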
Write out as CSV
Here's the proportion log we made: | z_prop = np.stack([z, prop], axis=1)
z_prop.shape | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Save it with NumPy (or you could build up a Pandas DataFrame)... | np.savetxt('prop.csv', z_prop, delimiter=',', header='elev,perc', comments='', fmt='%1.3f') | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Check the file looks okay with a quick command line check (! sends commands to the shell). | !head prop.csv | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Plot everything together | fig, ax = plt.subplots(figsize=(5, 10), ncols=3, sharey=True)
# Plot the striplog.
s.plot(ax=ax[0])
ax[0].set_title('Striplog')
# Fake a striplog by plotting the log... it looks nice!
ax[1].fill_betweenx(z, 0.5, 0, color='grey')
ax[1].fill_betweenx(z, L, 0, color='gold', lw=0)
ax[1].set_title('Faked with log')
# Plot the sand proportion log.
ax[2].plot(prop, z, 'r', lw=1)
ax[2].set_title(f'% sand, {window_length} m') | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Make a histogram of thicknesses | thicks = [iv.thickness for iv in s]
_ = plt.hist(thicks, bins=51) | docs/tutorial/12_Calculate_sand_proportion.ipynb | agile-geoscience/striplog | apache-2.0 |
Age statistics flaws
The statistics shown above are misleading, since each age category is represented by a single number that does not adequately describe the bracket. | def get_money_from(money_string):
return int(money_string.split(" ")[0].split("$")[1].replace(",", ""))
int_income = extract_column_data(celebrate,
"How much total combined money did all members of your HOUSEHOLD earn last year?",
get_money_from,
except_values=["Prefer not to answer"])
celebrate["income"] = int_income
display_statistics(int_income, "Thanksgiving by households last year earnings") | 2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb | lesonkorenac/dataquest-projects | mit |
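One hedged way to mitigate the age-bracket problem is to map each bracket to a representative midpoint rather than a boundary value. The helper name and the bracket label format ("18 - 29", "60+") are our assumptions, not taken from the survey data:

```python
def bracket_midpoint(bracket):
    # Open-ended brackets such as "60+" fall back to their lower bound.
    if bracket.endswith("+"):
        return float(bracket[:-1])
    lo, hi = (float(part) for part in bracket.split("-"))
    return (lo + hi) / 2

assert bracket_midpoint("18 - 29") == 23.5
assert bracket_midpoint("60+") == 60.0
```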
Household earnings statistics flaws
These statistics suffer from the same problems as the age statistics. | travel = celebrate["How far will you travel for Thanksgiving?"]
display_counts(travel.loc[int_income[int_income < 15000].index].value_counts(), "Low income travel")
display_counts(travel.loc[int_income[int_income >= 15000].index].value_counts(), "High income travel") | 2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb | lesonkorenac/dataquest-projects | mit |
Travel by income
The hypothesis that people with lower income travel more because they might be younger does not seem to hold (the assumption that younger people have lower income may itself be wrong; we could use the age values directly instead). | def thanksgiving_and_friends(data, aggregated_column):
return data.pivot_table(index="Have you ever tried to meet up with hometown friends on Thanksgiving night?",
columns='Have you ever attended a "Friendsgiving?"',
values=aggregated_column)
print(thanksgiving_and_friends(celebrate, "age"))
print(thanksgiving_and_friends(celebrate, "income")) | 2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb | lesonkorenac/dataquest-projects | mit |
Create CC object to setup required parameters
Please enable the mprov parameter in '/cc_conf/cerebralcortex.yml' (mprov: pennprov). You will need to create a user on the mprov server first and set the username and password in '/cc_conf/cerebralcortex.yml'. | CC = Kernel("/home/jovyan/cc_conf/", study_name="default")
Generate synthetic GPS data | ds_gps = gen_location_datastream(user_id="bfb2ca0c-e19c-3956-9db2-5459ccadd40c", stream_name="gps--org.md2k.phonesensor--phone") | jupyter_demo/mprov_example.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Window the GPS data into 60-second chunks | windowed_gps_ds=ds_gps.window(windowDuration=60)
gps_clusters=cluster_gps(windowed_gps_ds) | jupyter_demo/mprov_example.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Print Data | gps_clusters.show(10) | jupyter_demo/mprov_example.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Hat potential
The following potential is often used in physics and other fields to describe symmetry breaking and is known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function: | # YOUR CODE HERE
def hat(x,a,b):
v=-1*a*x**2+b*x**4
return v
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0 | assignments/assignment11/OptimizationEx01.ipynb | JackDi/phys202-2015-work | mit |
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$: | x=np.linspace(-3,3)
b=1.0
a=5.0
plt.plot(x,hat(x,a,b))
# YOUR CODE HERE
x0=-2
a = 5.0
b = 1.0
y=opt.minimize(hat,x0,(a,b))
y.x
assert True # leave this to grade the plot | assignments/assignment11/OptimizationEx01.ipynb | JackDi/phys202-2015-work | mit |
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beatiful and effective. | # YOUR CODE HERE
a = 5.0
b = 1.0
mini = np.array([])
x = np.linspace(-3, 3)
for x0 in x:
    y = opt.minimize(hat, x0, (a, b))
    z = int(y.x[0] * 100000)  # round so floating-point noise does not create duplicates
    if not np.any(mini == z):
        mini = np.append(mini, z)
mini = mini / 100000
mini
plt.plot(x,hat(x,a,b),label="Hat Function")
plt.plot(mini[0],hat(mini[0],a,b),'ro',label="Minima")
plt.plot(mini[1],hat(mini[1],a,b),'ro')
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.title("Graph of Function and its Local Minima")
plt.legend()
assert True # leave this for grading the plot | assignments/assignment11/OptimizationEx01.ipynb | JackDi/phys202-2015-work | mit |
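As a cross-check (not part of the graded exercise), the minima can also be derived analytically: setting $V'(x) = -2ax + 4bx^3 = 0$ gives stationary points at $x = 0$ and $x = \pm\sqrt{a/(2b)}$, with minimum value $V = -a^2/(4b)$:

```python
import math

def hat(x, a, b):
    return -a * x**2 + b * x**4

def hat_prime(x, a, b):
    return -2 * a * x + 4 * b * x**3

a, b = 5.0, 1.0
x_min = math.sqrt(a / (2 * b))                             # ~1.5811, the positive minimum
assert abs(hat_prime(x_min, a, b)) < 1e-10                 # gradient vanishes there
assert abs(hat(x_min, a, b) - (-a**2 / (4 * b))) < 1e-10   # V(x_min) = -6.25
```

The two values found numerically above should agree with $\pm\sqrt{a/(2b)} \approx \pm 1.5811$.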
Import the dataset and trained model
In the previous notebook, you imported 20 million movie ratings and trained an ALS model with BigQuery ML.
We are going to use the same tables, but if this is a new environment, please run the below commands to copy over the clean data.
First create the BigQuery dataset and copy over the data | !bq mk movielens
%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw ml-20m/movies.csv | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
And create a cleaned movielens.movies table. | %%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Next, copy over the trained recommendation model. Note that if your project is in the EU you will need to change the location from US to EU below. Also note that, as of the time of writing, you cannot copy models across regions with bq cp. | %%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender \
movielens.recommender | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Next, ensure the model still works by invoking predictions for movie recommendations: | %%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `movielens.recommender`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5 | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Incorporating user and movie information
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live in, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information into our recommendation model?
The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.
Obtaining user and product factors
We can get the user factors or product factors from ML.WEIGHTS. For example, to get the product factors for movieId=96481 and user factors for userId=54192, we would do: | %%bigquery --project $PROJECT
SELECT
processed_input,
feature,
TO_JSON_STRING(factor_weights) AS factor_weights,
intercept
FROM ML.WEIGHTS(MODEL `movielens.recommender`)
WHERE
(processed_input = 'movieId' AND feature = '96481')
OR (processed_input = 'userId' AND feature = '54192') | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
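Schematically, that prediction is a dot product of the two factor arrays plus the intercepts. Here is a minimal sketch with invented factor values (the real values come from ML.WEIGHTS, and the exact intercept handling inside BigQuery ML is an internal detail):

```python
import numpy as np

# Invented 16-dimensional factors; real values would be read from ML.WEIGHTS.
user_factors = np.full(16, 0.1)
product_factors = np.full(16, 0.2)
user_intercept, product_intercept = 0.3, 0.1

predicted_rating = user_factors @ product_factors + user_intercept + product_intercept
print(predicted_rating)  # 16 * 0.02 + 0.4 = 0.72
```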
These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.
Creating input features
The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let’s create some synthetic information about users: | %%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
userId,
RAND() * COUNT(rating) AS loyalty,
CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
movielens.ratings
GROUP BY userId | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array. | %%bigquery --project $PROJECT
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatures
LIMIT 5 | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre, since a movie can have more than one genre. If we decide to create a separate training row for each genre, we can construct the product features as follows: | %%bigquery --project $PROJECT
WITH productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT * FROM productFeatures
LIMIT 5 | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
TODO 1: Combine the above two queries to get the user factors and product factors for each rating. | %%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
),
productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT
p.* EXCEPT(movieId),
u.* EXCEPT(userId),
rating
FROM productFeatures p, userFeatures u
JOIN movielens.ratings r
ON r.movieId = p.movieId AND r.userId = u.userId | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
One of the rows of this table looks like this: | %%bigquery --project $PROJECT
SELECT *
FROM movielens.hybrid_dataset
LIMIT 1 | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Training hybrid recommendation model
At the time of writing, BigQuery ML cannot handle arrays as inputs to a regression model. Let's therefore define a function to convert an array to a struct whose fields are the array elements: | %%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64,
u16 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)],
u[OFFSET(15)]
)); | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
which gives: | %%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT
[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u) | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
TODO 2: Create a function that returns named columns from a size 16 product factor array. | %%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS
STRUCT<
p1 FLOAT64,
p2 FLOAT64,
p3 FLOAT64,
p4 FLOAT64,
p5 FLOAT64,
p6 FLOAT64,
p7 FLOAT64,
p8 FLOAT64,
p9 FLOAT64,
p10 FLOAT64,
p11 FLOAT64,
p12 FLOAT64,
p13 FLOAT64,
p14 FLOAT64,
p15 FLOAT64,
p16 FLOAT64
> AS (STRUCT(
p[OFFSET(0)],
p[OFFSET(1)],
p[OFFSET(2)],
p[OFFSET(3)],
p[OFFSET(4)],
p[OFFSET(5)],
p[OFFSET(6)],
p[OFFSET(7)],
p[OFFSET(8)],
p[OFFSET(9)],
p[OFFSET(10)],
p[OFFSET(11)],
p[OFFSET(12)],
p[OFFSET(13)],
p[OFFSET(14)],
p[OFFSET(15)]
)); | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating: | %%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
* EXCEPT(user_factors, product_factors),
movielens.arr_to_input_16_users(user_factors).*,
movielens.arr_to_input_16_products(product_factors).*
FROM
movielens.hybrid_dataset | notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
The kernel
The following GPU kernel computes
$$
\log v_{bj} := \log \nu_{bj} - \operatorname{logsumexp}_{i} \left(-\frac{1}{\lambda} c_{ij} + \log u_{bi}\right).
$$
This has two key properties that shape our implementation:
- The overall reduction structure is akin to a matrix multiplication, i.e. memory accesses to $c_{ij}$ and $\log u_{bi}$
to compute the result $\log v_{bj}$, with the additional input $\log \nu$ following the same access pattern as the result. We parallelize in the independent dimensions ($b$ and $j$) and split the reduction over $i$ amongst multiple threads then combine their intermediate results. We have not employed tiling, which is commonly used to speed up the memory accesses for matrix multiplication.
- In our implementation, the stabilisation of the logsumexp calculation is carried out in an online fashion, i.e. computing the stabilisation and the reduction result in a single pass, similar to the Welford algorithm for the variance.
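For readers who prefer Python to CUDA, here is a CPU sketch (ours, not part of the extension) of the single-pass stabilised logsumexp that each thread performs over its slice of the reduction:

```python
import math

def online_logsumexp(values):
    # Track a running max m and a running sum s of exp(v - m); when a new max
    # arrives, rescale the old sum instead of making a second pass over the data.
    m, s = -math.inf, 0.0
    for v in values:
        if v > m:
            s = s * math.exp(m - v) + 1.0
            m = v
        else:
            s += math.exp(v - m)
    return m + math.log(s)

vals = [1.0, 2.0, 3.0]
ref = math.log(sum(math.exp(v) for v in vals))
assert abs(online_logsumexp(vals) - ref) < 1e-12
```

Unlike a naive implementation, this never exponentiates anything larger than zero, so inputs like `[1000.0, 1000.0]` do not overflow.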
I explain a bit about the reduction (in particular the bits about WARP_SHFL_XOR) in this blog post. | cuda_source = """
#include <torch/extension.h>
#include <ATen/core/TensorAccessor.h>
#include <ATen/cuda/CUDAContext.h>
using at::RestrictPtrTraits;
using at::PackedTensorAccessor;
#if defined(__HIP_PLATFORM_HCC__)
constexpr int WARP_SIZE = 64;
#else
constexpr int WARP_SIZE = 32;
#endif
// The maximum number of threads in a block
#if defined(__HIP_PLATFORM_HCC__)
constexpr int MAX_BLOCK_SIZE = 256;
#else
constexpr int MAX_BLOCK_SIZE = 512;
#endif
// Returns the index of the most significant 1 bit in `val`.
__device__ __forceinline__ int getMSB(int val) {
return 31 - __clz(val);
}
// Number of threads in a block given an input size up to MAX_BLOCK_SIZE
static int getNumThreads(int nElem) {
#if defined(__HIP_PLATFORM_HCC__)
int threadSizes[5] = { 16, 32, 64, 128, MAX_BLOCK_SIZE };
#else
int threadSizes[5] = { 32, 64, 128, 256, MAX_BLOCK_SIZE };
#endif
for (int i = 0; i != 5; ++i) {
if (nElem <= threadSizes[i]) {
return threadSizes[i];
}
}
return MAX_BLOCK_SIZE;
}
template <typename T>
__device__ __forceinline__ T WARP_SHFL_XOR(T value, int laneMask, int width = warpSize, unsigned int mask = 0xffffffff)
{
#if CUDA_VERSION >= 9000
return __shfl_xor_sync(mask, value, laneMask, width);
#else
return __shfl_xor(value, laneMask, width);
#endif
}
// While this might be the most efficient sinkhorn step / logsumexp-matmul implementation I have seen,
// this is awfully inefficient compared to matrix multiplication and e.g. NVidia cutlass may provide
// many great ideas for improvement
template <typename scalar_t, typename index_t>
__global__ void sinkstep_kernel(
// compute log v_bj = log nu_bj - logsumexp_i 1/lambda dist_ij - log u_bi
// for this compute maxdiff_bj = max_i(1/lambda dist_ij - log u_bi)
// i = reduction dim, using threadIdx.x
PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_v,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> dist,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_nu,
const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_u,
const scalar_t lambda) {
using accscalar_t = scalar_t;
__shared__ accscalar_t shared_mem[2 * WARP_SIZE];
index_t b = blockIdx.y;
index_t j = blockIdx.x;
int tid = threadIdx.x;
if (b >= log_u.size(0) || j >= log_v.size(1)) {
return;
}
// reduce within thread
accscalar_t max = -std::numeric_limits<accscalar_t>::infinity();
accscalar_t sumexp = 0;
if (log_nu[b][j] == -std::numeric_limits<accscalar_t>::infinity()) {
if (tid == 0) {
log_v[b][j] = -std::numeric_limits<accscalar_t>::infinity();
}
return;
}
for (index_t i = threadIdx.x; i < log_u.size(1); i += blockDim.x) {
accscalar_t oldmax = max;
accscalar_t value = -dist[i][j]/lambda + log_u[b][i];
max = max > value ? max : value;
if (oldmax == -std::numeric_limits<accscalar_t>::infinity()) {
// sumexp used to be 0, so the new max is value and we can set 1 here,
// because we will come back here again
sumexp = 1;
} else {
sumexp *= exp(oldmax - max);
sumexp += exp(value - max); // if oldmax was not -infinity, max is not either...
}
}
// now we have one value per thread. we'll make it into one value per warp
// first warpSum to get one value per thread to
// one value per warp
for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
accscalar_t o_max = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
if (o_max > max) { // we're less concerned about divergence here
sumexp *= exp(max - o_max);
sumexp += o_sumexp;
max = o_max;
} else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
sumexp += o_sumexp * exp(o_max - max);
}
}
__syncthreads();
// this writes each warps accumulation into shared memory
// there are at most WARP_SIZE items left because
// there are at most WARP_SIZE**2 threads at the beginning
if (tid % WARP_SIZE == 0) {
shared_mem[tid / WARP_SIZE * 2] = max;
shared_mem[tid / WARP_SIZE * 2 + 1] = sumexp;
}
__syncthreads();
if (tid < WARP_SIZE) {
max = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid] : -std::numeric_limits<accscalar_t>::infinity());
sumexp = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid + 1] : 0);
}
for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
accscalar_t o_max = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
if (o_max > max) { // we're less concerned about divergence here
sumexp *= exp(max - o_max);
sumexp += o_sumexp;
max = o_max;
} else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
sumexp += o_sumexp * exp(o_max - max);
}
}
if (tid == 0) {
log_v[b][j] = (max > -std::numeric_limits<accscalar_t>::infinity() ?
log_nu[b][j] - log(sumexp) - max :
-std::numeric_limits<accscalar_t>::infinity());
}
}
template <typename scalar_t>
torch::Tensor sinkstep_cuda_template(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
const double lambda) {
TORCH_CHECK(dist.is_cuda(), "need cuda tensors");
TORCH_CHECK(dist.device() == log_nu.device() && dist.device() == log_u.device(), "need tensors on same GPU");
TORCH_CHECK(dist.dim()==2 && log_nu.dim()==2 && log_u.dim()==2, "invalid sizes");
TORCH_CHECK(dist.size(0) == log_u.size(1) &&
dist.size(1) == log_nu.size(1) &&
log_u.size(0) == log_nu.size(0), "invalid sizes");
auto log_v = torch::empty_like(log_nu);
using index_t = int32_t;
auto log_v_a = log_v.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto dist_a = dist.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto log_nu_a = log_nu.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto log_u_a = log_u.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
auto stream = at::cuda::getCurrentCUDAStream();
int tf = getNumThreads(log_u.size(1));
dim3 blocks(log_v.size(1), log_u.size(0));
dim3 threads(tf);
sinkstep_kernel<<<blocks, threads, 2*WARP_SIZE*sizeof(scalar_t), stream>>>(
log_v_a, dist_a, log_nu_a, log_u_a, static_cast<scalar_t>(lambda)
);
return log_v;
}
torch::Tensor sinkstep_cuda(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
const double lambda) {
return AT_DISPATCH_FLOATING_TYPES(log_u.scalar_type(), "sinkstep", [&] {
return sinkstep_cuda_template<scalar_t>(dist, log_nu, log_u, lambda);
});
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("sinkstep", &sinkstep_cuda, "sinkhorn step");
}
""" | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
Incorporating it in PyTorch
We make this into a PyTorch extension module and add a convenience function (and "manual" implementation for the CPU). | wasserstein_ext = torch.utils.cpp_extension.load_inline("wasserstein", cpp_sources="", cuda_sources=cuda_source,
extra_cuda_cflags=["--expt-relaxed-constexpr"] )
def sinkstep(dist, log_nu, log_u, lam: float):
# dispatch to optimized GPU implementation for GPU tensors, slow fallback for CPU
if dist.is_cuda:
return wasserstein_ext.sinkstep(dist, log_nu, log_u, lam)
assert dist.dim() == 2 and log_nu.dim() == 2 and log_u.dim() == 2
assert dist.size(0) == log_u.size(1) and dist.size(1) == log_nu.size(1) and log_u.size(0) == log_nu.size(0)
log_v = log_nu.clone()
for b in range(log_u.size(0)):
log_v[b] -= torch.logsumexp(-dist/lam+log_u[b, :, None], 0)
return log_v | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We use this update step in a building block for the Sinkhorn iteration: | class SinkhornOT(torch.autograd.Function):
@staticmethod
def forward(ctx, mu, nu, dist, lam=1e-3, N=100):
assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
bs = mu.size(0)
d1, d2 = dist.size()
assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
log_mu = mu.log()
log_nu = nu.log()
log_u = torch.full_like(mu, -math.log(d1))
log_v = torch.full_like(nu, -math.log(d2))
for i in range(N):
log_v = sinkstep(dist, log_nu, log_u, lam)
log_u = sinkstep(dist.t(), log_mu, log_v, lam)
# this is slight abuse of the function. it computes (diag(exp(log_u))*Mt*exp(-Mt/lam)*diag(exp(log_v))).sum()
# in an efficient (i.e. no bxnxm tensors) way in log space
distances = (-sinkstep(-dist.log()+dist/lam, -log_v, log_u, 1.0)).logsumexp(1).exp()
ctx.log_v = log_v
ctx.log_u = log_u
ctx.dist = dist
ctx.lam = lam
return distances
@staticmethod
def backward(ctx, grad_out):
return grad_out[:, None] * ctx.log_u * ctx.lam, grad_out[:, None] * ctx.log_v * ctx.lam, None, None, None
| wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We also define a function to get the coupling itself: | def get_coupling(mu, nu, dist, lam=1e-3, N=1000):
assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
bs = mu.size(0)
d1, d2 = dist.size()
assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
log_mu = mu.log()
log_nu = nu.log()
log_u = torch.full_like(mu, -math.log(d1))
log_v = torch.full_like(nu, -math.log(d2))
for i in range(N):
log_v = sinkstep(dist, log_nu, log_u, lam)
log_u = sinkstep(dist.t(), log_mu, log_v, lam)
return (log_v[:, None, :]-dist/lam+log_u[:, :, None]).exp() | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We define some test distributions. These are similar to examples from Python Optimal Transport. | # some test distribution densities
n = 100
lam = 1e-3
x = torch.linspace(0, 100, n)
mu1 = torch.distributions.Normal(20., 10.).log_prob(x).exp()
mu2 = torch.distributions.Normal(60., 30.).log_prob(x).exp()
mu3 = torch.distributions.Normal(40., 20.).log_prob(x).exp()
mu1 /= mu1.sum()
mu2 /= mu2.sum()
mu3 /= mu3.sum()
mu123 = torch.stack([mu1, mu2, mu3], dim=0)
mu231 = torch.stack([mu2, mu3, mu1], dim=0)
cost = (x[None, :]-x[:, None])**2
cost /= cost.max()
pyplot.plot(mu1, label="$\mu_1$")
pyplot.plot(mu2, label="$\mu_2$")
pyplot.plot(mu3, label="$\mu_3$")
pyplot.legend(); | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We run a sanity check for the distance:
(This will take longer than you might expect, as it computes a rather large gradient numerically, but it finishes in $<1$ minute on a GTX 1080) | t = time.time()
device = "cuda"
res = torch.autograd.gradcheck(lambda x: SinkhornOT.apply(x.softmax(1),
mu231.to(device=device, dtype=torch.double),
cost.to(device=device, dtype=torch.double),
lam, 500),
(mu123.log().to(device=device, dtype=torch.double).requires_grad_(),))
print("OK? {} took {:.0f} sec".format(res, time.time()-t)) | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We might also check that sinkstep is the same on GPU and CPU (Kai Zhao pointed out that this was not the case for an earlier version of this notebook, thank you, and indeed, there was a bug in the CPU implementation.) | res_cpu = sinkstep(cost.cpu(), mu123.log().cpu(), mu231.log().cpu(), lam)
res_gpu = sinkstep(cost.to(device), mu123.log().to(device), mu231.log().to(device), lam).cpu()
assert (res_cpu - res_gpu).abs().max() < 1e-5 | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
We can visualize the coupling along with the marginals: | coupling = get_coupling(mu123.cuda(), mu231.cuda(), cost.cuda())
pyplot.figure(figsize=(10,10))
pyplot.subplot(2, 2, 1)
pyplot.plot(mu2.cpu())
pyplot.subplot(2, 2, 4)
pyplot.plot(mu1.cpu(), transform=matplotlib.transforms.Affine2D().rotate_deg(270) + pyplot.gca().transData)
pyplot.subplot(2, 2, 3)
pyplot.imshow(coupling[0].cpu());
| wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
This looks a lot like the coupling from Python Optimal Transport and in fact all three match results computed with POT: | o_coupling12 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu1.cpu(), mu2.cpu(), cost.cpu(), reg=1e-3))
o_coupling23 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu2.cpu(), mu3.cpu(), cost.cpu(), reg=1e-3))
o_coupling31 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu3.cpu(), mu1.cpu(), cost.cpu(), reg=1e-3))
pyplot.imshow(o_coupling12)
o_coupling = torch.stack([o_coupling12, o_coupling23, o_coupling31], dim=0)
(o_coupling.float() - coupling.cpu()).abs().max().item() | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
Performance comparison to existing implementations
We copy the code of Dazac's recent blog post in order to compare performance.
Dazac uses early stopping, but this comes at the cost of introducing a synchronization point after each iteration. I modified the code to take the distance matrix as an argument. | # Copyright 2018 Daniel Dazac
# MIT Licensed
# License and source: https://github.com/dfdazac/wassdistance/
class SinkhornDistance(torch.nn.Module):
r"""
Given two empirical measures each with :math:`P_1` locations
:math:`x\in\mathbb{R}^{D_1}` and :math:`P_2` locations :math:`y\in\mathbb{R}^{D_2}`,
outputs an approximation of the regularized OT cost for point clouds.
Args:
eps (float): regularization coefficient
max_iter (int): maximum number of Sinkhorn iterations
reduction (string, optional): Specifies the reduction to apply to the output:
'none' | 'mean' | 'sum'. 'none': no reduction will be applied,
'mean': the sum of the output will be divided by the number of
elements in the output, 'sum': the output will be summed. Default: 'none'
Shape:
- Input: :math:`(N, P_1, D_1)`, :math:`(N, P_2, D_2)`
- Output: :math:`(N)` or :math:`()`, depending on `reduction`
"""
def __init__(self, eps, max_iter, reduction='none'):
super(SinkhornDistance, self).__init__()
self.eps = eps
self.max_iter = max_iter
self.reduction = reduction
def forward(self, mu, nu, C):
u = torch.zeros_like(mu)
v = torch.zeros_like(nu)
# To check if algorithm terminates because of threshold
# or max iterations reached
actual_nits = 0
# Stopping criterion
thresh = 1e-1
# Sinkhorn iterations
for i in range(self.max_iter):
u1 = u # useful to check the update
u = self.eps * (torch.log(mu+1e-8) - torch.logsumexp(self.M(C, u, v), dim=-1)) + u
v = self.eps * (torch.log(nu+1e-8) - torch.logsumexp(self.M(C, u, v).transpose(-2, -1), dim=-1)) + v
err = (u - u1).abs().sum(-1).mean()
actual_nits += 1
if err.item() < thresh:
break
U, V = u, v
# Transport plan pi = diag(a)*K*diag(b)
pi = torch.exp(self.M(C, U, V))
# Sinkhorn distance
cost = torch.sum(pi * C, dim=(-2, -1))
self.actual_nits = actual_nits
if self.reduction == 'mean':
cost = cost.mean()
elif self.reduction == 'sum':
cost = cost.sum()
return cost, pi, C
def M(self, C, u, v):
"Modified cost for logarithmic updates"
"$M_{ij} = (-c_{ij} + u_i + v_j) / \epsilon$"
return (-C + u.unsqueeze(-1) + v.unsqueeze(-2)) / self.eps
@staticmethod
def ave(u, u1, tau):
"Barycenter subroutine, used by kinetic acceleration through extrapolation."
return tau * u + (1 - tau) * u1
n = 100
x = torch.linspace(0, 100, n)
mu1 = torch.distributions.Normal(20., 10.).log_prob(x).exp()
mu2 = torch.distributions.Normal(60., 30.).log_prob(x).exp()
mu1 /= mu1.sum()
mu2 /= mu2.sum()
mu1, mu2, cost = mu1.cuda(), mu2.cuda(), cost.cuda()
sinkhorn = SinkhornDistance(eps=1e-3, max_iter=200)
def x():
mu1_ = mu1.detach().requires_grad_()
dist, P, C = sinkhorn(mu1_, mu2, cost)
gr, = torch.autograd.grad(dist, mu1_)
torch.cuda.synchronize()
dist, P, C = sinkhorn(mu1.cuda(), mu2.cuda(), cost.cuda())
torch.cuda.synchronize()
x()
%timeit x()
pyplot.imshow(P.cpu())
sinkhorn.actual_nits
def y():
mu1_ = mu1.detach().requires_grad_()
l = SinkhornOT.apply(mu1_.unsqueeze(0), mu2.unsqueeze(0), cost, 1e-3, 200)
gr, = torch.autograd.grad(l.sum(), mu1_)
torch.cuda.synchronize()
y()
%timeit y() | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
With this problem size and forward + backward, we achieve a speedup factor of approximately 6.5 when doing about 3 times as many iterations.
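Part of that gap comes from the early-stopping check in the implementation above: calling `err.item()` every iteration copies a scalar back to the host, which blocks until all queued CUDA kernels finish. A minimal, CPU-safe sketch of the pattern (illustrative only, not taken from either implementation):

```python
import torch

# Sketch of an early-stopped loop: on CUDA, the .item() call is the
# synchronization point, because it copies a scalar device -> host.
x = torch.ones(10, 10)
if torch.cuda.is_available():
    x = x.cuda()  # on GPU, every iteration now blocks at .item()
for i in range(100):
    x = 0.5 * x                       # kernels queue asynchronously on CUDA
    if x.abs().sum().item() < 1e-3:   # forces a device->host sync per step
        break
print("stopped after", i + 1, "iterations")
```

Dropping the per-iteration check (as the log-domain implementation does) lets the device run all iterations without round trips to the host, at the cost of doing a fixed number of steps.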
Barycenters
We can also do barycenters. Let's go 2d to do so. I use a relatively small $N$ because at the time of writing, my GPU is partially occupied by a long-running training. | N = 50
a, b, c = torch.zeros(3, N, N, device="cuda")
x = torch.linspace(-5, 5, N, device="cuda")
a[N//5:-N//5, N//5:-N//5] = 1
b[(x[None]**2+x[:,None]**2 > 4) & (x[None]**2+x[:,None]**2 < 9)] = 1
c[((x[None]-2)**2+(x[:,None]-2)**2 < 4) | ((x[None]+2)**2+(x[:,None]+2)**2 < 4)] = 1
pyplot.imshow(c.cpu(), cmap=pyplot.cm.gray_r)
coords = torch.stack([x[None, :].expand(N, N), x[:, None].expand(N, N)], 2).view(-1, 2)
dist = ((coords[None]-coords[:, None])**2).sum(-1)
dist /= dist.max()
a = (a / a.sum()).view(1, -1)
b = (b / b.sum()).view(1, -1)
c = (c / c.sum()).view(1, -1)
SinkhornOT.apply(a, b, dist, 1e-3, 200)
def get_barycenter(mu, dist, weights, lam=1e-3, N=1000):
assert mu.dim() == 2 and dist.dim() == 2 and weights.dim() == 1
bs = mu.size(0)
d1, d2 = dist.size()
assert mu.size(1) == d1 and d1 == d2 and weights.size(0) == bs
log_mu = mu.log()
log_u = torch.full_like(mu, -math.log(d1))
zeros = torch.zeros_like(log_u)
for i in range(N):
log_v = sinkstep(dist.t(), log_mu, log_u, lam)
log_u = sinkstep(dist, zeros, log_v, lam)
a = torch.sum(-weights[:, None] * log_u, dim=0, keepdim=True)
log_u += a
return (log_v[:, None, :]-dist/lam+log_u[:, :, None]).exp()
| wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
It's fast enough to just use barycenters for interpolation: | res = []
for i in torch.linspace(0, 1, 10):
res.append(get_barycenter(torch.cat([a, b, c], 0), dist, torch.tensor([i*0.9, (1-i)*0.9, 0], device="cuda"), N=100))
pyplot.figure(figsize=(15,5))
pyplot.imshow(torch.cat([r[0].sum(1).view(N, N).cpu() for r in res], 1), cmap=pyplot.cm.gray_r) | wasserstein-distance/Pytorch_Wasserstein.ipynb | t-vi/pytorch-tvmisc | mit |
Will also need to execute some raw SQL, so I'll import a helper function in order to make the results more readable: | from project import sql_to_agate | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
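The helper itself isn't shown here, so as a hypothetical stand-in (names and behavior assumed, not the project's actual code), something like the following would cover what the queries below rely on: run raw SQL and pretty-print the rows.

```python
import sqlite3

def sql_to_rows(sql, connection):
    """Run raw SQL and return (column_names, rows); a stand-in for sql_to_agate."""
    cursor = connection.execute(sql)
    columns = [d[0] for d in cursor.description]
    return columns, cursor.fetchall()

def print_table(columns, rows):
    """Minimal fixed-width printer, mimicking agate's print_table()."""
    widths = [max([len(str(c))] + [len(str(r[i])) for r in rows])
              for i, c in enumerate(columns)]
    for line in [columns] + rows:
        print(" | ".join(str(v).ljust(w) for v, w in zip(line, widths)))

# Tiny demo on an in-memory database standing in for the CAL-ACCESS dump.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filings (form_type TEXT)")
conn.executemany("INSERT INTO filings VALUES (?)", [("F460",), ("F460",), ("F497",)])
cols, rows = sql_to_rows("SELECT form_type, COUNT(*) AS n FROM filings GROUP BY 1", conn)
print_table(cols, rows)
```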
Let's start by examining the distinct values of the statement type on CVR_CAMPAIGN_DISCLOSURE_CD. And let's narrow the scope to only the Form 460 filings. | sql_to_agate(
"""
SELECT UPPER("STMT_TYPE"), COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
WHERE "FORM_TYPE" = 'F460'
GROUP BY 1
ORDER BY COUNT(*) DESC;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
Not all of these values are defined, as previously noted in our docs:
* PR might be pre-election
* QS is probably quarterly statement
* YE might be...I don't know "Year-end"?
* S is probably semi-annual
Maybe come back later and look at the actual filings. There aren't that many.
There's another similar-named column on FILER_FILINGS_CD, but this seems to be a completely different thing: | sql_to_agate(
"""
SELECT FF."STMNT_TYPE", LU."CODE_DESC", COUNT(*)
FROM "FILER_FILINGS_CD" FF
JOIN "LOOKUP_CODES_CD" LU
ON FF."STMNT_TYPE" = LU."CODE_ID"
AND LU."CODE_TYPE" = 10000
GROUP BY 1, 2;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
One of the tables that caught my eye is FILING_PERIOD_CD, which appears to have a row for each quarterly filing period: | sql_to_agate(
"""
SELECT *
FROM "FILING_PERIOD_CD"
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
Every period is described as a quarter, and the records are equally divided among them: | sql_to_agate(
"""
SELECT "PERIOD_DESC", COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
The difference between every START_DATE and END_DATE is actually a three-month interval: | sql_to_agate(
"""
SELECT "END_DATE" - "START_DATE" AS duration, COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
And they have covered every year between 1973 and 2334 (how optimistic!): | sql_to_agate(
"""
SELECT DATE_PART('year', "START_DATE")::int as year, COUNT(*)
FROM "FILING_PERIOD_CD"
GROUP BY 1
ORDER BY 1 DESC;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
Filings are linked to filing periods via FILER_FILINGS_CD.PERIOD_ID. While that column is not always populated, it is if you limit your results to just the Form 460 filings: | sql_to_agate(
"""
SELECT ff."PERIOD_ID", fp."START_DATE", fp."END_DATE", fp."PERIOD_DESC", COUNT(*)
FROM "FILER_FILINGS_CD" ff
JOIN "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
ON ff."FILING_ID" = cvr."FILING_ID"
AND ff."FILING_SEQUENCE" = cvr."AMEND_ID"
AND cvr."FORM_TYPE" = 'F460'
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
GROUP BY 1, 2, 3, 4
ORDER BY fp."START_DATE" DESC;
"""
).print_table() | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
Also, is Schwarzenegger running this cycle? Who else could be filing from so far into the future?
AAANNNNYYYway...Also need to check to make sure the join between FILER_FILINGS_CD and CVR_CAMPAIGN_DISCLOSURE_CD isn't filtering out too many filings: | sql_to_agate(
"""
SELECT cvr."FILING_ID", cvr."FORM_TYPE", cvr."FILER_NAML"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
LEFT JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
WHERE cvr."FORM_TYPE" = 'F460'
AND (ff."FILING_ID" IS NULL OR ff."FILING_SEQUENCE" IS NULL)
ORDER BY cvr."FILING_ID";
"""
).print_table(max_column_width=60) | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
So only a handful, mostly local campaigns or just nonsense test data.
So another important thing to check is how well these the dates from the filing period look-up records line up with the dates on the Form 460 filing records. It would be bad if the CVR_CAMPAIGN_DISCLOSURE_CD.FROM_DATE were before FILING_PERIOD_CD.START_DATE or if CVR_CAMPAIGN_DISCLOSURE_CD.THRU_DATE were after FILING_PERIOD_CD.END_DATE. | sql_to_agate(
"""
SELECT
CASE
WHEN cvr."FROM_DATE" < fp."START_DATE" THEN 'filing from_date before period start_date'
WHEN cvr."THRU_DATE" > fp."END_DATE" THEN 'filing thru_date after period end_date'
ELSE 'okay'
END as test,
COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
GROUP BY 1;
"""
).print_table(max_column_width=60) | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
So half of the time, the THRU_DATE on the filing is later than the END_DATE of the filing period. How big of a difference can exist between these two dates? | sql_to_agate(
"""
SELECT
cvr."THRU_DATE" - fp."END_DATE" as date_diff,
COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND cvr."THRU_DATE" > fp."END_DATE"
GROUP BY 1
ORDER BY COUNT(*) DESC;
"""
).print_table(max_column_width=60) | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
Ugh. Looks like, in most of the problem cases, the thru date can be a whole quarter later than the end date of the filing period. Let's take a closer look at these... | sql_to_agate(
"""
SELECT
cvr."FILING_ID",
cvr."AMEND_ID",
cvr."FROM_DATE",
cvr."THRU_DATE",
fp."START_DATE",
fp."END_DATE"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND 90 < cvr."THRU_DATE" - fp."END_DATE"
AND cvr."THRU_DATE" - fp."END_DATE" < 93
ORDER BY cvr."THRU_DATE" DESC;
"""
).print_table(max_column_width=60) | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
So, actually, this sort of makes sense: Quarterly filings are for three month intervals, while the semi-annual filings are for six month intervals. And FILING_PERIOD_CD only has records for three month intervals. Let's test this theory by getting the distinct CVR_CAMPAIGN_DISCLOSURE_CD.STMT_TYPE values from these records: | sql_to_agate(
"""
SELECT UPPER(cvr."STMT_TYPE"), COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
JOIN "FILING_PERIOD_CD" fp
ON ff."PERIOD_ID" = fp."PERIOD_ID"
WHERE cvr."FORM_TYPE" = 'F460'
AND 90 < cvr."THRU_DATE" - fp."END_DATE"
AND cvr."THRU_DATE" - fp."END_DATE" < 93
GROUP BY 1
ORDER BY COUNT(*) DESC;
"""
).print_table(max_column_width=60) | calaccess-exploration/decoding-filing-periods.ipynb | california-civic-data-coalition/python-calaccess-notebooks | mit |
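The 90-to-93-day window in those filters is just the length of a calendar quarter. A quick sanity check of the date arithmetic, with hypothetical dates:

```python
from datetime import date

# Hypothetical dates: a semi-annual Form 460 covering Jan-Jun that is linked
# to the Jan-Mar quarterly period record.
period_end = date(2014, 3, 31)        # END_DATE of the linked quarterly period
semi_annual_thru = date(2014, 6, 30)  # THRU_DATE of the semi-annual filing
gap = (semi_annual_thru - period_end).days
print(gap)  # 91 days: inside the 90 < diff < 93 window used above
```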
Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow.
This allows TNP to more closely follow the NumPy standard. | tnp.experimental_enable_numpy_behavior() | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
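For reference, the promotion semantics being opted into are NumPy's own; a quick illustration with plain NumPy of the behavior TNP then follows (illustrative only, not from the guide):

```python
import numpy as np

x = np.asarray([1, 2, 3])   # integer array
y = x + 1.5                 # NumPy promotes to float64 rather than raising
print(y.dtype)              # float64
```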
To test our models we will use the Boston housing prices regression dataset. | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def evaluate_model(model: keras.Model):
[loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
print("Mean absolute percent error before training: ", percent_error)
model.fit(x_train, y_train, epochs=200, verbose=0)
[loss, percent_error] = model.evaluate(x_test, y_test, verbose=0)
print("Mean absolute percent error after training:", percent_error)
| examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
Subclassing keras.Model with TNP
The most flexible way to make use of the Keras API is to subclass the
keras.Model class. Subclassing the Model class
gives you the ability to fully customize what occurs in the training loop. This makes
subclassing Model a popular option for researchers.
In this example, we will implement a Model subclass that performs regression over the
Boston housing dataset using the TNP API. Note that differentiation and gradient
descent are handled automatically when using the TNP API alongside Keras.
First let's define a simple TNPForwardFeedRegressionNetwork class. |
class TNPForwardFeedRegressionNetwork(keras.Model):
def __init__(self, blocks=None, **kwargs):
super(TNPForwardFeedRegressionNetwork, self).__init__(**kwargs)
if not isinstance(blocks, list):
raise ValueError(f"blocks must be a list, got blocks={blocks}")
self.blocks = blocks
self.block_weights = None
self.biases = None
def build(self, input_shape):
current_shape = input_shape[1]
self.block_weights = []
self.biases = []
for i, block in enumerate(self.blocks):
self.block_weights.append(
self.add_weight(
shape=(current_shape, block), trainable=True, name=f"block-{i}"
)
)
self.biases.append(
self.add_weight(shape=(block,), trainable=True, name=f"bias-{i}")
)
current_shape = block
self.linear_layer = self.add_weight(
shape=(current_shape, 1), name="linear_projector", trainable=True
)
def call(self, inputs):
activations = inputs
for w, b in zip(self.block_weights, self.biases):
activations = tnp.matmul(activations, w) + b
# ReLu activation function
activations = tnp.maximum(activations, 0.0)
return tnp.matmul(activations, self.linear_layer)
| examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
Just like with any other Keras model we can utilize any supported optimizer, loss,
metrics or callbacks that we want.
Let's see how the model performs! | model = TNPForwardFeedRegressionNetwork(blocks=[3, 3])
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
evaluate_model(model) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
Great! Our model seems to be effectively learning to solve the problem at hand.
We can also write our own custom loss function using TNP. |
def tnp_mse(y_true, y_pred):
return tnp.mean(tnp.square(y_true - y_pred), axis=0)
keras.backend.clear_session()
model = TNPForwardFeedRegressionNetwork(blocks=[3, 3])
model.compile(
optimizer="adam",
loss=tnp_mse,
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
evaluate_model(model) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
Implementing a Keras Layer Based Model with TNP
If desired, TNP can also be used in a layer-oriented Keras code structure. Let's
implement the same model, but using a layered approach! |
def tnp_relu(x):
return tnp.maximum(x, 0)
class TNPDense(keras.layers.Layer):
def __init__(self, units, activation=None):
super().__init__()
self.units = units
self.activation = activation
def build(self, input_shape):
self.w = self.add_weight(
name="weights",
shape=(input_shape[1], self.units),
initializer="random_normal",
trainable=True,
)
self.bias = self.add_weight(
name="bias",
shape=(self.units,),
initializer="random_normal",
trainable=True,
)
def call(self, inputs):
outputs = tnp.matmul(inputs, self.w) + self.bias
if self.activation:
return self.activation(outputs)
return outputs
def create_layered_tnp_model():
return keras.Sequential(
[
TNPDense(3, activation=tnp_relu),
TNPDense(3, activation=tnp_relu),
TNPDense(1),
]
)
model = create_layered_tnp_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
You can also seamlessly switch between TNP layers and native Keras layers! |
def create_mixed_model():
return keras.Sequential(
[
TNPDense(3, activation=tnp_relu),
# The model will have no issue using a normal Dense layer
layers.Dense(3, activation="relu"),
# ... or switching back to tnp layers!
TNPDense(1),
]
)
model = create_mixed_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
The Keras API offers a wide variety of layers. The ability to use them alongside NumPy
code can be a huge time saver in projects.
Distribution Strategy
TensorFlow NumPy and Keras integrate with
TensorFlow Distribution Strategies.
This makes it simple to perform distributed training across multiple GPUs,
or even an entire TPU Pod. | gpus = tf.config.list_logical_devices("GPU")
if gpus:
strategy = tf.distribute.MirroredStrategy(gpus)
else:
    # We can fall back to a no-op CPU strategy.
strategy = tf.distribute.get_strategy()
print("Running with strategy:", str(strategy.__class__.__name__))
with strategy.scope():
model = create_layered_tnp_model()
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.build((None, 13,))
model.summary()
evaluate_model(model) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
TensorBoard Integration
One of the many benefits of using the Keras API is the ability to monitor training
through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily
leverage TensorBoard. | keras.backend.clear_session() | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |
To load the TensorBoard from a Jupyter notebook, you can run the following magic:
%load_ext tensorboard | models = [
(TNPForwardFeedRegressionNetwork(blocks=[3, 3]), "TNPForwardFeedRegressionNetwork"),
(create_layered_tnp_model(), "layered_tnp_model"),
(create_mixed_model(), "mixed_model"),
]
for model, model_name in models:
model.compile(
optimizer="adam",
loss="mean_squared_error",
metrics=[keras.metrics.MeanAbsolutePercentageError()],
)
model.fit(
x_train,
y_train,
epochs=200,
verbose=0,
callbacks=[keras.callbacks.TensorBoard(log_dir=f"logs/{model_name}")],
) | examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb | keras-team/keras-io | apache-2.0 |