markdown | code | path | repo_name | license
---|---|---|---|---
stringlengths 0-37k | stringlengths 1-33.3k | stringlengths 8-215 | stringlengths 6-77 | stringclasses 15 values
Visualize with Plotly:
We make three diagrams:
1) a horizontal bar plot comparing the overall number of papers per database
2) a vertical bar plot differentiating by time and database
3) a vertical bar plot differentiating by time and database with a logarithmic y-scale (allows for better inspection of smaller numbers) | #set data for horizontal bar plot:
data = [go.Bar(
x=[pd.DataFrame.sum(df2)['wos'],pd.DataFrame.sum(df2)['scopus'],pd.DataFrame.sum(df2)['ARTICLE_TITLE']],
y=['Web of Science', 'Scopus', 'Total'],
orientation = 'h',
marker=dict(
color=colorlist
)
)]
#py.plot(data, filename='big_data_papers_horizontal') #for uploading to plotly
py.iplot(data, filename='horizontal-bar')
#set data for grouped bar plot:
trace1 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['wos'],
name='Web of Science',
marker=dict(
color=colorlist[0]
)
)
trace2 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['scopus'],
name='Scopus',
marker=dict(
color=colorlist[1]
)
)
trace3 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['ARTICLE_TITLE'],
name='All Papers',
marker=dict(
color=colorlist[2]
)
)
data = [trace1, trace2,trace3]
#set layout for grouped bar chart with normal y scale:
layout_no_log = go.Layout(
title='Big data papers over time',
barmode='group',
xaxis=dict(
title='year',
titlefont=dict(
family='Arial, sans-serif',
size=14,
color='lightgrey'
),
tickfont=dict(
family='Arial, sans-serif',
size=10,
color='black'
),
showticklabels=True,
dtick=1,
tickangle=45,
)
)
#plot:
fig1 = go.Figure(data=data, layout=layout_no_log)
py.iplot(fig1, filename='big_data_papers_no_log')
#set layout for grouped bar chart with logarithmic y scale:
layout_log = go.Layout(
title='Big data papers over time (log y-scale)',
barmode='group',
xaxis=dict(
title='year',
titlefont=dict(
family='Arial, sans-serif',
size=14,
color='lightgrey'
),
tickfont=dict(
family='Arial, sans-serif',
size=10,
color='black'
),
showticklabels=True,
dtick=1,
tickangle=45,
),
yaxis=dict(
type='log'
)
)
fig2 = go.Figure(data=data, layout=layout_log)
py.iplot(fig2, filename='big_data_papers_log') | 1-number of papers over time/Creating overview bar-plots.ipynb | MathiasRiechert/BigDataPapers | gpl-3.0 |
Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data. | import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse | 0.19/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Process MEG data | data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('meg', 'eog'), baseline=baseline, reject=reject) | 0.19/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Compute the evoked response
Let's just use MEG channels for simplicity. | evoked = epochs.average().pick('meg')
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
# Show whitening
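# Note: noise_cov is assumed to have been computed in an earlier cell of the full
# tutorial (e.g. with mne.compute_covariance on the epochs); it is not defined in this excerpt.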
evoked.plot_white(noise_cov, time_unit='s')
del epochs # to save memory | 0.19/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Note that there is a relationship between the orientation of the dipoles and
the surface of the cortex. For this reason, we do not use an inflated
cortical surface for visualization, but the original surface used to define
the source space.
For more information about dipole orientations, see
tut-dipole-orientations.
Now let's look at each solver: | for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),
('sLORETA', [3, 5, 7]),
('eLORETA', [0.75, 1.25, 1.75]),)):
surfer_kwargs['clim']['lims'] = lims
stc = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None)
brain = stc.plot(figure=mi, **surfer_kwargs)
brain.add_text(0.1, 0.9, method, 'title', font_size=20)
del stc | 0.19/_downloads/ff83425ee773d1d588a6994e5560c06c/plot_mne_dspm_source_localization.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
If the p-value (significance probability) is very small, it means that, under the assumption that the null hypothesis is true, the computed test statistic is very unlikely to occur.
To return to the earlier examples: if the p-value computed from a blood test used to test the null hypothesis "this person has a certain disease" is 0.02%, it means that among patients who actually have the disease, only 0.02% have blood test values lower than this patient's value. Likewise, if the p-value computed from an exam score used to test the null hypothesis "this student is a top student" is 0.3%, it means that when the scores of actual top students are analyzed, scores this low (even allowing for careless mistakes on the exam) occur in no more than 0.3% of cases.
Therefore, when the p-value comes out as such a very small number, the corresponding null hypothesis can be rejected.
Significance Level and Critical Value
The threshold used to decide, for a computed p-value, whether the null hypothesis is rejected or accepted is called the level of significance. Commonly used significance levels are 1%, 5%, and 10%.
Just as the p-value can be computed from a test statistic using the probability density function (or the cumulative distribution function), one can go the other way and compute the test statistic that corresponds to a particular p-value. The test statistic computed for the significance level is called the critical value.
If the critical value is known, the decision to reject or accept can also be made by comparing the test statistic directly with the critical value, instead of comparing the p-value with the significance level.
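As a minimal sketch (my own illustration, not part of the original text), the critical value can be obtained by inverting the cumulative distribution function (scipy's ppf) at the significance level and then compared directly with the test statistic:
```python
# Hypothetical illustration; the test statistic value below is made up.
from scipy import stats

alpha = 0.05                             # 5% significance level
z_crit = stats.norm.ppf(1 - alpha)       # one-sided critical value of N(0, 1), about 1.645
test_statistic = 2.1                     # made-up test statistic
print(z_crit)
print(test_statistic > z_crit)           # True -> reject the null hypothesis at the 5% level
```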
Hypothesis Test Examples
Now let us solve the problems raised earlier.
Problem 1
<blockquote>
A coin was tossed 15 times and came up heads 12 times. Is this an unbiased, fair coin?
</blockquote>
If heads is represented by the number 1 and tails by the number 0, this problem can be viewed as a test of the parameter of a Bernoulli random variable. The null hypothesis to be tested is that the Bernoulli distribution parameter is $\theta = 0.5$.
The test statistic for this problem is the number of heads out of 15 tosses, which is 12, and this value follows a binomial distribution with N = 15. Computing the p-value for this case gives 1.76%.
$$ \text{Bin}(n \geq 12;N=15) = 0.017578125 $$ | 1 - sp.stats.binom(15, 0.5).cdf(12-1) | 12. ์ถ์ ๋ฐ ๊ฒ์ /02. ๊ฒ์ ๊ณผ ์ ์ ํ๋ฅ .ipynb | zzsza/Datascience_School | mit |
Since this value is smaller than 5% but larger than 1%, the null hypothesis can be rejected at a 5% significance level (i.e., we can say the coin is not fair), but it cannot be rejected at a 1% significance level (i.e., we cannot say the coin is not fair).
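As a tiny sketch of the decision rule (mine, not from the original text), the rejection decision is just a comparison of the p-value with the chosen significance level:
```python
# Hypothetical illustration of the decision rule for Problem 1.
p_value = 0.017578125
print(p_value < 0.05)   # True  -> reject the null hypothesis at the 5% level
print(p_value < 0.01)   # False -> cannot reject at the 1% level
```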
Problem 2
<blockquote>
A trader's weekly returns are as follows:<br>
-2.5%, -5%, 4.3%, -3.7%, -5.6% <br>
Is this trader someone who will make money, or someone who will lose money?
</blockquote>
If we assume the returns follow a normal distribution, the test statistic for this trader is computed as follows.
$$ t = \dfrac{m}{\frac{s}{\sqrt{N}}} = -1.4025 $$
The p-value for this test statistic is 11.67%.
$$ F(t=-1.4025;4) = 0.1167 $$ | x = np.array([-0.025, -0.05, 0.043, -0.037, -0.056])
t = x.mean()/x.std(ddof=1)*np.sqrt(len(x))
t, sp.stats.t(df=4).cdf(t) | 12. ์ถ์ ๋ฐ ๊ฒ์ /02. ๊ฒ์ ๊ณผ ์ ์ ํ๋ฅ .ipynb | zzsza/Datascience_School | mit |
Incremental
Incremental returns a sequence of numbers that increase in regular steps. | g = Incremental(start=200, step=4)
print_generated_sequence(g, num=20, seed=12345) | notebooks/v6/Primitive_generators.ipynb | maxalbert/tohu | mit |
Integer
Integer returns a random integer between low and high (both inclusive). | g = Integer(low=100, high=200)
print_generated_sequence(g, num=20, seed=12345) | notebooks/v6/Primitive_generators.ipynb | maxalbert/tohu | mit |
Timestamp | g = Timestamp(start="2018-01-01 11:22:33", end="2018-02-13 12:23:34")
type(next(g))
print_generated_sequence(g, num=10, seed=12345, sep='\n')
g = Timestamp(start="2018-01-01 11:22:33", end="2018-02-13 12:23:34").strftime("%-d %b %Y, %H:%M (%a)")
type(next(g))
print_generated_sequence(g, num=10, seed=12345, sep='\n') | notebooks/v6/Primitive_generators.ipynb | maxalbert/tohu | mit |
Date | g = Date(start="2018-01-01", end="2018-02-13")
type(next(g))
print_generated_sequence(g, num=10, seed=12345, sep='\n')
g = Date(start="2018-01-01", end="2018-02-13").strftime("%-d %b %Y")
type(next(g))
print_generated_sequence(g, num=10, seed=12345, sep='\n') | notebooks/v6/Primitive_generators.ipynb | maxalbert/tohu | mit |
Ensure the two .xls data files are in the same folder as the Jupyter notebook
Before we proceed, let's make sure the two .xls data files are in the same folder as our running Jupyter notebook. We'll use a Jupyter notebook magic command to print out the contents of the folder that our notebook is in. The %ls command lists the contents of the current folder. | %ls | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
We can see our Jupyter notebook stress_strain_curve_with_python.ipynb as well as the two .xls data files aluminum6061.xls and steel1045.xls are in our current folder.
Now that we are sure the two .xls data files are in the same folder as our notebook, we can import the data in the two .xls files using Pandas' pd.read_excel() function. The data from the two Excel files will be stored in two Pandas dataframes called steel_df and al_df. | steel_df = pd.read_excel("steel1045.xls")
al_df = pd.read_excel("aluminum6061.xls") | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
We can use Pandas .head() method to view the first five rows of each dataframe. | steel_df.head()
al_df.head() | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
We see a number of columns in each dataframe. The columns we are interested in are FORCE, EXT, and CH5. Below is a description of what these columns mean.
FORCE Force measurements from the load cell in pounds (lb), force in pounds
EXT Extension measurements from the mechanical extensometer in percent (%), strain in percent
CH5 Extension readings from the laser extensometer in percent (%), strain in percent
Create stress and strain series from the FORCE, EXT, and CH5 columns
Next we'll create four Pandas series from the ['CH5'] and ['FORCE'] columns of our al_df and steel_df dataframes. The equations below show how to calculate stress, $\sigma$, and strain, $\epsilon$, from force $F$ and cross-sectional area $A$. The cross-sectional area $A$ is calculated with the formula for the area of a circle. For the steel and aluminum samples we tested, the diameter $d$ was $0.506 \ in$.
$$ \sigma = \frac{F}{A_0} $$
$$ F \ (kip) = F \ (lb) \times 0.001 $$
$$ A_0 = \pi (d/2)^2 $$
$$ d = 0.506 \ in $$
$$ \epsilon \ (unitless) = \epsilon \ (\%) \times 0.01 $$ | strain_steel = steel_df['CH5']*0.01
d_steel = 0.506 # test bar diameter = 0.506 inches
stress_steel = (steel_df['FORCE']*0.001)/(np.pi*((d_steel/2)**2))
strain_al = al_df['CH5']*0.01
d_al = 0.506 # test bar diameter = 0.506 inches
stress_al = (al_df['FORCE']*0.001)/(np.pi*((d_al/2)**2)) | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
Build a quick plot
Now that we have the data from the tensile test in four series, we can build a quick plot using Matplotlib's plt.plot() method. The first x,y pair we pass to plt.plot() is strain_steel,stress_steel and the second x,y pair we pass in is strain_al,stress_al. The command plt.show() shows the plot. | plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.show() | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
We see a plot with two lines. One line represents the steel sample and one line represents the aluminum sample. We can improve our plot by adding axis labels with units, a title and a legend.
Add axis labels, title and a legend
Axis labels, a title and a legend are added to our plot with four Matplotlib methods. The methods are summarized in the table below.
| Matplotlib method | description | example |
| --- | --- | --- |
| plt.xlabel() | x-axis label | plt.xlabel('strain (in/in)') |
| plt.ylabel() | y-axis label | plt.ylabel('stress (ksi)') |
| plt.title() | plot title | plt.title('Stress Strain Curve') |
| plt.legend() | legend | plt.legend(['steel','aluminum']) |
The code cell below shows these four methods in action and produces a plot. | plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.xlabel('strain (in/in)')
plt.ylabel('stress (ksi)')
plt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')
plt.legend(['Steel 1045','Aluminum 6061'])
plt.show() | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
The plot we see has two lines, axis labels, a title and a legend. Next we'll save the plot to a .png image file.
Save the plot as a .png image
Now we can save the plot as a .png image using Matplotlib's plt.savefig() method. The code cell below builds the plot and saves an image file called stress-strain_curve.png. The argument dpi=300 inside of Matplotlib's plt.savefig() method specifies the resolution of our saved image. The image stress-strain_curve.png will be saved in the same folder as our running Jupyter notebook. | plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.xlabel('strain (in/in)')
plt.ylabel('stress (ksi)')
plt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')
plt.legend(['Steel 1045','Aluminum 6061'])
plt.savefig('stress-strain_curve.png', dpi=300, bbox_inches='tight')
plt.show() | content/code/matplotlib_plots/stress_strain_curves/stress_strain_curve_with_python.ipynb | ProfessorKazarinoff/staticsite | gpl-3.0 |
Label Network Embeddings
The label network embeddings approaches require a working tensorflow installation and the OpenNE library. To install them, run the following code:
bash
pip install networkx tensorflow
git clone https://github.com/thunlp/OpenNE/
pip install -e OpenNE/src
For an example we will use the LINE embedding method, one of the most efficient and well-performing state-of-the-art approaches; for the meaning of the parameters consult the OpenNE documentation. We select order = 3, which means that the method will use both first- and second-order proximities between labels for embedding. We select a dimension of 5 times the number of labels, as linear embeddings tend to need more dimensions for best performance, normalize the label weights to maintain normalized distances in the network, and aggregate label embeddings per sample by summation, which is a classical approach. | from skmultilearn.embedding import OpenNetworkEmbedder
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
graph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)
openne_line_params = dict(batch_size=1000, order=3)
embedder = OpenNetworkEmbedder(
graph_builder,
'LINE',
dimension = 5*y_train.shape[1],
aggregation_function = 'add',
normalize_weights=True,
param_dict = openne_line_params
) | docs/source/multilabelembeddings.ipynb | scikit-multilearn/scikit-multilearn | bsd-2-clause |
We now need to select a regressor and a classifier; we use random forest regressors with MLkNN, which is a well-working combination often used for multi-label embedding: | from skmultilearn.embedding import EmbeddingClassifier
from sklearn.ensemble import RandomForestRegressor
from skmultilearn.adapt import MLkNN
clf = EmbeddingClassifier(
embedder,
RandomForestRegressor(n_estimators=10),
MLkNN(k=5)
)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test) | docs/source/multilabelembeddings.ipynb | scikit-multilearn/scikit-multilearn | bsd-2-clause |
Cost-Sensitive Label Embedding with Multidimensional Scaling
CLEMS is another well-performing method for multi-label embeddings. It uses weighted multi-dimensional scaling to embed a cost matrix of unique label combinations. The cost matrix contains the cost of mistaking a given label combination for another, so real-valued cost functions work better than discrete ones. The is_score parameter tells the embedder whether the cost function is a score (the higher the better) or a loss (the lower the better). Additional parameters can also be passed to the weighted scaler. The most efficient number of embedding dimensions is equal to the number of labels, and is thus enforced here. | from skmultilearn.embedding import CLEMS, EmbeddingClassifier
from sklearn.ensemble import RandomForestRegressor
from skmultilearn.adapt import MLkNN
dimensional_scaler_params = {'n_jobs': -1}
clf = EmbeddingClassifier(
CLEMS(metrics.jaccard_similarity_score, is_score=True, params=dimensional_scaler_params),
RandomForestRegressor(n_estimators=10, n_jobs=-1),
MLkNN(k=1),
regressor_per_dimension= True
)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test) | docs/source/multilabelembeddings.ipynb | scikit-multilearn/scikit-multilearn | bsd-2-clause |
Scikit-learn based embedders
Any scikit-learn embedder can be used for multi-label classification embeddings with scikit-multilearn; just select one and try it. Here's a spectral embedding approach with a 10-dimensional embedding space: | from skmultilearn.embedding import SKLearnEmbedder, EmbeddingClassifier
from sklearn.manifold import SpectralEmbedding
from sklearn.ensemble import RandomForestRegressor
from skmultilearn.adapt import MLkNN
clf = EmbeddingClassifier(
SKLearnEmbedder(SpectralEmbedding(n_components = 10)),
RandomForestRegressor(n_estimators=10),
MLkNN(k=5)
)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test) | docs/source/multilabelembeddings.ipynb | scikit-multilearn/scikit-multilearn | bsd-2-clause |
On the GeoIDE Wiki they give some example CSW queries to illustrate the range of possibilities. Here's one that searches for PacIOOS WMS services: | HTML('<iframe src=https://geo-ide.noaa.gov/wiki/index.php?title=ESRI_Geoportal#PacIOOS_WAF width=950 height=350></iframe>')
| CSW/CSW_ISO_Queryables-IOOS.ipynb | rsignell-usgs/notebook | mit |
Also on the GEO-IDE Wiki we find the list of UUIDs for each region/provider, which we turn into a dictionary here: | regionids = {'AOOS': '{1E96581F-6B73-45AD-9F9F-2CC3FED76EE6}',
'CENCOOS': '{BE483F24-52E7-4DDE-909F-EE8D4FF118EA}',
'CARICOOS': '{0C4CA8A6-5967-4590-BFE0-B8A21CD8BB01}',
'GCOOS': '{E77E250D-2D65-463C-B201-535775D222C9}',
'GLOS': '{E4A9E4F4-78A4-4BA0-B653-F548D74F68FA}',
'MARACOOS': '{A26F8553-798B-4B1C-8755-1031D752F7C2}',
'NANOOS': '{C6F4754B-30DC-459E-883A-2AC79DA977AB}',
'NAVY': '{FB160233-7C3B-4841-AD4B-EB5AD843E743}',
'NDBC': '{B3F50F38-3DE4-4EC9-ABF8-955887829FCC}',
'NERACOOS': '{E13C88D9-3FF3-4232-A379-84B6A1D7083E}',
'NOS/CO-OPS': '{2F58127E-A139-4A45-83F2-9695FB704306}',
'PacIOOS': '{78C0463E-2FCE-4AB2-A9C9-6A34BF261F52}',
'SCCOOS': '{20A3408F-9EC4-4B36-8E10-BBCDB1E81BDF}',
'SECOORA': '{E796C954-B248-4118-896C-42E6FAA6EDE9}',
'USACE': '{4C080A33-F3C3-4F27-AF16-F85BF3095C41}',
'USGS/CMGP': '{275DFB94-E58A-4157-8C31-C72F372E72E}'}
[op.name for op in csw.operations]
def dateRange(start_date='1900-01-01',stop_date='2100-01-01',constraint='overlaps'):
if constraint == 'overlaps':
start = fes.PropertyIsLessThanOrEqualTo(propertyname='startDate', literal=stop_date)
stop = fes.PropertyIsGreaterThanOrEqualTo(propertyname='endDate', literal=start_date)
elif constraint == 'within':
start = fes.PropertyIsGreaterThanOrEqualTo(propertyname='startDate', literal=start_date)
stop = fes.PropertyIsLessThanOrEqualTo(propertyname='endDate', literal=stop_date)
return start,stop
# get specific ServiceType URL from records
def service_urls(records,service_string='urn:x-esri:specification:ServiceType:odp:url'):
urls=[]
for key,rec in records.iteritems():
#create a generator object, and iterate through it until the match is found
#if not found, gets the default value (here "none")
url = next((d['url'] for d in rec.references if d['scheme'] == service_string), None)
if url is not None:
urls.append(url)
return urls
# Perform the CSW query, using Kyle's cool new filters on ISO queryables
# find all datasets in a bounding box and temporal extent that have
# specific keywords and also can be accessed via OPeNDAP
box=[-89.0, 30.0, -87.0, 31.0]
start_date='2013-08-21'
stop_date='2013-08-30'
std_name = 'temperature'
service_type='SOS'
region_id = regionids['GCOOS']
# convert User Input into FES filters
start,stop = dateRange(start_date,stop_date,constraint='overlaps')
bbox = fes.BBox(box)
keywords = fes.PropertyIsLike(propertyname='anyText', literal=std_name)
serviceType = fes.PropertyIsLike(propertyname='apiso:ServiceType', literal=('*%s*' % service_type))
siteid = fes.PropertyIsEqualTo(propertyname='sys.siteuuid', literal=region_id)
# try simple query with serviceType and keyword first
csw.getrecords2(constraints=[[serviceType,keywords]],maxrecords=15,esn='full')
for rec,item in csw.records.iteritems():
print item.title | CSW/CSW_ISO_Queryables-IOOS.ipynb | rsignell-usgs/notebook | mit |
The filters can be passed as a list to getrecords2, with AND or OR implied by syntax:
<pre>
[a,b,c] --> a || b || c
[[a,b,c]] --> a && b && c
[[a,b],[c],[d],[e]] or [[a,b],c,d,e] --> (a && b) || c || d || e
</pre> | # try simple query with serviceType and keyword first
csw.getrecords2(constraints=[[serviceType,keywords]],maxrecords=15,esn='full')
for rec,item in csw.records.iteritems():
print item.title
# check out references for one of the returned records
csw.records['NOAA.NOS.CO-OPS SOS'].references
# filter for GCOOS SOS data
csw.getrecords2(constraints=[[keywords,serviceType,siteid]],maxrecords=15,esn='full')
for rec,item in csw.records.iteritems():
print item.title
# filter for SOS data in BBOX
csw.getrecords2(constraints=[[keywords,serviceType,bbox]],maxrecords=15,esn='full')
for rec,item in csw.records.iteritems():
print item.title
urls = service_urls(csw.records,service_string='urn:x-esri:specification:ServiceType:sos:url')
print "\n".join(urls)
urls = [url for url in urls if 'oostethys' not in url]
print "\n".join(urls)
sos = SensorObservationService(urls[0])
getob = sos.get_operation_by_name('getobservation')
print getob.parameters
off = sos.offerings[1]
offerings = [off.name]
responseFormat = off.response_formats[0]
observedProperties = [off.observed_properties[0]]
print sos.offerings[0] | CSW/CSW_ISO_Queryables-IOOS.ipynb | rsignell-usgs/notebook | mit |
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int. | def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch. | def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn() | from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
rnn_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
return tf.nn.dynamic_rnn(tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)]), rnn_inputs, source_sequence_length, dtype=tf.float32)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode |
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
taining_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, taining_helper, encoder_state, output_layer)
output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
return output[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode | def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
taining_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, taining_helper, encoder_state, output_layer)
output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
return output[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference. | def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope('decoder'):
dec_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope('decoder', reuse=True):
dec_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return dec_train, dec_infer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. | def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
_, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length,
source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
return decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sentence_length,
rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, dec_embedding_size)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement | # Number of Epochs
epochs = 3
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.75
display_step = 20 | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Build the Graph
Build the graph using the neural network you implemented. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
| language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Batch and pad the source and target sequences | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
| language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved') | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id. | def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq) | language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Translate
This will translate translate_sentence from English to French. | translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
| language-translation/dlnd_language_translation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
We will create a Twitter API handle for fetching data
In order to qualify for a Twitter API handle you need to be a phone-verified Twitter user.
Go to the Twitter settings page twitter.com/settings/account
Choose the Mobile tab on the left pane, then enter your phone number and verify it by OTP
Now you should be able to register a new API handle for your account for programmatic tweeting
Now go to the Twitter Application Management page
Click the Create New App button
Enter a unique app name (global namespace); you might have to try a few times to get one that is available
Description can be anything you wish
Website can be some <yourname>.com; you don't really have to own the domain
Leave the callback URL empty and agree to the terms and conditions unconditionally
Click Create
You can find the API credentials in the Application Management console
Choose the app and go to the Keys and Access Tokens tab to get API_KEY, API_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET
RUN THE CODE BLOCK BELOW ONLY THE FIRST TIME YOU CONFIGURE THE TWITTER API | # make sure to exclude this folder in git ignore
path_to_cred_file = path.abspath('../restricted/api_credentials.p')
# we will store twitter handle credentials in a pickle file (object de-serialization)
# code for pickling credentials need to be run only once during initial configuration
# fill the following dictionary with your twitter credentials
twitter_credentials = {'api_key':'API_KEY', \
'api_secret':'API_SECRET', \
'access_token':'ACCESS_TOKEN', \
'access_token_secret':'ACCESS_TOKEN_SECRET'}
pickle.dump(twitter_credentials,open(path_to_cred_file, "wb"))
print("Pickled credentials saved to :\n"+path_to_cred_file+"\n")
print("\n".join(["{:20} : {}".format(key,value) for key,value in twitter_credentials.items()])) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
From the second run onward you can load the credentials securely from the stored file
If you want to check the credentials, uncomment the last line in the code block below | # make sure to exclude this folder in git ignore
path_to_cred_file = path.abspath('../restricted/api_credentials.p')
# load saved twitter credentials
twitter_credentials = pickle.load(open(path_to_cred_file,'rb'))
#print("\n".join(["{:20} : {}".format(key,value) for key,value in twitter_credentials.items()])) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Creating an Open Auth Instance
With the created API key and token we will create an OAuth instance to authenticate our Twitter account.
If you feel that your Twitter API credentials have been compromised, you can just generate a new access token and secret pair; the access token is like an RSA key that authenticates your API key. | # lets create an open authentication handler and initialize it with our twitter handlers api key
auth = tweepy.OAuthHandler(twitter_credentials['api_key'],twitter_credentials['api_secret'])
# access token is like password for the api key,
auth.set_access_token(twitter_credentials['access_token'],twitter_credentials['access_token_secret']) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Twitter API Handle
Tweepy comes with a Twitter API wrapper class called 'API'; passing the OAuth instance to it creates a live Twitter handle to our account.
ATTENTION: Please beware that this is a handle to your own account, not a pseudo account; if you tweet something with this it will be your tweet. This is the reason I took care not to expose my API credentials: if you expose them, anyone can mess up your Twitter account.
Let's open the Twitter handle and print the name and location of the Twitter account owner; you should be seeing your name. | # lets create an instance of twitter api wrapper
api = tweepy.API(auth)
# lets do some self check
user = api.me()
print("{}\n{}".format(user.name,user.location)) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Inspiration for this Project
I drew inspiration for this project from the ongoing issue around traditional bull fighting, AKA Jallikattu. Here I'm trying to read the pulse of the people based on tweets.
We are searching for the keyword Jallikattu in Twitter's public tweets, and from the returned search result we are taking 150 tweets to do our sentiment analysis. Please don't go for a large number of tweets; there is an upper limit of 450 tweets. For more on API rate limits check out the Twitter Developer Doc. | # now lets get some data to check the sentiment on it
# lets search for key word jallikattu and check the sentiment on it
query = 'jallikattu'
tweet_cnt = 150
peta_tweets = api.search(q=query,count=tweet_cnt) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Processing Tweets
Once we get the tweets, we will iterate through them and do the following operations
1. Pass the tweet text to TextBlob to process the tweet
2. Processed tweets will have two attributes
* Polarity, which is a numerical value between -1 and 1; the sentiment of the text can be inferred from this.
* Subjectivity, which shows whether the text is stated as a fact or an opinion; its value ranges from 0 to 1
3. For each tweet we will find the sentiment of the text (positive, neutral or negative) and update a counter variable accordingly; this counter is later plotted as a pie chart.
4. Then we pass the tweet text to a regular expression to extract hash tags, which we later use to create an awesome word cloud visualization. | # lets go over the tweets
sentiment_polarity = [0,0,0]
tags = []
for tweet in peta_tweets:
processed_tweet = textblob.TextBlob(tweet.text)
polarity = processed_tweet.sentiment.polarity
upd_index = 0 if polarity > 0 else (1 if polarity == 0 else 2)
sentiment_polarity[upd_index] = sentiment_polarity[upd_index]+1
tags.extend(re.findall(r"#(\w+)", tweet.text))
#print(tweet.text)
#print(processed_tweet.sentiment,'\n')
sentiment_label = ['Positive','Neutral','Negative']
#print("\n".join(["{:8} tweets count {}".format(s,val) for s,val in zip(sentiment_label,sentiment_polarity)]))
# plotting sentiment pie chart
colors = ['yellowgreen', 'gold', 'coral']
# lets explode the positive sentiment for visual appeal
explode = (0.1, 0, 0)
plt.pie(sentiment_polarity,labels=sentiment_label,colors=colors,explode=explode,shadow=True,autopct='%1.1f%%')
plt.axis('equal')
plt.legend(bbox_to_anchor=(1.3,1))
plt.title('Twitter Sentiment on \"'+query+'\"')
plt.show() | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Sentiment Analysis
We can see that the majority is neutral, which is contributed by
1. Tweets with media only (photo, video)
2. Tweets in a regional language. TextBlob does not work on our Indian languages.
3. Some tweets contain only stop words, or words that do not give any positive or negative perspective.
4. Polarity is calculated from the number of positive words like "great, awesome, etc." or negative words like "hate, bad, etc." (see the short illustration below)
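A small hypothetical illustration (mine, not from the notebook) of how TextBlob scores sentences by the sentiment-bearing words they contain; the exact polarity values may differ slightly between TextBlob versions:
```python
# Hypothetical example sentences; polarity is driven by sentiment words only.
import textblob

print(textblob.TextBlob("jallikattu is a great tradition").sentiment.polarity)   # positive (> 0), driven by "great"
print(textblob.TextBlob("banning jallikattu is bad").sentiment.polarity)         # negative (< 0), driven by "bad"
print(textblob.TextBlob("jallikattu protest at the beach").sentiment.polarity)   # 0.0, no sentiment words
```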
One more point to note is that TextBlob is not a complete NLP package; it does not do context-aware analysis. Such sophisticated deep learning abilities are available only with the likes of Google. | # lets process the hash tags in the tweets and make a word cloud visualization
# normalizing tags by converting all tags to lowercase
tags = [t.lower() for t in tags]
# get unique count of tags to take count for each
uniq_tags = list(set(tags))
tag_count = []
# for each unique hash tag take frequency of occurance
for tag in uniq_tags:
tag_count.append((tag,tags.count(tag)))
# lets print the top five tags
tag_count =sorted(tag_count,key=lambda x:-x[1])[:5]
print("\n".join(["{:8} {}".format(tag,val) for tag,val in tag_count])) | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Simple Word Cloud with Twitter #tags
Let us visualize the tags used for Jallikattu by creating a tag cloud. The wordcloud package takes a single string of tags separated by whitespace. We will concatenate the tags and pass them to the generate method to create a tag cloud image. | # we will create a vivid tag cloud visualization
# creating a single string of texts from tags, the tag's font size is proportional to its frequency
text = " ".join(tags)
# this generates an image from the long string, if you wish you may save it to local
wc = WordCloud().generate(text)
# we will display the image with matplotlibs image show, removed x and y axis ticks
plt.imshow(wc)
plt.axis("off")
plt.show() | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Masked Word Cloud
The tag cloud can be masked using a grayscale stencil image; the wordcloud package neatly arranges the words inside the mask image. I have superimposed the generated word cloud image onto the mask image to provide some detailing; otherwise the background of the word cloud would be white and the words would appear to be hanging in space.
In order to make the superimposing work well, we need to manipulate image transparency using the image alpha channel. If you look at the visual, only fine detail of the mask image is seen in the tag cloud; this is because the word cloud is laid over the mask image with alpha = 0.9 (90% opacity), so only about 10% of the mask image shows through. | # we can also create a masked word cloud from the tags by using grayscale image as stencil
# lets load the mask image from local
bull_mask = np.array(Image.open(path.abspath('../asset/bull_mask_1.jpg')))
wc_mask = WordCloud(background_color="white", mask=bull_mask).generate(text)
mask_image = plt.imshow(bull_mask, cmap=plt.cm.gray)
word_cloud = plt.imshow(wc_mask,alpha=0.9)
plt.axis("off")
plt.title("Twitter Hash Tag Word Cloud for "+query)
plt.show() | sentiment_analysis/twitter_sentiment_analysis-jallikattu/code/twitter_sentiment_analysis-jallikattu_FINAL.ipynb | nixphix/ml-projects | mit |
Gromov-Wasserstein example
This example is designed to show how to use the Gromov-Wasserstein distance
computation in POT. | # Author: Erwan Vautier <[email protected]>
# Nicolas Courty <[email protected]>
#
# License: MIT License
import scipy as sp
import numpy as np
import matplotlib.pylab as pl
from mpl_toolkits.mplot3d import Axes3D # noqa
import ot | docs/source/auto_examples/plot_gromov.ipynb | rflamary/POT | mit |
Sample two Gaussian distributions (2D and 3D)
The Gromov-Wasserstein distance allows one to compute distances between samples that
do not belong to the same metric space. For demonstration purposes, we sample
two Gaussian distributions in 2- and 3-dimensional spaces. | n_samples = 30 # nb samples
mu_s = np.array([0, 0])
cov_s = np.array([[1, 0], [0, 1]])
mu_t = np.array([4, 4, 4])
cov_t = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
xs = ot.datasets.make_2D_samples_gauss(n_samples, mu_s, cov_s)
P = sp.linalg.sqrtm(cov_t)
xt = np.random.randn(n_samples, 3).dot(P) + mu_t | docs/source/auto_examples/plot_gromov.ipynb | rflamary/POT | mit |
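For reference, the discrete Gromov-Wasserstein problem that is solved further below (with the square loss) can be written as
$$GW(C_1, C_2, p, q) = \min_{T \in \Pi(p, q)} \sum_{i,j,k,l} \left(C_{1,ik} - C_{2,jl}\right)^2 \, T_{ij} \, T_{kl}$$
where $C_1$ and $C_2$ are the intra-domain distance matrices computed in a later cell and $\Pi(p, q)$ is the set of couplings with marginals $p$ and $q$; this is the standard formulation, see the POT documentation for the exact conventions.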
Plotting the distributions | fig = pl.figure()
ax1 = fig.add_subplot(121)
ax1.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')
ax2 = fig.add_subplot(122, projection='3d')
ax2.scatter(xt[:, 0], xt[:, 1], xt[:, 2], color='r')
pl.show() | docs/source/auto_examples/plot_gromov.ipynb | rflamary/POT | mit |
Compute distance kernels, normalize them and then display | C1 = sp.spatial.distance.cdist(xs, xs)
C2 = sp.spatial.distance.cdist(xt, xt)
C1 /= C1.max()
C2 /= C2.max()
pl.figure()
pl.subplot(121)
pl.imshow(C1)
pl.subplot(122)
pl.imshow(C2)
pl.show() | docs/source/auto_examples/plot_gromov.ipynb | rflamary/POT | mit |
Compute Gromov-Wasserstein plans and distance | p = ot.unif(n_samples)
q = ot.unif(n_samples)
gw0, log0 = ot.gromov.gromov_wasserstein(
C1, C2, p, q, 'square_loss', verbose=True, log=True)
gw, log = ot.gromov.entropic_gromov_wasserstein(
C1, C2, p, q, 'square_loss', epsilon=5e-4, log=True, verbose=True)
print('Gromov-Wasserstein distances: ' + str(log0['gw_dist']))
print('Entropic Gromov-Wasserstein distances: ' + str(log['gw_dist']))
pl.figure(1, (10, 5))
pl.subplot(1, 2, 1)
pl.imshow(gw0, cmap='jet')
pl.title('Gromov Wasserstein')
pl.subplot(1, 2, 2)
pl.imshow(gw, cmap='jet')
pl.title('Entropic Gromov Wasserstein')
pl.show() | docs/source/auto_examples/plot_gromov.ipynb | rflamary/POT | mit |
Solving the problem analytically with SymPy
To solve the problem, we must use the differential equation of Newton's law of cooling. The data we have are:
Initial temperature = 34.5
Temperature 1 hour later = 33.9
Ambient temperature = 15
Average normal temperature of a human being = 37 | # define the unknowns
t, k = sympy.symbols('t k')
y = sympy.Function('y')
# write down the equation
f = k*(y(t) -15)
sympy.Eq(y(t).diff(t), f)
# Resolviendo la ecuaciรณn
edo_sol = sympy.dsolve(y(t).diff(t) - f)
edo_sol | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Ahora que tenemos la soluciรณn de la Ecuaciรณn diferencial, despejemos constante de integraciรณn utilizando la condiciรณn inicial. | # Condiciรณn inicial
ics = {y(0): 34.5}
C_eq = sympy.Eq(edo_sol.lhs.subs(t, 0).subs(ics), edo_sol.rhs.subs(t, 0))
C_eq
C = sympy.solve(C_eq)[0]
C | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Now that we know the value of C, we can determine the value of $k$. | eq = sympy.Eq(y(t), C * sympy.E**(k*t) +15)
eq
ics = {y(1): 33.9}
k_eq = sympy.Eq(eq.lhs.subs(t, 1).subs(ics), eq.rhs.subs(t, 1))
kn = round(sympy.solve(k_eq)[0], 4)
kn | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Now that we have all the data, we can determine the approximate time of death. | hmuerte = sympy.Eq(37, 19.5 * sympy.E**(kn*t) + 15)
hmuerte
t = round(sympy.solve(hmuerte)[0],2)
t
h, m = divmod(t*-60, 60)
print "%d horas, %d minutos" % (h, m) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
That is, approximately 3 hours and 51 minutes passed since the crime occurred, so the murder must have taken place at around 10:50 pm.
Laplace transform
An alternative method that we can use to solve complex ordinary differential equations analytically is the Laplace transform, which is a particular type of integral transform. The idea is that we can use this technique to transform our differential equation into something simpler, solve that simpler equation, and then invert the transformation to recover the solution of the original differential equation.
What is a Laplace transform?
To understand the Laplace transform, we first need to review the general definition of an integral transform, which takes the following form:
$$T(f(t)) = \int_{\alpha}^{\beta} K (s, t) \ f(t) \ dt = F(s) $$
Here, $f(t)$ is the function we want to transform and $F(s)$ is the transformed function. The limits of integration, $\alpha$ and $\beta$, can be any values between $-\infty$ and $+\infty$, and $K(s, t)$ is what is known as the kernel of the transform; we can choose whatever kernel we please. The idea is to choose a kernel that gives us the best chance of simplifying the differential equation.
If we restrict ourselves to differential equations with constant coefficients, then a really useful kernel is $e^{-st}$, because differentiating this kernel with respect to $t$ produces powers of $s$, which we can match to the constant coefficients. In this way we arrive at the definition of the Laplace transform:
$$\mathcal{L}\{f(t)\}=\int_0^{\infty} e^{-st} \ f(t) \ dt$$
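In particular, the transform turns derivatives into polynomials in $s$ through the standard identities
$$\mathcal{L}\{y'(t)\} = s\,Y(s) - y(0), \qquad \mathcal{L}\{y''(t)\} = s^2 Y(s) - s\,y(0) - y'(0),$$
where $Y(s) = \mathcal{L}\{y(t)\}$; these are the rules applied below through the laplace_transform_derivatives helper.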
The Laplace transform with SymPy
The main advantage of using Laplace transforms is that they turn the differential equation into an algebraic equation, which simplifies the process of computing its solution. The only complicated part is finding the transforms and inverse transforms of the various terms of the differential equation we want to solve. This is where SymPy can help us.
Let us try to solve the following equation:
$$y'' + 3y' + 2y = 0$$
with the following initial conditions: $y(0) = 2$ and $y'(0) = -3$ | # Laplace transform example
# define the unknowns
t = sympy.symbols("t", positive=True)
y = sympy.Function("y")
# additional symbols.
s, Y = sympy.symbols("s, Y", real=True)
# Defino la ecuaciรณn
edo = y(t).diff(t, t) + 3*y(t).diff(t) + 2*y(t)
sympy.Eq(edo)
# compute the Laplace transform
L_edo = sympy.laplace_transform(edo, t, s, noconds=True)
L_edo_2 = laplace_transform_derivatives(L_edo)
# replace the Laplace transform of y(t) with the unknown Y
# to make the equation easier to read.
L_edo_3 = L_edo_2.subs(sympy.laplace_transform(y(t), t, s), Y)
sympy.Eq(L_edo_3) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Aquรญ ya logramos convertir a la Ecuaciรณn diferencial en una ecuaciรณn algebraica. Ahora podemos aplicarle las condiciones iniciales para resolverla. | # Definimos las condiciones iniciales
ics = {y(0): 2, y(t).diff(t).subs(t, 0): -3}
ics
# apply the initial conditions
L_edo_4 = L_edo_3.subs(ics)
# Resolvemos la ecuaciรณn y arribamos a la Transformada de Laplace
# que es equivalente a nuestra ecuaciรณn diferencial
Y_sol = sympy.solve(L_edo_4, Y)
Y_sol
# Por รบltimo, calculamos al inversa de la Transformada de Laplace que
# obtuvimos arriba, para obtener la soluciรณn de nuestra ecuaciรณn diferencial.
y_sol = sympy.inverse_laplace_transform(Y_sol[0], s, t)
y_sol
# Comprobamos la soluciรณn.
y_sol.subs(t, 0), sympy.diff(y_sol).subs(t, 0) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Laplace transforms can be a good alternative for solving differential equations analytically. Even so, there are still equations that resist being solved by analytical means; for those cases we must resort to numerical methods.
Power series and direction fields
Suppose now that we want to solve the following differential equation with SymPy:
$$\frac{dy}{dx} = x^2 + y^2 -1$$
with an initial condition of $y(0) = 0$.
If we apply what we have seen so far, we obtain the following result: | # define the unknowns
x = sympy.symbols('x')
y = sympy.Function('y')
# Defino la funciรณn
f = y(x)**2 + x**2 -1
# Condiciรณn inicial
ics = {y(0): 0}
# Resolviendo la ecuaciรณn diferencial
edo_sol = sympy.dsolve(y(x).diff(x) - f, ics=ics)
edo_sol | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
The result SymPy gives us is an approximation by power series (a Taylor series); the problem with power series is that their results are usually only valid over a limited range of values. A tool that can help us visualize the range of validity of a power series approximation is the direction field.
Direction fields
Direction fields are a simple but useful technique for visualizing possible solutions of first-order differential equations. They are made up of short lines that show the slope of the unknown function in the x-y plane. This plot is easy to produce because the slope of $y(x)$ at arbitrary points of the x-y plane is given by the very definition of the ordinary differential equation:
$$\frac{dy}{dx} = f(x, y(x))$$
That is, we only have to iterate over the $x$ and $y$ values of the coordinate grid of interest and evaluate $f(x, y(x))$ to know the slope of $y(x)$ at that point. The more line segments we draw in a direction field, the clearer the picture becomes. The reason the direction field plot is useful is that the smooth, continuous curves that are <a href="https://es.wikipedia.org/wiki/Tangente_(geometr%C3%ADa)">tangent</a> to the slope lines at every point of the plot are the possible solutions of the ordinary differential equation.
For example, the direction field of the equation:
$$\frac{dy}{dx} = x^2 + y^2 -1$$
is the following: | # direction field plot
fig, axes = plt.subplots(1, 1, figsize=(7, 5))
campo_dir = plot_direction_field(x, y(x), f, ax=axes) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
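The plot_direction_field helper used throughout this notebook is assumed to be defined earlier; as a rough stand-alone illustration of the idea (evaluate $f(x, y)$ on a grid and draw a short arrow with that slope at each point), a minimal sketch could look like this:
import numpy as np
import matplotlib.pyplot as plt

f_xy = lambda x_, y_: x_**2 + y_**2 - 1                # dy/dx = f(x, y)
X, Y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
S = f_xy(X, Y)                                         # slope at every grid point
norm = np.sqrt(1 + S**2)
# unit-length arrows pointing in the direction (1, slope)
plt.quiver(X, Y, 1 / norm, S / norm, angles='xy', color='gray')
plt.xlabel('$x$')
plt.ylabel('$y(x)$')
plt.show()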
Range of validity of the power series solution
Now that we know about direction fields, let us go back to the approximate power series solution we obtained earlier. We can plot that solution on the direction field and compare it with a solution obtained by numerical methods.
<img title="Direction field" src="https://relopezbriega.github.io/images/campo_direcciones.png" width="600" height="250">
In the left panel we can see the plot of the power series approximation. The approximate solution lines up well with the direction field for values of $x$ between $-1.5$ and $1.5$ and then starts to deviate, which tells us that the approximate solution is no longer valid there. | fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# left panel - power series approximation
plot_direction_field(x, y(x), f, ax=axes[0])
x_vec = np.linspace(-3, 3, 100)
axes[0].plot(x_vec, sympy.lambdify(x, edo_sol.rhs.removeO())(x_vec),
'b', lw=2)
# right panel - solution by an iterative method
plot_direction_field(x, y(x), f, ax=axes[1])
x_vec = np.linspace(-1, 1, 100)
axes[1].plot(x_vec, sympy.lambdify(x, edo_sol.rhs.removeO())(x_vec),
'b', lw=2)
# solve the ODE iteratively
edo_sol_m = edo_sol_p = edo_sol
dx = 0.125
# positive x
for x0 in np.arange(1, 2., dx):
x_vec = np.linspace(x0, x0 + dx, 100)
ics = {y(x0): edo_sol_p.rhs.removeO().subs(x, x0)}
edo_sol_p = sympy.dsolve(y(x).diff(x) - f, ics=ics, n=6)
axes[1].plot(x_vec, sympy.lambdify(x, edo_sol_p.rhs.removeO())(x_vec),
'r', lw=2)
# negative x
for x0 in np.arange(1, 5, dx):
x_vec = np.linspace(-x0-dx, -x0, 100)
ics = {y(-x0): edo_sol_m.rhs.removeO().subs(x, -x0)}
edo_sol_m = sympy.dsolve(y(x).diff(x) - f, ics=ics, n=6)
axes[1].plot(x_vec, sympy.lambdify(x, edo_sol_m.rhs.removeO())(x_vec),
'r', lw=2) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
<center><h1>Numerical solutions with Python</h1>
<br>
<h2>SciPy</h2>
<br>
<a href="https://scipy.org/" target="_blank"><img src="https://www2.warwick.ac.uk/fac/sci/moac/people/students/peter_cock/python/scipy_logo.png?maxWidth=175&maxHeight=61" title="SciPy"></a>
</center>
# SciPy
[SciPy](https://www.scipy.org/) is a collection of packages, each of which tackles a different problem within scientific computing and numerical analysis. Some of the packages it includes are:
* **`scipy.integrate`**: provides different functions for solving numerical integration problems.
* **`scipy.linalg`**: provides functions for solving linear algebra problems.
* **`scipy.optimize`**: for optimization and minimization problems.
* **`scipy.signal`**: for signal analysis and processing.
* **`scipy.sparse`**: for sparse matrices and solving sparse linear systems.
* **`scipy.stats`**: for statistics and probability analysis.
To solve [differential equations](https://relopezbriega.github.io/blog/2016/01/10/ecuaciones-diferenciales-con-python/), the package we are interested in is `scipy.integrate`.
## Solving differential equations with SciPy
[SciPy](https://www.scipy.org/) offers us two solvers for [ordinary differential equations](https://relopezbriega.github.io/blog/2016/01/10/ecuaciones-diferenciales-con-python/), `integrate.odeint` and `integrate.ode`. The main difference between them is that `integrate.ode` is more flexible, since it offers the possibility of choosing among different *solvers*, although `integrate.odeint` is easier to use.
Let us try to solve the following equation:
$$\frac{dy}{dx} = x + y^2$$ | # define the function
f = y(x)**2 + x
f
# turn it into a callable function
f_np = sympy.lambdify((y(x), x), f)
# define the initial condition and the range of x values over
# which we will iterate to compute y(x)
y0 = 0
xp = np.linspace(0, 1.9, 100)
# Calculando la soluciรณn numerica para los valores de y0 y xp
yp = integrate.odeint(f_np, y0, xp)
# apply the same procedure for negative values of x
xn = np.linspace(0, -5, 100)
yn = integrate.odeint(f_np, y0, xn) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
The results are two one-dimensional NumPy arrays, $yp$ and $yn$, of the same length as the corresponding coordinate arrays $xp$ and $xn$, containing the numerical solutions of the ordinary differential equation at those specific points. To visualize the solution, we can plot the arrays $yp$ and $yn$ together with the direction field. | # plot the solution on top of the direction field
fig, axes = plt.subplots(1, 1, figsize=(8, 6))
plot_direction_field(x, y(x), f, ax=axes)
axes.plot(xn, yn, 'b', lw=2)
axes.plot(xp, yp, 'r', lw=2)
plt.show() | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
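As mentioned above, integrate.ode is the more flexible (if more verbose) interface; a hedged sketch of the same solve with it, keeping in mind that ode expects the signature f(t, y) whereas odeint expects f(y, t):
solver = integrate.ode(lambda t, y: y**2 + t)
solver.set_integrator('dopri5')
solver.set_initial_value(0.0, 0.0)   # y(0) = 0 at t = 0
yp_ode = [solver.integrate(ti)[0] for ti in np.linspace(0.05, 1.9, 100)]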
Systems of differential equations
In this example we solved only one equation. In general, most problems come in the form of systems of ordinary differential equations, that is, they involve several equations to be solved. To see how we can use integrate.odeint to solve this kind of problem, consider the following system of ordinary differential equations, known as the Lorenz attractor:
$$x'(t) = \sigma(y - x), \\
y'(t) = x(\rho - z) - y, \\
z'(t) = xy - \beta z
$$
These equations are known for their chaotic solutions, which depend sensitively on the values of the parameters $\sigma$, $\rho$ and $\beta$. Let us see how we can solve them with the help of Python. | # define the system of equations
def f(xyz, t, sigma, rho, beta):
x, y, z = xyz
return [sigma * (y - x),
x * (rho - z) - y,
x * y - beta * z]
# assign values to the parameters
sigma, rho, beta = 8, 28, 8/3.0
# Condiciรณn inicial y valores de t sobre los que calcular
xyz0 = [1.0, 1.0, 1.0]
t = np.linspace(0, 25, 10000)
# solve the equations
xyz1 = integrate.odeint(f, xyz0, t, args=(sigma, rho, beta))
xyz2 = integrate.odeint(f, xyz0, t, args=(sigma, rho, 0.6*beta))
xyz3 = integrate.odeint(f, xyz0, t, args=(2*sigma, rho, 0.6*beta))
# plot the solutions
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig, (ax1,ax2,ax3) = plt.subplots(1, 3, figsize=(12, 4),
subplot_kw={'projection':'3d'})
for ax, xyz, c in [(ax1, xyz1, 'r'), (ax2, xyz2, 'b'), (ax3, xyz3, 'g')]:
ax.plot(xyz[:,0], xyz[:,1], xyz[:,2], c, alpha=0.5)
ax.set_xlabel('$x$', fontsize=16)
ax.set_ylabel('$y$', fontsize=16)
ax.set_zlabel('$z$', fontsize=16)
ax.set_xticks([-15, 0, 15])
ax.set_yticks([-20, 0, 20])
ax.set_zticks([0, 20, 40]) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Partial differential equations
The cases we have seen so far were ordinary differential equations, but how can we solve partial differential equations?
These equations are much harder to solve, but we can turn to the powerful tool provided by the finite element method to solve them numerically.
The finite element method
The general idea behind the finite element method is to divide a continuum into a set of small elements interconnected by a series of points called nodes. The equations that govern the behaviour of the continuum also govern that of each element. In this way we go from a continuous system (with infinitely many degrees of freedom), governed by a differential equation or a system of differential equations, to a system with a finite number of degrees of freedom whose behaviour is modelled by a system of equations, linear or not.
For example, in the following image we can see that we first have a plate with a hole in the centre; suppose we want to determine its temperature distribution. To do this, we would have to solve the heat equation for every point on the plate. The approach the finite element method takes is to divide the object into finite elements connected to each other by nodes, as shown in the third and fourth images. This new object, made up of the finite elements (the triangles in the second image), is called a mesh and is an approximate representation of the original object. The more nodes we have, the more accurate the solution will be.
<img alt="The finite element method with Python" title="The finite element method with Python" src="https://relopezbriega.github.io/images/FEM.png" >
The FEniCS project
The FEniCS project is a framework for numerically solving general partial differential equation problems using the finite element method.
We can install it on Ubuntu with the following commands:
sudo add-apt-repository ppa:fenics-packages/fenics
sudo apt-get update
sudo apt-get install fenics
The main interface we will use to work with this framework is provided by the dolfin and mshr libraries, which we must import in order to work with it. For now it only works with Python 2.
The problem to solve
The problem we are going to solve with the help of FEniCS is the two-dimensional steady-state heat equation, defined by:
$$u_{xx} + u_{yy} = f$$
where f is the source function and we have the following boundary conditions:
$$u(x=0) = 3 ; \ u(x=1)=-1 ; \ u(y=0) = -5 ; \ u(y=1) = 5$$
The first step in solving a PDE with the finite element method is to define a mesh that describes the discretization of the problem domain. In this case we will use the RectangleMesh function that FEniCS provides. | # discretize the problem
N1 = N2 = 75
mesh = dolfin.RectangleMesh(dolfin.Point(0, 0), dolfin.Point(1, 1), N1, N2)
# plot the mesh.
dolfin.RectangleMesh(dolfin.Point(0, 0), dolfin.Point(1, 1), 10, 10) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
The next step is to define a representation of the function space for the trial and test functions. For this we will use the FunctionSpace class. The constructor of this class takes at least three arguments: a mesh object, the name of the basis function family, and the degree of the basis functions. In this case we will use Lagrange functions. | # basis functions
V = dolfin.FunctionSpace(mesh, 'Lagrange', 1)
u = dolfin.TrialFunction(V)
v = dolfin.TestFunction(V) | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Now we must express our PDE in its equivalent weak formulation so that we can treat it as a linear algebra problem that we can solve with the FEM. | # weak formulation of the PDE
a = dolfin.inner(dolfin.nabla_grad(u), dolfin.nabla_grad(v)) * dolfin.dx
f = dolfin.Constant(0.0)
L = f * v * dolfin.dx | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
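For reference, the forms a and L above encode the usual weak formulation of the problem: find $u \in V$ such that
$$\int_{\Omega} \nabla u \cdot \nabla v \; dx = \int_{\Omega} f \, v \; dx \quad \text{for all } v \in V,$$
up to the sign convention chosen for the source term $f$ (immaterial here, since $f = 0$).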
And we define the boundary conditions. | # define boundary conditions
def u0_top_boundary(x, on_boundary):
return on_boundary and abs(x[1]-1) < 1e-8
def u0_bottom_boundary(x, on_boundary):
return on_boundary and abs(x[1]) < 1e-8
def u0_left_boundary(x, on_boundary):
return on_boundary and abs(x[0]) < 1e-8
def u0_right_boundary(x, on_boundary):
return on_boundary and abs(x[0]-1) < 1e-8
# define the Dirichlet boundary conditions
bc_t = dolfin.DirichletBC(V, dolfin.Constant(5), u0_top_boundary)
bc_b = dolfin.DirichletBC(V, dolfin.Constant(-5), u0_bottom_boundary)
bc_l = dolfin.DirichletBC(V, dolfin.Constant(3), u0_left_boundary)
bc_r = dolfin.DirichletBC(V, dolfin.Constant(-1), u0_right_boundary)
# list of boundary conditions
bcs = [bc_t, bc_b, bc_r, bc_l] | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Now we can solve the PDE using the dolfin.solve function. The resulting vector can then be converted into a NumPy array and used to plot the solution with Matplotlib. | # solve the PDE
u_sol = dolfin.Function(V)
dolfin.solve(a == L, u_sol, bcs)
# graficando la soluciรณn
u_mat = u_sol.vector().array().reshape(N1+1, N2+1)
x = np.linspace(0, 1, N1+2)
y = np.linspace(0, 1, N1+2)
X, Y = np.meshgrid(x, y)
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
c = ax.pcolor(X, Y, u_mat, vmin=-5, vmax=5, cmap=mpl.cm.get_cmap('RdBu_r'))
cb = plt.colorbar(c, ax=ax)
ax.set_xlabel(r"$x_1$", fontsize=18)
ax.set_ylabel(r"$x_2$", fontsize=18)
cb.set_label(r"$u(x_1, x_2)$", fontsize=18)
fig.tight_layout() | content/notebooks/ecuaciones-diferenciales.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Initialize the model and solve | # Input model parameters
beta = 0.99
sigma= 1
eta = 1
omega= 0.8
kappa= (sigma+eta)*(1-omega)*(1-beta*omega)/omega
rhor = 0.9
phipi= 1.5
phiy = 0
rhog = 0.5
rhou = 0.5
rhov = 0.9
Sigma = 0.001*np.eye(3)
# Store parameters
parameters = pd.Series({
'beta':beta,
'sigma':sigma,
'eta':eta,
'omega':omega,
'kappa':kappa,
'rhor':rhor,
'phipi':phipi,
'phiy':phiy,
'rhog':rhog,
'rhou':rhou,
'rhov':rhov
})
# Define function that computes equilibrium conditions
def equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Variables
fwd = variables_forward
cur = variables_current
# Exogenous demand
g_proc = p.rhog*cur.g - fwd.g
# Exogenous inflation
u_proc = p.rhou*cur.u - fwd.u
# Exogenous monetary policy
v_proc = p.rhov*cur.v - fwd.v
# Euler equation
euler_eqn = fwd.y -1/p.sigma*(cur.i-fwd.pi) + fwd.g - cur.y
# NK Phillips curve evolution
phillips_curve = p.beta*fwd.pi + p.kappa*cur.y + fwd.u - cur.pi
# interest rate rule
interest_rule = p.phiy*cur.y+p.phipi*cur.pi + fwd.v - cur.i
# Fisher equation
fisher_eqn = cur.i - fwd.pi - cur.r
# Stack equilibrium conditions into a numpy array
return np.array([
g_proc,
u_proc,
v_proc,
euler_eqn,
phillips_curve,
interest_rule,
fisher_eqn
])
# Initialize the nk model
nk = ls.model(equations=equations,
n_states=3,
n_exo_states = 3,
var_names=['g','u','v','i','r','y','pi'],
parameters=parameters)
# Set the steady state of the nk model
nk.set_ss([0,0,0,0,0,0,0])
# Find the log-linear approximation around the non-stochastic steady state
nk.linear_approximation()
# Solve the nk model
nk.solve_klein(nk.a,nk.b) | examples/nk_model.ipynb | letsgoexploring/linearsolve-package | mit |
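For reference, the equilibrium conditions encoded in the equations function above correspond to (with $E_t$ denoting expectations)
$$y_t = E_t y_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1}\right) + E_t g_{t+1}$$
$$\pi_t = \beta E_t \pi_{t+1} + \kappa y_t + E_t u_{t+1}$$
$$i_t = \phi_y y_t + \phi_{\pi} \pi_t + E_t v_{t+1}$$
$$r_t = i_t - E_t \pi_{t+1}$$
together with AR(1) processes for the exogenous variables $g$, $u$ and $v$.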
Compute impulse responses and plot
Compute impulse responses of the endogenous variables to a one percent shock to each exogenous variable. | # Compute impulse responses
nk.impulse(T=11,t0=1,shocks=None)
# Create the figure and axes
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(3,1,1)
ax2 = fig.add_subplot(3,1,2)
ax3 = fig.add_subplot(3,1,3)
# Plot commands
nk.irs['e_g'][['g','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Demand shock',ax=ax1).legend(loc='upper right',ncol=5)
nk.irs['e_u'][['u','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Inflation shock',ax=ax2).legend(loc='upper right',ncol=5)
nk.irs['e_v'][['v','y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Interest rate shock',ax=ax3).legend(loc='upper right',ncol=5) | examples/nk_model.ipynb | letsgoexploring/linearsolve-package | mit |
Construct a stochastic simulation and plot
Construct a 151-period stochastic simulation by first simulating the model for 251 periods and then dropping the first 100 values. The seed for the numpy random number generator is set to 0. | # Compute stochastic simulation
nk.stoch_sim(T=151,drop_first=100,cov_mat=Sigma,seed=0)
# Create the figure and axes
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
# Plot commands
nk.simulated[['y','i','pi','r']].plot(lw='5',alpha=0.5,grid=True,title='Output, inflation, and interest rates',ax=ax1).legend(ncol=4)
nk.simulated[['g','u','v']].plot(lw='5',alpha=0.5,grid=True,title='Exogenous demand, inflation, and policy',ax=ax2).legend(ncol=4,loc='lower right')
# Plot simulated exogenous shocks
nk.simulated[['e_g','g']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_u','u']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2)
nk.simulated[['e_v','v']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=2) | examples/nk_model.ipynb | letsgoexploring/linearsolve-package | mit |
Filling in Missing Values | # Impute missing values
# Manually set metadata properties, as current py_entitymatching.impute_table()
# requires 'fk_ltable', 'fk_rtable', 'ltable', 'rtable' properties
em.set_property(A, 'fk_ltable', 'id')
em.set_property(A, 'fk_rtable', 'id')
em.set_property(A, 'ltable', A)
em.set_property(A, 'rtable', A)
A_all_attrs = list(A.columns.values)
A_impute_attrs = ['year','min_num_players','max_num_players','min_gameplay_time','max_gameplay_time','min_age']
A_exclude_attrs = list(set(A_all_attrs) - set(A_impute_attrs))
A1 = em.impute_table(A, exclude_attrs=A_exclude_attrs, missing_val='NaN', strategy='most_frequent', axis=0, val_all_nans=0, verbose=True)
# Compare number of missing values to check the results
print(sum(A['min_num_players'].isnull()))
print(sum(A1['min_num_players'].isnull()))
# Do the same thing for B
em.set_property(B, 'fk_ltable', 'id')
em.set_property(B, 'fk_rtable', 'id')
em.set_property(B, 'ltable', B)
em.set_property(B, 'rtable', B)
B_all_attrs = list(B.columns.values)
# TODO: add 'min_age'
B_impute_attrs = ['year','min_num_players','max_num_players','min_gameplay_time','max_gameplay_time']
B_exclude_attrs = list(set(B_all_attrs) - set(B_impute_attrs))
B1 = em.impute_table(B, exclude_attrs=B_exclude_attrs, missing_val='NaN', strategy='most_frequent', axis=0, val_all_nans=0, verbose=True)
# Compare number of missing values to check the results
print(sum(B['min_num_players'].isnull()))
print(sum(B1['min_num_players'].isnull()))
# Load the pre-labeled data
S = em.read_csv_metadata('sample_labeled.csv',
key='_id',
ltable=A1, rtable=B1,
fk_ltable='ltable_id', fk_rtable='rtable_id')
# Split S into I an J
IJ = em.split_train_test(S, train_proportion=0.75, random_state=35)
I = IJ['train']
J = IJ['test']
corres = em.get_attr_corres(A1, B1)
print(corres) | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Generating Features
Here, we generate all the features we decided upon after our final iteration of cross validation and debugging. We only use the relevant subset of all these features in the reported iterations below. | # Generate a set of features
#import pdb; pdb.set_trace();
import py_entitymatching.feature.attributeutils as au
import py_entitymatching.feature.simfunctions as sim
import py_entitymatching.feature.tokenizers as tok
ltable = A1
rtable = B1
# Get similarity functions for generating the features for matching
sim_funcs = sim.get_sim_funs_for_matching()
# Get tokenizer functions for generating the features for matching
tok_funcs = tok.get_tokenizers_for_matching()
# Get the attribute types of the input tables
attr_types_ltable = au.get_attr_types(ltable)
attr_types_rtable = au.get_attr_types(rtable)
# Get the attribute correspondence between the input tables
attr_corres = au.get_attr_corres(ltable, rtable)
print(attr_types_ltable['name'])
print(attr_types_rtable['name'])
attr_types_ltable['name'] = 'str_bt_5w_10w'
attr_types_rtable['name'] = 'str_bt_5w_10w'
# Get the features
F = em.get_features(ltable, rtable, attr_types_ltable,
attr_types_rtable, attr_corres,
tok_funcs, sim_funcs)
#F = em.get_features_for_matching(A1, B1)
print(F['feature_name'])
#TODO get name feature!
#http://pradap-www.cs.wisc.edu/cs638/py_entitymatching/user-manual/_modules/py_entitymatching/feature/simfunctions.html#get_sim_funs_for_matching
#name_feature = em.get_feature_fn('name', em.get_tokenizers_for_matching(), em.get_sim_funs_for_matching())
#print(name_feature)
#em.add_feature(F, 'name_dist', name_feature)
#print(F['feature_name']) | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Cross Validation Method | def cross_validation_eval(H):
cv_iter = pd.DataFrame(columns=['Precision', 'Recall', 'F1'])
# Matchers
matchers = [em.DTMatcher(name='DecisionTree', random_state=0),
em.RFMatcher(name='RandomForest', random_state=0),
em.SVMMatcher(name='SVM', random_state=0),
em.NBMatcher(name='NaiveBayes'),
em.LogRegMatcher(name='LogReg', random_state=0),
]
for m in matchers:
prec_result = em.select_matcher([m], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],
k=5,
target_attr='label', metric='precision', random_state=0)
recall_result = em.select_matcher([m], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],
k=5,
target_attr='label', metric='recall', random_state=0)
f1_result = em.select_matcher([m], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'],
k=5,
target_attr='label', metric='f1', random_state=0)
cv_iter = cv_iter.append(
pd.DataFrame([
[prec_result['cv_stats']['Mean score'][0],
recall_result['cv_stats']['Mean score'][0],
f1_result['cv_stats']['Mean score'][0],
]],
index=[m.name],
columns=['Precision', 'Recall', 'F1']))
return cv_iter | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Iteration 1: CV | # Subset of features we used on our first iteration
include_features = [
'min_num_players_min_num_players_lev_dist',
'max_num_players_max_num_players_lev_dist',
'min_gameplay_time_min_gameplay_time_lev_dist',
'max_gameplay_time_max_gameplay_time_lev_dist',
]
F_1 = F.loc[F['feature_name'].isin(include_features)]
# Convert the I into a set of feature vectors using F
H_1 = em.extract_feature_vecs(I, feature_table=F_1, attrs_after='label', show_progress=False)
H_1.head(10)
cross_validation_eval(H_1) | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Iteration 2: Debug | PQ = em.split_train_test(H_1, train_proportion=0.80, random_state=0)
P = PQ['train']
Q = PQ['test']
# Convert the I into a set of feature vectors using F
# Here, we add name edit distance as a feature
include_features_2 = [
'min_num_players_min_num_players_lev_dist',
'max_num_players_max_num_players_lev_dist',
'min_gameplay_time_min_gameplay_time_lev_dist',
'max_gameplay_time_max_gameplay_time_lev_dist',
'name_name_lev_dist'
]
F_2 = F.loc[F['feature_name'].isin(include_features_2)]
H_2 = em.extract_feature_vecs(I, feature_table=F_2, attrs_after='label', show_progress=False)
H_2.head(10)
# Split H into P and Q
PQ = em.split_train_test(H_2, train_proportion=0.75, random_state=0)
P = PQ['train']
Q = PQ['test']
| stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Iteration 3: CV | # Convert the I into a set of feature vectors using F
# Here, we add name edit distance as a feature
include_features_3 = [
'min_num_players_min_num_players_lev_dist',
'max_num_players_max_num_players_lev_dist',
'min_gameplay_time_min_gameplay_time_lev_dist',
'max_gameplay_time_max_gameplay_time_lev_dist',
'name_name_lev_dist'
]
F_3 = F.loc[F['feature_name'].isin(include_features_3)]
H_3 = em.extract_feature_vecs(I, feature_table=F_3, attrs_after='label', show_progress=False)
cross_validation_eval(H_3) | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Iteration 4: CV | # Convert the I into a set of feature vectors using F
# Here, we add name edit distance as a feature
include_features_4 = [
'min_num_players_min_num_players_lev_dist',
'max_num_players_max_num_players_lev_dist',
'min_gameplay_time_min_gameplay_time_lev_dist',
'max_gameplay_time_max_gameplay_time_lev_dist',
'name_name_jac_qgm_3_qgm_3'
]
F_4 = F.loc[F['feature_name'].isin(include_features_4)]
H_4 = em.extract_feature_vecs(I, feature_table=F_4, attrs_after='label', show_progress=False)
cross_validation_eval(H_4) | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
Train-Test Set Accuracy | # Apply train, test set evaluation
I_table = em.extract_feature_vecs(I, feature_table=F_2, attrs_after='label', show_progress=False)
J_table = em.extract_feature_vecs(J, feature_table=F_2, attrs_after='label', show_progress=False)
matchers = [
#em.DTMatcher(name='DecisionTree', random_state=0),
#em.RFMatcher(name='RF', random_state=0),
#em.NBMatcher(name='NaiveBayes'),
em.LogRegMatcher(name='LogReg', random_state=0),
#em.SVMMatcher(name='SVM', random_state=0)
]
for m in matchers:
m.fit(table=I_table, exclude_attrs=['_id', 'ltable_id', 'rtable_id','label'], target_attr='label')
J_table['prediction'] = m.predict(
table=J_table,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label',
)
print(m.name)
em.print_eval_summary(em.eval_matches(J_table, 'label', 'prediction'))
J_table.drop('prediction', axis=1, inplace=True)
print('')
log_matcher = matchers[0]
J_table['prediction'] = log_matcher.predict(
table=J_table,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label',
)
print(log_matcher.name)
em.print_eval_summary(em.eval_matches(J_table, 'label', 'prediction'))
J_table.drop('prediction', axis=1, inplace=True)
print('')
candidate_set_C1.csv | stage4/stage4_report.ipynb | malnoxon/board-game-data-science | gpl-3.0 |
In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes: | import ctypes
# capsulefactory
PyCapsule_New = ctypes.pythonapi.PyCapsule_New
PyCapsule_New.restype = ctypes.py_object
PyCapsule_New.argtypes = ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p
# load libm
libm = ctypes.CDLL('libm.so.6')
# extract the proper symbol
cbrt = libm.cbrt
# wrap it
cbrt_capsule = PyCapsule_New(cbrt, "double(double)".encode(), None) | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
The capsule is not usable from the Python context (it's some kind of opaque box), but Pythran knows how to use it. Beware: it does not try to do any kind of type verification. It trusts your #pythran export line. | pythran_cbrt(cbrt_capsule, 8.)
With Pointers
Now, let's try to use the sincos function. Its C signature is void sincos(double, double*, double*). How do we pass that to Pythran? | %%pythran
#pythran export pythran_sincos(None(float64, float64*, float64*), float64)
def pythran_sincos(libm_sincos, val):
import numpy as np
val_sin, val_cos = np.empty(1), np.empty(1)
libm_sincos(val, val_sin, val_cos)
return val_sin[0], val_cos[0] | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
There is some magic happening here:
None is used to state that the function pointer does not return anything.
In order to create pointers, we actually create empty one-dimensional arrays and let Pythran handle them as pointers. Beware that you're in charge of all the memory checking stuff!
Apart from that, we can now call our function with the proper capsule parameter. | sincos_capsule = PyCapsule_New(libm.sincos, "unchecked anyway".encode(), None)
pythran_sincos(sincos_capsule, 0.) | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
With Pythran
It is naturally also possible to use capsules generated by Pythran. In that case, no type shenanigans are required; we're in our own small world.
One just needs to use the capsule keyword to indicate we want to generate a capsule. | %%pythran
## This is the capsule.
#pythran export capsule corp((int, str), str set)
def corp(param, lookup):
res, key = param
return res if key in lookup else -1
## This is some dummy callsite
#pythran export brief(int, int((int, str), str set)):
def brief(val, capsule):
return capsule((val, "doctor"), {"some"})
| docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
It's not possible to call the capsule directly; it's an opaque structure. | try:
corp((1,"some"),set())
except TypeError as e:
print(e) | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
It's possible to pass it to the corresponding Pythran function though. | brief(1, corp)
With Cython
The capsule Pythran uses may come from Cython-generated code. This relies on a little-known feature of Cython: api and __pyx_capi__. nogil is important here: Pythran releases the GIL, so it's better not to call a cythonized function that uses it. | !find -name 'cube*' -delete
%%file cube.pyx
#cython: language_level=3
cdef api double cube(double x) nogil:
return x * x * x
from setuptools import setup
from Cython.Build import cythonize
_ = setup(
name='cube',
ext_modules=cythonize("cube.pyx"),
zip_safe=False,
# fake CLI call
script_name='setup.py',
script_args=['--quiet', 'build_ext', '--inplace']
) | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
The cythonized module has a special dictionary that holds the capsule we're looking for. | import sys
sys.path.insert(0, '.')
import cube
print(type(cube.__pyx_capi__['cube']))
cython_cube = cube.__pyx_capi__['cube']
pythran_cbrt(cython_cube, 2.) | docs/examples/Third Party Libraries.ipynb | pombredanne/pythran | bsd-3-clause |
Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/hammoz-consortium/cmip6/models/sandbox-1/toplevel.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |