markdown | code | path | repo_name | license
---|---|---|---|---
Split dataframes into categorical, continuous, discrete, dummy, and response | catD = df.loc[:,varTypes['categorical']]
contD = df.loc[:,varTypes['continuous']]
disD = df.loc[:,varTypes['discrete']]
dummyD = df.loc[:,varTypes['dummy']]
respD = df.loc[:,['id','Response']] | .ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb | ramabrahma/data-sci-int-capstone | gpl-3.0 |
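The cells above and below assume a varTypes dictionary, built in an earlier (unshown) cell, that maps each variable type to a list of column names. Purely as an illustration, and with hypothetical column names, it would have roughly this shape:
# Hypothetical sketch of the varTypes dictionary assumed above; the real
# column lists are built in earlier notebook cells that are not shown here.
varTypes = {
    'categorical': ['Product_Info_1', 'Product_Info_2'],
    'continuous': ['Ins_Age', 'BMI'],
    'discrete': ['Medical_History_1'],
    'dummy': ['Medical_Keyword_1'],
}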
Descriptive statistics and scatter plot relating Product_Info_2 and Response | prod_info = [ "Product_Info_"+str(i) for i in range(1,8)]
a = catD.loc[:, prod_info[1]]
stats = catD.groupby(prod_info[1]).describe()
gb_PI2 = df.groupby(prod_info[1])  # group by Product_Info_2 to relate it to Response
c = gb_PI2.Response.count()
plt.figure(0)
plt.scatter(c[0],c[1])
plt.figure(0)
plt.title("Histogram of "+"Product_Info_"+str(i))
plt.xlabel("Categories " + str((a.describe())['count']))
plt.ylabel("Frequency")
for i in range(1,8):
a = catD.loc[:, "Product_Info_"+str(i)]
if i != 4:
print a.describe()
print ""
plt.figure(i)
plt.title("Histogram of "+"Product_Info_"+str(i))
plt.xlabel("Categories " + str((catD.groupby(key).describe())['count']))
plt.ylabel("Frequency")
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if a.dtype in (np.int64, np.float, float, int):
a.hist()
# Random functions
#catD.Product_Info_1.describe()
#catD.loc[:, prod_info].groupby('Product_Info_2').describe()
#df[varTypes['categorical']].hist()
catD.head(5)
#Exploration of the discrete data
disD.describe()
disD.head(5)
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
plt.title("Histogram of "+str(key))
plt.xlabel("Categories " + str((df.groupby(key).describe())['count']))
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
df[key].hist()
i+=1
#Iterate through each 'discrete' column of data
#Perform a 2D histogram later
i=0
for key in varTypes['discrete']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
fig, axes = plt.subplots(nrows = 1, ncols = 2)
#Histogram based on normalized value counts of the data set
disD[key].value_counts().hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#Cumulative histogram based on normalized value counts of the data set
disD[key].value_counts().hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#2D Histogram
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
x = df[key]
y = df['Response']
plt.hist2d(x, y, bins=40, norm=LogNorm())
plt.colorbar()
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
#(1.*df[key].value_counts()/len(df[key])).hist()
df[key].value_counts(normalize=True).plot(kind='bar')
i+=1
df.loc[:, 'Product_Info_1'] | .ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb | ramabrahma/data-sci-int-capstone | gpl-3.0 |
Get Response Spectrum - Nigam & Jennings | # Create an instance of the Nigam & Jennings class
nigam_jennings = rsp.NigamJennings(x_record, x_time_step, periods, damping=0.05, units="cm/s/s")
sax, time_series, acc, vel, dis = nigam_jennings.evaluate()
# Plot Response Spectrum
rsp.plot_response_spectra(sax, axis_type="semilogx", filename="images/response_nigam_jennings.pdf",filetype="pdf") | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Plot Time Series | rsp.plot_time_series(time_series["Acceleration"],
x_time_step,
time_series["Velocity"],
time_series["Displacement"]) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Intensity Measures
Get PGA, PGV and PGD | pga_x, pgv_x, pgd_x, _, _ = ims.get_peak_measures(0.002, x_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (pga_x, pgv_x, pgd_x)
pga_y, pgv_y, pgd_y, _, _ = ims.get_peak_measures(0.002, y_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (pga_y, pgv_y, pgd_y) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get Durations: Bracketed, Uniform, Significant | print "Bracketed Duration (> 5 cm/s/s) = %9.3f s" % ims.get_bracketed_duration(x_record, x_time_step, 5.0)
print "Uniform Duration (> 5 cm/s/s) = %9.3f s" % ims.get_uniform_duration(x_record, x_time_step, 5.0)
print "Significant Duration (5 - 95 Arias ) = %9.3f s" % ims.get_significant_duration(x_record, x_time_step, 0.05, 0.95) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get Arias Intensity, CAV, CAV5 and rms acceleration | print "Arias Intensity = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step)
print "Arias Intensity (5 - 95) = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step, 0.05, 0.95)
print "CAV = %12.4f cm-s" % ims.get_cav(x_record, x_time_step)
print "CAV5 = %12.4f cm-s" % ims.get_cav(x_record, x_time_step, threshold=5.0)
print "Arms = %12.4f cm-s" % ims.get_arms(x_record, x_time_step) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Spectrum Intensities: Housner Intensity, Acceleration Spectrum Intensity | # Get response spectrum
sax = ims.get_response_spectrum(x_record, x_time_step, periods)[0]
print "Velocity Spectrum Intensity (cm/s/s) = %12.5f" % ims.get_response_spectrum_intensity(sax)
print "Acceleration Spectrum Intensity (cm-s) = %12.5f" % ims.get_acceleration_spectrum_intensity(sax)
| gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get the response spectrum pair from two records | sax, say = ims.get_response_spectrum_pair(x_record, x_time_step,
y_record, y_time_step,
periods,
damping=0.05,
units="cm/s/s",
method="Nigam-Jennings")
| gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get Geometric Mean Spectrum | sa_gm = ims.geometric_mean_spectrum(sax, say)
rsp.plot_response_spectra(sa_gm, "semilogx", filename="images/geometric_mean_spectrum.pdf", filetype="pdf") | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get Envelope Spectrum | sa_env = ims.envelope_spectrum(sax, say)
rsp.plot_response_spectra(sa_env, "semilogx", filename="images/envelope_spectrum.pdf", filetype="pdf") | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Rotationally Dependent and Independent IMs
GMRotD50 and GMRotI50 | gmrotd50 = ims.gmrotdpp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,
damping=0.05, units="cm/s/s")
gmroti50 = ims.gmrotipp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,
damping=0.05, units="cm/s/s")
# Plot all of the rotational angles!
plt.figure(figsize=(8, 6))
for row in gmrotd50["GeoMeanPerAngle"]:
plt.semilogx(periods, row, "-", color="LightGray")
plt.semilogx(periods, gmrotd50["GMRotDpp"], 'b-', linewidth=2, label="GMRotD50")
plt.semilogx(periods, gmroti50["Pseudo-Acceleration"], 'r-', linewidth=2, label="GMRotI50")
plt.xlabel("Period (s)", fontsize=18)
plt.ylabel("Acceleration (cm/s/s)", fontsize=18)
plt.legend(loc=0)
plt.savefig("images/rotational_spectra.pdf", dpi=300, format="pdf")
| gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Fourier Spectra, Smoothing and HVSR
Show the Fourier Spectrum | ims.plot_fourier_spectrum(x_record, x_time_step,
filename="images/fourier_spectrum.pdf", filetype="pdf") | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Smooth the Fourier Spectrum Using the Konno & Omachi (1998) Method | from smtk.smoothing.konno_ohmachi import KonnoOhmachi
# Get the original Fourier spectrum
freq, amplitude = ims.get_fourier_spectrum(x_record, x_time_step)
# Configure Smoothing Parameters
smoothing_config = {"bandwidth": 40, # Size of smoothing window (lower = more smoothing)
"count": 1, # Number of times to apply smoothing (may be more for noisy records)
"normalize": True}
# Apply the Smoothing
smoother = KonnoOhmachi(smoothing_config)
smoothed_spectra = smoother.apply_smoothing(amplitude, freq)
# Compare the Two Spectra
plt.figure(figsize=(7,5))
plt.loglog(freq, amplitude, "k-", lw=1.0,label="Original")
plt.loglog(freq, smoothed_spectra, "r", lw=2.0, label="Smoothed")
plt.xlabel("Frequency (Hz)", fontsize=14)
plt.xlim(0.05, 200)
plt.ylabel("Fourier Amplitude", fontsize=14)
plt.tick_params(labelsize=12)
plt.legend(loc=0, fontsize=14)
plt.grid(True)
plt.savefig("images/SmoothedFourierSpectra.pdf", format="pdf", dpi=300) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Get the HVSR
Load in the Time Series | # Load in a three component data set
record_file = "data/record_3component.csv"
record_3comp = np.genfromtxt(record_file, delimiter=",")
time_vector = record_3comp[:, 0]
x_record = record_3comp[:, 1]
y_record = record_3comp[:, 2]
v_record = record_3comp[:, 3]
time_step = 0.002
# Plot the records
fig = plt.figure(figsize=(8,12))
fig.set_tight_layout(True)
ax = plt.subplot(311)
ax.plot(time_vector, x_record)
ax.set_ylim(-80., 80.)
ax.set_xlim(0., 10.5)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.tick_params(labelsize=12)
ax.set_title("EW", fontsize=16)
ax = plt.subplot(312)
ax.plot(time_vector, y_record)
ax.set_xlim(0., 10.5)
ax.set_ylim(-80., 80.)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.set_title("NS", fontsize=16)
ax.tick_params(labelsize=12)
ax = plt.subplot(313)
ax.plot(time_vector, v_record)
ax.set_xlim(0., 10.5)
ax.set_ylim(-40., 40.)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.set_title("Vertical", fontsize=16)
ax.tick_params(labelsize=12)
plt.savefig("images/3component_timeseries.pdf", format="pdf", dpi=300) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Look at the Fourier Spectra | x_freq, x_four = ims.get_fourier_spectrum(x_record, time_step)
y_freq, y_four = ims.get_fourier_spectrum(y_record, time_step)
v_freq, v_four = ims.get_fourier_spectrum(v_record, time_step)
plt.figure(figsize=(7, 5))
plt.loglog(x_freq, x_four, "k-", lw=1.0, label="EW")
plt.loglog(y_freq, y_four, "b-", lw=1.0, label="NS")
plt.loglog(v_freq, v_four, "r-", lw=1.0, label="V")
plt.xlim(0.05, 200.)
plt.tick_params(labelsize=12)
plt.grid(True)
plt.xlabel("Frequency (Hz)", fontsize=16)
plt.ylabel("Fourier Amplitude", fontsize=16)
plt.legend(loc=3, fontsize=16)
plt.savefig("images/3component_fas.pdf", format="pdf", dpi=300) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Calculate the Horizontal To Vertical Spectral Ratio | # Setup parameters
params = {"Function": "KonnoOhmachi",
"bandwidth": 40.0,
"count": 1.0,
"normalize": True
}
# Returns
# 1. Horizontal to Vertical Spectral Ratio
# 2. Frequency
# 3. Maximum H/V
# 4. Period of Maximum H/V
hvsr, freq, max_hv, t_0 = ims.get_hvsr(x_record, time_step, y_record, time_step, v_record, time_step, params)
plt.figure(figsize=(7,5))
plt.semilogx(freq, hvsr, 'k-', lw=2.0)
# Show T0
t_0_line = np.array([[t_0, 0.0],
[t_0, 1.1 * max_hv]])
plt.semilogx(1.0 / t_0_line[:, 0], t_0_line[:, 1], "r--", lw=1.5)
plt.xlabel("Frequency (Hz)", fontsize=14)
plt.ylabel("H / V", fontsize=14)
plt.tick_params(labelsize=14)
plt.xlim(0.1, 10.0)
plt.grid(True)
plt.title(r"$T_0 = %.4f s$" % t_0, fontsize=16)
plt.savefig("images/hvsr_example1.pdf", format="pdf", dpi=300) | gmpe-smtk/Ground Motion IMs Short.ipynb | g-weatherill/notebooks | agpl-3.0 |
Selecting cell bags
A table is also a "bag of cells", which just so happens to be the set of all the cells in the table.
A "bag of cells" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table.
We will learn these as we go along, but you can see the full list on the tutorial_reference notebook. | # Preview the table as a table inline
savepreviewhtml(tab)
bb = tab.is_bold()
print("The cells with bold font are", bb)
print("The", len(bb), "cells immediately below these bold font cells are", bb.shift(DOWN))
cc = tab.filter("Cars")
print("The single cell with the text 'Cars' is", cc)
cc.assert_one() # proves there is only one cell in this bag
print("Everything in the column below the 'Cars' cell is", cc.fill(DOWN))
hcc = tab.filter("Cars").expand(DOWN)
print("If you wanted to include the 'Cars' heading, then use expand", hcc)
print("You can print the cells in row-column order if you don't mind unfriendly code")
shcc = sorted(hcc.unordered_cells, key=lambda Cell:(Cell.y, Cell.x))
print(shcc)
print("It can be easier to see the set of cells coloured within the table")
savepreviewhtml(hcc) | databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Note: As you work through this tutorial, do please feel free to temporarily insert new Jupyter-Cells in order to give yourself a place to experiment with any of the functions that are available. (Remember, the value of the last line in a Jupyter-Cell is always printed out -- in addition to any earlier print-statements.) | "All the cells that have an 'o' in them:", tab.regex(".*?o") | databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Observations and dimensions
Let's get on with some actual work. In our terminology, an "Observation" is a numerical measure (e.g. anything in the 3x4 array of numbers in the example table), and a "Dimension" is one of the headings.
Both are made up of a bag of cells; however, a Dimension also needs to know how to "look up" from the Observation to its dimensional value. | # We get the array of observations by selecting its corner and expanding down and to the right
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)
savepreviewhtml(obs)
# the two main headings are in a row and a column
r1 = tab.excel_ref('B3').expand(RIGHT)
r2 = tab.excel_ref('A3').fill(DOWN)
# here we pass in a list containing two cell bags and get two colours
savepreviewhtml([r1, r2])
# HDim is made from a bag of cells, a name, and an instruction on how to look it up
# from an observation cell.
h1 = HDim(r1, "Vehicles", DIRECTLY, ABOVE)
# Here is an example cell
cc = tab.excel_ref('C5')
# You can preview a dimension as well as just a cell bag
savepreviewhtml([h1, cc])
# !!! This is the important look-up stage from a cell into a dimension
print("Cell", cc, "matches", h1.cellvalobs(cc), "in dimension", h1.label)
# You can start to see through to the final result of all this work when you
# print out the lookup values for every observation in the table at once.
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob)) | databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Note the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is because we can over-ride its output value without actually rewriting the original table, as we shall see. | # You can change an output value like this:
h1.AddCellValueOverride("Cars", "Horses")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# Alternatively, you can override by the reference to a single cell to a value
# (This will work even if the cell C3 is empty, which helps with filling in blank headings)
h1.AddCellValueOverride(tab.excel_ref('C3'), "Submarines")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# You can override the header value for an individual observation element.
b4cell = tab.excel_ref('B4')
h1.AddCellValueOverride(b4cell, "Clouds")
for ob in obs:
print("Obs", ob, "maps to", h1.cellvalobs(ob))
# The preview table shows how things have changed
savepreviewhtml([h1, obs])
wob = tab.excel_ref('A1')
print("Wrong-Obs", wob, "maps to", h1.cellvalobs(wob), " <--- ie Nothing")
h1.AddCellValueOverride(None, "Who knows?")
print("After giving a default value Wrong-Obs", wob, "now maps to", h1.cellvalobs(wob))
# The default even works if the cell bag set is empty. In which case we have a special
# constant case that maps every observation to the same value
h3 = HDimConst("Category", "Beatles")
for ob in obs:
print("Obs", ob, "maps to", h3.cellvalobs(ob)) | databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Conversion segments and output
A ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once.
You can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show how they look up. |
dimensions = [
HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
HDim(r1, "Vehicles", DIRECTLY, ABOVE),
HDim(r2, "Name", DIRECTLY, LEFT),
HDimConst("Category", "Beatles")
]
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)
savepreviewhtml(c1)
# If the table is too big, we can preview it in another file that can be opened in another browser window.
# (It's very useful if you are using two computer screens.)
savepreviewhtml(c1, "preview.html", verbose=False)
print("Looking up all the observations against all the dimensions and print them out")
for ob in c1.segment:
print(c1.lookupobs(ob))
df = c1.topandas()
df | databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
WDA Technical CSV
The ONS uses its own data system, known as WDA, for publishing its time-series data.
If you need to output to it, then this next section is for you.
The function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]). The format is very verbose because it repeats each dimension name and its value twice in each row, and every row begins with the following list of column entries, whether or not they exist.
observation, data_marking, statistical_unit_eng, statistical_unit_cym, measure_type_eng, measure_type_cym, observation_type, obs_type_value, unit_multiplier, unit_of_measure_eng, unit_of_measure_cym, confidentuality, geographic_area
The writetechnicalCSV() function accepts a single conversion segment, a list of conversion segments, or equivalently a pandas dataframe. | print(writetechnicalCSV(None, c1))
# This is how to write to a file
writetechnicalCSV("exampleWDA.csv", c1)
# We can read this file back in to a list of pandas dataframes
dfs = readtechnicalCSV("exampleWDA.csv")
print(dfs[0])
| databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Note: If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.
You will note that the TIME column above is 2014.0 when it really should be 2014 with the TIMEUNIT set to Year.
By setting it to True the ConversionSegment object will identify the timeunit from the value of the TIME column and then force its format to conform. | # See that the `2014` no longer ends with `.0`
c1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
c1.topandas()
| databaker/tutorial/Finding_your_way.ipynb | scraperwiki/databaker | agpl-3.0 |
Additive model
The first example of conservative estimation considers an additive model $\eta : \mathbb R^d \rightarrow \mathbb R$ with Gaussian margins. The objective is to estimate a quantity of interest $\mathcal C(Y)$ of the model output distribution. Unfortunately, the dependence structure is unknown. In order to be conservative, we aim to give bounds on $\mathcal C(Y)$.
The model
This example considers a simple additive model. | from depimpact.tests import func_sum
help(func_sum) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
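The help(func_sum) call above prints the actual definition. For readers without the depimpact package installed, an additive model of this kind would, under that assumption, simply be the row-wise sum of its inputs; a minimal sketch (not the depimpact source):
# Minimal sketch of an additive model eta(x) = x_1 + ... + x_d (an assumption,
# not the actual func_sum implementation); x is an (n_samples, dim) array.
def additive_model(x):
    return x.sum(axis=1)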
Dimension 2
We consider the problem in dimension $d=2$, with $p=1$ pair of variables and Gaussian margins. | dim = 2
margins = [ot.Normal()]*dim | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Copula families
We consider a Gaussian copula for this first example. | families = np.zeros((dim, dim), dtype=int)
families[1, 0] = 1 | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Estimations
We create an instance of the main class for a conservative estimate. | from depimpact import ConservativeEstimate
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
First, we compute the quantile at independence | n = 1000
indep_result = quant_estimate.independence(n_input_sample=n, random_state=random_state) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\alpha$ to a function that computes the empirical quantile from a given sample. | from depimpact import quantile_func
alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
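For intuition only, a function factory of this kind can be sketched as a closure around the empirical quantile. This is an illustration under the assumptions stated in the text, not the actual depimpact quantile_func implementation:
# Illustrative sketch (assumption): map a probability alpha to a function
# that computes the empirical alpha-quantile of an output sample.
def make_quantile_func(alpha):
    def q_func(output_sample):
        return np.percentile(output_sample, alpha * 100.)
    return q_func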
The computation returns a DependenceResult instance. This object gathers the information from the computation. It also computes the output quantity of interest (which can also be changed). | sns.jointplot(indep_result.input_sample[:, 0], indep_result.input_sample[:, 1]);
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), label='Quantile at %d%%' % (alpha*100))
plt.legend(loc=0)
print('Output quantile :', indep_result.quantity) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
A bootstrap can be done on the output quantity. | indep_result.compute_bootstrap(n_bootstrap=5000)
And we can plot it | sns.distplot(indep_result.bootstrap_sample, axlabel='Output quantile');
ci = [0.025, 0.975]
quantity_ci = indep_result.compute_quantity_bootstrap_ci(ci)
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), 'g-', label='Quantile at %d%%' % (alpha*100))
plt.plot([quantity_ci[0]]*2, h.get_ylim(), 'g--', label='%d%% confidence intervals' % ((1. - (ci[0] + 1. - ci[1]))*100))
plt.plot([quantity_ci[1]]*2, h.get_ylim(), 'g--')
plt.legend(loc=0)
print('Quantile at independence: %.2f with a C.O.V at %.1f %%' % (indep_result.boot_mean, indep_result.boot_cov)) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Grid Search Approach
First, we consider a grid-search approach in order to compare its performance with the iterative algorithm. The discretization can be made on the parameter space or on another concordance measure such as Kendall's tau. The example below shows a grid search on the parameter space. | K = 20
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure,
random_state=random_state) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
The computation returns a ListDependenceResult which is a list of DependenceResult instances and some bonuses. | print('The computation did %d model evaluations.' % (grid_result.n_evals)) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Let's set the quantity function and search for the minimum among the grid results. | grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param)) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
We can plot the grid results. The figure below shows the output quantiles as a function of the dependence parameters. | plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0); | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
As for the individual problem, we can also bootstrap, this time for each parameter. Because we have $K$ parameters, we can bootstrap the $K$ samples, compute the $K$ quantiles for each bootstrap replicate, and take the minimum quantile for each replicate. | grid_result.compute_bootstraps(n_bootstrap=500)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
sns.distplot(boot_min_quantiles, axlabel="Minimum quantiles", ax=axes[0])
sns.distplot(boot_min_params, axlabel="Parameters of the minimum", ax=axes[1]) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
For the parameter that occurs most often as the minimum, we compute its bootstrap mean. | # The parameter with the most occurrences
boot_id_min = max(set(boot_argmin_quantiles), key=boot_argmin_quantiles.count)
boot_min_result = grid_result[boot_id_min]
boot_mean = boot_min_result.bootstrap_sample.mean()
boot_std = boot_min_result.bootstrap_sample.std()
print('Worst Quantile: {} at {} with a C.O.V of {} %'.format(boot_min_result.boot_mean, min_result.dep_param, boot_min_result.boot_cov*100.)) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Kendall's Tau
An interesting feature is to convert the dependence parameters to Kendall's Tau values. | plt.plot(grid_result.kendalls, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.kendall_tau, min_result.quantity, 'ro', label='Minimum quantile')
plt.xlabel("Kendall's tau")
plt.ylabel('Quantile')
plt.legend(loc=0); | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
As we can see, the lowest quantiles occur toward the bounds of the explored Kendall's tau range.
With bounds on the dependencies
An interesting option in the ConservativeEstimate class is to bound the dependencies when some prior information is available. | bounds_tau = np.asarray([[0., 0.7], [0.1, 0.]])
quant_estimate.bounds_tau = bounds_tau
K = 20
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0); | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Saving the results
It is useful to save the result to a file so it can be loaded later to compute other quantities, or anything else you need! | filename = './result.hdf'
grid_result.to_hdf(filename)
from depimpact import ListDependenceResult
load_grid_result = ListDependenceResult.from_hdf(filename, q_func=q_func, with_input_sample=False)
np.testing.assert_array_equal(grid_result.output_samples, load_grid_result.output_samples)
import os
os.remove(filename) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Taking the extreme values of the dependence parameter
If the output quantity of interest appears to be monotonic in the dependence parameter, it is better to directly take the bounds of the dependence problem. Obviously, the minimum should then be at the edges of the design space. | K = None
n = 1000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
print("Kendall's Tau : {}, Quantile: {}".format(grid_result.kendalls.ravel(), grid_result.quantities))
from depimpact.plots import matrix_plot_input
matrix_plot_input(grid_result.min_result); | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Higher Dimension
We consider the problem in dimension $d=5$. | dim = 5
quant_estimate.margins = [ot.Normal()]*dim | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Copula families with one dependent pair
We consider a Gaussian copula again, but for the moment only one pair is dependent. | families = np.zeros((dim, dim), dtype=int)
families[2, 0] = 1
quant_estimate.families = families
families
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good). | quant_estimate.vine_structure | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Let's run the grid search and see what we get. | K = 20
n = 10000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
The quantile is lower compared to the lower-dimensional problem. Indeed, there are more variables, hence more uncertainty and a larger deviation of the output. | grid_result.q_func = q_func
min_result = grid_result.min_result
print('Worst Quantile: {} at {}'.format(min_result.quantity, min_result.dep_param))
matrix_plot_input(min_result)
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='Minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0); | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Copula families with all dependent pairs
We again consider a Gaussian copula, but this time every pair is dependent. | families = np.zeros((dim, dim), dtype=int)
for i in range(1, dim):
for j in range(i):
families[i, j] = 1
quant_estimate.margins = margins
quant_estimate.families = families
quant_estimate.vine_structure = None
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
K = 100
n = 1000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param)) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
With one fixed pair | families[3, 2] = 0
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
K = 100
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))
grid_result.vine_structure
from depimpact.plots import matrix_plot_input
matrix_plot_input(min_result) | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Save the used grid and load it again | K = 100
n = 1000
grid_type = 'lhs'
grid_result_1 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, save_grid=True, grid_path='./output')
grid_result_2 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, use_grid=0, grid_path='./output') | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Then gather the results from the same grid with the same configurations | grid_result_1.n_input_sample, grid_result_2.n_input_sample
grid_result = grid_result_1 + grid_result_2 | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Because the configurations are the same, we can gather the results from two different runs | grid_result.n_input_sample | notebooks/grid-search.ipynb | NazBen/impact-of-dependence | mit |
Source localization with a custom inverse solver
The objective of this example is to show how to plug a custom inverse solver
in MNE in order to facilitate empirical comparison with the methods MNE already
implements (wMNE, dSPM, sLORETA, eLORETA, LCMV, DICS, (TF-)MxNE etc.).
This script is educational and shall be used for methods
evaluations and new developments. It is not meant to be an example
of good practice to analyse your data.
The example makes use of 2 functions apply_solver and solver
so changes can be limited to the solver function (which only takes three
parameters: the whitened data, the gain matrix and the number of orientations)
in order to try out another inverse algorithm. | import numpy as np
from scipy import linalg
import mne
from mne.datasets import sample
from mne.viz import plot_sparse_source_estimates
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = meg_path / 'sample_audvis-ave.fif'
cov_fname = meg_path / 'sample_audvis-shrunk-cov.fif'
subjects_dir = data_path / 'subjects'
condition = 'Left Auditory'
# Read noise covariance matrix
noise_cov = mne.read_cov(cov_fname)
# Handling average file
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0.04, tmax=0.18)
evoked = evoked.pick_types(eeg=False, meg=True)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname) | stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Auxiliary function to run the solver | def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):
"""Call a custom solver on evoked data.
This function does all the necessary computation:
- to select the channels in the forward given the available ones in
the data
- to take into account the noise covariance and do the spatial whitening
- to apply loose orientation constraint as MNE solvers
- to apply a weighting of the columns of the forward operator as in the
weighted Minimum Norm formulation in order to limit the problem
of depth bias.
Parameters
----------
solver : callable
The solver takes 3 parameters: data M, gain matrix G, number of
dipoles orientations per location (1 or 3). A solver shall return
2 variables: X which contains the time series of the active dipoles
and an active set which is a boolean mask to specify what dipoles are
present in X.
evoked : instance of mne.Evoked
The evoked data
forward : instance of Forward
The forward solution.
noise_cov : instance of Covariance
The noise covariance.
loose : float in [0, 1] | 'auto'
Value that weights the source variances of the dipole components
that are parallel (tangential) to the cortical surface. If loose
is 0 then the solution is computed with fixed orientation.
If loose is 1, it corresponds to free orientations.
The default value ('auto') is set to 0.2 for surface-oriented source
space and set to 1.0 for volumic or discrete source space.
depth : None | float in [0, 1]
Depth weighting coefficients. If None, no depth weighting is performed.
Returns
-------
stc : instance of SourceEstimate
The source estimates.
"""
# Import the necessary private functions
from mne.inverse_sparse.mxne_inverse import \
(_prepare_gain, is_fixed_orient,
_reapply_source_weighting, _make_sparse_stc)
all_ch_names = evoked.ch_names
# Handle depth weighting and whitening (here there are no weights)
forward, gain, gain_info, whitener, source_weighting, mask = _prepare_gain(
forward, evoked.info, noise_cov, pca=False, depth=depth,
loose=loose, weights=None, weights_min=None, rank=None)
# Select channels of interest
sel = [all_ch_names.index(name) for name in gain_info['ch_names']]
M = evoked.data[sel]
# Whiten data
M = np.dot(whitener, M)
n_orient = 1 if is_fixed_orient(forward) else 3
X, active_set = solver(M, gain, n_orient)
X = _reapply_source_weighting(X, source_weighting, active_set)
stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],
tstep=1. / evoked.info['sfreq'])
return stc | stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Define your solver | def solver(M, G, n_orient):
"""Run L2 penalized regression and keep 10 strongest locations.
Parameters
----------
M : array, shape (n_channels, n_times)
The whitened data.
G : array, shape (n_channels, n_dipoles)
The gain matrix a.k.a. the forward operator. The number of locations
is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation
constraint or 3 when using a free orientation model.
n_orient : int
Can be 1 or 3 depending if one works with fixed or free orientations.
If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that
are normal to the cortex.
Returns
-------
X : array, (n_active_dipoles, n_times)
The time series of the dipoles in the active set.
active_set : array (n_dipoles)
Array of bool. Entry j is True if dipole j is in the active set.
We have ``X_full[active_set] == X`` where X_full is the full X matrix
such that ``M = G X_full``.
"""
inner = np.dot(G, G.T)
trace = np.trace(inner)
K = linalg.solve(inner + 4e-6 * trace * np.eye(G.shape[0]), G).T
K /= np.linalg.norm(K, axis=1)[:, None]
X = np.dot(K, M)
indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]
active_set = np.zeros(G.shape[1], dtype=bool)
for idx in indices:
idx -= idx % n_orient
active_set[idx:idx + n_orient] = True
X = X[active_set]
return X, active_set | stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Apply your custom solver | # loose, depth = 0.2, 0.8 # corresponds to loose orientation
loose, depth = 1., 0. # corresponds to free orientation
stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth) | stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
View in 2D and 3D ("glass" brain like 3D plot) | plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1) | stable/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Latent Variables | dag_with_latent_variables = CausalGraphicalModel(
nodes=["x", "y", "z"],
edges=[
("x", "z"),
("z", "y"),
],
latent_edges=[
("x", "y")
]
)
dag_with_latent_variables.draw()
# here there are no observed backdoor adjustment sets
dag_with_latent_variables.get_all_backdoor_adjustment_sets("x", "y")
# but there is a frontdoor adjustment set
dag_with_latent_variables.get_all_frontdoor_adjustment_sets("x", "y") | notebooks/cgm-examples.ipynb | ijmbarr/causalgraphicalmodels | mit |
StructuralCausalModels
For Structural Causal Models (SCM) we need to specify the functional form of each node: | from causalgraphicalmodels import StructuralCausalModel
import numpy as np
scm = StructuralCausalModel({
"x1": lambda n_samples: np.random.binomial(n=1,p=0.7,size=n_samples),
"x2": lambda x1, n_samples: np.random.normal(loc=x1, scale=0.1),
"x3": lambda x2, n_samples: x2 ** 2,
}) | notebooks/cgm-examples.ipynb | ijmbarr/causalgraphicalmodels | mit |
The only requirements on the functions are (see the sketch after this list):
- that variable names are consistent
- that each function accepts keyword arguments in the form of numpy arrays and outputs numpy arrays of shape [n_samples]
- that in addition to its parents, each function takes an n_samples variable indicating how many samples to generate
- that any function acts on each row independently. This ensures that the output samples are independent
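Here is a hypothetical node function that satisfies these requirements: its parent x1 arrives as a numpy array, it accepts n_samples, and it returns an array of shape [n_samples] with each row treated independently (illustration only, mirroring the lambdas defined above):
# Hypothetical node function meeting the requirements listed above.
def x2_function(x1, n_samples):
    # one independent draw per row, centred on that row's parent value
    return np.random.normal(loc=x1, scale=0.1, size=n_samples)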
Wrapping these functions in the StructuralCausalModel object allows us to easily generate samples: | ds = scm.sample(n_samples=100)
ds.head()
# and visualise the samples
import seaborn as sns
%matplotlib inline
sns.kdeplot(
data=ds.x2,
data2=ds.x3,
) | notebooks/cgm-examples.ipynb | ijmbarr/causalgraphicalmodels | mit |
And to access the implied CGM: | scm.cgm.draw()
And to apply an intervention: | scm_do = scm.do("x1")
scm_do.cgm.draw() | notebooks/cgm-examples.ipynb | ijmbarr/causalgraphicalmodels | mit |
And sample from the distribution implied by this intervention: | scm_do.sample(n_samples=5, set_values={"x1": np.arange(5)}) | notebooks/cgm-examples.ipynb | ijmbarr/causalgraphicalmodels | mit |
Case Study Data
There are a number of different sites that you can utilize to access past model output analyses and even forecasts. The most robust collection is housed at the National Center for Environmental Information (NCEI, formerly NCDC) on a THREDDS server. The general website to begin your search is
https://www.ncdc.noaa.gov/data-access
this link contains links to many different data sources (some of which we will come back to later in this tutorial). But for now, lets investigate what model output is avaiable
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets
The gridded model output that are available
Reanalysis
* Climate Forecast System Reanalysis (CFSR)
* CFSR provides a global reanalysis (a best estimate of the observed state of the atmosphere) of past weather from January 1979 through March 2011 at a horizontal resolution of 0.5°.
* North American Regional Reanalysis (NARR)
* NARR is a regional reanalysis of North America containing temperatures, winds, moisture, soil data, and dozens of other parameters at 32km horizontal resolution.
* Reanalysis-1 / Reanalysis-2 (R1/R2)
* Reanalysis-1 / Reanalysis-2 are two global reanalyses of atmospheric data spanning 1948/1979 to present at a 2.5° horizontal resolution.
Numerical Weather Prediction
* Climate Forecast System (CFS)
* CFS provides a global reanalysis, a global reforecast of past weather, and an operational, seasonal forecast of weather out to nine months.
* Global Data Assimilation System (GDAS)
* GDAS is the set of assimilation data, both input and output, in various formats for the Global Forecast System model.
* Global Ensemble Forecast System (GEFS)
* GEFS is a global-coverage weather forecast model made up of 21 separate forecasts, or ensemble members, used to quantify the amount of uncertainty in a forecast. GEFS produces output four times a day with weather forecasts going out to 16 days.
* Global Forecast System (GFS)
* The GFS model is a coupled weather forecast model, composed of four separate models which work together to provide an accurate picture of weather conditions. GFS covers the entire globe down to a horizontal resolution of 28km.
* North American Mesoscale (NAM)
* NAM is a regional weather forecast model covering North America down to a horizontal resolution of 12km. Dozens of weather parameters are available from the NAM grids, from temperature and precipitation to lightning and turbulent kinetic energy.
* Rapid Refresh (RAP)
* RAP is a regional weather forecast model of North America, with separate sub-grids (with different horizontal resolutions) within the overall North America domain. RAP produces forecasts every hour with forecast lengths going out 18 hours. RAP replaced the Rapid Update Cycle (RUC) model on May 1, 2012.
* Navy Operational Global Atmospheric Prediction System (NOGAPS)
* NOGAPS analysis data are available in six-hourly increments on regularly spaced latitude-longitude grids at 1-degree and one-half-degree resolutions. Vertical resolution varies from 18 to 28 pressure levels, 34 sea level depths, the surface, and other various levels.
Ocean Models
* Hybrid Coordinate Ocean Model (HYCOM), Global
* The Navy implementation of HYCOM is the successor to Global NCOM. This site hosts regions covering U.S. coastal waters as well as a global surface model.
* Navy Coastal Ocean Model (NCOM), Global
* Global NCOM was run by the Naval Oceanographic Office (NAVOCEANO) as the Navy’s operational global ocean-prediction system prior to its replacement by the Global HYCOM system in 2013. This site hosts regions covering U.S., European, West Pacific, and Australian coastal waters as well as a global surface model.
* Navy Coastal Ocean Model (NCOM), Regional
* The Regional NCOM is a high-resolution version of NCOM for specific areas. NCEI serves the Americas Seas, U.S. East, and Alaska regions of NCOM.
* Naval Research Laboratory Adaptive Ecosystem Climatology (AEC)
* The Naval Research Laboratory AEC combines an ocean model with Earth observations to provide a synoptic view of the typical (climatic) state of the ocean for every day of the year. This dataset covers the Gulf of Mexico and nearby areas.
* National Centers for Environmental Prediction (NCEP) Real Time Ocean Forecast System (RTOFS)–Atlantic
* RTOFS–Atlantic is a data-assimilating nowcast-forecast system operated by NCEP. This dataset covers the Gulf of Mexico and most of the northern and central Atlantic.
Climate Prediction
* CM2 Global Coupled Climate Models (CM2.X)
* CM2.X consists of two climate models to model the changes in climate over the past century and into the 21st century.
* Coupled Model Intercomparison Project Phase 5 (CMIP5) (link is external)
* The U.N. Intergovernmental Panel on Climate Change (IPCC) coordinates global analysis of climate models under the Climate Model Intercomparison Project (CMIP). CMIP5 is in its fifth iteration. Data are available through the Program for Climate Model Diagnosis and Intercomparison (PCMDI) website.
Derived / Other Model Data
* Service Records Retention System (SRRS)
* SRRS is a store of weather observations, summaries, forecasts, warnings, and advisories generated by the National Weather Service for public use.
* NOMADS Ensemble Probability Tool
* The NOMADS Ensemble Probability Tool allows a user to query the Global Ensemble Forecast System (GEFS) to determine the probability that a set of forecast conditions will occur at a given location using all of the 21 separate GEFS ensemble members.
* National Digital Forecast Database (NDFD)
* NDFD are gridded forecasts created from weather data collected by National Weather Service field offices and processed through the National Centers for Environmental Prediction. NDFD data are available by WMO header or by date range.
* National Digital Guidance Database (NDGD)
* NDGD consists of forecasts, observations, model probabilities, climatological normals, and other digital data that complement the National Digital Forecast Database.
NARR Output
Let's investigate what specific NARR output is available to work with from NCEI.
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/north-american-regional-reanalysis-narr
We specifically want to look for data that has "TDS" data access, since that is short for a THREDDS server data access point. There are a total of four different NARR datasets that we could potentially use.
Choosing our data source
Let's go ahead and use the NARR Analysis data to investigate the past case we identified (The Storm of the Century).
https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/199303/19930313/catalog.html?dataset=narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb
And we will use a python package called Siphon to read this data through the NetCDFSubset (NetCDFServer) link.
https://www.ncei.noaa.gov/thredds/ncss/grid/narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb/dataset.html | # Case Study Date
year = 1993
month = 3
day = 13
hour = 0
dt = datetime(year, month, day, hour)
# Read NARR Data from THREDDS server
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/'
# Programmatically generate the URL to the day of data we want
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
# Have Siphon find the appropriate dataset
ds = cat.datasets.filter_time_nearest(dt)
# Download data using the NetCDF Subset Service
ncss = ds.subset()
query = ncss.query().lonlat_box(north=60, south=18, east=300, west=225)
query.all_times().variables('Geopotential_height_isobaric', 'Temperature_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric').add_lonlat().accept('netcdf')
data = ncss.get_data(query)
# Back up in case of bad internet connection.
# Uncomment the following line to read local netCDF file of NARR data
# data = Dataset('../../data/NARR_19930313_0000.nc','r') | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Let's see what dimensions are in the file: | data.dimensions | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Pulling Data for Calculation/Plotting
The object that we get from Siphon is netCDF-like, so we can pull data using familiar calls for all of the variables that are desired for calculations and plotting purposes.
NOTE:
Due to the curvilinear nature of the NARR grid, there is a need to smooth the data that we import for calculation and plotting purposes. For more information about why, please see the following link: http://www.atmos.albany.edu/facstaff/rmctc/narr/
Additionally, we want to attach units to our values for use in MetPy calculations later and it will also allow for easy conversion to other units.
<div class="alert alert-success">
<b>EXERCISE</b>:
Replace the `0`'s in the template below with your code:
<ul>
<li>Use the `gaussian_filter` function to smooth the `Temperature_isobaric`, `Geopotential_height_isobaric`, `u-component_of_wind_isobaric`, and `v-component_of_wind_isobaric` variables from the netCDF object with a `sigma` value of 1.</li>
<li>Assign the units of `kelvin`, `meter`, `m/s`, and `m/s` respectively.</li>
<li>Extract the `lat`, `lon`, and `isobaric1` variables.</li>
</ul>
</div> | # Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0], sigma=1.0) * units.K
hght = 0
uwnd = 0
vwnd = 0
# Extract coordinate data for plotting
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lev = 0 | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button>
<div id="sol1" class="collapse">
<code><pre>
# Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0],
sigma=1.0) * units.K
hght = gaussian_filter(data.variables['Geopotential_height_isobaric'][0],
sigma=1.0) * units.meter
uwnd = gaussian_filter(data.variables['u-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
vwnd = gaussian_filter(data.variables['v-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
\# Extract coordinate data for plotting
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lev = data.variables['isobaric1'][:]
</pre></code>
</div>
Next we need to extract the time variable. It's not in very useful units, but the num2date function can be used to easily create regular datetime objects. | time = data.variables['time1']
print(time.units)
vtime = num2date(time[0], units=time.units)
print(vtime) | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Finally, we need to calculate the spacing of the grid in distance units instead of degrees using the MetPy helper function lat_lon_grid_deltas. | # Calculate dx and dy for calculations
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat) | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Finding Pressure Level Data
A robust way to parse the data for a certain pressure level is to find the index value using the np.where function. Since the NARR pressure data ('levels') is in hPa, then we'll want to search that array for our pressure levels 850, 500, and 300 hPa.
<div class="alert alert-success">
<b>EXERCISE</b>:
Replace the `0`'s in the template below with your code:
<ul>
<li>Find the index of the 850 hPa, 500 hPa, and 300 hPa levels.</li>
<li>Extract the heights, temperature, u, and v winds at those levels.</li>
</ul>
</div> | # Specify 850 hPa data
ilev850 = np.where(lev==850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = 0
uwnd_850 = 0
vwnd_850 = 0
# Specify 500 hPa data
ilev500 = 0
hght_500 = 0
uwnd_500 = 0
vwnd_500 = 0
# Specify 300 hPa data
ilev300 = 0
hght_300 = 0
uwnd_300 = 0
vwnd_300 = 0 | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button>
<div id="sol2" class="collapse">
<code><pre>
# Specify 850 hPa data
ilev850 = np.where(lev == 850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = tmpk[ilev850]
uwnd_850 = uwnd[ilev850]
vwnd_850 = vwnd[ilev850]
\# Specify 500 hPa data
ilev500 = np.where(lev == 500)[0][0]
hght_500 = hght[ilev500]
uwnd_500 = uwnd[ilev500]
vwnd_500 = vwnd[ilev500]
\# Specify 300 hPa data
ilev300 = np.where(lev == 300)[0][0]
hght_300 = hght[ilev300]
uwnd_300 = uwnd[ilev300]
vwnd_300 = vwnd[ilev300]
</pre></code>
</div>
Using MetPy to Calculate Atmospheric Dynamic Quantities
MetPy has a large and growing list of functions to calculate many different atmospheric quantities. Here we want to use some classic functions to calculate wind speed, advection, planetary vorticity, relative vorticity, and divergence.
Wind Speed: mpcalc.wind_speed()
Advection: mpcalc.advection()
Planetary Vorticity: mpcalc.coriolis_parameter()
Relative Vorticity: mpcalc.vorticity()
Divergence: mpcalc.divergence()
Note: For the above, MetPy Calculation module is imported in the following manner import metpy.calc as mpcalc.
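As a quick illustration of the calling pattern (separate from the exercises below), the wind speed function can be applied directly to the 850-hPa components extracted earlier, assuming those arrays were filled in:
# Illustration of the calling pattern; assumes uwnd_850/vwnd_850 exist from above.
wspd_850 = mpcalc.wind_speed(uwnd_850, vwnd_850).to('knots')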
Temperature Advection
A classic QG forcing term is 850-hPa temperature advection. MetPy has a function for advection
advection(scalar quantity, [advecting vector components], (grid spacing components))
So for temperature advection our scalar quantity would be the temperature, the advecting vector components would be our u and v components of the wind, and the grid spacing would be our dx and dy we computed in an earlier cell.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Uncomment and fill out the advection calculation below.</li>
</ul>
</div> | # Temperature Advection
# tmpc_adv_850 = mpcalc.advection(--Fill in this call--).to('degC/s') | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button>
<div id="sol3" class="collapse">
<code><pre>
# Temperature Advection
tmpc_adv_850 = mpcalc.advection(tmpk_850, [uwnd_850, vwnd_850],
(dx, dy), dim_order='yx').to('degC/s')
</pre></code>
</div>
Vorticity Calculations
There are a few different vorticities that we are interested in for various calculations: planetary vorticity, relative vorticity, and absolute vorticity. Currently MetPy has two of the three as functions within the calc module.
Planetary Vorticity (Coriolis Parameter)
coriolis_parameter(latitude in radians)
Note: You can convert your array of latitudes to radians yourself (NumPy provides the handy function np.deg2rad()), or have units attached to your latitudes in order for MetPy to convert them for you! Always check your output to make sure that your code is producing what you think it is producing.
Relative Vorticity
When atmospheric scientists talk about relative vorticity, we are really referring to the relative vorticity occurring about the vertical axis (the k-hat component). So in MetPy the function is
vorticity(uwind, vwind, dx, dy)
Absolute Vorticity
Currently there is no specific function for Absolute Vorticity, but this is easy for us to calculate from the previous two calculations because we just need to add them together!
ABS Vort = Rel. Vort + Coriolis Parameter
Here having units is great, because we won't be able to add things together that don't have the same units! It's a nice safety check: if you entered something wrong in another part of the calculation, you'll get a units error.
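For example, the two ways of handling latitude mentioned above give the same planetary vorticity (a small illustrative sketch with made-up latitude values, following the MetPy version used in this notebook):
<code><pre>
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units

lats = np.array([30., 40., 50.])                        # latitudes in degrees
f_a = mpcalc.coriolis_parameter(np.deg2rad(lats))       # convert to radians yourself
f_b = mpcalc.coriolis_parameter(lats * units.degrees)   # or let MetPy convert the units
</pre></code>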
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Fill in the function calls below to complete the vorticity calculations.</li>
</ul>
</div> | # Vorticity and Absolute Vorticity Calculations
# Planetary Vorticity
# f = mpcalc.coriolis_parameter(-- Fill in here --).to('1/s')
# Relative Vorticity
# vor_500 = mpcalc.vorticity(-- Fill in here --)
# Absolute Vorticity
# avor_500 = vor_500 + f | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>View Solution</button>
<div id="sol4" class="collapse">
<code><pre>
# Vorticity and Absolute Vorticity Calculations
\# Planetary Vorticity
f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to('1/s')
\# Relative Vorticity
vor_500 = mpcalc.vorticity(uwnd_500, vwnd_500, dx, dy,
dim_order='yx')
\# Absolute Vorticity
avor_500 = vor_500 + f
</pre></code>
</div>
Vorticity Advection
We use the same MetPy advection function for our vorticity advection; we just have to change the scalar quantity (what is being advected) and use the appropriate vector quantities for the level our scalar is from. So for vorticity advection we'll want our wind components from 500 hPa. | # Vorticity Advection
f_adv = mpcalc.advection(f, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
relvort_adv = mpcalc.advection(vor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
absvort_adv = mpcalc.advection(avor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx') | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Divergence and Stretching Vorticity
If we want to analyze a component of the vorticity tendency equation other than advection, we might want to assess the stretching vorticity term.
-(Abs. Vort.)*(Divergence)
We already have absolute vorticity calculated, so now we need to calculate the divergence of the level, which MetPy has a function
divergence(uwnd, vwnd, dx, dy)
This function computes the horizontal divergence. | # Stretching Vorticity
div_500 = mpcalc.divergence(uwnd_500, vwnd_500, dx, dy, dim_order='yx')
stretch_vort = -1 * avor_500 * div_500 | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Wind Speed, Geostrophic and Ageostrophic Wind
Wind Speed
Calculating wind speed is not a difficult calculation, but MetPy offers a function to calculate it easily keeping units so that it is easy to convert units for plotting purposes.
wind_speed(uwnd, vwnd)
Geostrophic Wind
The geostrophic wind can be computed from a given height gradient and coriolis parameter
geostrophic_wind(heights, coriolis parameter, dx, dy)
This function will return the two geostrophic wind components in a tuple. On the left hand side you'll be able to put two variables to save them off separately, if desired.
Ageostrophic Wind
Currently, there is not a function in MetPy for calculating the ageostrophic wind; however, it is again a simple arithmetic operation to get it from the total wind (which comes from our data input) and our calculated geostrophic wind from above.
Ageo Wind = Total Wind - Geo Wind | # Divergence 300 hPa, Ageostrophic Wind
wspd_300 = mpcalc.wind_speed(uwnd_300, vwnd_300).to('kts')
div_300 = mpcalc.divergence(uwnd_300, vwnd_300, dx, dy, dim_order='yx')
ugeo_300, vgeo_300 = mpcalc.geostrophic_wind(hght_300, f, dx, dy, dim_order='yx')
uageo_300 = uwnd_300 - ugeo_300
vageo_300 = vwnd_300 - vgeo_300 | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Maps and Projections | # Data projection; NARR Data is Earth Relative
dataproj = ccrs.PlateCarree()
# Plot projection
# The look you want for the view, LambertConformal for mid-latitude view
plotproj = ccrs.LambertConformal(central_longitude=-100., central_latitude=40.,
standard_parallels=[30, 60])
def create_map_background():
fig=plt.figure(figsize=(14, 12))
ax=plt.subplot(111, projection=plotproj)
ax.set_extent([-125, -73, 25, 50],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
return fig, ax | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
850-hPa Temperature Advection
Add one contour (Temperature in Celsius) with a dotted linestyle
Add one colorfill (Temperature Advection in C/hr)
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add one contour (Temperature in Celsius) with a dotted linestyle</li>
<li>Add one filled contour (Temperature Advection in C/hr)</li>
</ul>
</div> | fig, ax = create_map_background()
# Contour 1 - Temperature, dotted
# Your code here!
# Contour 2
clev850 = np.arange(0, 4000, 30)
cs = ax.contour(lon, lat, hght_850, clev850, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Filled contours - Temperature advection
contours = [-3, -2.2, -2, -1.5, -1, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# Your code here!
# Vector
ax.barbs(lon, lat, uwnd_850.to('kts').m, vwnd_850.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('850-hPa Geopotential Heights, Temperature (C), \
Temp Adv (C/h), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show() | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>View Solution</button>
<div id="sol5" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1 - Temperature, dotted
cs2 = ax.contour(lon, lat, tmpk_850.to('degC'), range(-50, 50, 2),
colors='grey', linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Contour 2
clev850 = np.arange(0, 4000, 30)
cs = ax.contour(lon, lat, hght_850, clev850, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Filled contours - Temperature advection
contours = [-3, -2.2, -2, -1.5, -1, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
cf = ax.contourf(lon, lat, tmpc_adv_850*3600, contours,
cmap='bwr', extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
extendrect=True, ticks=contours)
\# Vector
ax.barbs(lon, lat, uwnd_850.to('kts').m, vwnd_850.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('850-hPa Geopotential Heights, Temperature (C), \
Temp Adv (C/h), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
500-hPa Absolute Vorticity
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add code for plotting vorticity as filled contours with given levels and colors.</li>
</ul>
</div> | fig, ax = create_map_background()
# Contour 1
clev500 = np.arange(0, 7000, 60)
cs = ax.contour(lon, lat, hght_500, clev500, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Filled contours
# Set contour intervals for Absolute Vorticity
clevavor500 = [-4, -3, -2, -1, 0, 7, 10, 13, 16, 19,
22, 25, 28, 31, 34, 37, 40, 43, 46]
# Set colorfill colors for absolute vorticity
# purple negative
# yellow to orange positive
colorsavor500 = ('#660066', '#660099', '#6600CC', '#6600FF',
'#FFFFFF', '#ffE800', '#ffD800', '#ffC800',
'#ffB800', '#ffA800', '#ff9800', '#ff8800',
'#ff7800', '#ff6800', '#ff5800', '#ff5000',
'#ff4000', '#ff3000')
# YOUR CODE HERE!
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50)
# Vector
ax.barbs(lon, lat, uwnd_500.to('kts').m, vwnd_500.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('500-hPa Geopotential Heights, Absolute Vorticity \
(1/s), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show() | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>View Solution</button>
<div id="sol6" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev500 = np.arange(0, 7000, 60)
cs = ax.contour(lon, lat, hght_500, clev500, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled contours
\# Set contour intervals for Absolute Vorticity
clevavor500 = [-4, -3, -2, -1, 0, 7, 10, 13, 16, 19,
22, 25, 28, 31, 34, 37, 40, 43, 46]
\# Set colorfill colors for absolute vorticity
\# purple negative
\# yellow to orange positive
colorsavor500 = ('#660066', '#660099', '#6600CC', '#6600FF',
'#FFFFFF', '#ffE800', '#ffD800', '#ffC800',
'#ffB800', '#ffA800', '#ff9800', '#ff8800',
'#ff7800', '#ff6800', '#ff5800', '#ff5000',
'#ff4000', '#ff3000')
cf = ax.contourf(lon, lat, avor_500 * 10**5, clevavor500,
colors=colorsavor500, transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50)
\# Vector
ax.barbs(lon, lat, uwnd_500.to('kts').m, vwnd_500.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('500-hPa Geopotential Heights, Absolute Vorticity \
(1/s), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
300-hPa Wind Speed, Divergence, and Ageostrophic Wind
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add code to plot 300-hPa Ageostrophic Wind vectors using matplotlib's quiver function.</li>
</ul>
</div> | fig, ax = create_map_background()
# Contour 1
clev300 = np.arange(0, 11000, 120)
cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2),
colors='grey', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour 2
cs = ax.contour(lon, lat, hght_300, clev300, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Filled Contours
spd300 = np.arange(50, 250, 20)
cf = ax.contourf(lon, lat, wspd_300, spd300, cmap='BuPu',
transform=dataproj, zorder=0)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50)
# Vector of 300-hPa Ageostrophic Wind Vectors
# Your code goes here!
# Titles
plt.title('300-hPa Geopotential Heights, Divergence (1/s),\
Wind Speed (kts), Ageostrophic Wind Vector (m/s)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show() | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<button data-toggle="collapse" data-target="#sol7" class='btn btn-primary'>View Solution</button>
<div id="sol7" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev300 = np.arange(0, 11000, 120)
cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2),
colors='grey', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Contour 2
cs = ax.contour(lon, lat, hght_300, clev300, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled Contours
spd300 = np.arange(50, 250, 20)
cf = ax.contourf(lon, lat, wspd_300, spd300, cmap='BuPu',
transform=dataproj, zorder=0)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50)
\# Vector of 300-hPa Ageostrophic Wind Vectors
ax.quiver(lon, lat, uageo_300.m, vageo_300.m, regrid_shape=15,
pivot='mid', transform=dataproj, zorder=10)
\# Titles
plt.title('300-hPa Geopotential Heights, Divergence (1/s),\
Wind Speed (kts), Ageostrophic Wind Vector (m/s)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
Vorticity Tendency Terms
Here is an example of a four-panel plot for a couple of terms in the Vorticity Tendency equation
Upper-left Panel: Planetary Vorticity Advection
Upper-right Panel: Relative Vorticity Advection
Lower-left Panel: Absolute Vorticity Advection
Lower-right Panel: Stretching Vorticity | fig=plt.figure(1,figsize=(21.,16.))
# Upper-Left Panel
ax=plt.subplot(221,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES,linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,f*10**4,np.arange(0,3,.05),colors='grey',
linewidths=1.0,linestyles='dashed',transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%.2f', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,f_adv*10**10,np.arange(-10,11,0.5),
cmap='PuOr_r',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Planetary Vorticity Advection ($*10^{10}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Upper-Right Panel
ax=plt.subplot(222,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,vor_500*10**5,np.arange(-40,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,relvort_adv*10**8,np.arange(-5,5.5,0.5),
cmap='BrBG',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Relative Vorticity Advection ($*10^{8}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Lower-Left Panel
ax=plt.subplot(223,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,avor_500*10**5,np.arange(-5,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,absvort_adv*10**8,np.arange(-5,5.5,0.5),
cmap='RdBu',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Absolute Vorticity Advection ($*10^{8}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Lower-Right Panel
ax=plt.subplot(224,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,gaussian_filter(avor_500*10**5,sigma=1.0),np.arange(-5,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,gaussian_filter(stretch_vort*10**9,sigma=1.0),np.arange(-15,16,1),
cmap='PRGn',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Stretching Vorticity ($*10^{9}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
plt.tight_layout()
plt.show() | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
Plotting Data for Hand Calculation
Calculating dynamic quantities with a computer is great and can allow for many different educational opportunities, but there are times when we want students to calculate those quantities by hand. So can we plot values of geopotential height, u-component of the wind, and v-component of the wind on a map? Yes! And it's not too hard to do.
Since we are using NARR data, we'll plot every third point to get a roughly 1 degree by 1 degree separation of grid points and thus an average grid spacing of 111 km (not exact, but close enough for back of the envelope calculations).
To do our plotting we'll be using the functionality of MetPy to plot station plot data, but we'll use our gridded data to plot around our points. To do this we'll have to make our 2D data into 1D (which is made easy by the ravel() method associated with our data objects).
First we'll want to set some bounds (so that we only plot what we want) and create a mask to make plotting easier.
Second we'll set up our figure with a projection and then set up our "stations" at the grid points we desire using the MetPy class StationPlot
https://unidata.github.io/MetPy/latest/api/generated/metpy.plots.StationPlot.html#metpy.plots.StationPlot
Third we'll plot our points using matplotlib's scatter() function and use our stationplot object to plot data around our "stations" | # Set lat/lon bounds for region to plot data
LLlon = -104
LLlat = 33
URlon = -94
URlat = 38.1
# Set up mask so that you only plot what you want
skip_points = (slice(None, None, 3), slice(None, None, 3))
mask_lon = ((lon[skip_points].ravel() > LLlon + 0.05) & (lon[skip_points].ravel() < URlon + 0.01))
mask_lat = ((lat[skip_points].ravel() < URlat - 0.01) & (lat[skip_points].ravel() > LLlat - 0.01))
mask = mask_lon & mask_lat | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot markers and data around the markers.</li>
</ul>
</div> | # Set up plot basics and use StationPlot class from MetPy to help with plotting
fig = plt.figure(figsize=(14, 8))
ax = plt.subplot(111,projection=ccrs.LambertConformal(central_latitude=50,central_longitude=-107))
ax.set_extent([LLlon,URlon,LLlat,URlat],ccrs.PlateCarree())
ax.coastlines('50m', edgecolor='grey', linewidth=0.75)
ax.add_feature(cfeature.STATES, edgecolor='grey', linewidth=0.5)
# Set up station plotting using only every third element from arrays for plotting
stationplot = StationPlot(ax, lon[skip_points].ravel()[mask],
lat[skip_points].ravel()[mask],
transform=ccrs.PlateCarree(), fontsize=12)
# Plot markers then data around marker for calculation purposes
# Your code goes here!
# Title
plt.title('Geopotential (m; top), U-wind (m/s; Lower Left), V-wind (m/s; Lower Right)')
plt.tight_layout()
plt.show() | notebooks/MetPy_Case_Study/MetPy_Case_Study.ipynb | Unidata/unidata-python-workshop | mit |
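One possible way to finish the exercise above (a sketch only, not the official solution; the 500-hPa fields are an illustrative choice, and depending on how the data were loaded you may need to strip units with .m):
<code><pre>
ax.scatter(lon[skip_points].ravel()[mask], lat[skip_points].ravel()[mask],
           s=12, marker='o', color='k', transform=ccrs.PlateCarree())     # the markers
stationplot.plot_parameter('N', hght_500[skip_points].ravel()[mask])      # geopotential on top
stationplot.plot_parameter('SW', uwnd_500[skip_points].ravel()[mask].m)   # u-wind lower left
stationplot.plot_parameter('SE', vwnd_500[skip_points].ravel()[mask].m)   # v-wind lower right
</pre></code>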
Download and manage data
Download the following series from FRED:
FRED series ID | Name | Frequency |
---------------|------|-----------|
GDP | Gross Domestic Product | Q |
PCEC | Personal Consumption Expenditures | Q |
GPDI | Gross Private Domestic Investment | Q |
GCE | Government Consumption Expenditures and Gross Investment | Q |
EXPGS | Exports of Goods and Services | Q |
IMPGS | Imports of Goods and Services | Q |
NETEXP | Net Exports of Goods and Services | Q |
HOANBS | Nonfarm Business Sector: Hours Worked for All Employed Persons | Q |
GDPDEF | Gross Domestic Product: Implicit Price Deflator | Q |
PCECTPI | Personal Consumption Expenditures: Chain-type Price Index | Q |
CPIAUCSL | Consumer Price Index for All Urban Consumers: All Items in U.S. City Average | M |
M2SL | M2 | M |
TB3MS | 3-Month Treasury Bill Secondary Market Rate | M |
UNRATE | Unemployment Rate | M |
Monthly series (M2, T-Bill, unemployment rate) are converted to quarterly frequencies. CPI and PCE inflation rates are computed as the percent change in the indices over the previous year. GDP, consumption, investment, government expenditures, net exports, and M2 are deflated by the GDP deflator. The data ranges for national accounts series (GDP, consumption, investment, government expenditures, net exports) and hours are equalized to the largest common date range. | # Download data
gdp = fp.series('GDP')
consumption = fp.series('PCEC')
investment = fp.series('GPDI')
government = fp.series('GCE')
exports = fp.series('EXPGS')
imports = fp.series('IMPGS')
net_exports = fp.series('NETEXP')
hours = fp.series('HOANBS')
deflator = fp.series('GDPDEF')
pce_deflator = fp.series('PCECTPI')
cpi = fp.series('CPIAUCSL')
m2 = fp.series('M2SL')
tbill_3mo = fp.series('TB3MS')
unemployment = fp.series('UNRATE')
# Base year for CPI
cpi_base_year = cpi.units.split(' ')[1].split('=')[0]
# Base year for NIPA deflators
nipa_base_year = deflator.units.split(' ')[1].split('=')[0]
# Convert monthly M2, 3-mo T-Bill, and unemployment to quarterly
m2 = m2.as_frequency('Q')
tbill_3mo = tbill_3mo.as_frequency('Q')
unemployment = unemployment.as_frequency('Q')
cpi = cpi.as_frequency('Q')
# Deflate GDP, consumption, investment, government expenditures, net exports, and m2 with the GDP deflator
def deflate(series,deflator):
deflator, series = fp.window_equalize([deflator, series])
series = series.divide(deflator).times(100)
return series
gdp = deflate(gdp,deflator)
consumption = deflate(consumption,deflator)
investment = deflate(investment,deflator)
government = deflate(government,deflator)
net_exports = deflate(net_exports,deflator)
exports = deflate(exports,deflator)
imports = deflate(imports,deflator)
m2 = deflate(m2,deflator)
# pce inflation as percent change over past year
pce_deflator = pce_deflator.apc()
# cpi inflation as percent change over past year
cpi = cpi.apc()
# GDP deflator inflation as percent change over past year
deflator = deflator.apc()
# Convert unemployment, 3-mo T-Bill, pce inflation, cpi inflation, GDP deflator inflation data to rates
unemployment = unemployment.divide(100)
tbill_3mo = tbill_3mo.divide(100)
pce_deflator = pce_deflator.divide(100)
cpi = cpi.divide(100)
deflator = deflator.divide(100)
# Make sure that the RBC data has the same data range
gdp,consumption,investment,government,exports,imports,net_exports,hours = fp.window_equalize([gdp,consumption,investment,government,exports,imports,net_exports,hours])
# T-Bill data doesn't need to go all the way back to the 1930s
tbill_3mo = tbill_3mo.window([gdp.data.index[0],'2222'])
metadata = pd.Series(dtype=str,name='Values')
metadata['nipa_base_year'] = nipa_base_year
metadata['cpi_base_year'] = cpi_base_year
metadata.to_csv(export_path+'/business_cycle_metadata.csv') | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Compute capital stock for US using the perpetual inventory method
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\
C_t & = (1-s)Y_t \tag{2}\
Y_t & = C_t + I_t \tag{3}\
K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\
A_{t+1} & = (1+g)A_t \tag{5}\
L_{t+1} & = (1+n)L_t \tag{6}.
\end{align}
Here the model is assumed to be quarterly so $n$ is the quarterly growth rate of labor hours, $g$ is the quarterly growth rate of TFP, and $\delta$ is the quarterly rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\delta$ so we'll have to calibrate these values using statistics computed from the data that we've already obtained.
Let lowercase letters denote a variable that's been divided by $A_t^{1/(1-\alpha)}L_t$. E.g.,
\begin{align}
y_t = \frac{Y_t}{A_t^{1/(1-\alpha)}L_t}\tag{7}
\end{align}
Then (after substituting consumption from the model), the scaled version of the model can be written as:
\begin{align}
y_t & = k_t^{\alpha} \tag{8}\
i_t & = sy_t \tag{9}\
k_{t+1} & = i_t + (1-\delta-n-g')k_t,\tag{10}
\end{align}
where $g' = g/(1-\alpha)$ is the growth rate of $A_t^{1/(1-\alpha)}$. In the steady state:
\begin{align}
k & = \left(\frac{s}{\delta+n+g'}\right)^{\frac{1}{1-\alpha}} \tag{11}
\end{align}
which means that the ratio of capital to output is constant:
\begin{align}
\frac{k}{y} & = \frac{s}{\delta+n+g'} \tag{12}
\end{align}
and therefore the steady state ratio of depreciation to output is:
\begin{align}
\overline{\delta K/ Y} & = \frac{\delta s}{\delta + n + g'} \tag{13}
\end{align}
where $\overline{\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\delta$ given $\overline{\delta K/ Y}$, $s$, $n$, and $g'$.
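Explicitly, solving Equation (13) for $\delta$ (multiply both sides by $\delta + n + g'$ and collect the $\delta$ terms) gives the calibration formula that appears as Equation (15) below:
\begin{align}
\left(\overline{\delta K/ Y}\right)\left(\delta + n + g'\right) = \delta s \quad \Longrightarrow \quad \delta = \frac{\left(\overline{\delta K/ Y}\right)\left(n + g'\right)}{s - \left(\overline{\delta K/ Y}\right)}
\end{align}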
Furthermore, in the steady state, the growth rate of output is constant:
\begin{align}
\frac{\Delta Y}{Y} & = n + g' \tag{14}
\end{align}
Assume $\alpha = 0.35$.
Calibrate $s$ as the average of ratio of investment to GDP.
Calibrate $n$ as the average quarterly growth rate of labor hours.
Calibrate $g'$ as the average quarterly growth rate of real GDP minus n.
Calculate the average ratio of depreciation to GDP $\overline{\delta K/ Y}$ and use the result to calibrate $\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\delta$ from the following steady state relationship:
\begin{align}
\delta & = \frac{\left( \overline{\delta K/ Y} \right)\left(n + g' \right)}{s - \left( \overline{\delta K/ Y} \right)} \tag{15}
\end{align}
Calibrate $K_0$ by assuming that the capital stock is initially equal to its steady state value:
\begin{align}
K_0 & = \left(\frac{s}{\delta + n + g'}\right) Y_0 \tag{16}
\end{align}
Then, armed with calibrated values for $K_0$ and $\delta$, compute $K_1, K_2, \ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method:
http://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf | # Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(investment.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
# Construct the capital series. Note that the GDP and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = investment.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly') | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Compute total factor productivity
Use the Cobb-Douglas production function:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{17}
\end{align}
and data on GDP, capital, and hours with $\alpha=0.35$ to compute an implied series for $A_t$. | # Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly') | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Additional data management
Now that we have used the aggregate production data to compute an implied capital stock and TFP, we can scale the production data and M2 by the population. | # Convert real GDP, consumption, investment, government expenditures, net exports and M2
# into thousands of dollars per civilian 16 and over
gdp = gdp.per_capita(civ_pop=True).times(1000)
consumption = consumption.per_capita(civ_pop=True).times(1000)
investment = investment.per_capita(civ_pop=True).times(1000)
government = government.per_capita(civ_pop=True).times(1000)
exports = exports.per_capita(civ_pop=True).times(1000)
imports = imports.per_capita(civ_pop=True).times(1000)
net_exports = net_exports.per_capita(civ_pop=True).times(1000)
hours = hours.per_capita(civ_pop=True).times(1000)
capital = capital.per_capita(civ_pop=True).times(1000)
m2 = m2.per_capita(civ_pop=True).times(1000)
# Scale hours per person to equal 100 in October (the start of Q4) of the GDP deflator base year.
hours.data = hours.data/hours.data.loc[base_year+'-10-01']*100 | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Plot aggregate data | fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent'); | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Compute HP filter of data | # HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend= gdp.log().hp_filter()
consumption_log_cycle,consumption_log_trend= consumption.log().hp_filter()
investment_log_cycle,investment_log_trend= investment.log().hp_filter()
government_log_cycle,government_log_trend= government.log().hp_filter()
exports_log_cycle,exports_log_trend= exports.log().hp_filter()
imports_log_cycle,imports_log_trend= imports.log().hp_filter()
# net_exports_log_cycle,net_exports_log_trend= net_exports.log().hp_filter()
capital_log_cycle,capital_log_trend= capital.log().hp_filter()
hours_log_cycle,hours_log_trend= hours.log().hp_filter()
tfp_log_cycle,tfp_log_trend= tfp.log().hp_filter()
deflator_cycle,deflator_trend= deflator.hp_filter()
pce_deflator_cycle,pce_deflator_trend= pce_deflator.hp_filter()
cpi_cycle,cpi_trend= cpi.hp_filter()
m2_log_cycle,m2_log_trend= m2.log().hp_filter()
tbill_3mo_cycle,tbill_3mo_trend= tbill_3mo.hp_filter()
unemployment_cycle,unemployment_trend= unemployment.hp_filter() | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Plot aggregate data with trends | fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].plot(np.exp(gdp_log_trend.data),c='r')
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].plot(np.exp(consumption_log_trend.data),c='r')
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].plot(np.exp(investment_log_trend.data),c='r')
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].plot(np.exp(government_log_trend.data),c='r')
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].plot(np.exp(capital_log_trend.data),c='r')
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].plot(np.exp(hours_log_trend.data),c='r')
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].plot(np.exp(tfp_log_trend.data),c='r')
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].plot(np.exp(m2_log_trend.data),c='r')
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].plot(tbill_3mo_trend.data*100,c='r')
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].plot(pce_deflator_trend.data*100,c='r')
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].plot(cpi_trend.data*100,c='r')
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].plot(unemployment_trend.data*100,c='r')
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent')
ax = fig.add_subplot(1,1,1)
ax.axis('off')
ax.plot(0,0,label='Actual')
ax.plot(0,0,c='r',label='Trend')
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=2) | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Plot cyclical components of the data | fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp_log_cycle.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption_log_cycle.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment_log_cycle.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government_log_cycle.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital_log_cycle.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours_log_cycle.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+base_year+'=100)')
axes[1][2].plot(tfp_log_cycle.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2_log_cycle.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo_cycle.data)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator_cycle.data)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi_cycle.data)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment_cycle.data)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent'); | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
Create data files | # Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':consumption.data,
'consumption_trend':np.exp(consumption_log_trend.data),
'consumption_cycle':consumption_log_cycle.data,
'investment':investment.data,
'investment_trend':np.exp(investment_log_trend.data),
'investment_cycle':investment_log_cycle.data,
'government':government.data,
'government_trend':np.exp(government_log_trend.data),
'government_cycle':government_log_cycle.data,
'exports':exports.data,
'exports_trend':np.exp(exports_log_trend.data),
'exports_cycle':exports_log_cycle.data,
'imports':imports.data,
'imports_trend':np.exp(imports_log_trend.data),
'imports_cycle':imports_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
'real_m2':m2.data,
'real_m2_trend':np.exp(m2_log_trend.data),
'real_m2_cycle':m2_log_cycle.data,
't_bill_3mo':tbill_3mo.data,
't_bill_3mo_trend':tbill_3mo_trend.data,
't_bill_3mo_cycle':tbill_3mo_cycle.data,
'cpi_inflation':cpi.data,
'cpi_inflation_trend':cpi_trend.data,
'cpi_inflation_cycle':cpi_cycle.data,
'pce_inflation':pce_deflator.data,
'pce_inflation_trend':pce_deflator_trend.data,
'pce_inflation_cycle':pce_deflator_cycle.data,
'unemployment':unemployment.data,
'unemployment_trend':unemployment_trend.data,
'unemployment_cycle':unemployment_cycle.data,
})
# RBC data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend_cycle.csv',index=True)
# More comprehensive Business Cycle Data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend_cycle.csv') | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/economicData | mit |
First, we need a custom score function, as described in the task:
https://www.kaggle.com/c/bike-sharing-demand/overview/evaluation
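For reference, the competition metric is the root mean squared logarithmic error (RMSLE), where $p_i$ is a prediction and $a_i$ the actual count:
\begin{align}
\text{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(p_i + 1) - \log(a_i + 1)\right)^2}
\end{align}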
Why do we need the +1 in the score function? | def rmsle(y_true, y_pred):
y_pred_clipped = np.clip(y_pred, 0., None)
return mean_squared_error(np.log1p(y_true), np.log1p(y_pred_clipped)) ** .5 | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
What happens without np.clip?
Let's start with the existing features and simple linear regression.
These feature extractors and the grid search will become clearer further on. | class SimpleFeatureExtractor(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
return X[["holiday", "workingday", "season", "weather", "temp", "atemp", "humidity", "windspeed"]].values
exctractor = SimpleFeatureExtractor()
clf = Pipeline([
("extractor", exctractor),
("regression", linear_model.LinearRegression()),
])
param_grid = {}
scorerer = make_scorer(rmsle, greater_is_better=False)
researcher = GridSearchCV(clf, param_grid, scoring=scorerer, cv=5, n_jobs=4, verbose=1, refit=False)
researcher.fit(df, df["count"].values) | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
Hyperparameters Searcher always maximizes the score function, so if we need to decrease it, it just adds the minus. | researcher.best_score_ | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
Add regularization and grid search the hyperparameters
Now it's more clear why we have Grid Searcher ;-) | exctractor = SimpleFeatureExtractor()
clf = Pipeline([
("extractor", exctractor),
("regression", linear_model.ElasticNet()),
])
param_grid = {
"regression__alpha": np.logspace(-3, 2, 10),
"regression__l1_ratio": np.linspace(0, 1, 10)
}
scorerer = make_scorer(rmsle, greater_is_better=False)
researcher = GridSearchCV(clf, param_grid, scoring=scorerer, cv=5, n_jobs=4, verbose=1, refit=False)
researcher.fit(df, df["count"].values)
researcher.best_score_
researcher.best_params_ | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
Try to add some custom features | class FeatureExtractor(BaseEstimator, TransformerMixin):
ohe = OneHotEncoder(categories='auto', sparse=False)
scaler = StandardScaler()
categorical_columns = ["week_day", "hour", "season", "weather"]
numerical_columns = ["temp", "atemp", "humidity", "windspeed"]
def _add_features(self, X):
X["week_day"] = X.datetime.apply(lambda dttm: parse(dttm).weekday())
X["hour"] = X.datetime.apply(lambda dttm: parse(dttm).hour)
def _combine(self, *feature_groups):
return np.hstack(feature_groups)
def collect_stats(self, X):
self._add_features(X)
self.ohe.fit(X[self.categorical_columns])
self.scaler.fit(X[self.numerical_columns])
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
self._add_features(X)
custom_binary_features = self.ohe.transform(X[self.categorical_columns])
scaled_features = self.scaler.transform(X[self.numerical_columns])
return self._combine(
custom_binary_features,
scaled_features,
X[["holiday", "workingday"]].values
)
exctractor = FeatureExtractor()
exctractor.collect_stats(df)
clf = Pipeline([
("extractor", exctractor),
("regression", linear_model.ElasticNet()),
])
param_grid = {
"regression__alpha": np.logspace(-3, 2, 10),
"regression__l1_ratio": np.linspace(0, 1, 10)
}
pd.options.mode.chained_assignment = None
scorerer = make_scorer(rmsle, greater_is_better=False)
researcher = GridSearchCV(clf, param_grid, scoring=scorerer, cv=5, n_jobs=4, verbose=1, refit=False)
researcher.fit(df, df["count"].values)
researcher.best_score_
researcher.best_params_
scorerer = make_scorer(mean_squared_error, greater_is_better=False)
scores = cross_val_score(clf, df, df["count"].values, cv=5, n_jobs=4, scoring=scorerer)
np.mean((-np.array(scores)) ** .5) | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
What we can theoretically get if we optimize RMSE | param_grid = {
"regression__alpha": np.logspace(-3, 2, 10),
"regression__l1_ratio": np.linspace(0, 1, 10)
}
pd.options.mode.chained_assignment = None
def rmse(y_true, y_pred):
return mean_squared_error(y_true, y_pred) ** .5
scorerer = make_scorer(rmse, greater_is_better=False)
researcher = GridSearchCV(clf, param_grid, scoring=scorerer, cv=5, n_jobs=4, verbose=1, refit=False)
researcher.fit(df, df["count"].values)
researcher.best_score_
researcher.best_params_ | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
11 min!!! Now we also fit the FeatureExtractor every time, so the pipeline becomes heavier. Why? Can you speed it up?
What was the point about Maximum Likelihood
The process is better described by a Poisson distribution:
https://en.wikipedia.org/wiki/Poisson_distribution
In probability theory and statistics, the Poisson distribution (French pronunciation: [pwasɔ̃]; in English often rendered /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.
The other point of view: we have 200 people, each with a 3% probability of picking up a bike.
What about the CLT??? It works when $n \rightarrow \infty$. For the Poisson distribution there is a special case called the De Moivre–Laplace theorem.
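A quick sanity check (an illustrative simulation using the made-up 200 riders / 3% numbers from above, not competition data) shows how close the binomial count is to a Poisson with the same mean:
<code><pre>
import numpy as np

rng = np.random.default_rng(0)
binom = rng.binomial(n=200, p=0.03, size=100000)     # 200 people, 3% chance each
poisson = rng.poisson(lam=200 * 0.03, size=100000)   # Poisson with the same mean
print(binom.mean(), binom.var())                     # mean ~ 6, variance ~ 5.8
print(poisson.mean(), poisson.var())                 # mean ~ 6, variance ~ 6
</pre></code>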
The list of different kinds of Generalized Linear Regression methods in sklearn: https://scikit-learn.org/stable/modules/linear_model.html
And there is no Poisson regression there.
So, let's write a probabilistic model for the Poisson distribution and optimize its maximum likelihood.
Homework: try to do it.
Hint:
start from the assumption $\hat{y} = \exp{\langle x, \theta \rangle}$, write down the log-likelihood, and take its derivative with respect to $\theta$. Set it to zero and check the sign of the second derivative.
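To get started on the homework, the quantity to maximize is the Poisson log-likelihood under that assumption (the $\log(y_i!)$ term does not depend on $\theta$, so it does not affect the maximizer):
\begin{align}
\ell(\theta) = \sum_{i}\left[\, y_i \langle x_i, \theta \rangle - e^{\langle x_i, \theta \rangle} - \log(y_i!) \,\right]
\end{align}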
The conclusion: we can simulate Poisson regression with a simple wrapper.
Poisson hierarchical regression
Check if we have issues with np.log(y == 0) | df[df["count"] == 0]
np.log(0)
class PoissonRegression(linear_model.ElasticNet):
def __init__(self, alpha=1.0, l1_ratio=0.5, fit_intercept=True,
normalize=False, precompute=False, max_iter=1000,
copy_X=True, tol=1e-4, warm_start=False, positive=False,
random_state=None, selection='cyclic'):
super().__init__(alpha, l1_ratio, fit_intercept, normalize, precompute, max_iter,
copy_X, tol, warm_start, positive, random_state, selection)
def fit(self, X, y, *args):
return super().fit(X, np.log(y), *args)
def predict(self, X):
return np.exp(super().predict(X))
exctractor = FeatureExtractor()
exctractor.collect_stats(df)
clf = Pipeline([
("extractor", exctractor),
("regression", PoissonRegression()),
])
param_grid = {
"regression__alpha": np.logspace(-5, 1, 20),
"regression__l1_ratio": np.linspace(0, 1, 10)
}
pd.options.mode.chained_assignment = None
scorerer = make_scorer(rmsle, greater_is_better=False)
researcher = GridSearchCV(clf, param_grid, scoring=scorerer, cv=5, n_jobs=4, verbose=1, refit=False)
researcher.fit(df, df["count"].values)
researcher.best_params_
researcher.best_score_ | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
In terms of MSE the score is worse. But that doesn't mean MSE is the most relevant metric. At least Poisson regression never predicts negative values.
When would you expect Poisson regression to have a better MSE score? | scorerer = make_scorer(mean_squared_error, greater_is_better=False)
scores = cross_val_score(clf, df, df["count"].values, cv=5, n_jobs=4, scoring=scorerer)
np.mean((-np.array(scores)) ** .5) | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |
Skill vs Education
When you need to predict counts, try to use Poisson Regression.
You can get good enough results with experience, but you can't rely on your skills alone when you face a new type of task. The more complicated the task, the less your previous experience can help you.
The key to success is to have a good enough education. With education you can do research. | df_test = pd.read_csv("test.csv")
cols = df_test.columns
all_data = pd.concat([df[cols], df_test[cols]])
exctractor = FeatureExtractor()
exctractor.collect_stats(all_data)
clf = Pipeline([
("extractor", exctractor),
("regression", PoissonRegression(alpha=0.001623776739188721, l1_ratio=0.1111111111111111)),
])
clf.fit(df, df["count"].values)
df_test["count"] = clf.predict(df_test)
df_test[["datetime","count"]].set_index("datetime").to_csv("linear.csv")
# !kaggle competitions submit -f linear.csv -m "linear regression" bike-sharing-demand
# score 0.64265 | BikeSharing-Linear.ipynb | dmittov/misc | apache-2.0 |