| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Step 2: Putting in the Operations
Load the word embeddings for the word ids. You can do this using tf.nn.embedding_lookup. Remember to use your embeddings placeholder. You should end up with a Tensor of dimensions batch_size * sentence_length * embedding size.
To represent a whole tweet, let's use a neural bag of words. This means we represent each tweet by the words that occur in it; it's a basic representation but gets us pretty far. To do this in a neural way, we can just average the embeddings in the tweet, leaving a single vector of embedding size for each tweet. You should end up with a Tensor of dimensions batch_size * embedding size.
Apply a projection to a hidden layer of size 100 (i.e. multiply the input by a weight matrix and add a bias)
Apply a nonlinearity like tf.tanh
Project this to the output layer of size 1 (i.e. multiply the input by a weight vector and add a bias). Put this in a Python variable called logits. | "TODO" | draft/part1.ipynb | yala/introdeeplearning | mit |
Set up loss function, and optimizer to minimize it. We'll be using Adam as our optimizer | ## Make sure to call your output embedding logits, and your sentiments placeholder sentiments in python
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=sentiments)
loss = tf.reduce_sum(loss)
optimizer = tf.train.AdamOptimizer(1e-2).minimize(loss) | draft/part1.ipynb | yala/introdeeplearning | mit |
Run the Graph
Step 3: Set up training, and fetch optimizer at each iteration to train the model
First initialize all variables as in the toy example
Sample 20 random tweet/sentiment pairs for our feed_dict dictionary. Remember to feed in the embedding matrix.
Build the fetch dictionary: the ops we want to run and the tensors we want back.
Execute this many times to train | trainSet = p.load( open('data/trainTweets_preprocessed.p','rb'))
random.shuffle(trainSet)
" TODO Init vars"
losses = []
for i in range(5000):
trainTweet = np.array( [ t[0] for t in trainSet[i: i+ minibatch_size]])
trainLabels = np.array( [int(t[1]) for t in trainSet[i: i+ minibatch_size] ])
results = "TODO, run graph with data"
losses.append(results['loss'])
if i % 500 == 0:
print("Iteration",i,"Loss", sum(losses[-500:-1])/500. if i > 0 else losses[-1])
| draft/part1.ipynb | yala/introdeeplearning | mit |
Step 4: Check validation results, and tune
Try running the graph on validation data, without fetching the train op.
See how the results compare. If the train loss is much lower than the development loss, we may be overfitting. If the train loss is still high, try experimenting with the model architecture to increase its capacity. | validationSet = p.load( open('data/devTweets_preprocessed.p','rb'))
random.shuffle(validationSet)
losses = []
for i in range(20000 // 20):
valTweet = np.array( [ t[0] for t in validationSet[i: i+ minibatch_size]])
valLabels = np.array( [int(t[1]) for t in validationSet[i: i+ minibatch_size] ])
results = "TODO"
losses.append(results['loss'])
print("Dev Loss", sum(losses)*1./len(losses)) | draft/part1.ipynb | yala/introdeeplearning | mit |
Future Steps:
Things to try on your own:
- Add a tensor for accuracy and log it at each step (see the sketch after this list).
- Iterate over the whole validation dataset to get a more stable validation score.
- Try TensorBoard and graph accuracy over both sets over time.
- Experiment with different architectures that maximize the validation score. Maybe bag of words, which doesn't distinguish between "bad not good" and "good not bad", isn't a good enough representation.
- Test it on the test data.
- Do the RNN tutorial!
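For the first suggestion, here is a minimal sketch of what an accuracy tensor could look like, assuming the logits and sentiments tensors defined above (one possible approach, not the official solution):
python
import tensorflow as tf
# sketch: threshold the sigmoid of the logits at 0.5 and compare to the labels
predictions = tf.cast(tf.sigmoid(logits) > 0.5, tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, sentiments), tf.float32))
# add 'accuracy': accuracy to the fetch dictionary to log it at each step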
Solutions!
Do not look unless you really have to. Ask TAs for help first. Fight for the intuition, you'll get more out of it. | # Step 1:
tf.reset_default_graph()
session = tf.Session()
minibatch_size = 20
tweet_length = 20
embedding_size = 100
hidden_dim_size = 100
output_size = 1
init_bias = 0
tweets = tf.placeholder(tf.int32, shape=[minibatch_size,tweet_length])
sentiments = tf.placeholder(tf.float32, shape=[minibatch_size])
embeddingMatrix = tf.placeholder(tf.float32, shape =[vocab_size, embedding_size] )
W_hidden = tf.get_variable("W_hidden", [embedding_size, hidden_dim_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / hidden_dim_size))
b_hidden = tf.get_variable("b_hidden", [hidden_dim_size], initializer=tf.constant_initializer(init_bias))
W_output = tf.get_variable("W_output", [hidden_dim_size, output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / hidden_dim_size))
b_output = tf.get_variable("b_output", [output_size], initializer=tf.constant_initializer(init_bias))
# Step 2:
tweet_embedded = tf.nn.embedding_lookup(embeddingMatrix, tweets)
averagedTweets = tf.reduce_mean(tweet_embedded, axis=1)
hidden_proj = tf.matmul( averagedTweets, W_hidden) + b_hidden
non_linearity = tf.nn.tanh(hidden_proj)
logits = tf.matmul( non_linearity, W_output)+ b_output
logits = tf.reshape(logits, shape=[minibatch_size])
## Make sure to call your output embedding logits, and your sentiments placeholder sentiments in python
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=sentiments)
loss = tf.reduce_sum(loss)
optimizer = tf.train.AdamOptimizer().minimize(loss)
# Step 3:
trainSet = p.load( open('data/trainTweets_preprocessed.p','rb'))
random.shuffle(trainSet)
tf.global_variables_initializer().run(session=session)
losses = []
for i in range(5000):
trainTweet = np.array( [ t[0] for t in trainSet[i: i+ minibatch_size]])
trainLabels = np.array( [int(t[1]) for t in trainSet[i: i+ minibatch_size] ])
feed_dict = {
embeddingMatrix: word_embeddings,
tweets: trainTweet,
sentiments: trainLabels
}
fetch = {
'loss': loss,
'trainOp': optimizer
}
results = session.run(fetch, feed_dict=feed_dict)
losses.append(results['loss'])
if i % 500 == 0:
print("Iteration",i,"Loss", sum(losses[-500:-1])/500. if i > 0 else losses[-1])
# Step 4:
validationSet = p.load( open('data/devTweets_preprocessed.p','rb'))
random.shuffle(validationSet)
losses = []
for i in range(20000 // 20):
valTweet = np.array( [ t[0] for t in validationSet[i: i+ minibatch_size]])
valLabels = np.array( [int(t[1]) for t in validationSet[i: i+ minibatch_size] ])
feed_dict = {
embeddingMatrix: word_embeddings,
tweets: valTweet,
sentiments: valLabels
}
fetch = {
'loss': loss,
}
results = session.run(fetch, feed_dict=feed_dict)
losses.append(results['loss'])
print("Dev Loss", sum(losses)*1./len(losses)) | draft/part1.ipynb | yala/introdeeplearning | mit |
TOC Thematic Report - February 2019 (Part 2: Annual trends)
1. Get list of stations | # Select projects
prj_grid = nivapy.da.select_resa_projects(eng)
prj_grid
prj_df = prj_grid.get_selected_df()
print (len(prj_df))
prj_df
# Get stations
stn_df = nivapy.da.select_resa_project_stations(prj_df, eng)
print(len(stn_df))
stn_df.head()
# Map
nivapy.spatial.quickmap(stn_df, popup='station_code') | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
2. Calculate annual trends | # User input
# Specify projects of interest
proj_list = ['ICPWaters US', 'ICPWaters NO', 'ICPWaters CA',
'ICPWaters UK', 'ICPWaters FI', 'ICPWaters SE',
'ICPWaters CZ', 'ICPWaters IT', 'ICPWaters PL',
'ICPWaters CH', 'ICPWaters LV', 'ICPWaters EE',
'ICPWaters IE', 'ICPWaters MD', 'ICPWaters DE']
# Specify results folder
res_fold = (r'../../../Thematic_Trends_Report_2019/results') | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
1. 1990 to 2016 | # Specify period of interest
st_yr, end_yr = 1990, 2016
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=plot_fold)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df is not None:
nd_df.to_csv(nd_csv, index=False) | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
There are lots of warnings printed above, but the main one of interest is:
Some stations have no relevant data in the period specified.
Which station(s) are missing data? | # Get stations with no data
stn_df[stn_df['station_id'].isin(nd_df['station_id'])] | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
It seems that one Irish station has no associated data. This is as expected, because all the data supplied by Julian for this site comes from "near-shore" sampling (rather than "open water") and these have been omitted from the data upload - see here for details.
3. Basic checking
3.1. Boxplots | # Set up plot
fig = plt.figure(figsize=(20,10))
sn.set(style="ticks", palette="muted",
color_codes=True, font_scale=2)
# Horizontal boxplots
ax = sn.boxplot(x="mean", y="par_id", data=res_df,
whis=np.inf, color="c")
# Add "raw" data points for each observation, with some "jitter"
# to make them visible
sn.stripplot(x="mean", y="par_id", data=res_df, jitter=True,
size=3, color=".3", linewidth=0)
# Remove axis lines
sn.despine(trim=True) | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
4. Data restructuring
The code below is taken from here. It is used to generate output files in the format requested by Heleen.
4.1. Combine datasets | # Change 'period' col to 'data_period' and add 'analysis_period'
res_df['data_period'] = res_df['period']
del res_df['period']
res_df['analysis_period'] = '1990-2016'
# Join
df = pd.merge(res_df, stn_df, how='left', on='station_id')
# Re-order columns
df = df[['station_id',
'station_code', 'station_name',
'latitude', 'longitude', 'analysis_period', 'data_period',
'par_id', 'non_missing', 'n_start', 'n_end', 'mean', 'median',
'std_dev', 'mk_stat', 'norm_mk_stat', 'mk_p_val', 'trend',
'sen_slp']]
df.head() | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
4.3. SO4 at Abiskojaure
SO4 for this station ('station_id=36458' in the "core" dataset) should be removed. See here. | # Remove sulphate-related series at Abiskojaure
df = df.query('not((station_id==36458) and ((par_id=="ESO4") or '
'(par_id=="ESO4X") or '
'(par_id=="ESO4_ECl")))') | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
7.5. Tidy | # Remove unwanted cols
df.drop(labels=['mean', 'n_end', 'n_start', 'mk_stat', 'norm_mk_stat'],
axis=1, inplace=True)
# Reorder columns
df = df[['station_id', 'station_code',
'station_name', 'latitude', 'longitude', 'analysis_period',
'data_period', 'par_id', 'non_missing', 'median', 'std_dev',
'mk_p_val', 'trend', 'sen_slp', 'rel_sen_slp', 'include']]
# Write to output
out_path = os.path.join(res_fold, 'toc_core_trends_long_format.csv')
df.to_csv(out_path, index=False, encoding='utf-8')
df.head() | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
7.6. Convert to "wide" format | del df['data_period']
# Melt to "long" format
melt_df = pd.melt(df,
id_vars=['station_id', 'station_code',
'station_name', 'latitude', 'longitude',
'analysis_period', 'par_id', 'include'],
var_name='stat')
# Get only values where include='yes'
melt_df = melt_df.query('include == "yes"')
del melt_df['include']
# Build multi-index on everything except "value"
melt_df.set_index(['station_id', 'station_code',
'station_name', 'latitude', 'longitude', 'par_id',
'analysis_period',
'stat'], inplace=True)
# Unstack levels of interest to columns
wide_df = melt_df.unstack(level=['par_id', 'analysis_period', 'stat'])
# Drop unwanted "value" level in index
wide_df.columns = wide_df.columns.droplevel(0)
# Replace multi-index with separate components concatenated with '_'
wide_df.columns = ["_".join(item) for item in wide_df.columns]
# Reset multiindex on rows
wide_df = wide_df.reset_index()
# Save output
out_path = os.path.join(res_fold, 'toc_trends_wide_format.csv')
wide_df.to_csv(out_path, index=False, encoding='utf-8')
wide_df.head() | toc_report_feb_2019_part2.ipynb | JamesSample/icpw | mit |
Throughout the tutorial we'll want to visualize GPs. So we define a helper function for plotting: | # note that this helper function does three different things:
# (i) plots the observed data;
# (ii) plots the predictions from the learned GP after conditioning on data;
# (iii) plots samples from the GP prior (with no conditioning on observed data)
def plot(plot_observed_data=False, plot_predictions=False, n_prior_samples=0,
model=None, kernel=None, n_test=500):
plt.figure(figsize=(12, 6))
if plot_observed_data:
plt.plot(X.numpy(), y.numpy(), 'kx')
if plot_predictions:
Xtest = torch.linspace(-0.5, 5.5, n_test) # test inputs
# compute predictive mean and variance
with torch.no_grad():
if type(model) == gp.models.VariationalSparseGP:
mean, cov = model(Xtest, full_cov=True)
else:
mean, cov = model(Xtest, full_cov=True, noiseless=False)
sd = cov.diag().sqrt() # standard deviation at each input point x
plt.plot(Xtest.numpy(), mean.numpy(), 'r', lw=2) # plot the mean
plt.fill_between(Xtest.numpy(), # plot the two-sigma uncertainty about the mean
(mean - 2.0 * sd).numpy(),
(mean + 2.0 * sd).numpy(),
color='C0', alpha=0.3)
if n_prior_samples > 0: # plot samples from the GP prior
Xtest = torch.linspace(-0.5, 5.5, n_test) # test inputs
noise = (model.noise if type(model) != gp.models.VariationalSparseGP
else model.likelihood.variance)
cov = kernel.forward(Xtest) + noise.expand(n_test).diag()
samples = dist.MultivariateNormal(torch.zeros(n_test), covariance_matrix=cov)\
.sample(sample_shape=(n_prior_samples,))
plt.plot(Xtest.numpy(), samples.numpy().T, lw=2, alpha=0.4)
plt.xlim(-0.5, 5.5) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Data
The data consist of $20$ points sampled from
$$ y = 0.5\sin(3x) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, 0.2).$$
with $x$ sampled uniformly from the interval $[0, 5]$. | N = 20
X = dist.Uniform(0.0, 5.0).sample(sample_shape=(N,))
y = 0.5 * torch.sin(3*X) + dist.Normal(0.0, 0.2).sample(sample_shape=(N,))
plot(plot_observed_data=True) # let's plot the observed data | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Define model
First we define a RBF kernel, specifying the values of the two hyperparameters variance and lengthscale. Then we construct a GPRegression object. Here we feed in another hyperparameter, noise, that corresponds to $\epsilon$ above. | kernel = gp.kernels.RBF(input_dim=1, variance=torch.tensor(5.),
lengthscale=torch.tensor(10.))
gpr = gp.models.GPRegression(X, y, kernel, noise=torch.tensor(1.)) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Let's see what samples from this GP function prior look like. Note that this is before we've conditioned on the data. The shape these functions take—their smoothness, their vertical scale, etc.—is controlled by the GP kernel. | plot(model=gpr, kernel=kernel, n_prior_samples=2) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
For example, if we make variance and noise smaller we will see function samples with smaller vertical amplitude: | kernel2 = gp.kernels.RBF(input_dim=1, variance=torch.tensor(0.1),
lengthscale=torch.tensor(10.))
gpr2 = gp.models.GPRegression(X, y, kernel2, noise=torch.tensor(0.1))
plot(model=gpr2, kernel=kernel2, n_prior_samples=2) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Inference
In the above we set the kernel hyperparameters by hand. If we want to learn the hyperparameters from the data, we need to do inference. In the simplest (conjugate) case we do gradient ascent on the log marginal likelihood. In pyro.contrib.gp, we can use any PyTorch optimizer to optimize parameters of a model. In addition, we need a loss function which takes as inputs the pair model and guide and returns an ELBO loss (see SVI Part I tutorial). | optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(gpr.model, gpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
# let's plot the loss curve after 2500 steps of training
plt.plot(losses); | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Let's see if we're learned anything reasonable: | plot(model=gpr, plot_observed_data=True, plot_predictions=True) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Here the thick red curve is the mean prediction and the blue band represents the 2-sigma uncertainty around the mean. It seems we learned reasonable kernel hyperparameters, as both the mean and uncertainty give a reasonable fit to the data. (Note that learning could have easily gone wrong if we e.g. chose too large a learning rate or chose bad initial hyperparameters.)
Note that the kernel is only well-defined if variance and lengthscale are positive. Under the hood Pyro is using PyTorch constraints (see docs) to ensure that hyperparameters are constrained to the appropriate domains. Let's see the constrained values we've learned. | gpr.kernel.variance.item()
gpr.kernel.lengthscale.item()
gpr.noise.item() | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
The period of the sinusoid that generated the data is $T = 2\pi/3 \approx 2.09$, so learning a lengthscale that's approximately equal to a quarter period makes sense.
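As a quick check of that arithmetic: the data were generated with angular frequency $\omega = 3$, so
$$ T = \frac{2\pi}{\omega} = \frac{2\pi}{3} \approx 2.09, \qquad \frac{T}{4} \approx 0.52. $$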
Fit the model using MAP
We need to define priors for the hyperparameters. | # Define the same model as before.
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1, variance=torch.tensor(5.),
lengthscale=torch.tensor(10.))
gpr = gp.models.GPRegression(X, y, kernel, noise=torch.tensor(1.))
# note that our priors have support on the positive reals
gpr.kernel.lengthscale = pyro.nn.PyroSample(dist.LogNormal(0.0, 1.0))
gpr.kernel.variance = pyro.nn.PyroSample(dist.LogNormal(0.0, 1.0))
optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(gpr.model, gpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
plt.plot(losses);
plot(model=gpr, plot_observed_data=True, plot_predictions=True) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Let's inspect the hyperparameters we've learned: | # tell gpr that we want to get samples from guides
gpr.set_mode('guide')
print('variance = {}'.format(gpr.kernel.variance))
print('lengthscale = {}'.format(gpr.kernel.lengthscale))
print('noise = {}'.format(gpr.noise)) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Note that the MAP values are different from the MLE values due to the prior.
Sparse GPs
For large datasets computing the log marginal likelihood is costly due to the expensive matrix operations involved (e.g. see Section 2.2 of [1]). A variety of so-called 'sparse' variational methods have been developed to make GPs viable for larger datasets. This is a big area of research and we won't be going into all the details. Instead we quickly show how we can use SparseGPRegression in pyro.contrib.gp to make use of these methods.
First, we generate more data. | N = 1000
X = dist.Uniform(0.0, 5.0).sample(sample_shape=(N,))
y = 0.5 * torch.sin(3*X) + dist.Normal(0.0, 0.2).sample(sample_shape=(N,))
plot(plot_observed_data=True) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
Using the sparse GP is very similar to using the basic GP used above. We just need to add an extra parameter $X_u$ (the inducing points). | # initialize the inducing inputs
Xu = torch.arange(20.) / 4.0
# initialize the kernel and model
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1)
# we increase the jitter for better numerical stability
sgpr = gp.models.SparseGPRegression(X, y, kernel, Xu=Xu, jitter=1.0e-5)
# the way we setup inference is similar to above
optimizer = torch.optim.Adam(sgpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(sgpr.model, sgpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
plt.plot(losses);
# let's look at the inducing points we've learned
print("inducing points:\n{}".format(sgpr.Xu.data.numpy()))
# and plot the predictions from the sparse GP
plot(model=sgpr, plot_observed_data=True, plot_predictions=True) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
We can see that the model learns a reasonable fit to the data. There are three different sparse approximations that are currently implemented in Pyro:
"DTC" (Deterministic Training Conditional)
"FITC" (Fully Independent Training Conditional)
"VFE" (Variational Free Energy)
By default, SparseGPRegression will use "VFE" as the inference method. We can use other methods by passing a different approx flag to SparseGPRegression.
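For example, a sketch of switching to a different approximation (using the approx keyword mentioned above):
python
# same sparse model as before, but with the FITC approximation instead of the default VFE
sgpr_fitc = gp.models.SparseGPRegression(X, y, kernel, Xu=Xu, jitter=1.0e-5, approx="FITC")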
More Sparse GPs
Both GPRegression and SparseGPRegression above are limited to Gaussian likelihoods. We can use other likelihoods with GPs—for example, we can use the Bernoulli likelihood for classification problems—but the inference problem becomes more difficult. In this section, we show how to use the VariationalSparseGP module, which can handle non-Gaussian likelihoods. So we can compare to what we've done above, we're still going to use a Gaussian likelihood. The point is that the inference that's being done under the hood can support other likelihoods. | # initialize the inducing inputs
Xu = torch.arange(10.) / 2.0
# initialize the kernel, likelihood, and model
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1)
likelihood = gp.likelihoods.Gaussian()
# turn on "whiten" flag for more stable optimization
vsgp = gp.models.VariationalSparseGP(X, y, kernel, Xu=Xu, likelihood=likelihood, whiten=True)
# instead of defining our own training loop, we will
# use the built-in support provided by the GP module
num_steps = 1500 if not smoke_test else 2
losses = gp.util.train(vsgp, num_steps=num_steps)
plt.plot(losses);
plot(model=vsgp, plot_observed_data=True, plot_predictions=True) | tutorial/source/gp.ipynb | uber/pyro | apache-2.0 |
<span class="exercize">Exercise: RGB intensity plot</span>
Plot the intensity of each channel of the image along a given row.
Start with the following template: | def plot_intensity(image, row):
# Fill in the three lines below
red_values = ...
green_values = ...
blue_values = ...
plt.figure()
plt.plot(red_values)
plt.plot(green_values)
plt.plot(blue_values)
pass | scikit_image/lectures/00_images_are_arrays.v3.ipynb | M-R-Houghton/euroscipy_2015 | mit |
K-Means Clustering
Motivation
Given a set of objects, we specify that we want $k$ clusters. Each cluster has a mean representing it, and we assign each point to a cluster based on which cluster mean it's closest to. For each point, the reconstruction error is defined to be its distance to its cluster mean. This gives us the total reconstruction error, the sum of all the individual reconstruction errors, as an error value that we want to minimize.
Formally, we have a set of object vectors $\{x_n\}_{n = 1}^N$ and a set of $K$ cluster means $\{\mu_k\}_{k = 1}^K$, where $x_n, \mu_k \in \mathbb{R}^D$. We represent each object's cluster assignment with the $K$-dimensional vector $r_n$, where $r_{nk} = 1$ if $x_n$ is in cluster $k$, and 0 otherwise. This gives us the individual reconstruction error
$$J_n(r_n, \{\mu_k\}) = \sum_{k = 1}^K r_{nk} \, \|x_n - \mu_k\|^2$$
and the total reconstruction error
$$J(\{r_n\}, \{\mu_k\}) = \sum_{n = 1}^N \sum_{k = 1}^K r_{nk} \, \|x_n - \mu_k\|^2$$
As you can see, the reconstruction error on a set of objects is a function of the assignments and the means. How would you go about choosing the assignments $\{r_n\}$ and means $\{\mu_k\}$ that minimize the reconstruction error? Lloyd's Algorithm proposes a two-step error minimization.
Step 1: minimize $J$ by updating the $r_n$, assigning each $x_n$ to its closest cluster mean.
Step 2: minimize $J$ by recalculating the $\mu_k$ to be the average over all vectors assigned to cluster $k$.
Repeating this process until the assignments do not change, the algorithm converges to a local minimum. The algorithm can be improved by starting with reasonable distributions for the cluster centers rather than choosing them randomly (K-Means++), or by adding the condition that each cluster mean must be one of the data points (K-Medoids).
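As a concrete illustration, here is a minimal NumPy sketch of Lloyd's Algorithm (written from the description above, not the implementation SKLearn uses):
python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.RandomState(seed)
    mu = X[rng.choice(len(X), k, replace=False)]  # initialize means at k random points
    for _ in range(n_iter):
        # Step 1: assign each point to its closest cluster mean
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        r = dists.argmin(axis=1)
        # Step 2: recompute each mean as the average of the points assigned to it
        new_mu = np.array([X[r == j].mean(axis=0) if np.any(r == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break  # assignments no longer change
        mu = new_mu
    return mu, r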
Application
SKLearn has a K-Means implementation that is documented here. In this example, we use K-Means to classify flowers into 3 classes based on their properties. Note that choosing the right $k$ is crucial. The true number of categories within the dataset is 3, so with this knowledge, we can let $k$ be 3 to get the most logical split. However, if we didn't know that the dataset consisted of three types of flowers, choosing $k$ to be a value like 7 might result in less logical clusters. | import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn import datasets
centers = [[1, 1], [-1, -1], [1, -1]]
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = {'K-Means 3': KMeans(n_clusters=3),
              'K-Means 7': KMeans(n_clusters=7)}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(np.float))
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(name)
fignum = fignum + 1
plt.show() | 3/1-Clustering.ipynb | dataventures/workshops | mit |
Hierarchical Clustering
Motivation
K-Means is one of the most common and simplest clustering methods, but it has a couple of key limitations. First off, it is nondeterministic, as it depends on the initial choice of cluster means, and Lloyd's Algorithm only arrives at a local minimum rather than the global minimum. Furthermore, the algorithm requires you to decide what $k$ is, as we saw earlier. Finally, K-Means can be inflexible, as each cluster center is derived only as the mean of the points assigned to that cluster. Because of this construction, K-Means doesn't perform well on clusters that are connected but not compact.
Hierarchical clustering solves many of these issues. The motivation behind hierarchical clustering is to build up clusters as a hierarchy. All objects start out in their own individual groups, and at each level, the groups that are within a certain distance of each other are joined to form a larger group. A variety of different distance metrics can be used in building up these groups, resulting in different types of hierarchical clusters.
A dendrogram is a way of visualizing the groups as they are aggregated together in the hierarchy. As you can see, hierarchical clustering not only resolves some of the problems described above for K-Means; it also provides a very nice way of representing the structure of the clusters, identifying subclusters within the clusters.
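For instance, a dendrogram like the one described here can be drawn with SciPy (a small sketch using scipy.cluster.hierarchy, which is separate from the SKLearn estimators used below):
python
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

Z = linkage(X, method='ward')  # X is the iris feature matrix loaded above
dendrogram(Z, no_labels=True)
plt.xlabel('samples')
plt.ylabel('merge distance')
plt.show()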
Application
Once again, we can use the SKLearn hierarchical clustering implementation, in the same way as we used K-Means. The documentation page describes the different linkage criteria and distance metrics that you can use. | estimators = {'Hierarchical 3': AgglomerativeClustering(n_clusters=3)}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(np.float))
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(name)
fignum = fignum + 1
plt.show() | 3/1-Clustering.ipynb | dataventures/workshops | mit |
Challenge
Look at the other clustering methods provided by SKLearn, and consider their use cases. Pick three to run on the sample iris dataset that you think will produce the most accurate clusters. Tune parameters and look at the different options to try and get your clusters as close to the ground truth as possible. This challenge is meant to help you familiarize yourself with the documentation that SKLearn provides and figure out the best clustering method for a given problem. We only covered two clustering models in depth to give you a taste of what clustering can do, but there are many more clustering models out there, all with their own optimal use cases. | np.random.seed(5)
centers = [[1, 1], [-1, -1], [1, -1]]
iris = datasets.load_iris()
X = iris.data
y = iris.target
# TODO: choose three additional estimators here that will give the best results.
estimators = {'Hierarchical 3': AgglomerativeClustering(n_clusters=3),
'K-Means 3': KMeans(n_clusters=3),
'K-Means 7': KMeans(n_clusters=7)}
fignum = 1
for name, est in estimators.items():
fig = plt.figure(fignum, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=labels.astype(np.float))
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(name)
fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:
ax.text3D(X[y == label, 3].mean(),
X[y == label, 0].mean() + 1.5,
X[y == label, 2].mean(), name,
horizontalalignment='center',
bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))
y = np.choose(y, [1, 2, 0]).astype(np.float)
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title('Ground Truth')
plt.show() | 3/1-Clustering.ipynb | dataventures/workshops | mit |
Visualize Data
View a sample from the dataset.
You do not need to modify this section. | import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(1, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index]) | CarND-LetNet/LeNet-Lab.ipynb | swirlingsand/self-driving-car-nanodegree-nd013 | mit |
Setup TensorFlow
The EPOCHS and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section. | import tensorflow as tf
EPOCHS = 20
BATCH_SIZE = 64 | CarND-LetNet/LeNet-Lab.ipynb | swirlingsand/self-driving-car-nanodegree-nd013 | mit |
TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the 2nd fully connected layer. | from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
convolutional_1_weights = tf.Variable(tf.truncated_normal(shape=(5,5,1,6), mean = mu, stddev = sigma))
convolutional_1_bias = tf.Variable(tf.zeros(6)) # set to 6 as output is 6
convolutional_1 = tf.nn.conv2d(x, convolutional_1_weights, strides=[1,1,1,1], padding='VALID') + convolutional_1_bias
# Activation.
convolutional_1 = tf.nn.relu(convolutional_1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
# Stride of 2 reduces output by 2
convolutional_1 = tf.nn.max_pool(convolutional_1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
## END Layer 1
# Layer 2: Convolutional. Output = 10x10x16.
convolutional_2_weights = tf.Variable(tf.truncated_normal(shape=(5,5,6,16), mean = mu, stddev = sigma))
convolutional_2_bias = tf.Variable(tf.zeros(16))
# pass the first layer
convolutional_2 = tf.nn.conv2d(convolutional_1, convolutional_2_weights, strides=[1,1,1,1], padding='VALID' ) + convolutional_2_bias
# Activation.
convolutional_2 = tf.nn.relu(convolutional_2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
convolutional_2 = tf.nn.max_pool(convolutional_2, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fully_connected_0 = flatten(convolutional_2)
### End Layer 2
# Layer 3: Fully Connected. Input = 400. Output = 120.
fully_connected_1_weights = tf.Variable(tf.truncated_normal(shape=(400,120), mean=mu, stddev=sigma))
fully_connected_1_bias = tf.Variable(tf.zeros(120))
fully_connected_1 = tf.matmul(fully_connected_0, fully_connected_1_weights) + fully_connected_1_bias
# Activation.
fully_connected_1 = tf.nn.relu(fully_connected_1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
# shape = (input, output)
fully_connected_2_weights = tf.Variable(tf.truncated_normal(shape=(120,84), mean=mu, stddev=sigma))
fully_connected_2_bias = tf.Variable(tf.zeros(84))
fully_connected_2 = tf.matmul(fully_connected_1, fully_connected_2_weights) + fully_connected_2_bias
# Activation.
fully_connected_2 = tf.nn.relu(fully_connected_2)
# Layer 5: Fully Connected. Input = 84. Output = 10.
fully_connected_3_weights = tf.Variable(tf.truncated_normal(shape=(84,10), mean=mu, stddev=sigma))
fully_connected_3_bias = tf.Variable(tf.zeros(10))
logits = tf.matmul(fully_connected_2, fully_connected_3_weights) + fully_connected_3_bias
return logits | CarND-LetNet/LeNet-Lab.ipynb | swirlingsand/self-driving-car-nanodegree-nd013 | mit |
Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section. | x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
# added this to fix bug CUDA_ERROR_ILLEGAL_ADDRESS / kernel crash
with tf.device('/cpu:0'):
one_hot_y = tf.one_hot(y, 10) | CarND-LetNet/LeNet-Lab.ipynb | swirlingsand/self-driving-car-nanodegree-nd013 | mit |
Exercise 2: Create a function to determine whether a year is a leap year.
A year is a leap year if it is a multiple of 400, or a multiple of 4 but not a multiple of 100. Use the modulo operator (%) to determine whether one number is a multiple of another. |
print (Bissexto(2000)) # True
print (Bissexto(2004)) # True
print (Bissexto(1900)) # False | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 3: Create a function that receives three values x, y, z as parameters and returns them in ascending order.
Python lets you chain relational comparisons between the 3 variables in a single statement:
Python
x < y < z |
print (Crescente(1,2,3))
print (Crescente(1,3,2))
print (Crescente(2,1,3))
print (Crescente(2,3,1))
print (Crescente(3,1,2))
print (Crescente(3,2,1))
print (Crescente(1,2,2)) | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 4: A person's ideal weight follows the table below:
|Height|Ideal weight (men)|Ideal weight (women)|
|--|--|--|
|1.5 m|50 kg|48 kg|
|1.7 m|74 kg|68 kg|
|1.9 m|98 kg|88 kg|
|2.1 m|122 kg|108 kg|
Write a function that takes a person's gender, height, and weight as parameters and returns True if they are at their ideal weight. |
print (PesoIdeal("masculino", 1.87, 75)) # True
print (PesoIdeal("masculino", 1.92, 200)) # False
print (PesoIdeal("feminino", 1.87, 90)) # False
print (PesoIdeal("feminino", 1.6, 40)) # True | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 5: Create a function that receives the coordinates cx, cy and the radius r corresponding to the center and radius of a circle, and also receives the coordinates x, y of a point.
The function must return True if the point is inside the circle and False otherwise. |
print (Circunferencia(0,0,10,5,5) ) # True
print (Circunferencia(0,0,10,15,5)) # False | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 5b: Create a function called Circunferencia that takes as input the center coordinates cx and cy and the radius r of a circle. This function must create another function called VerificaPonto that takes the coordinates x and y of a point as input and returns True if the point is inside the circle, or False otherwise.
The Circunferencia function must return the VerificaPonto function. |
Verifica = Circunferencia(0,0,10)
print (Verifica(5,5))
print (Verifica(15,5)) | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 6:
The Death Star is a weapon developed by the Empire to dominate the universe.
A digital telescope was developed by the rebel forces to detect its location.
However, this telescope can only show the outline of the circles it finds, indicating their center and radius.
Knowing that a Death Star is defined by:
The radius of one circle being 10 times larger than the radius of the other
The smaller circle lying entirely inside the larger one
The outline of the smaller circle being at least 2 units away from the outline of the larger one
Write a function (reusing the previous exercises) to detect whether two circles defined by (cx1,cy1,r1) and (cx2,cy2,r2) can form a Death Star.
Bonus: plot the circles using the plotting library. | import math
print (EstrelaMorte(0,0,20,3,3,10))
print (EstrelaMorte(0,0,200,3,3,10))
print (EstrelaMorte(0,0,200,195,3,10)) | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Exercise 7: Create a function to determine the roots of the quadratic equation:
$$
a\,x^{2} + b\,x + c = 0
$$
Make the function return:
A single root when $b^2 = 4ac$
Complex roots when $b^2 < 4ac$
Real roots otherwise
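As a reminder, the roots are given by the quadratic formula:
$$ x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a} $$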
Use the cmath library to compute the square root of complex numbers. | import math, cmath
print (RaizSegundoGrau(2,4,2) ) # -1.0
print (RaizSegundoGrau(2,2,2)) # -0.5 - 0.9j, -0.5+0.9j
print (RaizSegundoGrau(2,6,2)) # -2.6, -0.38 | ListaEX_02.ipynb | folivetti/PIPYTHON | mit |
Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. | # Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake))) | gan_mnist/Intro_to_GANs_Solution.ipynb | udacity/deep-learning | mit |
1. Data
We should now have all the data loaded, named as it was before. As a reminder, these are the NGC numbers of the galaxies in the data set: | ngc_numbers | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
2. Independent fits for each galaxy
This class will package up the fitting using the "4b" method from the previous notebook (emcee plus analytic integration). In particular, it relies on the log_prior, log_posterior and log_likelihood_B functions (as well as the data, among other previous global-scope definitions). If you want to use a different approach instead, feel free.
There are various defaults here (e.g. nsteps, burn, maxlag) that you might want to tweak, but in principle they should work well enough for this problem. | class singleFitter:
def __init__(self, ngc):
'''
ngc: NGC identifier of the galaxy to fit
'''
self.ngc = ngc
self.data = data[ngc] # from global scope
# reproducing this for paranoia's sake
self.param_names = ['a', 'b', 'sigma']
self.param_labels = [r'$a$', r'$b$', r'$\sigma$']
def _logpost_vecarg_B(self, pvec):
params = {name:pvec[i] for i,name in enumerate(self.param_names)}
return log_posterior(self.data, log_likelihood_B, **params)
def fit(self, guess, nsteps=7500):
npars = len(self.param_names)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, self._logpost_vecarg_B)
start = np.array([np.array(guess)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
%time sampler.run_mcmc(start, nsteps)
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:], ax, labels=self.param_labels);
self.sampler = sampler
self.nwalkers = nwalkers
self.npars = npars
self.nsteps = nsteps
def burnin(self, burn=1000, maxlag=1000):
tmp_samples = [self.sampler.chain[i,burn:,:] for i in range(self.nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
self.samples = self.sampler.chain[:,burn:,:].reshape(self.nwalkers*(self.nsteps-burn), self.npars)
del self.sampler
# make it simpler/more readable to access the parameter samples
# (could have been fancier and more robust by using self.param_names here)
self.a = self.samples[:,0]
self.b = self.samples[:,1]
self.sigma = self.samples[:,2]
def thin(self, thinto=1000):
j = np.round(np.linspace(0, self.samples.shape[0]-1, thinto)).astype(int)
self.a = self.samples[j,0]
self.b = self.samples[j,1]
self.sigma = self.samples[j,2] | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Let's set up and run each of these fits, which hopefully shouldn't take too long. As always, you are responsible for looking over the trace plots and making sure everything is ok. | independent_fits = [singleFitter(ngc) for ngc in ngc_numbers]
independent_fits[0].fit(guessvec)
independent_fits[1].fit(guessvec)
independent_fits[2].fit(guessvec)
independent_fits[3].fit(guessvec)
independent_fits[4].fit(guessvec)
independent_fits[5].fit(guessvec)
independent_fits[6].fit(guessvec)
independent_fits[7].fit(guessvec)
independent_fits[8].fit(guessvec) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Based on the plots above, remove some burn-in. Check that the quantitative diagnostics are acceptable as they are printed out. | TBC(1) # burn = ...
for f in independent_fits:
print('NGC', f.ngc)
f.burnin(burn=burn) # optionally, set maxlag here also
print('') | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Now we'll use pygtc to plot all the individual posteriors, and see how they compare. | plotGTC([f.samples for f in independent_fits], paramNames=param_labels,
chainLabels=['NGC'+str(f.ngc) for f in independent_fits],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16}); | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Visually, would you say that it's likely that all the scaling parameters, or some subset, are universal?
TBC commentary
2. A hierarchical model for all galaxies
On the basis of the last section, it should be clear that at least one of the scaling parameters in question is not universal amongst galaxies in the data set, and at least one may well be. Further, it isn't obvious that there is any particular correlation or anticorrelation between the galaxy-to-galaxy differences in these parameters. If we were doing this as a research project, the latter would be an important thing to investigate, along with possible physical explanations for outliers. But we'll keep it relatively simple here.
Let's add a level of hierarchy to the model by assuming that the values of $a$ for each galaxy come from a normal distribution with mean $\mu_a$ and standard deviation $\tau_a$, and similarly $b$ and $\sigma$ come from their own normal distributions. We will not consider the possibility that, for example, all 3 come from a joint, multivariate normal distribution with possible correlations between them, although that could easily be justified. In practice, fitting for independent distributions for each parameter is a reasonable first step, much as fitting each galaxies data independently in Section 1 was a reasonable zeroth step.
Make the relatively simple modifications to your PGM and probabilistic expressions from Section 2 of the previous notebook to accommodate this model.
TBC probabilistic expressions and PGM
We will adopt wide, uniform priors on the new hyperparameters of the model, to make life easier.
3. Strategy
Even more than last time, the total number of free parameters in the model is, technically, staggering. We already know some ways of reducing the overhead associated with each galaxy. For example, using the analytic integration approach from the previous notebook, we have have only 3 parameters left to sample per galaxy, for a total of $3N_\mathrm{gal}+6=33$ parameters. Brute force sampling of these 33 parameters is not unthinkable, although in practice it may or may not be a headache.
Another option is to make use of the samples we obtained in Section 1. These are samples of the posterior (for each galaxy) when the priors on the scaling parameters are very wide and uniform, i.e. constant over the domain where the likelihood is significantly non-zero. They are, therefore, also samples from a PDF that is proportional to the likelihood function. To see why that might be helpful, consider the posterior for the hyperparameters of the new model, $\vec{\alpha} = (\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$, marginalized over all the pesky $a_i$, $b_i$ and $\sigma_i$ parameters:
$p(\vec{\alpha}|\mathrm{data}) \propto p(\vec{\alpha}) \prod_{i=1}^{N_\mathrm{gal}} \int da_i db_i d\sigma_i \, p(a_i,b_i,\sigma_i|\vec{\alpha}) \, p(\mathrm{data}|a_i,b_i,\sigma_i)$.
To restate what we said above, our individual fits (with uniform priors) give us samples from PDFs
$q(a_i,b_i,\sigma_i|\mathrm{data}) \propto p(\mathrm{data}|a_i,b_i,\sigma_i)$.
We can do this integral by simple monte carlo as
$p(\vec{\alpha}|\mathrm{data}) \propto p(\vec{\alpha}) \prod_{i=1}^{N_\mathrm{gal}} \frac{1}{n_i}\sum_{k=1}^{n_i} p(a_{ik},b_{ik},\sigma_{ik}|\vec{\alpha})$,
where the $n_i$ samples of $(a_{ik},b_{ik},\sigma_{ik}) \sim q(a_i,b_i,\sigma_i|\mathrm{data})$. Our samples from Section 1 happen to satisfy this. (Had we used a non-uniform prior before, we could do something similar, but would need to divide by that prior density in the sum above.) This approach has the advantage that we only need to sample the 6 parameters in $\vec{\alpha}$ to constrain our hierarchical model, since a lot of work is already done. On the other hand, carrying out the sums for each galaxy can become its own numerical challenge.
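To make the structure of that sum concrete, here is a minimal sketch of the contribution from a single galaxy, using the stored samples and scipy.stats (one possible way to organize the calculation, not necessarily the one you should adopt):
python
import numpy as np
import scipy.stats as st

def log_smc_one_galaxy(fit, mu_a, tau_a, mu_b, tau_b, mu_s, tau_s):
    # average p(a_k, b_k, sigma_k | alpha) over the stored posterior samples for this galaxy
    p = ( st.norm.pdf(fit.a, loc=mu_a, scale=tau_a)
        * st.norm.pdf(fit.b, loc=mu_b, scale=tau_b)
        * st.norm.pdf(fit.sigma, loc=mu_s, scale=tau_s) )
    return np.log(np.mean(p))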
If we're really stuck in terms of computing power, we could consider a more compressed version of this, by approximating the posterior from each individual galaxy fit as a 3-dimensional Gaussian, or some other simple function. This approximation may or may not be a significant concession on our parts; here it's clearly a bit sketchy in the case of $\sigma$, which has a hard cut at $\sigma=0$ that at least one individual galaxy is consistent with. But, with this approximation, the integral in the first equation above could be done analytically, much as we simplified things for the single-galaxy analysis.
Finally, not that this is an exhaustive list, we could again consider whether conjugate Gibbs sampling is an option. Since the normal distribution has nice conjugacies, we could consider a scheme where we sample $\mu_a|\tau_a,{a_i}$, then $\tau_a|\mu_a,{a_i}$, then similarly for $\mu_b$, $\tau_b$, $\mu_\sigma$ and $\tau_\sigma$, and then all the individual $a_i$, $b_i$, $\sigma_i$ and $M_{ij}$ parameters as we did with LRGS in the previous notebook (accounting for the normal "prior" on $a_i$ given by $\mu_a$ and $\tau_a$, etc.). Or we could conjugate-Gibbs sample the $\mu$'s and $\tau$'s, while using some other method entirely for the galaxy-specific parameters. (We will not actually walk through this, since (a) LRGS (in python) doesn't implement Gaussian priors on the intercept/slope parameters, even though it's a simple addition; (b) I don't feel like dragging yet another code into the mix; and (c) the Gaussian parent distribution is not conjugate for the $\sigma$ parameters, so we'd have to use a different sampling method for those parameters anyway.)
4. Obtain the posterior
4a. Brute force
Let's again start by trying brute force, although in this case we'll still use the analytic integration method from the last notebook rather than the brutest force, which would have a free absolute magnitude for every cepheid in every galaxy. We can make use of our array of singleFitter objects, and specifically their _logpost_vecarg_B methods to do that part of the calculation.
The prototypes below assume the 33 parameters are ordered as: $(\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma,a_1,b_1,\sigma_1,a_2,b_2,\sigma_2,\ldots)$. Also, Let's... not include all the individual galaxy parameters in these lists of parameter names: | param_names_all = ['mu_a', 'tau_a', 'mu_b', 'tau_b', 'mu_s', 'tau_s']
param_labels_all = [r'$\mu_a$', r'$\tau_a$', r'$\mu_b$', r'$\tau_b$', r'$\mu_\sigma$', r'$\tau_\sigma$'] | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Complete the log-likelihood function for this part. Similarly to the way we dealt with Mtrue before, the galparams argument will end up being an array containing $(a_1,b_1,\sigma_1,a_2,b_2,\sigma_2,\ldots)$, from which we can extract arrays of $a_i$, $b_i$ and $\sigma_i$ if we want. The line given to you accounts for the $\prod_{i=1}^{N_\mathrm{gal}} p(\mathrm{data}|a_i,b_i,\sigma_i)$ part, ultimately calling log_likelihood_B and log_prior from the last notebook (see comments below). | def log_likelihood_all_A(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s, galparams):
lnp = np.sum([f._logpost_vecarg_B(galparams[(0+3*i):(3+3*i)]) for i,f in enumerate(independent_fits)])
TBC() # lnp += ... more stuff ...
return lnp
TBC_above() | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
As a consequence of the code above calling _logpost_vecarg_B (note post), the old priors for the $a_i$, $b_i$ and $\sigma_i$ will be included in the return value. This is ok only because we're using uniform priors, so in the log those priors are either a finite constant or $-\infty$. In general, we would need to divide the old priors out somewhere in the new posterior calculation. Even better, we would not write such dangerously lazy code.
But for our limited purposes, it should work. The bottom line is that we don't need to worry about the priors for the $a_i$, $b_i$ and $\sigma_i$ in the function below, just the hyperparameters of their parent distributions.
Again like the last notebook, we will make galparams an optional argument to the log-prior function, so we can re-use the function later, when the $a_i$, $b_i$ and $\sigma_i$ are not being sampled. | def log_prior_all(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s, galparams=None):
TBC()
TBC_above() | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
You can have the log-posterior functions. | def log_posterior_all(loglike, **params):
lnp = log_prior_all(**params)
if lnp != -np.inf:
lnp += loglike(**params)
return lnp
def logpost_vecarg_all_A(pvec):
params = {name:pvec[i] for i,name in enumerate(param_names_all)}
params['galparams'] = pvec[len(param_names_all):]
return log_posterior_all(log_likelihood_all_A, **params) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Based on the triangle plot in the first section, guess rough starting values for $(\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$. (NB: make this a list rather than the usual dictionary.) We'll re-use the previous guess for the galaxy-specific parameters. | TBC() # guess_all = [list of hyperparameter starting values]
guess_all_A = np.array(guess_all + guessvec*9) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Quick check that the functions above work: | logpost_vecarg_all_A(guess_all_A) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Below, we run emcee as before.
IMPORTANT
You should find this to be more tractable than the "brute force" solution in the previous notebook, but still very slow compared to what we normally see in class. Again, you do not need to run this version long enough to get what we would normally consider acceptable results, in terms of convergence and number of independent samples. Just convince yourself that it's functioning, and see how it performs. Again, please do not turn in a notebook where the sampling cell below takes longer than $\sim30$ seconds to evaluate. | %%time
nsteps = 100 # or whatever
npars = len(guess_all_A)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_A)
start = np.array([np.array(guess_all_A)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!') | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Look at the traces (we'll only include one of the galaxy's scaling parameters). | npars = len(guess_all)+3
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels_all+param_labels);
npars = len(guess_all_A) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Go through the usual motions, making sure to set burn and maxlag to something appropriate for the length of the chain. | TBC()
# burn = ...
# maxlag = ...
tmp_samples = [sampler.chain[i,burn:,:9] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
print("Plus, there's a good chance that the results in this section are garbage...") | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
As before, we'll be comparing the posteriors from the methods we attempt: | samples_all_A = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_all_A[:,:9]], paramNames=param_labels_all+param_labels, chainLabels=['emcee/brute'],
figureSize=12, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16}); | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
To be more thorough, we would also want to see how well the new hierarchical part of the model fits, meaning whether the posteriors of $a_i$, $b_i$ and $\sigma_i$ are collectively consistent with being drawn from their respective fitted Gaussians. Things might look slightly different than the plots we made above, since those fits used uniform priors rather than the hierarchical model. With only 9 galaxies, it seems unlikely that we could really rule out a Gaussian distribution, and it's tangential to the point of this tutorial. So this can be an exercise for the reader, if you want.
4b. Sampling with numerical marginalization
Let's see how we do trying to marginalize out the per-galaxy parameters by simple monte carlo, as described above,
$p(\mathrm{data}|\vec{\alpha}) = \prod_{i=1}^{N_\mathrm{gal}} \frac{1}{n_i}\sum_{k=1}^{n_i} p(a_{ik},b_{ik},\sigma_{ik}|\vec{\alpha})$.
Note that, because we are taking a sum of probabilities above, we do actually need to work with probabilities, as opposed to log-probabilities. You might reasonably worry about numerical stability here, but in this case a naive implementation seems to be ok. (In general, what we would need to check is whether the summands contributing most of the sum are easily floating-point representable, i.e. not so tiny that they underflow. We could always renormalize the summands to avoid this, since we will just end up taking the log afterwards.)
Implement the log-likelihood for this approach below. | def log_likelihood_all_B(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s):
TBC()
TBC_above() | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
This is for free: | def logpost_vecarg_all_B(pvec):
params = {name:pvec[i] for i,name in enumerate(param_names_all)}
return log_posterior_all(log_likelihood_all_B, **params) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
The usual sanity check: | logpost_vecarg_all_B(guess_all) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Let's get an idea of how computationally expensive all these sums are by running a very short chain. | nsteps = 10
npars = len(guess_all)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_B)
start = np.array([np.array(guess_all)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
%time sampler.run_mcmc(start, nsteps)
print('Yay?') | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
For me this comes out to about 7 seconds for 10 steps - slower than we'd ideally like, at least without more serious computing resources than my laptop. (If you run longer, though, you should see performance better than in part A.)
However, its worth asking if we can get away with using fewer samples. In principle, we are well justified in doing this, since the effective number of independent samples estimated for some of the individual fits are only $\sim500$ (when I ran them, anyway).
Note that the cell below is destructive, in that we can't easily get the original chains back after running it. Keep that in mind if you plan to play around, or improve on the code at the start of the notebook. | for f in independent_fits:
f.thin(500) | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
With only 500 samples left in the sum for each galaxy, it should be possible to get results that appear basically converged with a couple of minutes runtime (and you should do so). Nevertheless, before turning in the notebook, please reduce the number of steps such that the sampling cell below takes longer than $\sim30$ seconds to evaluate. (You can leave a comment saying what number of steps you actually used, if you like.) | %%time
TBC() # nsteps =
npars = len(guess_all)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_B)
start = np.array([np.array(guess_all)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps);
print('Yay!') | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Let's see how it does: | plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels_all); | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
The sampler is probably struggling to move around efficiently, but you could imagine running patiently for a while and ending up with something useful. Let's call this approach viable, but not ideal. Still, make sure you have reasonable convergence before continuing. | TBC()
# burn = ...
# maxlag = ...
tmp_samples = [sampler.chain[i,burn:,:] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!') | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
And now the burning question: how does the posterior compare with the brute force version? | samples_all_B = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_all_A[:,:len(param_names_all)], samples_all_B], paramNames=param_labels_all, chainLabels=['emcee/brute', 'emcee/SMC'],
figureSize=10, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16}); | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Checkpoint: Your posterior is compared with our solution by the cell below. Keep in mind they may have very different numbers of samples - we let ours run for several minutes. | sol = np.loadtxt('solutions/ceph2.dat.gz')
plotGTC([sol, samples_all_B], paramNames=param_labels_all, chainLabels=['solution', 'my emcee/SMC'],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16}); | tutorials/cepheids_all_galaxies.ipynb | KIPAC/StatisticalMethods | gpl-2.0 |
Setup
This example depends on the following software:
IPython
NumPy
SciPy
SymPy >= 0.7.6
matplotlib
The easiest way to install the Python packages it is to use conda:
$ conda install ipython-notebook numpy scipy sympy matplotlib
To create animations you need a video encoder like ffmpeg installed.
Equations of Motion
We'll start by generating the equations of motion for the system with SymPy mechanics. The functionality that mechanics provides is much more in depth than Mathematica's functionality. In the Mathematica example, Lagrangian mechanics were implemented manually with Mathematica's symbolic functionality. mechanics provides an assortment of functions and classes to derive the equations of motion for arbitrarily complex (i.e. configuration constraints, nonholonomic motion constraints, etc) multibody systems in a very natural way. First we import the necessary functionality from SymPy. | from __future__ import division, print_function
import sympy as sm
import sympy.physics.mechanics as me | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
The coordinate trajectories are plotted below. | lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2]) | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
And the generalized speed trajectories. | lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:]) | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
The following function was modeled from Jake Vanderplas's post on matplotlib animations. The default animation writer is used (typically ffmpeg), you can change it by adding writer argument to anim.save call. | def animate_pendulum(t, states, length, filename=None):
"""Animates the n-pendulum and optionally saves it to file.
Parameters
----------
t : ndarray, shape(m)
Time array.
states: ndarray, shape(m,p)
State time history.
length: float
The length of the pendulum links.
filename: string or None, optional
If true a movie file will be saved of the animation. This may take some time.
Returns
-------
fig : matplotlib.Figure
The figure.
anim : matplotlib.FuncAnimation
The animation.
"""
# the number of pendulum bobs
numpoints = states.shape[1] // 2
# first set up the figure, the axis, and the plot elements we want to animate
fig = plt.figure()
# some dimesions
cart_width = 0.4
cart_height = 0.2
# set the limits based on the motion
xmin = np.around(states[:, 0].min() - cart_width / 2.0, 1)
xmax = np.around(states[:, 0].max() + cart_width / 2.0, 1)
# create the axes
ax = plt.axes(xlim=(xmin, xmax), ylim=(-1.1, 1.1), aspect='equal')
# display the current time
time_text = ax.text(0.04, 0.9, '', transform=ax.transAxes)
# create a rectangular cart
rect = Rectangle([states[0, 0] - cart_width / 2.0, -cart_height / 2],
cart_width, cart_height, fill=True, color='red',
ec='black')
ax.add_patch(rect)
# blank line for the pendulum
line, = ax.plot([], [], lw=2, marker='o', markersize=6)
# initialization function: plot the background of each frame
def init():
time_text.set_text('')
rect.set_xy((0.0, 0.0))
line.set_data([], [])
return time_text, rect, line,
# animation function: update the objects
def animate(i):
time_text.set_text('time = {:2.2f}'.format(t[i]))
rect.set_xy((states[i, 0] - cart_width / 2.0, -cart_height / 2))
x = np.hstack((states[i, 0], np.zeros((numpoints - 1))))
y = np.zeros((numpoints))
for j in np.arange(1, numpoints):
x[j] = x[j - 1] + length * np.cos(states[i, j])
y[j] = y[j - 1] + length * np.sin(states[i, j])
line.set_data(x, y)
return time_text, rect, line,
# call the animator function
anim = animation.FuncAnimation(fig, animate, frames=len(t), init_func=init,
interval=t[-1] / len(t) * 1000, blit=True, repeat=False)
# save the animation if a filename is given
if filename is not None:
anim.save(filename, fps=30, codec='libx264') | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
Also convert equilibrium_point to a numeric array: | equilibrium_point = np.asarray([x.evalf() for x in equilibrium_point], dtype=float) | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
Now that we have a linear system, the SciPy package can be used to design an optimal controller for the system. | from numpy.linalg import matrix_rank
from scipy.linalg import solve_continuous_are | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
First we can check to see if the system is, in fact, controllable. The rank of the controllability matrix must be equal to the number of rows in $A$, but the matrix_rank algorithm is numerically ill conditioned and for certain values of $n$ this will fail, as seen below for $n=5$. Nevertheless, the system is controllable, no matter the number of links. | def controllable(a, b):
"""Returns true if the system is controllable and false if not.
Parameters
----------
a : array_like, shape(n,n)
The state matrix.
b : array_like, shape(n,r)
The input matrix.
Returns
-------
controllable : boolean
"""
a = np.matrix(a)
b = np.matrix(b)
n = a.shape[0]
controllability_matrix = []
for i in range(n):
controllability_matrix.append(a ** i * b)
controllability_matrix = np.hstack(controllability_matrix)
return np.linalg.matrix_rank(controllability_matrix) == n
controllable(A, B) | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
So now we can compute the optimal gains with a linear quadratic regulator. I chose identity matrices for the weightings for simplicity. | Q = np.eye(A.shape[0])
R = np.eye(B.shape[1])
S = solve_continuous_are(A, B, Q, R);
K = np.dot(np.dot(np.linalg.inv(R), B.T), S)
K | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
The plots show that we seem to have a stable system. | lines = plt.plot(t, x[:, :x.shape[1] // 2])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[:x.shape[1] // 2])
lines = plt.plot(t, x[:, x.shape[1] // 2:])
lab = plt.xlabel('Time [sec]')
leg = plt.legend(dynamic[x.shape[1] // 2:])
animate_pendulum(t, x, arm_length, filename="closed-loop.mp4")
from IPython.display import HTML
html = \
"""
<video width="640" height="480" controls>
<source src="closed-loop.mp4" type="video/mp4">
Your browser does not support the video tag, check out the YouTube version instead: http://youtu.be/SpgBHqW9om0
</video>
"""
HTML(html) | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
The video clearly shows that the controller can balance all $n$ of the pendulum links. The weightings in the lqr design can be tweaked to give different performance if needed.
This example shows that the free and open source scientific Python tools for dynamics are easily comparable in ability and quality to a commercial package such as Mathematica.
The IPython notebook for this example can be downloaded from https://github.com/pydy/pydy/tree/master/examples/npendulum. You can try out different $n$ values. I've gotten the equations of motion to compute for an open loop simulation of 10 links. My computer ran out of memory when I tried to compute for $n=50$. The controller weightings and initial conditions will probably have to be adjusted for better performance for $n>5$, but it should work. | # Install with pip install version_information
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, control | examples/npendulum/n-pendulum-control.ipynb | oliverlee/pydy | bsd-3-clause |
경우 2
만약
$$
\mu = \begin{bmatrix}2 \ 3 \end{bmatrix}. \;\;\;
\Sigma = \begin{bmatrix}2 & 3 \ 3 & 7 \end{bmatrix}
$$
이면
$$
; \Sigma ; = 5,\;\;\;
\Sigma^{-1} = \begin{bmatrix}1.4 & -0.6 \ -0.6 & 0.4 \end{bmatrix}
$$
$$
(x-\mu)^T \Sigma^{-1} (x-\mu) =
\begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
\begin{bmatrix}1.4 & -0.6 \ -0.6 & 0.4\end{bmatrix}
\begin{bmatrix}x_1 - 2 \ x_2 - 3 \end{bmatrix}
=
\dfrac{1}{10}\left(14(x_1 - 2)^2 - 12(x_1 - 2)(x_2 - 3) + 4(x_2 - 3)^2\right)
$$
$$
\mathcal{N}(x_1, x_2) = \dfrac{1}{20\pi}
\exp \left( -\dfrac{1}{10}\left(7(x_1 - 2)^2 - 6(x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right) \right)
$$
이 확률 밀도 함수의 모양은 다음과 같다. | mu = [2, 3]
cov = [[2, 3],[3, 7]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show() | 10. 기초 확률론3 - 확률 분포 모형/13. 다변수 가우시안 정규 분포.ipynb | zzsza/Datascience_School | mit |
Installing django and selenium | # Create a virtual env to load with selenium and django
#!conda create -yn django_env django python=3 # y flag automatically selects yes to install
#!source activate django_eng # activate virtual environment
#!pip install --upgrade selenium # install selenium. | wk9/notebooks/.ipynb_checkpoints/ch.1-getting-started-with-django-checkpoint.ipynb | saashimi/code_guild | mit |
Fixing our failure | # Use django to create a project called 'superlists'
#django-admin.py startproject superlists
!tree ../examples/ | wk9/notebooks/.ipynb_checkpoints/ch.1-getting-started-with-django-checkpoint.ipynb | saashimi/code_guild | mit |
Next, we evaluate scikit-learn accuracy where we predict feed implementation based on latency. | import graphviz
import pandas
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier()
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
target = input['feed']
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test) | client/ml/dt/accuracy.ipynb | gengstrand/clojure-news-feed | epl-1.0 |
As you can see, scikit-learn has a 99% accuracy rate. We now do the same thing with tensorflow. | import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, input['feed'], test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=7,
num_features=3,
num_trees=1,
regression=False,
max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np)
| client/ml/dt/accuracy.ipynb | gengstrand/clojure-news-feed | epl-1.0 |
Looks like tensorflow has a 98% accuracy rate which is 1% less than scikit-learn algo. Let us use Tensorflow to look at the accuracy of predicting cloud vendor based on throughput. | import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/throughput.csv")
data = input[input.columns[6:9]]
target = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=3,
num_features=3,
num_trees=1,
regression=False,
max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np) | client/ml/dt/accuracy.ipynb | gengstrand/clojure-news-feed | epl-1.0 |
Using PCA to Plot Datasets
PCA is a useful preprocessing technique for both visualizing data in 2 or 3 dimensions, and for improving the performance of downstream algorithms such as classifiers. We will see more details about using PCA as part of a machine learning pipeline in the net section, but here we will explore the intuition behind what PCA does, and why it is useful for certain tasks.
The goal of PCA is to find the dimensions of maximum variation in the data, and project onto them. This is helpful for data that is stretched in a particular dimension. Here we show an example in two dimensions, to get an understanding for how PCA can help classification. | import numpy as np
random_state = np.random.RandomState(1999)
X = np.random.randn(500, 2)
red_idx = np.where(X[:, 0] < 0)[0]
blue_idx = np.where(X[:, 0] >= 0)[0]
# Stretching
s_matrix = np.array([[1, 0],
[0, 20]])
# Rotation
r_angle = 33
r_rad = np.pi * r_angle / 180
r_matrix = np.array([[np.cos(r_rad), -np.sin(r_rad)],
[np.sin(r_rad), np.cos(r_rad)]])
X = np.dot(X, s_matrix).dot(r_matrix)
plt.scatter(X[red_idx, 0], X[red_idx, 1], color="darkred")
plt.scatter(X[blue_idx, 0], X[blue_idx, 1], color="steelblue")
# Fix axes to show mismatched dimensions
plt.axis('off')
plt.title("Skewed Data")
from sklearn.decomposition import PCA
pca = PCA()
X_t = pca.fit_transform(X)
plt.scatter(X_t[red_idx, 0], X_t[red_idx, 1], color="darkred")
plt.scatter(X_t[blue_idx, 0], X_t[blue_idx, 1], color="steelblue")
plt.axis('off')
plt.title("PCA Corrected Data") | notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb | samstav/scipy_2015_sklearn_tutorial | cc0-1.0 |
We can also use PCA to visualize complex data in low dimensions in order to see how "close" and "far" different datapoints are in a 2D space. There are many different ways to do this visualization, and some common algorithms are found in sklearn.manifold. PCA is one of the simplest and most common methods for quickly visualizing a dataset.
Now we'll take a look at unsupervised learning on a facial recognition example.
This uses a dataset available within scikit-learn consisting of a
subset of the Labeled Faces in the Wild
data. Note that this is a relatively large download (~200MB) so it may
take a while to execute. | from sklearn import datasets
lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4,
data_home='datasets')
lfw_people.data.shape | notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb | samstav/scipy_2015_sklearn_tutorial | cc0-1.0 |
决策树原理与实现简介
前言
为什么讲决策树?
原理简单,直白易懂。
可解释性好。
变种在工业上应用多:随机森林、GBDT。
深化拓展
理论,考古:ID3, C4.5, CART
工程,实现细节:
demo
scikit-learn
spark
xgboost
应用,调参分析
演示
理论
算法:
ID3
C4.5
C5.0
CART
CHAID
MARS
行业黑话
分类问题 vs 回归问题
样本 = (特征$x$,真实值$y$)
目的:找到模型$h(\cdot)$,使得预测值$\hat{y} = h(x)$ $\to$ 真实值$y$ | from sklearn.datasets import load_iris
data = load_iris()
# 准备特征数据
X = pd.DataFrame(data.data,
columns=["sepal_length", "sepal_width", "petal_length", "petal_width"])
# 准备标签数据
y = pd.DataFrame(data.target, columns=['target'])
y.replace(to_replace=range(3), value=data.target_names, inplace=True)
# 组建样本 [特征,标签]
samples = pd.concat([X, y], axis=1, keys=["x", "y"])
samples.head(5)
samples["y", "target"].value_counts()
samples["x"].describe() | machine_learning/tree/decision_tree/presentation.ipynb | facaiy/book_notes | cc0-1.0 |
三分钟明白决策树 | Image(url="https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png")
Image(url="http://scikit-learn.org/stable/_images/iris.svg")
Image(url="http://scikit-learn.org/stable/_images/sphx_glr_plot_iris_0011.png")
samples = pd.concat([X, y], axis=1)
samples.head(3) | machine_learning/tree/decision_tree/presentation.ipynb | facaiy/book_notes | cc0-1.0 |
工程
Demo实现
其主要问题是在每次决策时找到一个分割点,让生成的子集尽可能地纯净。这里涉及到四个问题:
如何分割样本?
如何评价子集的纯净度?
如何找到单个最佳的分割点,其子集最为纯净?
如何找到最佳的分割点序列,其最终分割子集总体最为纯净? | Image(url="https://upload.wikimedia.org/wikipedia/commons/f/f3/CART_tree_titanic_survivors.png") | machine_learning/tree/decision_tree/presentation.ipynb | facaiy/book_notes | cc0-1.0 |
2. 如何评价子集的纯净度?
常用的评价函数正是计算各标签 $c_k$ 在子集中的占比 $p_k = c_k / \sum (c_k)$,并通过组合 $p_k$ 来描述占比集中或分散。 | def calc_class_proportion(node):
# 计算各标签在集合中的占比
y = node["target"]
return y.value_counts() / y.count()
calc_class_proportion(split["left_nodes"])
calc_class_proportion(split["right_nodes"]) | machine_learning/tree/decision_tree/presentation.ipynb | facaiy/book_notes | cc0-1.0 |
Data setup
We are going to use data from Fermi-LAT, Fermi-GBM and Swift-XRT. Let's go through the process of setting up the data from each instrument. We will work from high energy to low energy.
Fermi-LAT
Once we have obtained the Fermi-LAT data, in this case, the LAT Low Energy (LLE) data, we can reduce the data into an plugin using the light curve tools provided in 3ML. LLE data is in the format of FITS event files with an associated spacecraft point history file and energy dispersion response. The TimeSeriesBuilder class has special methods for dealing with the LLE data. |
lle = TimeSeriesBuilder.from_lat_lle('lle',ft2_file="lle_pt.fit",
lle_file="lle.fit",
rsp_file="lle.rsp")
lle.set_background_interval('-100--10','150-500')
lle.set_active_time_interval('68-110')
lle.view_lightcurve(-200,500);
lle_plugin = lle.to_spectrumlike()
lle_plugin.use_effective_area_correction()
lle_plugin.display()
lle_plugin.view_count_spectrum(); | examples/grb_multi_analysis.ipynb | giacomov/3ML | bsd-3-clause |
Fermi-GBM | gbm_detectors = ["n4", "n7", "n8", "b0"]
for det in gbm_detectors:
ts_cspec = TimeSeriesBuilder.from_gbm_cspec_or_ctime(
det, cspec_or_ctime_file=f"cspec_{det}.pha", rsp_file=f"cspec_{det}.rsp2"
)
ts_cspec.set_background_interval("-400--10", "700-1200")
ts_cspec.save_background(filename=f"{det}_bkg.h5", overwrite=True)
gbm_time_series = {}
gbm_plugins = {}
for det in gbm_detectors:
ts = TimeSeriesBuilder.from_gbm_tte(
det,
tte_file=f"tte_{det}.fit.gz",
rsp_file=f"cspec_{det}.rsp2",
restore_background=f"{det}_bkg.h5",
)
gbm_time_series[det] = ts
ts.view_lightcurve(-10, 200)
ts.set_active_time_interval("68-110")
gbm_plugins[det] = ts.to_spectrumlike()
for det, plugin in gbm_plugins.items():
if det.startswith("b"):
plugin.set_active_measurements("250-30000")
else:
plugin.set_active_measurements("10-900")
if det != "n3":
plugin.use_effective_area_correction()
plugin.rebin_on_background(1)
plugin.view_count_spectrum() | examples/grb_multi_analysis.ipynb | giacomov/3ML | bsd-3-clause |
Swift-XRT
For Swift-XRT, we can use the normal OGIPLike plugin, but the energy resolution of the instrument is so fine that we would waste time integrating over the photon bins during forward-folding. Thus, there is a special plugin that overrides the computation of the photon integrals with a simple sum. | xrt = SwiftXRTLike(
"xrt",
observation="awt.pi",
background="awtback.pi",
arf_file="awt.arf",
response="awt.rmf",
)
xrt.display()
xrt.remove_rebinning()
xrt.set_active_measurements('4-10')
xrt.rebin_on_background(1.)
xrt.use_effective_area_correction()
xrt.view_count_spectrum();
| examples/grb_multi_analysis.ipynb | giacomov/3ML | bsd-3-clause |
Combining all the plugins | all_plugins = [lle_plugin, xrt]
for _ , plugin in gbm_plugins.items():
all_plugins.append(plugin)
datalist = DataList(*all_plugins) | examples/grb_multi_analysis.ipynb | giacomov/3ML | bsd-3-clause |
Fitting
Model setup
Band Function | sbpl = SmoothlyBrokenPowerLaw(pivot=1E3)
sbpl.alpha.prior = Truncated_gaussian(lower_bound=-1.5, upper_bound=0, mu=-1, sigma=.5)
sbpl.beta.prior = Truncated_gaussian(lower_bound=-3., upper_bound=-1.6, mu=-2, sigma=.5)
sbpl.break_energy.prior = Log_uniform_prior(lower_bound=1, upper_bound=1E3)
sbpl.break_scale.prior = Log_uniform_prior(lower_bound=1E-4, upper_bound=10.)
sbpl.K.prior = Log_uniform_prior(lower_bound=1E-2, upper_bound=1E2)
sbpl.K = 1E-1
sbpl.break_energy.bounds = (0, None)
sbpl.break_scale.free=True
ps = PointSource('grb',0,0,spectral_shape=sbpl)
model = Model(ps)
bayes = BayesianAnalysis(model,datalist)
bayes.set_sampler('multinest')
for k,v in model.free_parameters.items():
if "cons" in k:
v.prior = Truncated_gaussian(lower_bound=.8, upper_bound=1.2, mu=1, sigma=.1)
#bayes.sampler.setup(dlogz=10.,frac_remain=.5)
bayes.sampler.setup(n_live_points=1000)
bayes.sample()
bayes.restore_median_fit()
#sbpl.K = 1E-1
display_spectrum_model_counts(bayes, min_rate=[-1,5,5,5,5,5]);
bayes.results.corner_plot();
plot_point_source_spectra(bayes.results, flux_unit='erg2/(cm2 s keV)',ene_max =1E5); | examples/grb_multi_analysis.ipynb | giacomov/3ML | bsd-3-clause |
There are warnings, but that's okay - this happens a lot these days due to the whole ipython/jupyter renaming process. You can ignore them.
Get a database
Using the bash shell (not a notebook!), follow the instructions at the SW Carpentry db lessons discussion page to get the survey.db file. This is a sqlite3 database.
I recommend following up with the rest of the instructions on that page to explore sqlite3.
Connecting to a Sqlite3 database
This part is easy, just connect like so (assuming the survey.db file is in the same directory as this notebook): | %sql sqlite:///survey.db
%sql SELECT * FROM Person; | lectures/week-03/sql-demo.ipynb | dchud/warehousing-course | cc0-1.0 |
You should be able to execute all the standard SQL queries from the lesson here now. Note that you can also do this on the command line.
Note specialized sqlite3 commands like ".schema" might not work.
Connecting to a MySQL database
Now that you've explored the survey.db sample database with sqlite3, let's try working with mysql: | %sql mysql://mysqluser:mysqlpass@localhost/ | lectures/week-03/sql-demo.ipynb | dchud/warehousing-course | cc0-1.0 |
note if you get an error about MySQLdb not being installed here, enter this back in your bash shell:
% sudo pip install mysql-python
If it asks for your password, it's "vagrant".
After doing this, try executing the above cell again. You should see:
u'Connected: mysqluser@'
...if it works.
Creating a database
Now that we're connected, let's create a database. | %sql CREATE DATABASE week3demo; | lectures/week-03/sql-demo.ipynb | dchud/warehousing-course | cc0-1.0 |
Now that we've created the database week3demo, we need to tell MySQL that we want to use it: | %sql USE week3demo; | lectures/week-03/sql-demo.ipynb | dchud/warehousing-course | cc0-1.0 |
But there's nothing in it: | %sql SHOW TABLES; | lectures/week-03/sql-demo.ipynb | dchud/warehousing-course | cc0-1.0 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.