Typical: the UWIs are a disaster. Let's ignore this for now.
The Project is really just a list-like thing, so you can index into it to get at a single well. Each well is represented by a welly.Well object. | p[0] | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Some of the fields of this LAS file are messed up; see the Well notebook for more on how to fix this.
Plot curves from several wells
The DT log is called DT4P in one of the wells. We can deal with this sort of issue with aliases. Let's set up an alias dictionary, then plot the DT log from each well: | alias = {'Sonic': ['DT', 'DT4P'],
'Caliper': ['HCAL', 'CALI'],
}
import matplotlib.pyplot as plt
fig, axs = plt.subplots(figsize=(7, 14),
ncols=len(p),
sharey=True,
)
for i, (ax, w) in enumerate(zip(axs, p)):
log = w.get_curve('Sonic', alias=alias)
if log is not None:
ax = log.plot(ax=ax)
ax.set_title("Sonic log for\n{}".format(w.uwi))
min_z, max_z = p.basis_range
plt.ylim(max_z, min_z)
plt.show() | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Get a pandas.DataFrame
The df() method makes a DataFrame using a dual index of UWI and Depth.
Before we export our wells, let's give Kennetcook #2 a better UWI: | p[0].uwi = p[0].name
p[0] | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
That's better.
When creating the DataFrame, you can pass a list of the keys (mnemonics) you want, and use aliases as usual. | alias
keys = ['Caliper', 'GR', 'Sonic']
df = p.df(keys=keys, alias=alias, rename_aliased=True)
df | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Quality
Welly can run quality tests on the curves in your project. Some of the tests take arguments. You can test for things like this:
all_positive: Passes if all the values are greater than zero.
all_above(50): Passes if all the values are greater than 50.
mean_below(100): Passes if the mean of the log is less than 100.
no_nans: Passes if there are no NaNs in the log.
no_flat: Passes if there are no sections of well log with the same values (e.g. because a gap was interpolated across with a constant value).
no_monotonic: Passes if there are no monotonic ramps in the log (e.g. because a gap was linearly interpolated across).
Insert lists of tests into a dictionary with any of the following key examples:
'GR': The test(s) will run against the GR log.
'Gamma': The test(s) will run against the log matching according to the alias dictionary.
'Each': The test(s) will run against every log in a well.
'All': Some tests take multiple logs as input, for example quality.no_similarities. These test(s) will run against all the logs as a group. Could be quite slow, because there may be a lot of pairwise comparisons to do.
The tests are run against all wells in the project. If you only want to run against a subset of the wells, make a new project for them. | import welly.quality as q
tests = {
'All': [q.no_similarities],
'Each': [q.no_gaps, q.no_monotonic, q.no_flat],
'GR': [q.all_positive],
'Sonic': [q.all_positive, q.all_between(50, 200)],
} | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Let's add our own test for units: | def has_si_units(curve):
return curve.units.lower() in ['mm', 'gapi', 'us/m', 'k/m3']
tests['Each'].append(has_si_units) | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
We'll use the same alias dictionary as before: | alias | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Now we can run the tests and look at the results, which are in an HTML table: | from IPython.display import HTML
HTML(p.curve_table_html(keys=['Caliper', 'GR', 'Sonic', 'SP', 'RHOB'],
tests=tests, alias=alias)
) | docs/_userguide/Projects.ipynb | agile-geoscience/welly | apache-2.0 |
Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in :footcite:`KingDehaene2014`. It
illustrates how one can
fit a linear classifier to identify a discriminatory topography at a given time
instant and subsequently assess whether this linear model can accurately
predict all of the time samples of a second set of conditions. | # Authors: Jean-Remi King <[email protected]>
# Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
events_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim, verbose='error') | dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
We will train the classifier on all left visual vs auditory trials
and test on all right visual vs auditory trials. | clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=None,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2) | dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Score on the epochs where the stimulus was presented to the right. | scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2) | dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Plot | fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show() | dev/_downloads/00e78bba5d10188fcf003ef05e32a6f7/decoding_time_generalization_conditions.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
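As a quick follow-up (not part of the original example), the diagonal of the generalization matrix is the ordinary "train and test at the same time" decoding curve, which is often plotted alongside the full matrix. A sketch that reuses `scores`, `epochs`, and `plt` from the cells above:

```python
# Plot the matrix diagonal: decoding score when training and testing at the same time.
import numpy as np

fig, ax = plt.subplots(1)
ax.plot(epochs.times, np.diag(scores), label='train time = test time')
ax.axhline(0.5, color='k', linestyle='--', label='chance (AUC = 0.5)')
ax.set_xlabel('Time (s)')
ax.set_ylabel('ROC AUC')
ax.set_title('Decoding along the diagonal')
ax.legend()
plt.show()
```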
Load the data | # First we load the file
import h5py
import numpy as np

file_location = '../results_database/text_wall_street_big.hdf5'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
# Load the particular example
Nspatial_clusters = 5
Ntime_clusters = 15
Nembedding = 3
run_name = '/low-resolution'
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
# Now we load the time and the code vectors
time = nexa['time']
code_vectors = nexa['code-vectors']
code_vectors_distance = nexa['code-vectors-distance']
code_vectors_softmax = nexa['code-vectors-softmax']
code_vectors_winner = nexa['code-vectors-winner'] | presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Study the Latency of the Data by Accuracy
Make prediction with winner takes all
Make the prediction for each delay. This takes a while. | from sklearn import svm, cross_validation # note: cross_validation was renamed model_selection in sklearn >= 0.20
N = 50000 # Amount of data
delays = np.arange(0, 10)
accuracy = []
# Make prediction with scikit-learn
for delay in delays:
X = code_vectors_winner[:(N - delay)]
y = letters_sequence[delay:N]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
accuracy.append(score)
print('delay', delay)
print('score', score)
| presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Plot it | import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.plot(delays, accuracy, 'o-', lw=2, markersize=10)
plt.xlabel('Delays')
plt.ylim([0, 105])
plt.xlim([-0.5, 10])
plt.ylabel('Accuracy %')
plt.title('Delays vs Accuracy')
fig = plt.gcf()
fig.set_size_inches((12, 9)) | presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Make predictions with representation standardization | from sklearn import preprocessing
N = 50000 # Amount of data
delays = np.arange(0, 10)
accuracy_std = []
# Make prediction with scikit-learn
for delay in delays:
X = code_vectors_winner[:(N - delay)]
y = letters_sequence[delay:N]
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
accuracy_std.append(score)
print('delay', delay)
print('score', score) | presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Plot it | plt.plot(delays, accuracy, 'o-', lw=2, markersize=10., label='Accuracy')
plt.plot(delays, accuracy_std, 'o-', lw=2, markersize=10, label='Standardized Representations')
plt.xlabel('Delays')
plt.ylim([0, 105])
plt.xlim([-0.5, 10])
plt.ylabel('Accuracy %')
plt.title('Delays vs Accuracy')
fig = plt.gcf()
fig.set_size_inches((12, 9))
plt.legend() | presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Make prediction with softmax | accuracy_softmax = []
# Make prediction with scikit-learn
for delay in delays:
X = code_vectors_softmax[:(N - delay)]
y = letters_sequence[delay:N]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
accuracy_softmax.append(score)
print('delay', delay)
print('score', score)
| presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
Standardized predictions with softmax | accuracy_softmax_std = []
# Make prediction with scikit-learn
for delay in delays:
X = code_vectors_softmax[:(N - delay)] # use the softmax code vectors, as this section standardizes the softmax representations
y = letters_sequence[delay:N]
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
accuracy_softmax_std.append(score)
print('delay', delay)
print('score', score)
plt.plot(delays, accuracy_softmax, 'o-', lw=2, markersize=10., label='Accuracy')
plt.plot(delays, accuracy_softmax_std, 'o-', lw=2, markersize=10, label='Standardized Representations')
plt.xlabel('Delays')
plt.ylim([0, 105])
plt.xlim([-0.5, 10])
plt.ylabel('Accuracy %')
plt.title('Delays vs Accuracy (Softmax)')
fig = plt.gcf()
fig.set_size_inches((12, 9))
plt.legend() | presentations/2016-01-21(Wall-Street-Letter-Latency-Prediction).ipynb | h-mayorquin/time_series_basic | bsd-3-clause |
NOTE: In the output of the above cell you may ignore any WARNINGS or ERRORS related to the following packages: "apache-beam", "pyarrow", "tensorflow-transform", "tensorflow-model-analysis", "tensorflow-data-validation", "joblib", "google-cloud-storage", etc.
If you get any of the errors mentioned above, please rerun the above cell.
Note: Restart your kernel to use updated packages. | import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__) | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
<h2> 1. Environment variables for project and bucket </h2>
Your project id is the unique string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos
Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available)
<b>Change the cell below</b> to reflect your Project ID and bucket name. | import os
PROJECT = 'cloud-training-demos' # CHANGE THIS
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
REGION = 'us-central1' # Choose an available region for Cloud AI Platform
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.6'
## ensure we're using python3 env
os.environ['CLOUDSDK_PYTHON'] = 'python3.7'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python` | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
<h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp. | def create_query(phase, EVERY_N):
if EVERY_N is None:
EVERY_N = 4 # use full dataset
#select and pre-process fields
base_query = """
#legacySQL
SELECT
(tolls_amount + fare_amount) AS fare_amount,
DAYOFWEEK(pickup_datetime) AS dayofweek,
HOUR(pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
#add subsampling criteria by modding with hashkey
if phase == 'train':
query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
elif phase == 'valid':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
elif phase == 'test':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
return query
print(create_query('valid', 100)) #example query using 1% of data | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
<h2> 3. Preprocessing Dataflow job from BigQuery </h2>
This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that preprocessing during inference. It is better to use tf.transform, which will do this book-keeping for you, or to do the preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.
While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch. | %%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
First, let's define a function for preprocessing the data | import datetime
####
# Arguments:
# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
# which each row is represented as a python dictionary
# Returns:
# -rowstring: a comma separated string representation of the record with dayofweek
# converted from int to string (e.g. 3 --> Tue)
####
def to_csv(rowdict):
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
rowdict['dayofweek'] = days[rowdict['dayofweek']]
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
####
# Arguments:
# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
# Larger values will yield smaller sample
# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specify to run the pipeline
# locally or on Google Cloud respectively.
# Side-effects:
# -Creates and executes dataflow pipeline.
# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
####
def preprocess(EVERY_N, RUNNER):
job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
#dictionary of pipeline options
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name, # reuse the name computed above
'project': PROJECT,
'runner': RUNNER,
'num_workers' : 4,
'max_num_workers' : 5
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags=[], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ['train', 'valid']:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
(
p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))
| 'tocsv_{}'.format(phase) >> beam.Map(to_csv)
| 'write_{}'.format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done") | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see a message "Done" when it is done. | preprocess(50*10000, 'DirectRunner')
%%bash
gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/ | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
4. Run Beam pipeline on Cloud Dataflow
Run pipeline on cloud on a larger sample size. | %%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
The following step will take <b>10-15 minutes.</b> Monitor job progress on the Cloud Console in the Dataflow section.
Note: If you get an error about enabling the Dataflow API, disable and re-enable the Dataflow API, then re-run the cell below. | preprocess(50*100, 'DataflowRunner')
| courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Once the job completes, observe the files created in Google Cloud Storage | %%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
#print first 10 lines of first shard of train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development. | %%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" > sample/valid.csv | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses. | %%bash
grep -A 20 "INPUT_COLUMNS =" taxifare/trainer/model.py
%%bash
grep -A 50 "build_estimator" taxifare/trainer/model.py
%%bash
grep -A 15 "add_engineered(" taxifare/trainer/model.py | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine. | %%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths=${PWD}/sample/train.csv \
--eval_data_paths=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
%%bash
ls taxi_trained/export/exporter/ | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn. | %%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
%%writefile /tmp/test.json
{"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ai-platform local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
6. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete. | %%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version 2.3 \
--python-version 3.5 \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--train_steps=5000 \
--output_dir=$OUTDIR | courses/machine_learning/feateng/feateng.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Create some plot data
The function assumes the data to plot is an array-like object in a single cell per row. | # imports needed for this cell (the demo's import cell is not shown in this excerpt)
import numpy as np
import pandas as pd
from scipy import stats

density_func = 78
mean, var, skew, kurt = stats.chi.stats(density_func, moments='mvsk')
x_chi = np.linspace(stats.chi.ppf(0.01, density_func),
stats.chi.ppf(0.99, density_func), 100)
y_chi = stats.chi.pdf(x_chi, density_func)
x_expon = np.linspace(stats.expon.ppf(0.01), stats.expon.ppf(0.99), 100)
y_expon = stats.expon.pdf(x_expon)
a_gamma = 1.99
x_gamma = np.linspace(stats.gamma.ppf(0.01, a_gamma),
stats.gamma.ppf(0.99, a_gamma), 100)
y_gamma = stats.gamma.pdf(x_gamma, a_gamma)
n = 100
np.random.seed(0) # keep generated data the same for git commit
data = [np.random.rand(n),
np.random.randn(n),
np.random.beta(2, 1, size=n),
np.random.binomial(3.4, 0.22, size=n),
np.random.exponential(size=n),
np.random.geometric(0.5, size=n),
np.random.laplace(size=n),
y_chi,
y_expon,
y_gamma]
function = ['rand',
'randn',
'beta',
'binomial',
'exponential',
'geometric',
'laplace',
'chi',
'expon',
'gamma']
df = pd.DataFrame(data)
df['function'] = function
df | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Define range of data to make sparklines
Note: data must be row-wise | a = df.iloc[:, 0:100] # .ix was removed from pandas; .iloc selects the 100 data columns | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Output to new DataFrame of Sparklines | df_out = pd.DataFrame()
df_out['sparkline'] = sparklines.create(data=a)
sparklines.show(df_out[['sparkline']]) | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Insert Sparklines into source DataFrame | df['sparkline'] = sparklines.create(data=a)
sparklines.show(df[['function', 'sparkline']]) | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Detailed Formatting
Return only sparklines, format the line, fill and marker. | df_out = pd.DataFrame()
df_out['sparkline'] = sparklines.create(data=a,
color='#1b470a',
fill_color='#99a894',
fill_alpha=0.2,
point_color='blue',
point_fill='none',
point_marker='*',
point_size=3,
figsize=(6, 0.25))
sparklines.show(df_out[['sparkline']]) | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Example Data and Sparklines Layout | df_copy = df[['function', 'sparkline']].copy()
df_copy['value'] = df.iloc[:, 99] # latest data column (.ix was removed from pandas)
df_copy['change'] = df.iloc[:, 99] - df.iloc[:, 98] # latest minus previous value
df_copy['change_%'] = df_copy.change / df.iloc[:, 98]
sparklines.show(df_copy) | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
Export to HTML
Inline Jupyter Notebook | sparklines.to_html(df_copy, 'pandas_sparklines_demo') | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
HTML text for rendering elsewhere | html = sparklines.to_html(df_copy) | Pandas Sparklines Demo.ipynb | crdietrich/sparklines | mit |
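If you need to persist that HTML string yourself (a trivial addition, not part of the original demo), you can write it straight to a file:

```python
# Write the rendered HTML to a standalone file (assumes `html` from the cell above).
with open('sparklines_table.html', 'w') as f:
    f.write(html)
```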
Examine a single patient | patientunitstayid = 237395
query = query_schema + """
select *
from medication
where patientunitstayid = {}
order by drugorderoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
df.columns
# Look at a subset of columns
cols = ['medicationid','patientunitstayid',
'drugorderoffset','drugorderoffset', 'drugstopoffset',
'drugivadmixture', 'drugordercancelled', 'drugname','drughiclseqno', 'gtc',
'dosage','routeadmin','loadingdose', 'prn']
df[cols].head().T | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
Here we can see that, roughly on ICU admission, the patient had an order for vancomycin, aztreonam, and tobramycin.
Identifying patients admitted on a single drug
Let's look for patients who have an order for vancomycin using exact text matching. | drug = 'VANCOMYCIN'
query = query_schema + """
select
distinct patientunitstayid
from medication
where drugname like '%{}%'
""".format(drug)
df_drug = pd.read_sql_query(query, con)
print('{} unit stays with {}.'.format(df_drug.shape[0], drug)) | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
Exact text matching is fairly weak, as there's no systematic reason to prefer upper case or lower case. Let's relax the case matching. | drug = 'VANCOMYCIN'
query = query_schema + """
select
distinct patientunitstayid
from medication
where drugname ilike '%{}%'
""".format(drug)
df_drug = pd.read_sql_query(query, con)
print('{} unit stays with {}.'.format(df_drug.shape[0], drug)) | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
HICL codes are used to group together drugs which have the same underlying ingredient (i.e. most frequently this is used to group brand name drugs with the generic name drugs). We can see above the HICL for vancomycin is 10093, so let's try grabbing that. | hicl = 10093
query = query_schema + """
select
distinct patientunitstayid
from medication
where drughiclseqno = {}
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
print('{} unit stays with HICL = {}.'.format(df_hicl.shape[0], hicl)) | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
No luck! I wonder what we missed? Let's go back to the original query, this time retaining HICL and the name of the drug. | drug = 'VANCOMYCIN'
query = query_schema + """
select
drugname, drughiclseqno, count(*) as n
from medication
where drugname ilike '%{}%'
group by drugname, drughiclseqno
order by n desc
""".format(drug)
df_drug = pd.read_sql_query(query, con)
df_drug.head() | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
It appears there is more than one HICL - we can group by HICL in this query to get an idea. | df_drug['drughiclseqno'].value_counts() | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
Unfortunately, we can't be sure that these HICLs always identify only vancomycin. For example, let's look at drugnames for HICL = 1403. | hicl = 1403
query = query_schema + """
select
drugname, count(*) as n
from medication
where drughiclseqno = {}
group by drugname
order by n desc
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
df_hicl.head() | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
This HICL seems more focused on the use of creams than on vancomycin. Let's instead inspect the top 3. | for hicl in [4042, 10093, 37442]:
query = query_schema + """
select
drugname, count(*) as n
from medication
where drughiclseqno = {}
group by drugname
order by n desc
""".format(hicl)
df_hicl = pd.read_sql_query(query, con)
print('HICL {}'.format(hicl))
print('Number of rows: {}'.format(df_hicl['n'].sum()))
print('Top 5 rows by frequency:')
print(df_hicl.head())
print() | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
This is fairly convincing that these only refer to vancomycin. An alternative approach is to acquire the code book for HICL codes and look up vancomycin there.
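Before moving on, a possible consolidation step (a sketch, not from the original notebook) is to combine the three vancomycin HICLs identified above into a single cohort query, reusing the `query_schema` and `con` objects from the earlier cells:

```python
# Combine the three HICLs identified above into one cohort (assumed follow-up).
vanc_hicls = (4042, 10093, 37442)
query = query_schema + """
select distinct patientunitstayid
from medication
where drughiclseqno in {}
""".format(vanc_hicls)

df_vanc = pd.read_sql_query(query, con)
print('{} unit stays on vancomycin by HICL.'.format(df_vanc.shape[0]))
```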
Hospitals with data available | query = query_schema + """
with t as
(
select distinct patientunitstayid
from medication
)
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct t.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join t
on pt.patientunitstayid = t.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data') | notebooks/medication.ipynb | mit-eicu/eicu-code | mit |
Getting the data ready for work
If the data is in GSLIB format you can use the function gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. | #get the data in gslib format into a pandas Dataframe
import pygslib
import matplotlib.pyplot as plt

mydata = pygslib.gslib.read_gslib_file('../data/cluster.dat')
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show() | pygslib/Ipython_templates/backtr_raw.ipynb | opengeostat/pygslib | mit |
The nscore transformation table function | print (pygslib.gslib.__dist_transf.backtr.__doc__) | pygslib/Ipython_templates/backtr_raw.ipynb | opengeostat/pygslib | mit |
Get the transformation table | transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])
print ('there was any error?: ', error!=0) | pygslib/Ipython_templates/backtr_raw.ipynb | opengeostat/pygslib | mit |
Get the normal score transformation
Note that the declustering is applied on the transformation tables | mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False)
mydata['NS_Primary'].hist(bins=30) | pygslib/Ipython_templates/backtr_raw.ipynb | opengeostat/pygslib | mit |
Doing the back transformation | mydata['NS_Primary_BT'],error = pygslib.gslib.__dist_transf.backtr(mydata['NS_Primary'],
transin,transout,
ltail=1,utail=1,ltpar=0,utpar=60,
zmin=0,zmax=60,getrank=False)
print ('there was any error?: ', error!=0, error)
mydata[['Primary','NS_Primary_BT']].hist(bins=30)
mydata[['Primary','NS_Primary_BT', 'NS_Primary']].head() | pygslib/Ipython_templates/backtr_raw.ipynb | opengeostat/pygslib | mit |
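As a quick numeric check (not in the original notebook), you can quantify the round-trip error of the normal score transform and its back transformation, using the `mydata` DataFrame from the cells above:

```python
# Round-trip error between the original variable and its back-transformed values.
err = (mydata['Primary'] - mydata['NS_Primary_BT']).abs()
print('max abs round-trip error:', err.max())
print('mean abs round-trip error:', err.mean())
```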
Then we define basic constants, the blackbox image generator, and our neural network. | original_dim = 4000 # our 1D image dimension: each image has 4000 pixels
intermediate_dim = 256 # number of neurons in the fully connected layer
batch_size = 50
epochs = 15
epsilon_std = 1.0
def blackbox_image_generator(pixel, center, sigma):
return norm.pdf(pixel, center, sigma)
def model_vae(latent_dim):
"""
Main Model + Encoder
"""
x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mu = Dense(latent_dim, kernel_regularizer=regularizers.l2(1e-4))(h)
z_log_var = Dense(latent_dim)(h)
z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])
z_sigma = Lambda(lambda t: tf.exp(.5*t))(z_log_var)
eps = Input(tensor=tf.random.normal(mean=0, stddev=epsilon_std, shape=(tf.shape(x)[0], latent_dim)))
z_eps = Multiply()([z_sigma, eps])
z = Add()([z_mu, z_eps])
decoder = Sequential()
decoder.add(Dense(intermediate_dim, input_dim=latent_dim, activation='relu'))
decoder.add(Dense(original_dim, activation='sigmoid'))
x_pred = decoder(z)
vae = Model(inputs=[x, eps], outputs=x_pred)
encoder = Model(x, z_mu)
return vae, encoder | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
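The cell above relies on a `KLDivergenceLayer` and an `nll` loss that were presumably defined in an earlier cell not shown in this excerpt. A minimal sketch of what they typically look like in this style of Keras VAE (an assumption, not the author's exact code):

```python
# Hedged sketch of the two missing helpers (assumed, not from the original notebook).
import tensorflow as tf
from tensorflow.keras.layers import Layer


def nll(y_true, y_pred):
    """Bernoulli negative log likelihood, summed over pixels."""
    return tf.reduce_sum(tf.keras.backend.binary_crossentropy(y_true, y_pred), axis=-1)


class KLDivergenceLayer(Layer):
    """Identity transform layer that adds the KL divergence to the model loss."""

    def call(self, inputs):
        mu, log_var = inputs
        kl_batch = -0.5 * tf.reduce_sum(
            1 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
        self.add_loss(tf.reduce_mean(kl_batch))
        return inputs
```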
Now we will generate some true latent variables so we can pass them to a blackbox image generator to generate some 1D images.
The blackbox image generator (which is deterministic) takes two numbers and generates images in a predictable way. This is important because if the generator generated images randomly, there would be nothing for the neural network to learn.
But for simplicity, we will fix the first latent variable of the blackbox image generator to a constant and only use the second one to generate images. | s_1 = np.random.normal(30, 1.5, 900)
s_2 = np.random.normal(15, 1, 900)
s_3 = np.random.normal(10, 1, 900)
s = np.concatenate([s_1, s_2, s_3])
plt.figure(figsize=(12, 12))
plt.hist(s[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(s[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(s[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
Now we will pass the true latent variables to the blackbox image generator to generate some images. Below are example
images from the three populations. They may seem indistinguishable, but a neural network will usually pick up
some subtle features. | # We have some images, each has 4000 pixels
x_train = np.zeros((len(s), original_dim))
for counter, S in enumerate(s):
xs = np.linspace(0, 40, original_dim)
x_train[counter] = blackbox_image_generator(xs, 20, S)
# Prevent nan causes error
x_train[np.isnan(x_train.astype(float))] = 0
x_train *= 10
# Add some noise to our images
x_train += np.random.normal(0, 0.2, x_train.shape)
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 1', fontsize=15)
plt.plot(x_train[500])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show()
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 2', fontsize=15)
plt.plot(x_train[1000])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show()
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 3', fontsize=15)
plt.plot(x_train[1600])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
Now we will pass the images to the neural network and train with them. | latent_dim = 1 # Dimension of our latent space
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll,
weighted_metrics=None,
loss_weights=None,
sample_weight_mode=None)
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
plt.hist(z_test[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(z_test[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(z_test[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of latent variable value from neural net', fontsize=15)
plt.xlabel('Latent Variable Value from Neural Net', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
Yay!! It seems the neural network recovered the three populations successfully. Although the recovered latent variable is not exactly the same as the original one we generated (at the very least the scale differs), you usually can't expect a neural network to learn the real physics. In this case, the latent variable is just some transformation of the original one.
Recall that we fixed the first latent variable of the blackbox image generator. What happens if we also generate 3 populations for the first latent variable, with no correlation between the first and second latent variables? (Meaning that knowing the first latent value of an object gives you no information about its second latent value, because the two have nothing to do with each other.) | m_1A = np.random.normal(28, 2, 300)
m_1B = np.random.normal(19, 2, 300)
m_1C = np.random.normal(12, 1, 300)
m_2A = np.random.normal(28, 2, 300)
m_2B = np.random.normal(19, 2, 300)
m_2C = np.random.normal(12, 1, 300)
m_3A = np.random.normal(28, 2, 300)
m_3B = np.random.normal(19, 2, 300)
m_3C = np.random.normal(12, 1, 300)
m = np.concatenate([m_1A, m_1B, m_1C, m_2A, m_2B, m_2C, m_3A, m_3B, m_3C])
x_train = np.zeros((len(s), original_dim))
for counter in range(len(s)):
xs = np.linspace(0, 40, original_dim)
x_train[counter] = blackbox_image_generator(xs, m[counter], s[counter])
# Prevent nan causes error
x_train[np.isnan(x_train.astype(float))] = 0
x_train *= 10
# Add some noise to our images
x_train += np.random.normal(0, 0.1, x_train.shape)
plt.figure(figsize=(12, 12))
plt.hist(s[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(s[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(s[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable 1 used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
plt.figure(figsize=(12, 12))
plt.hist(m[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(m[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(m[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable 2 used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
Since we now have two independent variables generating our images, what happens if we still try to force the neural network to explain the images with just one variable?
Before we run the training, we should think about what to expect. Let's denote the first latent variable's populations as 1, 2 and 3, and the second latent variable's populations as A, B and C. If we know an object is in population 2, it has an equal chance of being in population A, B or C. By this logic, we should have 9 unique populations in total (1A, 1B, 1C, 2A, 2B, 2C, 3A, 3B, 3C). If the neural network wants to explain the images with 1 latent variable, it should show 9 peaks in the plot. | latent_dim = 1 # Dimension of our latent space
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll,
weighted_metrics=None,
loss_weights=None,
sample_weight_mode=None)
epochs = 15
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
# plt.hist(z_test[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
# plt.hist(z_test[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
# plt.hist(z_test[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.hist(z_test[:300], 70, density=1, alpha=0.75, label='Population 1A')
plt.hist(z_test[300:600], 70, density=1, alpha=0.75, label='Population 1B')
plt.hist(z_test[600:900], 70, density=1, alpha=0.75, label='Population 1C')
plt.hist(z_test[900:1200], 70, density=1, alpha=0.75, label='Population 2A')
plt.hist(z_test[1200:1500], 70, density=1, alpha=0.75, label='Population 2B')
plt.hist(z_test[1500:1800], 70, density=1, alpha=0.75, label='Population 2C')
plt.hist(z_test[1800:2100], 70, density=1, alpha=0.75, label='Population 3A')
plt.hist(z_test[2100:2400], 70, density=1, alpha=0.75, label='Population 3B')
plt.hist(z_test[2400:2700], 70, density=1, alpha=0.75, label='Population 3C')
plt.title('Distribution of latent variable value from neural net', fontsize=15)
plt.xlabel('Latent Variable Value from Neural Net', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
By visual inspection, it seems the neural network only recovered 6 populations :(
What happens if we increase the latent space of the neural network to 2? | latent_dim = 2 # Dimension of our latent space
epochs = 40
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll,
weighted_metrics=None,
loss_weights=None,
sample_weight_mode=None)
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
plt.scatter(z_test[:300, 0], z_test[:300, 1], s=4, label='Population 1A')
plt.scatter(z_test[300:600, 0], z_test[300:600, 1], s=4, label='Population 1B')
plt.scatter(z_test[600:900, 0], z_test[600:900, 1], s=4, label='Population 1C')
plt.scatter(z_test[900:1200, 0], z_test[900:1200, 1], s=4, label='Population 2A')
plt.scatter(z_test[1200:1500, 0], z_test[1200:1500, 1], s=4, label='Population 2B')
plt.scatter(z_test[1500:1800, 0], z_test[1500:1800, 1], s=4, label='Population 2C')
plt.scatter(z_test[1800:2100, 0], z_test[1800:2100, 1], s=4, label='Population 3A')
plt.scatter(z_test[2100:2400, 0], z_test[2100:2400, 1], s=4, label='Population 3B')
plt.scatter(z_test[2400:2700, 0], z_test[2400:2700, 1], s=4, label='Population 3C')
plt.title('Latent Space (Middle layer of Neurones)', fontsize=15)
plt.xlabel('Second Latent Variable (Neurone)', fontsize=15)
plt.ylabel('First Latent Variable (Neurone)', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15, markerscale=6)
plt.show() | demo_tutorial/VAE/variational_autoencoder_demo.ipynb | henrysky/astroNN | mit |
Plot one of the hurricanes
Let's just plot the track of Hurricane MARIA | maria = df[df['name'] == 'MARIA'].sort_values('iso_time')
m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57.,
projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60.,
resolution ='l',area_thresh=1000.)
x, y = m(maria['longitude'].values,maria['latitude'].values)
m.plot(x,y,linewidth=5,color='r')
# draw coastlines, meridians and parallels.
m.drawcoastlines()
m.drawcountries()
m.drawmapboundary(fill_color='#99ffff')
m.fillcontinents(color='#cc9966',lake_color='#99ffff')
m.drawparallels(np.arange(10,70,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(-100,0,20),labels=[0,0,0,1])
plt.title('Hurricane Maria (2017)'); | blogs/goes16/maria/hurricanes2017.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Plot all the hurricanes
Use line thickness based on the maximum category reached by the hurricane | names = df.name.unique()
names
m = Basemap(llcrnrlon=-100.,llcrnrlat=0.,urcrnrlon=-20.,urcrnrlat=57.,
projection='lcc',lat_1=20.,lat_2=40.,lon_0=-60.,
resolution ='l',area_thresh=1000.)
for name in names:
if name != 'NOT_NAMED':
named = df[df['name'] == name].sort_values('iso_time')
x, y = m(named['longitude'].values,named['latitude'].values)
maxcat = max(named['usa_sshs'])
m.plot(x,y,linewidth=maxcat,color='b')
# draw coastlines, meridians and parallels.
m.drawcoastlines()
m.drawcountries()
m.drawmapboundary(fill_color='#99ffff')
m.fillcontinents(color='#cc9966',lake_color='#99ffff')
m.drawparallels(np.arange(10,70,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(-100,0,20),labels=[0,0,0,1])
plt.title('Named North-Atlantic hurricanes (2017)'); | blogs/goes16/maria/hurricanes2017.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Data
The data records the divorce rate $D$, marriage rate $M$, and average age $A$ that people get married at for 50 US states. | # load data and copy
url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/WaffleDivorce.csv"
WaffleDivorce = pd.read_csv(url, sep=";")
d = WaffleDivorce
# standardize variables
d["A"] = d.MedianAgeMarriage.pipe(lambda x: (x - x.mean()) / x.std())
d["D"] = d.Divorce.pipe(lambda x: (x - x.mean()) / x.std())
d["M"] = d.Marriage.pipe(lambda x: (x - x.mean()) / x.std()) | notebooks/misc/linreg_divorce_numpyro.ipynb | probml/pyprobml | mit |
Model (Gaussian likelihood)
We predict divorce rate D given marriage rate M and age A. | def model(M, A, D=None):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
mu = numpyro.deterministic("mu", a + bM * M + bA * A)
numpyro.sample("D", dist.Normal(mu, sigma), obs=D)
m5_3 = AutoLaplaceApproximation(model)
svi = SVI(model, m5_3, optim.Adam(1), Trace_ELBO(), M=d.M.values, A=d.A.values, D=d.D.values)
p5_3, losses = svi.run(random.PRNGKey(0), 1000)
post = m5_3.sample_posterior(random.PRNGKey(1), p5_3, (1000,))
param_names = {"a", "bA", "bM", "sigma"}
for p in param_names:
print(f"posterior for {p}")
print_summary(post[p], 0.95, False) | notebooks/misc/linreg_divorce_numpyro.ipynb | probml/pyprobml | mit |
Posterior predicted vs actual | # call predictive without specifying new data
# so it uses original data
post = m5_3.sample_posterior(random.PRNGKey(1), p5_3, (int(1e4),))
post_pred = Predictive(m5_3.model, post)(random.PRNGKey(2), M=d.M.values, A=d.A.values)
mu = post_pred["mu"]
# summarize samples across cases
mu_mean = jnp.mean(mu, 0)
mu_PI = jnp.percentile(mu, q=(5.5, 94.5), axis=0)
ax = plt.subplot(ylim=(float(mu_PI.min()), float(mu_PI.max())), xlabel="Observed divorce", ylabel="Predicted divorce")
plt.plot(d.D, mu_mean, "o")
x = jnp.linspace(mu_PI.min(), mu_PI.max(), 101)
plt.plot(x, x, "--")
for i in range(d.shape[0]):
plt.plot([d.D[i]] * 2, mu_PI[:, i], "b")
fig = plt.gcf()
for i in range(d.shape[0]):
if d.Loc[i] in ["ID", "UT", "AR", "ME"]:
ax.annotate(d.Loc[i], (d.D[i], mu_mean[i]), xytext=(-25, -5), textcoords="offset pixels")
plt.tight_layout()
plt.savefig("linreg_divorce_postpred.pdf")
plt.show()
fig | notebooks/misc/linreg_divorce_numpyro.ipynb | probml/pyprobml | mit |
Per-point LOO scores
We compute the predicted probability of each point given the others, following
sec 7.5.2 of Statistical Rethinking ed 2.
The NumPyro code is adapted from Du Phan's site. | # post = m5_3.sample_posterior(random.PRNGKey(24071847), p5_3, (1000,))
logprob = log_likelihood(m5_3.model, post, A=d.A.values, M=d.M.values, D=d.D.values)["D"]
az5_3 = az.from_dict(
posterior={k: v[None, ...] for k, v in post.items()},
log_likelihood={"D": logprob[None, ...]},
)
PSIS_m5_3 = az.loo(az5_3, pointwise=True, scale="deviance")
WAIC_m5_3 = az.waic(az5_3, pointwise=True, scale="deviance")
penalty = az5_3.log_likelihood.stack(sample=("chain", "draw")).var(dim="sample")
fig, ax = plt.subplots()
ax.plot(PSIS_m5_3.pareto_k.values, penalty.D.values, "o", mfc="none")
ax.set_xlabel("PSIS Pareto k")
ax.set_ylabel("WAIC penalty")
plt.savefig("linreg_divorce_waic_vs_pareto.pdf")
plt.show()
plt.show()
pareto = PSIS_m5_3.pareto_k.values
waic = penalty.D.values
ndx = np.where(pareto > 0.4)[0]
for i in ndx:
print(d.Loc[i], pareto[i], waic[i])
for i in ndx:
ax.annotate(d.Loc[i], (pareto[i], waic[i]), xytext=(5, 0), textcoords="offset pixels")
fig | notebooks/misc/linreg_divorce_numpyro.ipynb | probml/pyprobml | mit |
Student likelihood | def model(M, A, D=None):
a = numpyro.sample("a", dist.Normal(0, 0.2))
bM = numpyro.sample("bM", dist.Normal(0, 0.5))
bA = numpyro.sample("bA", dist.Normal(0, 0.5))
sigma = numpyro.sample("sigma", dist.Exponential(1))
# mu = a + bM * M + bA * A
mu = numpyro.deterministic("mu", a + bM * M + bA * A)
numpyro.sample("D", dist.StudentT(2, mu, sigma), obs=D)
m5_3t = AutoLaplaceApproximation(model)
svi = SVI(model, m5_3t, optim.Adam(0.3), Trace_ELBO(), M=d.M.values, A=d.A.values, D=d.D.values)
p5_3t, losses = svi.run(random.PRNGKey(0), 1000)
# call predictive without specifying new data
# so it uses original data
post_t = m5_3t.sample_posterior(random.PRNGKey(1), p5_3t, (int(1e4),))
post_pred_t = Predictive(m5_3t.model, post_t)(random.PRNGKey(2), M=d.M.values, A=d.A.values)
mu = post_pred_t["mu"]
# summarize samples across cases
mu_mean = jnp.mean(mu, 0)
mu_PI = jnp.percentile(mu, q=(5.5, 94.5), axis=0)
ax = plt.subplot(ylim=(float(mu_PI.min()), float(mu_PI.max())), xlabel="Observed divorce", ylabel="Predicted divorce")
plt.plot(d.D, mu_mean, "o")
x = jnp.linspace(mu_PI.min(), mu_PI.max(), 101)
plt.plot(x, x, "--")
for i in range(d.shape[0]):
plt.plot([d.D[i]] * 2, mu_PI[:, i], "b")
fig = plt.gcf() | notebooks/misc/linreg_divorce_numpyro.ipynb | probml/pyprobml | mit |
Grading
We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running the submission function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want. | grader = MCMCGrader() | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Task 1. Alice and Bob
Alice and Bob are trading on the market. Both of them are selling the Thing and want to get as high a profit as possible.
Every hour they check each other's prices and adjust their own to compete on the market, although they have different strategies for setting prices.
Alice: takes Bob's price from the previous hour, multiplies it by 0.6, adds \$90, and adds Gaussian noise from $N(0, 20^2)$.
Bob: takes Alice's price from the current hour, multiplies it by 1.2, subtracts \$20, and adds Gaussian noise from $N(0, 10^2)$.
The problem is to find the joint distribution of Alice and Bob's prices after many hours of such an experiment.
Task 1.1
Implement the run_simulation function according to the description above. | def run_simulation(alice_start_price=300.0, bob_start_price=300.0, seed=42, num_hours=10000, burnin=1000):
"""Simulates an evolution of prices set by Bob and Alice.
The function should simulate Alice and Bob behavior for `burnin' hours, then ignore the obtained
simulation results, and then simulate it for `num_hours' more.
The initial burnin (also sometimes called warmup) is done to make sure that the distribution stabilized.
Please don't change the signature of the function.
Returns:
two lists, with Alice and with Bob prices. Both lists should be of length num_hours.
"""
np.random.seed(seed)
alice_prices = [alice_start_price]
bob_prices = [bob_start_price]
#### YOUR CODE HERE ####
for hour in range(burnin + num_hours - 1):
#Alice: takes Bob's price during the previous hour, multiply by 0.6, add $90, add Gaussian noise from N(0,202) .
#Bob: takes Alice's price during the current hour, multiply by 1.2 and subtract $20, add Gaussian noise from N(0,102) .
alice_current = bob_prices[-1]*0.6 + 90 + np.random.normal(loc=0, scale=20)
bob_current = alice_current*1.2 - 20 + np.random.normal(loc=0, scale=10)
alice_prices.append(alice_current)
bob_prices.append(bob_current)
### END OF YOUR CODE ###
#print(len(alice_prices[burnin:]), len(bob_prices[burnin:]))
return alice_prices[burnin:], bob_prices[burnin:]
alice_prices, bob_prices = run_simulation(alice_start_price=300, bob_start_price=300, seed=42, num_hours=3, burnin=1)
if len(alice_prices) != 3:
raise RuntimeError('Make sure that the function returns `num_hours` data points.')
grader.submit_simulation_trajectory(alice_prices, bob_prices) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Task 1.2
What is the average price for Alice and Bob after the burn-in period? Whose prices are higher? | #### YOUR CODE HERE ####
alice_prices, bob_prices = run_simulation(alice_start_price=300, bob_start_price=300)
average_alice_price = np.mean(alice_prices)
average_bob_price = np.mean(bob_prices)
### END OF YOUR CODE ###
grader.submit_simulation_mean(average_alice_price, average_bob_price) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
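As a quick analytic sanity check (not part of the assignment), take expectations of the two update rules and solve for the stationary means:

$$\mathbb{E}[A] = 0.6\,\mathbb{E}[B] + 90, \qquad \mathbb{E}[B] = 1.2\,\mathbb{E}[A] - 20.$$

Substituting gives $\mathbb{E}[A] = 0.72\,\mathbb{E}[A] + 78$, so $\mathbb{E}[A] = 78 / 0.28 \approx 278.6$ and $\mathbb{E}[B] \approx 314.3$, which should roughly match the simulated averages above.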
Task 1.3
Let's look at the 2-d histogram of prices, computed using kernel density estimation. | data = np.array(run_simulation())
sns.jointplot(data[0, :], data[1, :], stat_func=None, kind='kde') | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Clearly, the prices of Bob and Alice are highly correlated. What is the Pearson correlation coefficient of Alice and Bob prices? | #### YOUR CODE HERE ####
correlation = np.corrcoef(alice_prices, bob_prices)[0,1]
### END OF YOUR CODE ###
grader.submit_simulation_correlation(correlation) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Task 1.4
We observe an interesting effect here: it seems the bivariate distribution of Alice and Bob's prices converges to a correlated bivariate Gaussian distribution.
Let's check whether the results change if we use a different random seed and different starting points. | # Pick different starting prices, e.g 10, 1000, 10000 for Bob and Alice.
# Does the joint distribution of the two prices depend on these parameters?
POSSIBLE_ANSWERS = {
0: 'Depends on random seed and starting prices',
1: 'Depends only on random seed',
2: 'Depends only on starting prices',
3: 'Does not depend on random seed and starting prices'
}
idx = 3 ### TYPE THE INDEX OF THE CORRECT ANSWER HERE ###
answer = POSSIBLE_ANSWERS[idx]
grader.submit_simulation_depends(answer) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Task 2. Logistic regression with PyMC3
Logistic regression is a powerful model that allows you to analyze how a set of features affects some binary target label. Posterior distribution over the weights gives us an estimation of the influence of each particular feature on the probability of the target being equal to one. But most importantly, posterior distribution gives us the interval estimates for each weight of the model. This is very important for data analysis when you want to not only provide a good model but also estimate the uncertainty of your conclusions.
In this task, we will learn how to use the PyMC3 library to perform approximate Bayesian inference for logistic regression.
This part of the assignment is based on the logistic regression tutorial by Peadar Coyle and J. Benjamin Cook.
Logistic regression.
The problem here is to model how the probability that a person has salary $\geq$ \$50K is affected by his/her age, education, sex and other features.
Let $y_i = 1$ if i-th person's salary is $\geq$ \$50K and $y_i = 0$ otherwise. Let $x_{ij}$ be $j$-th feature of $i$-th person.
Logistic regression models this probability in the following way:
$$p(y_i = 1 \mid \beta) = \sigma (\beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_k x_{ik} ), $$
where $\sigma(t) = \frac1{1 + e^{-t}}$
Odds ratio.
Let's try to answer the following question: does the gender of a person affect his or her salary? To answer it we will use the concept of odds.
If we have a binary random variable $y$ (which may indicate whether a person makes \$50K) and if the probability of the positive outcome $p(y = 1)$ is, for example, 0.8, we say that the odds are 4 to 1 (or just 4 for short), because succeeding is 4 times more likely than failing: $\frac{p(y = 1)}{p(y = 0)} = \frac{0.8}{0.2} = 4$.
Now, let's return to the effect of gender on the salary. Let's compute the ratio between the odds of a male having salary $\geq $ \$50K and the odds of a female (with the same level of education, experience and everything else) having salary $\geq$ \$50K. The first feature of each person in the dataset is gender. Specifically, $x_{i1} = 0$ if the person is female and $x_{i1} = 1$ otherwise. Consider two people $i$ and $j$ who have all features the same except the first one, i.e. $x_{i1} \neq x_{j1}$.
If the logistic regression model above estimates the probabilities exactly, the odds for a male will be (check it!):
$$
\frac{p(y_i = 1 \mid x_{i1}=1, x_{i2}, \ldots, x_{ik})}{p(y_i = 0 \mid x_{i1}=1, x_{i2}, \ldots, x_{ik})} = \frac{\sigma(\beta_1 + \beta_2 x_{i2} + \ldots)}{1 - \sigma(\beta_1 + \beta_2 x_{i2} + \ldots)} = \exp(\beta_1 + \beta_2 x_{i2} + \ldots)
$$
Now the ratio of the male and female odds will be:
$$
\frac{\exp(\beta_1 \cdot 1 + \beta_2 x_{i2} + \ldots)}{\exp(\beta_1 \cdot 0 + \beta_2 x_{i2} + \ldots)} = \exp(\beta_1)
$$
So given the correct logistic regression model, we can estimate the odds ratio for some feature (gender in this example) by just looking at the corresponding coefficient. But of course, even if all the logistic regression assumptions are met, we cannot estimate the coefficient exactly from real-world data; it's just too noisy. So it would be really nice to build an interval estimate, which would tell us something along the lines of "with probability 0.95 the odds ratio is greater than 0.8 and less than 1.2, so we cannot conclude that there is any gender discrimination in the salaries" (or vice versa, that "with probability 0.95 the odds ratio is greater than 1.5 and less than 1.9, and the discrimination takes place because a male has at least 1.5 times higher odds of getting >\$50K than a female with the same level of education, age, etc."). In Bayesian statistics, this interval estimate is called a credible interval.
Unfortunately, it's impossible to compute this credible interval analytically. So let's use MCMC for that!
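As a quick sanity check of the derivation above, here is a small numeric sketch (not part of the original assignment; the coefficient values are made up) confirming that the ratio of the two odds equals $\exp(\beta_1)$:
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

beta1 = 0.3                                 # hypothetical gender coefficient
rest = 0.7                                  # hypothetical value of beta_2*x_2 + ...
odds = lambda p: p / (1.0 - p)
print(odds(sigmoid(beta1 + rest)) / odds(sigmoid(rest)), np.exp(beta1))  # identical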
Credible interval
A credible interval for the value of $\exp(\beta_1)$ is an interval $[a, b]$ such that $p(a \leq \exp(\beta_1) \leq b \mid X_{\text{train}}, y_{\text{train}})$ is $0.95$ (or some other predefined value). To compute the interval, we need access to the posterior distribution $p(\exp(\beta_1) \mid X_{\text{train}}, y_{\text{train}})$.
Let's, for simplicity, focus on the posterior on the parameters $p(\beta_1 \mid X_{\text{train}}, y_{\text{train}})$, since if we compute it, we can always find $[a, b]$ such that $p(\log a \leq \beta_1 \leq \log b \mid X_{\text{train}}, y_{\text{train}}) = p(a \leq \exp(\beta_1) \leq b \mid X_{\text{train}}, y_{\text{train}}) = 0.95$
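A minimal sketch of how such an interval is read off MCMC output (the samples below are synthetic stand-ins; the real posterior draws come from the trace computed later in this notebook): the 2.5th and 97.5th percentiles of the posterior draws of $\beta_1$ give the bounds.
import numpy as np

beta1_samples = np.random.normal(0.3, 0.05, size=4000)  # stand-in for real MCMC draws
log_a, log_b = np.percentile(beta1_samples, [2.5, 97.5])
print("95%% credible interval for exp(beta_1): [%.3f, %.3f]" % (np.exp(log_a), np.exp(log_b)))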
Task 2.1 MAP inference
Let's read the dataset. This is a post-processed version of the UCI Adult dataset. | data = pd.read_csv("adult_us_postprocessed.csv")
data.head() | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Each row of the dataset describes a person and his or her features. The last column is the target variable $y$; one indicates that the person's annual salary is more than $50K.
First of all, let's set up a Bayesian logistic regression model (i.e., define priors on the parameters $\alpha$ and $\beta$ of the model) that predicts the value of "income_more_50K" based on the person's age and education:
$$
p(y = 1 \mid \alpha, \beta_1, \beta_2) = \sigma(\alpha + \beta_1 x_1 + \beta_2 x_2) \\
\alpha \sim N(0, 100^2) \\
\beta_1 \sim N(0, 100^2) \\
\beta_2 \sim N(0, 100^2),
$$
where $x_1$ is a person's age, $x_2$ is his/her level of education, $y$ indicates his/her level of income, and $\alpha$, $\beta_1$ and $\beta_2$ are parameters of the model. | with pm.Model() as manual_logistic_model:
# Declare pymc random variables for logistic regression coefficients with uninformative
# prior distributions N(0, 100^2) on each weight using pm.Normal.
# Don't forget to give each variable a unique name.
#### YOUR CODE HERE ####
alpha = pm.Normal('alpha', mu=0, sigma=100)
beta1 = pm.Normal('beta1', mu=0, sigma=100)
beta2 = pm.Normal('beta2', mu=0, sigma=100)
### END OF YOUR CODE ###
# Transform these random variables into vector of probabilities p(y_i=1) using logistic regression model specified
# above. PyMC random variables are theano shared variables and support simple mathematical operations.
# For example:
# z = pm.Normal('x', 0, 1) * np.array([1, 2, 3]) + pm.Normal('y', 0, 1) * np.array([4, 5, 6])
# is a correct PyMC expression.
# Use pm.invlogit for the sigmoid function.
#### YOUR CODE HERE ####
prob = pm.invlogit(alpha + beta1*data['age'] + beta2*data['educ'] )
### END OF YOUR CODE ###
# Declare PyMC Bernoulli random vector with probability of success equal to the corresponding value
# given by the sigmoid function.
# Supply target vector using "observed" argument in the constructor.
#### YOUR CODE HERE ####
likelihood = pm.Bernoulli('likelihood', p=prob, observed=data['income_more_50K'])
### END OF YOUR CODE ###
# Use pm.find_MAP() to find the maximum a-posteriori estimate for the vector of logistic regression weights.
map_estimate = pm.find_MAP()
print(map_estimate)
| python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Submit the MAP estimates of the corresponding coefficients: | with pm.Model() as logistic_model:
# There's a simpler interface for generalized linear models in pymc3.
# Try to train the same model using pm.glm.GLM.from_formula.
# Do not forget to specify that the target variable is binary (and hence follows Binomial distribution).
#### YOUR CODE HERE ####
formula = 'income_more_50K ~ age + educ'
likelihood = pm.glm.GLM.from_formula(formula, data, family=pm.glm.families.Binomial())
### END OF YOUR CODE ###
map_estimate = pm.find_MAP()
print(map_estimate)
beta_age_coefficient = 0.04348259  ### TYPE MAP ESTIMATE OF THE AGE COEFFICIENT HERE ###
beta_education_coefficient = 0.36210894  ### TYPE MAP ESTIMATE OF THE EDUCATION COEFFICIENT HERE ###
grader.submit_pymc_map_estimates(beta_age_coefficient, beta_education_coefficient) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
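A short follow-up, not required by the grader: because these are logistic regression coefficients, exponentiating them turns them into odds multipliers, so under this model each extra year of age (or education) multiplies the odds of earning more than $50K by the corresponding factor.
print('odds multiplier per year of age: {:.4f}'.format(np.exp(beta_age_coefficient)))
print('odds multiplier per year of education: {:.4f}'.format(np.exp(beta_education_coefficient)))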
Task 2.2 MCMC
To find credible regions let's perform MCMC inference. | # You will need the following function to visualize the sampling process.
# You don't need to change it.
def plot_traces(traces, burnin=200):
'''
Convenience function:
Plot traces with overlaid means and values
'''
ax = pm.traceplot(traces[burnin:], figsize=(12,len(traces.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.summary(traces[burnin:]).iterrows()})
for i, mn in enumerate(pm.summary(traces[burnin:])['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'
,xytext=(5,10), textcoords='offset points', rotation=90
,va='bottom', fontsize='large', color='#AA0022') | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Metropolis-Hastings
Let's use the Metropolis-Hastings algorithm for finding the samples from the posterior distribution.
Once you've written the code, explore the hyperparameters of Metropolis-Hastings, such as the proposal distribution variance, to speed up convergence. You can use the plot_traces function in the next cell to visually inspect the convergence.
You may also use the MAP estimate to initialize the sampling scheme and speed things up. This will make the warm-up (burn-in) period shorter, since you will start from a probable point. | with pm.Model() as logistic_model:
# Since it is unlikely that the dependency between age and salary is linear, we will include age squared
# into the features so that we can model a dependency that favors certain ages.
# Train Bayesian logistic regression model on the following features: sex, age, age^2, educ, hours
# Use pm.sample to run MCMC to train this model.
# To specify the particular sampler method (Metropolis-Hastings) to pm.sample,
# use `pm.Metropolis`.
# Train your model for 400 samples.
# Save the output of pm.sample to a variable: this is the trace of the sampling procedure and will be used
# to estimate the statistics of the posterior distribution.
#### YOUR CODE HERE ####
data['age_squared'] = data['age'] ** 2
formula = 'income_more_50K ~ sex + age + age_squared + educ + hours'
likelihood = pm.glm.GLM.from_formula(formula, data, family=pm.glm.families.Binomial())
trace = pm.sample(400, step=pm.Metropolis(), chains=1)
### END OF YOUR CODE ###
pm.__version__
plot_traces(trace) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
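If the traces look poorly mixed, here is a hedged sketch of the tuning ideas mentioned above (assuming logistic_model from the previous cell is in scope; scaling=0.5 is an arbitrary choice that shrinks the proposal variance, and the MAP of this model is used as the starting point):
with logistic_model:
    start = pm.find_MAP()                 # MAP of *this* model as the start point
    step = pm.Metropolis(scaling=0.5)     # smaller proposal variance
    trace_tuned = pm.sample(400, step=step, start=start, chains=1)
plot_traces(trace_tuned)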
NUTS sampler
Use pm.sample without specifying a particular sampling method (pymc3 will choose it automatically).
The sampling algorithm that will be used in this case is NUTS, which is a form of Hamiltonian Monte Carlo in which parameters are tuned automatically. This is an advanced method that we haven't covered in the lectures, but it usually converges faster and gives less correlated samples compared to vanilla Metropolis-Hastings. | with pm.Model() as logistic_model:
# Train Bayesian logistic regression model on the following features: sex, age, age_squared, educ, hours
# Use pm.sample to run MCMC to train this model.
# Train your model for 400 samples.
# Training can take a while, so relax and wait :)
#### YOUR CODE HERE ####
formula = 'income_more_50K ~ sex + age + age_squared + educ + hours'
likelihood = pm.glm.GLM.from_formula(formula, data, family=pm.glm.families.Binomial())
trace = pm.sample(400, step=pm.NUTS(), chains=1)
### END OF YOUR CODE ###
plot_traces(trace) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Estimating the odds ratio
Now, let's build the posterior distribution on the odds ratio given the dataset (approximated by MCMC). | # We don't need to use a large burn-in here, since we initialize sampling
# from a good point (from our approximation of the most probable
# point (MAP) to be more precise).
burnin = 100
b = trace['sex[T. Male]'][burnin:]
plt.hist(np.exp(b), bins=20, density=True)
plt.xlabel("Odds Ratio")
plt.show() | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Finally, we can find a credible interval (recall that credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credible intervals the way we've always wanted to interpret them; the odds ratio lies within our interval with probability 0.95! | lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < Odds Ratio < %.3f) = 0.95" % (np.exp(lb), np.exp(ub)))
# Submit the obtained credible interval.
grader.submit_pymc_odds_ratio_interval(np.exp(lb), np.exp(ub)) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
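Before answering the interpretation question below, it helps to state the decision rule explicitly: the data supports a gender effect at the 95% level only if the whole credible interval for the odds ratio lies on one side of 1. A small sketch reusing lb and ub from the cell above:
a, b_ = np.exp(lb), np.exp(ub)
if a > 1:
    print('Interval entirely above 1: males have higher odds.')
elif b_ < 1:
    print('Interval entirely below 1: females have higher odds.')
else:
    print('Interval contains 1: we cannot say for sure.')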
Task 2.3 Interpreting the results | # Does gender affect salary in the provided dataset?
# (Note that the data is from 1996 and may not be representative
# of the current situation in the world.)
POSSIBLE_ANSWERS = {
0: 'No, there is certainly no discrimination',
1: 'We cannot say for sure',
2: 'Yes, we are 95% sure that a female is *less* likely to get >$50K than a male with the same age, level of education, etc.',
3: 'Yes, we are 95% sure that a female is *more* likely to get >$50K than a male with the same age, level of education, etc.',
}
idx = 2  ### TYPE THE INDEX OF THE CORRECT ANSWER HERE ###
answer = POSSIBLE_ANSWERS[idx]
grader.submit_is_there_discrimination(answer) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. <b>Note:</b> The token expires 30 minutes after generation. | STUDENT_EMAIL = '[email protected]'
STUDENT_TOKEN = '6r463miiML4NWB9M'
grader.status() | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
If you want to submit these answers, run cell below | grader.submit(STUDENT_EMAIL, STUDENT_TOKEN) | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
(Optional) generating videos of sampling process
In this part you will generate videos showing the sampling process.
Setting things up
You don't need to modify the code below; it sets up the plotting functions. The code is based on an MCMC visualization tutorial. | from IPython.display import HTML
# Number of MCMC iteration to animate.
samples = 400
figsize(6, 6)
fig = plt.figure()
s_width = (0.81, 1.29)
a_width = (0.11, 0.39)
samples_width = (0, samples)
ax1 = fig.add_subplot(221, xlim=s_width, ylim=samples_width)
ax2 = fig.add_subplot(224, xlim=samples_width, ylim=a_width)
ax3 = fig.add_subplot(223, xlim=s_width, ylim=a_width,
xlabel='male coef',
ylabel='educ coef')
fig.subplots_adjust(wspace=0.0, hspace=0.0)
line1, = ax1.plot([], [], lw=1)
line2, = ax2.plot([], [], lw=1)
line3, = ax3.plot([], [], 'o', lw=2, alpha=.1)
line4, = ax3.plot([], [], lw=1, alpha=.3)
line5, = ax3.plot([], [], 'k', lw=1)
line6, = ax3.plot([], [], 'k', lw=1)
ax1.set_xticklabels([])
ax2.set_yticklabels([])
lines = [line1, line2, line3, line4, line5, line6]
def init():
for line in lines:
line.set_data([], [])
return lines
def animate(i):
with logistic_model:
if i == 0:
# Burnin
for j in range(samples): iter_sample.__next__()
trace = iter_sample.__next__()
line1.set_data(trace['sex[T. Male]'][::-1], range(len(trace['sex[T. Male]'])))
line2.set_data(range(len(trace['educ'])), trace['educ'][::-1])
line3.set_data(trace['sex[T. Male]'], trace['educ'])
line4.set_data(trace['sex[T. Male]'], trace['educ'])
male = trace['sex[T. Male]'][-1]
educ = trace['educ'][-1]
line5.set_data([male, male], [educ, a_width[1]])
line6.set_data([male, s_width[1]], [educ, educ])
return lines | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Animating Metropolis-Hastings | with pm.Model() as logistic_model:
# Again define Bayesian logistic regression model on the following features: sex, age, age_squared, educ, hours
#### YOUR CODE HERE ####
# Same model as in the MCMC cells above; recompute the MAP for this model
# so that `start=map_estimate` below refers to the right set of variables.
formula = 'income_more_50K ~ sex + age + age_squared + educ + hours'
likelihood = pm.glm.GLM.from_formula(formula, data, family=pm.glm.families.Binomial())
map_estimate = pm.find_MAP()
### END OF YOUR CODE ###
step = pm.Metropolis()
iter_sample = pm.iter_sample(2 * samples, step, start=map_estimate)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=samples, interval=5, blit=True)
HTML(anim.to_html5_video())
# Note that generating the video may take a while. | python/coursera-BayesianML/04_mcmc_assignment.ipynb | saketkc/notebooks | bsd-2-clause |
Resolving Conflicts Using Precedence Declarations
This file shows how shift/reduce and reduce/reduce conflicts can be resolved using operator precedence declarations.
The following grammar is ambiguous because it does not specify the precedence of the arithmetical operators:
expr : expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| expr '^' expr
| '(' expr ')'
| NUMBER
;
We will see how precedence declarations can be used to resolve shift/reduce conflicts.
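To see the ambiguity concretely, consider the input 1+2*3. Without precedence declarations the grammar admits two parse trees, written here in the tuple notation the parser below produces (this illustration is not part of the original notebook):
tree_a = ('*', ('+', 1, 2), 3)   # (1+2)*3 = 9, the unwanted reading
tree_b = ('+', 1, ('*', 2, 3))   # 1+(2*3) = 7, the reading we want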
Specification of the Scanner
We implement a minimal scanner for arithmetic expressions. | import ply.lex as lex
tokens = [ 'NUMBER' ]
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
literals = ['+', '-', '*', '/', '^', '(', ')']
t_ignore = ' \t'
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
def t_error(t):
print(f"Illegal character '{t.value[0]}'")
t.lexer.skip(1)
__file__ = 'main'
lexer = lex.lex() | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
Specification of the Parser | import ply.yacc as yacc | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
The start variable of our grammar is expr, but we don't have to specify that. The default
start variable is the first variable that is defined. | start = 'expr' | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
The following operator precedence declarations declare that the operators '+' and '-' have a lower precedence than the operators '*' and '/'. The operator '^' has the highest precedence. Furthermore, the declarations specify that the operators '+', '-', '*', and '/' are left associative, while the operator '^' is declared as right associative using the keyword right.
Operators can also be defined as being non-associative using the keyword nonassoc. | precedence = (
('left', '+', '-') , # precedence 1
('left', '*', '/'), # precedence 2
('right', '^') # precedence 3
)
def p_expr_plus(p):
"expr : expr '+' expr"
p[0] = ('+', p[1], p[3])
def p_expr_minus(p):
"expr : expr '-' expr"
p[0] = ('-', p[1], p[3])
def p_expr_mult(p):
"expr : expr '*' expr"
p[0] = ('*', p[1], p[3])
def p_expr_div(p):
"expr : expr '/' expr"
p[0] = ('/', p[1], p[3])
def p_expr_power(p):
"expr : expr '^' expr"
p[0] = ('^', p[1], p[3])
def p_expr_paren(p):
"expr : '(' expr ')'"
p[0] = p[2]
def p_expr_NUMBER(p):
"expr : NUMBER"
p[0] = p[1]
def p_error(p):
if p:
print(f"Syntax error at character number {p.lexer.lexpos} at token '{p.value}' in line {p.lexer.lineno}.")
else:
print('Syntax error at end of input.') | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table. | parser = yacc.yacc(write_tables=False, debug=True) | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
As there are no warnings, all conflicts have been resolved by the precedence declarations.
Let's look at the action table that is generated. | !type parser.out
!cat parser.out
%run ../ANTLR4-Python/AST-2-Dot.ipynb | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
The function test(s) takes a string s as its argument and tries to parse this string. If all goes well, an abstract syntax tree is returned.
If the string can't be parsed, an error message is printed by the parser. | def test(s):
t = yacc.parse(s)
d = tuple2dot(t)
display(d)
return t
test('2^3*4+5')
test('1+2*3^4')
test('1 + 2 * -3^4') | Ply/Conflicts-Resolved.ipynb | karlstroetmann/Formal-Languages | gpl-2.0 |
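As a further check, not in the original notebook: the right declaration for '^' should make exponentiation right associative, so '2^3^2' parses as 2^(3^2), i.e. ('^', 2, ('^', 3, 2)).
test('2^3^2')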
Chapter 1 | def sigmoid(z):
return 1./(1. + np.exp(-z))
def sigmoid_vector(w,x,b):
return 1./(1. + np.exp(-1 * np.sum(w * x) - b))
def sigmoid_prime(z):
return sigmoid(z) * (1 - sigmoid(z))
# Plot the behavior of the sigmoid: a smooth, monotonic function,
# asymptotically bounded by [0,1] for x in [-inf, inf]
x = np.linspace(-10,10)
plt.plot(x,sigmoid(x))
plt.ylim(-0.05,1.05);
# Test the vectorized output
w = np.array([1,2,3])
x = np.array([0.5,0.5,0.7])
b = 0
print sigmoid_vector(w,x,b) | notebooks/neural_networks_and_deep_learning.ipynb | willettk/insight | apache-2.0 |
Exercises
Take all the weights and biases in a network of perceptrons and multiply them by a positive constant $c > 0$. Show that the behavior of the network doesn't change.
Input: $[x_1,x_2,\ldots,x_j]$
Old behavior
Weights: $[w_1,w_2,\ldots,w_j]$
Bias: $b$
Perceptron output:
output = 0 if $w \cdot x + b \leq 0$
output = 1 if $w \cdot x + b > 0$
New input:
$w_\mathrm{new} = [c w_1,c w_2,\ldots,c w_j]$
$b_\mathrm{new} = c b$
New output of the perceptron:
$w_\mathrm{new} \cdot x + b_\mathrm{new} = c w \cdot x + c b = c (w \cdot x + b)$.
This is just a positive scaling: the new expression vanishes exactly where $w \cdot x + b = 0$ and keeps the same sign everywhere else, since $c > 0$. So the behavior of the perceptron network doesn't change.
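A numeric spot check of this argument (the weights, bias and input below are made up):
w, b, c = np.array([1.0, -2.0]), 0.5, 7.0
x = np.array([0.3, 0.1])
print np.sign(np.dot(w, x) + b) == np.sign(np.dot(c * w, x) + c * b)  # True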
Take a network of perceptrons and fix the input $\boldsymbol{x}$. Assume $\boldsymbol{w}\cdot\boldsymbol{x} + b \neq 0$ for all perceptrons.
Original output:
0 if $(w \cdot x + b) < 0$
1 if $(w \cdot x + b) > 0$
Replace perceptrons with sigmoid functions and multiply both weights and biases by a constant $c > 0$.
$w_\mathrm{new} = [c w_1,c w_2,\ldots,c w_j]$
$b_\mathrm{new} = c b$
New output:
$\sigma[c\boldsymbol{w},\boldsymbol{x},c b] \equiv \frac{1}{1 + \exp{\left(-\sum_j{(c w_j) x_j} - c b\right)}} = \frac{1}{1 + \exp{\left(c(-\sum_j{w_j x_j} - b)\right)}}$
As $c \rightarrow \infty$, the term $\exp{\left(c(-\sum_j{w_j x_j} - b)\right)}$ becomes $\infty$ if $(-\sum_j{w_j x_j} - b) > 0$, and so $\sigma \rightarrow 0$. This is equivalent to $(\sum_j{w_j x_j} + b) < 0$, or the same as the first output of the perceptron. Similarly, if $(-\sum_j{w_j x_j} - b) < 0$, then the term goes to 0 and $\sigma \rightarrow 1$. So the behavior of the sigmoid network is the same as perceptrons is the same for very large $c$.
If $w \cdot x + b = 0$ for one of the perceptrons, then $\sigma=1/2$ regardless of the value of $c$. So the sigmoid approximation will fail to match the perceptron output.
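The limiting behavior is easy to verify numerically with the sigmoid defined above (the value z = w.x + b = 0.3 is an arbitrary nonzero choice):
z = 0.3
for c in [1, 10, 100, 1000]:
    print c, sigmoid(c * z), sigmoid(-c * z)  # -> 1 and -> 0 as c grows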
Design a set of weights and biases such that digits are converted to their bitwise representation. | # One possible set of weights and a bias; there are infinitely
# many legal combinations
digits = np.identity(10) * 0.99 + 0.005
weights = np.ones((10,4)) * -1
weights[1::2,0] = 3
weights[2::4,1] = 3
weights[3::4,1] = 3
weights[4:8,2] = 3
weights[8:10,3] = 3
weights[0,1:3] = -2
bias = -2
print "Weights: \n{}".format(weights)
print "Bias: {}".format(bias)
print "Bitwise output: \n{}".format((np.sign(np.dot(digits,weights) + bias).astype(int) + 1) / 2)
# Initialize the network object
class Network(object):
def __init__(self,sizes):
# Initialize the Network object with random (normal) biases, weights
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y,1) for y in sizes[1:]]
self.weights = [np.random.randn(y,x) for x,y in zip(sizes[:-1],sizes[1:])]
def feedforward(self,a):
# Return the output of the network
for b,w in zip(self.biases, self.weights):
a = sigmoid(np.dot(w,a) + b)
return a
def SGD(self, training_data, epochs, mini_batch_size,
eta, test_data=None):
# Train the network using mini-batch stochastic gradient descent.
# If test_data is provided, the network is evaluated against it after each epoch.
if test_data:
n_test = len(test_data)
n = len(training_data)
for j in xrange(epochs):
random.shuffle(training_data)
mini_batches = [training_data[k:k+mini_batch_size] for k in xrange(0,n,mini_batch_size)]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch,eta)
if test_data:
print "Epoch {}: {} / {}".format(j,self.evaluate(test_data),n_test)
else:
print "Epoch {} complete.".format(j)
def update_mini_batch(self,mini_batch,eta):
# Update the network's weights and biases by applying gradient descent
# using backpropagation on a single mini-batch.
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x,y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x,y)
nabla_b = [nb + dnb for nb,dnb in zip(nabla_b,delta_nabla_b)]
nabla_w = [nw + dnw for nw,dnw in zip(nabla_w,delta_nabla_w)]
self.weights = [w - (eta/len(mini_batch))*nw for w,nw in zip(self.weights,nabla_w)]
self.biases = [b - (eta/len(mini_batch))*nb for b,nb in zip(self.biases,nabla_b)]
def evaluate(self, test_data):
test_results = [(np.argmax(self.feedforward(x)),y) for (x,y) in test_data]
return sum(int(x == y) for (x,y) in test_results)
def backprop(self, x, y):
# Return a tuple (nabla_b, nabla_w) holding the layer-by-layer gradients
# of the cost function with respect to the biases and weights.
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
# backward pass
delta = self.cost_derivative(activations[-1], y) * \
sigmoid_prime(zs[-1])
nabla_b[-1] = delta
nabla_w[-1] = np.dot(delta, activations[-2].transpose())
for l in xrange(2, self.num_layers):
z = zs[-l]
sp = sigmoid_prime(z)
delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
nabla_b[-l] = delta
nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
return (nabla_b, nabla_w)
def cost_derivative(self,output_activations,y):
return (output_activations - y)
| notebooks/neural_networks_and_deep_learning.ipynb | willettk/insight | apache-2.0 |
Load the MNIST data | import cPickle as pickle
import gzip
def load_data():
with gzip.open("neural-networks-and-deep-learning/data/mnist.pkl.gz","rb") as f:
training_data,validation_data,test_data = pickle.load(f)
return training_data,validation_data,test_data
def load_data_wrapper():
tr_d,va_d,te_d = load_data()
training_inputs = [np.reshape(x,(784,1)) for x in tr_d[0]]
training_results = [vectorized_result(y) for y in tr_d[1]]
training_data = zip(training_inputs,training_results)
validation_inputs = [np.reshape(x,(784,1)) for x in va_d[0]]
validation_data = zip(validation_inputs,va_d[1])
test_inputs = [np.reshape(x,(784,1)) for x in te_d[0]]
test_data = zip(test_inputs,te_d[1])
return (training_data,validation_data,test_data)
def vectorized_result(j):
e = np.zeros((10,1))
e[j] = 1.0
return e | notebooks/neural_networks_and_deep_learning.ipynb | willettk/insight | apache-2.0 |
Run the network | training_data,validation_data,test_data = load_data_wrapper()
net = Network([784,30,10])
net.SGD(training_data,30,10,3.0,test_data = test_data)
net100 = Network([784,100,10])
net100.SGD(training_data,30,10,3.0,test_data=test_data)
net2 = Network([784,10])
net2.SGD(training_data,30,10,3.0,test_data=test_data) | notebooks/neural_networks_and_deep_learning.ipynb | willettk/insight | apache-2.0 |
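A short usage sketch, not part of the original notebook: once trained, the network classifies a single image by taking the argmax of its feedforward output.
x, y = validation_data[0]
print "predicted: {}, actual: {}".format(np.argmax(net.feedforward(x)), y)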
As mentioned before, if we want to build a 3-dimensional displacement model of the composite plate, we have 6 reaction forces that are functions of x and y. Those 6 reaction forces are related by 3 equilibrium equations. | # # Hyer, page 584
# # Equations of equilibrium
# Nxf = Function('N_x')(x,y)
# Nyf = Function('N_y')(x,y)
# Nxyf = Function('N_xy')(x,y)
# Mxf = Function('M_x')(x,y)
# Myf = Function('M_y')(x,y)
# Mxyf = Function('M_xy')(x,y)
# undefined functions for the force and moment resultants; they must be
# callable as Nx(x,y) etc. below, hence cls=Function rather than plain symbols
Nx,Ny,Nxy,Mx,My,Mxy = symbols('N_x,N_y,N_xy,M_x,M_y,M_xy', cls=Function)
Nxf,Nyf,Nxyf,Mxf,Myf,Mxyf = symbols('Nxf,Nyf,Nxyf,Mxf,Myf,Mxyf')
Eq(0,diff(Nx(x,y), x)+diff(Nxy(x,y),y))
Eq(0,diff(Nxy(x,y), x)+diff(Ny(x,y),y))
Eq(0, diff(Mx(x,y),x,2) + 2*diff(Mxy(x,y),x,y) + diff(My(x,y) ,y,2)+ q ) | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
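A quick sanity check of the third equation (an assumed example; x, y and q are the SymPy symbols defined earlier in this notebook): the trial moment field M_x = -q*x**2/2 with M_xy = M_y = 0 satisfies moment equilibrium identically.
Mx_trial = -q*x**2/2             # with M_xy = M_y = 0
print(diff(Mx_trial, x, 2) + q)  # prints 0: equilibrium is satisfied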