markdown (stringlengths 0–37k) | code (stringlengths 1–33.3k) | path (stringlengths 8–215) | repo_name (stringlengths 6–77) | license (stringclasses, 15 values) |
---|---|---|---|---|
Let us get related data: the type_of_inkind of all contributions. For each contribution we need only the ids of the related type_of_inkind values. | for d in dbm.contrib.find({}, {'typeContribution': True}).limit(10):
print(d)
for d in dbm.contrib.find({}, {'country': True}).limit(10):
print(d)
x = dict(_id=5, value='66')
y = dict(_id=5, value='66')
x == y | static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb | Dans-labs/dariah | mit |
We need to import several things from Keras. Note the long import statements; this might be a bug in how Keras is packaged inside TensorFlow, and hopefully it will be possible to write shorter and more elegant import lines in the future. | # from tf.keras.models import Sequential # This does not work!
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import InputLayer, Input
from tensorflow.python.keras.layers import Reshape, MaxPooling2D
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
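As an aside not taken from the original notebook: in TensorFlow 2.x the public tf.keras namespace is importable directly, so the shorter import form does work there:

```python
# Works in TensorFlow 2.x, where tf.keras is the public Keras API.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, Input, Reshape, MaxPooling2D, Conv2D, Dense, Flatten
```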
This was developed using Python 3.6 (Anaconda) and TensorFlow version: | import tensorflow as tf
tf.__version__
tf.keras.__version__ | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Data Dimensions
The data dimensions are used in several places in the source code below, so they are defined once here and these variables are used instead of hard-coded numbers throughout. | # We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
# This is used for plotting the images.
img_shape = (img_size, img_size)
# Tuple with height, width and depth used to reshape arrays.
# This is used for reshaping in Keras.
img_shape_full = (img_size, img_size, 1)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10 | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
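To show how these constants are typically consumed, here is an illustrative sketch of a small Sequential model built from them; the layer sizes and filter counts are assumptions, not necessarily the exact model defined later in this notebook:

```python
# Illustrative sketch only: architecture details are assumptions.
model = Sequential()
model.add(InputLayer(input_shape=(img_size_flat,)))  # flat 784-pixel input
model.add(Reshape(img_shape_full))                   # back to (28, 28, 1)
model.add(Conv2D(kernel_size=5, strides=1, filters=16,
                 padding='same', activation='relu', name='layer_conv1'))
model.add(MaxPooling2D(pool_size=2, strides=2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
```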
Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified. | def plot_example_errors(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Boolean array whether the predicted class is incorrect.
incorrect = (cls_pred != data.test.cls)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9]) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
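The helper above delegates the drawing to a plot_images function that is not shown in this excerpt; a minimal sketch of such a helper, assuming a 3x3 grid and that img_shape from the Data Dimensions section is available, might look like this:

```python
import matplotlib.pyplot as plt

def plot_images(images, cls_true, cls_pred=None):
    """Plot up to 9 flat images in a 3x3 grid with true (and predicted) labels."""
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)
    for i, ax in enumerate(axes.flat):
        if i < len(images):
            ax.imshow(images[i].reshape(img_shape), cmap='binary')
            if cls_pred is None:
                xlabel = "True: {0}".format(cls_true[i])
            else:
                xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
            ax.set_xlabel(xlabel)
        # Remove the ticks to keep the grid clean.
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
```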
Model Compilation
The Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras.
We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate. | from tensorflow.python.keras.optimizers import Adam
optimizer = Adam(lr=1e-3) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
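With the optimizer instantiated, the compilation step described above would then look roughly like this; a sketch, assuming the model was built for 10-class classification with one-hot encoded labels:

```python
# Attach the loss-function, optimizer and metrics to the model.
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```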
Training
Now that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times. | model.fit(x=data.train.images,
y=data.train.labels,
epochs=1, batch_size=128) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Evaluation
Now that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input. | result = model.evaluate(x=data.test.images,
y=data.test.labels) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Prediction
We can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead. | images = data.test.images[0:9] | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
These are the true class-number for those images. This is only used when plotting the images. | cls_true = data.test.cls[0:9] | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Examples of Mis-Classified Images
We can plot some examples of mis-classified images from the test-set.
First we get the predicted classes for all the images in the test-set: | y_pred = model.predict(x=data.test.images) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
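model.predict returns one row of class probabilities per image, so before calling plot_example_errors the probabilities have to be reduced to class numbers; a short sketch, assuming numpy is imported as np:

```python
import numpy as np

# Convert the predicted class-probability vectors into class numbers.
cls_pred = np.argmax(y_pred, axis=1)
# Plot some of the mis-classified test images.
plot_example_errors(cls_pred=cls_pred)
```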
Compile the Keras model using the rmsprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here. | model2.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy']) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Training
The model has now been defined and compiled so it can be trained using the same fit() function as used in the Sequential Model above. This also takes numpy-arrays as input. | model2.fit(x=data.train.images,
y=data.train.labels,
epochs=1, batch_size=128) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Evaluation
Once the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model. | result = model2.evaluate(x=data.test.images,
y=data.test.labels) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency. | for name, value in zip(model.metrics_names, result):
print(name, value) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
We can also print the classification accuracy as a percentage: | print("{0}: {1:.2%}".format(model.metrics_names[1], result[1])) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Examples of Mis-Classified Images
We can plot some examples of mis-classified images from the test-set.
First we get the predicted classes for all the images in the test-set: | y_pred = model2.predict(x=data.test.images) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers. | images = data.test.images[0:9]
cls_true = data.test.cls[0:9] | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Output of Convolutional Layer - Method 1
There are different ways of getting the output of a layer in a Keras model. This method uses a so-called K-function which turns a part of the Keras model into a function. | from tensorflow.python.keras import backend as K
output_conv1 = K.function(inputs=[layer_input.input],
outputs=[layer_conv1.output]) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
We can then call this function with the input image. Note that the image is wrapped in two lists because the function expects an array of that dimensionality. Likewise, the function returns an array with one more dimensionality than we want so we just take the first element. | layer_output1 = output_conv1([[image1]])[0]
layer_output1.shape | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
We can then plot the output of all 16 channels of the convolutional layer. | plot_conv_output(values=layer_output1) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
Output of Convolutional Layer - Method 2
Keras also has another method for getting the output of a layer inside the model. This creates another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in. | output_conv2 = Model(inputs=layer_input.input,
outputs=layer_conv2.output) | 03C_Keras_API.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
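A usage sketch for this second method, assuming image1 is a single input image shaped like the model's input (the variable name comes from the Method 1 example above):

```python
# predict() expects a batch, so wrap the single image in an array.
layer_output2 = output_conv2.predict(np.array([image1]))
print(layer_output2.shape)
# Plot the output channels of the second convolutional layer.
plot_conv_output(values=layer_output2)
```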
Chart of CPU computation time versus the number of partitions | b=[]
for i in range(5,35,5):
temp=[]
for j in range(3):
temp.append(X[i][j]['time'])
b.append(temp)
bs=np.array(b).T
ind=range(5,35,5)
d={'M1':pd.Series(bs[0],index=ind),
'M2':pd.Series(bs[1],index=ind),
'M3':pd.Series(bs[2],index=ind)}
df = pd.DataFrame(d)
df.columns.name='function'
df.index.name='partition'
df
df.plot(kind='bar',fontsize=20)
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("CPU time ",fontsize=16)
df.plot()
leg = plt.gca().get_legend()
ltext = leg.get_texts()
plt.setp(ltext, fontsize='20')
plt.title("CPU time",fontsize=16) | CMA-ES/分段数对效果的影响/分段数对拟合效果的影响.ipynb | luzhijun/Optimization | apache-2.0 |
Now write a set of assert tests for your number_to_words function that verifies that it is working as expected. | Z = list(range(1,1001))
X=[]
for i in Z:
X.append(number_to_words(i)) # LIST OF ALL THE WORDS!
assert number_to_words(10)=='ten'
assert number_to_words(55)=='fifty-five'
assert number_to_words(99)=='ninety-nine'
assert number_to_words(155)=='one hundred and fifty-five'
assert number_to_words(777)=='seven hundred and seventy-seven'
assert True # use this for grading the number_to_words tests. | assignments/assignment06/ProjectEuler17.ipynb | phungkh/phys202-2015-work | mit |
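The number_to_words function itself is presumably defined earlier in the notebook and is not shown in this excerpt; a minimal sketch consistent with the asserts above, assuming 1 <= n <= 1000 and British-style "and", could be:

```python
ones = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
        'ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen',
        'seventeen', 'eighteen', 'nineteen']
tens = ['twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']

def number_to_words(n):
    """Return the English words for an integer 1 <= n <= 1000."""
    if n == 1000:
        return 'one thousand'
    if n >= 100:
        words = ones[n // 100 - 1] + ' hundred'
        rest = n % 100
        return words + ' and ' + number_to_words(rest) if rest else words
    if n >= 20:
        words = tens[n // 10 - 2]
        rest = n % 10
        return words + '-' + ones[rest - 1] if rest else words
    return ones[n - 1]
```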
Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive. | def count_letters(n):
"""Count the number of letters used to write out the words for 1-n inclusive."""
number_of_characters= ' '.join(number_to_words(x) for x in range(1,n+1))
count=0
for i in number_of_characters:
if i !='-'and i !=' ':
count+=1
return count
| assignments/assignment06/ProjectEuler17.ipynb | phungkh/phys202-2015-work | mit |
Now write a set of assert tests for your count_letters function that verifies that it is working as expected. | assert count_letters(1)==3
assert count_letters(5)==19
assert True # use this for grading the count_letters tests. | assignments/assignment06/ProjectEuler17.ipynb | phungkh/phys202-2015-work | mit |
Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints. | reviews[:2]
from collections import Counter
words_dummy = ['qwe','ert','yui', 'fgh', 'dfg', 'kjg','fgh', 'dfg', 'kjg']
counts_dummy = Counter(words_dummy)
print(counts_dummy)
v = enumerate(counts_dummy,1)
print(list(v))
print(counts_dummy.get('qwe'))
vocab_dummy = sorted(counts_dummy, key=counts_dummy.get, reverse=True)
vocab_to_int_dummy = {word: ii for ii, word in enumerate(vocab_dummy, 1)}
print(vocab_dummy)
print(vocab_to_int_dummy)
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels]) | references/sentiment-rnn/Sentiment RNN.ipynb | msampathkumar/kaggle-quora-tensorflow | apache-2.0 |
Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively. | review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
print(max(review_lens))
print(min(review_lens))
# max? | references/sentiment-rnn/Sentiment RNN.ipynb | msampathkumar/kaggle-quora-tensorflow | apache-2.0 |
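To reach the fixed-width feature matrices described in the next cell, the integer-encoded reviews are typically truncated or left-padded with zeros to a common length; a sketch, assuming seq_len = 200 and that zero-length reviews are dropped first:

```python
import numpy as np

seq_len = 200
# Drop any reviews that ended up with zero words.
non_zero_idx = [i for i, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[i] for i in non_zero_idx]
labels = labels[non_zero_idx]
# Left-pad with zeros, or truncate, so every row has exactly seq_len integers.
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
```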
With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate | lstm_size = 256
lstm_layers = 1
batch_size = 250
learning_rate = 0.001 | references/sentiment-rnn/Sentiment RNN.ipynb | msampathkumar/kaggle-quora-tensorflow | apache-2.0 |
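With the hyperparameters fixed, the graph would start from input placeholders; a minimal TensorFlow 1.x sketch (the names inputs_, labels_ and keep_prob are illustrative, not taken from this notebook):

```python
import tensorflow as tf

n_words = len(vocab_to_int) + 1  # +1 because the integer encoding starts at 1

graph = tf.Graph()
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
```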
<a id='first_step'></a>
Step 1. Extract relevant photos from YFCC100M dataset
The original YFCC100M dataset contains 100 million photos. We are only interested in user behavior near the Melbourne area, so we first extract the photos belonging to the region below to reduce the subsequent computational cost.
The latitudes and longitudes of this region are described in the data/Melbourne-bbox.kml file.
This process can be done by src/filtering_bigbox.py.
The filtering_bigbox.py file takes the original YFCC100M file, extracts photos and videos from the above region within the time window [2000-01-01 00:00:00, 2015-03-05 23:59:59], and then generates a CSV file containing:
Photo/video ID
NSID (user ID)
Date
Longitude
Latitude
Accuracy (GPS accuracy)
Photo/video URL
Photo/video identifier (0 = photo, 1 = video)
The usage of this file is:
python filtering_bigbox.py YFCC100M_DATA_FILE
which will generate the filtered output file out.YFCC100M_DATA_FILE.
The original YFCC100M data files are not included in this repository, but we include the filtered output in data/Melb_photos_bigbox.csv.
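The heart of such a filter is just a bounding-box and time-window test applied to every record; the sketch below is illustrative only (the coordinates are placeholders, not the values from data/Melbourne-bbox.kml, and the real filtering_bigbox.py may be organized differently):

```python
from datetime import datetime

T_MIN = datetime(2000, 1, 1, 0, 0, 0)
T_MAX = datetime(2015, 3, 5, 23, 59, 59)

def keep_record(lon, lat, taken,
                lng_min=144.0, lat_min=-39.0, lng_max=146.0, lat_max=-37.0):
    """Return True if a photo lies inside the bounding box and the time window."""
    return (lng_min <= lon <= lng_max and
            lat_min <= lat <= lat_max and
            T_MIN <= taken <= T_MAX)
```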
1.1. Basic stats of initial dataset
Here are some basic statistics after extracting relevant photos from the YFCC100M. | raw = pd.read_csv(raw_table, parse_dates=[2], skipinitialspace=True)
print('Number of photos:', raw['Photo_ID'].shape[0])
print('Number of users: ', raw['User_ID'].unique().shape[0])
raw[['Longitude', 'Latitude', 'Accuracy']].describe() | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
1.2. Scatter plot of extracted points
We also plot the locations of the extracted photos. The high-density areas represent places where many photos have been taken. | plt.figure(figsize=[8, 8])
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.scatter(raw['Longitude'], raw['Latitude']) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Step 2. Extract candidate trajectories from extracted points
From the extracted photos, we reconstruct user trajectories using the geo-tags and timestamps of the photos as follows:
Step 2.1: Group the extracted photos by user
Step 2.2: Sort the grouped photos by timestamp
Step 2.3: Split the sorted photos into trajectories if the time gap between two consecutive photos is greater than 8 ($time_gap) hours
Step 2.4: We plot the trajectories on a map and keep only trajectories in which at least one ($minimum_photo) photo is taken in the central Melbourne area below, to make sure that the travel is not far from Melbourne
src/generate_tables.py will generate the initial trajectories using the following arguments.
The usage of this file is:
`python generate_tables.py extracted_points_file lng_min lat_min lng_max lat_max minimum_photo time_gap`
with arguments:
extracted_points_file = the output of src/filtering_bigbox.py
lng_min = min longitude of target region
lat_min = min latitude of target region
lng_max = max longitude of target region
lat_max = max latitude of target region
minimum_photo = minimum number of photos for each trajectory
time_gap = Split the sorted photos into trajectories if the time gap between two consecutive photos is greater than this | extracted_points_file = raw_table # outputfile path of extracted points
%run generate_tables $extracted_points_file $lng_min $lat_min $lng_max $lat_max $minimum_photo $time_gap | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
This will result in two data files: (1) a trajectory data file ($photo_table) and (2) a trajectory statistics file ($traj_table).
trajectory data file: each entry (line) of this file represents a single photo with additional information about the photo:
* Trajectory_ID: trajectory ID of entry (multiple entries belong to the same trajectory will have the same trajectory ID)
* Photo_ID: Unique Photo ID of entry
* User_ID: User ID
* Timestamp: Timestamp of when the photo was taken
* Longitude: Longitude of entry
* Latitude: Latitude of entry
* Accuracy: GPS Accuracy level (16 - the most accurate, 1 - the least accurate)
* Marker: 0 if the entry is photo, 1 if the entry is video
* URL: flickr URL to the entry
trajectory statistics file: each entry (line) of this file represents a single trajectory with additional information about the trajectory:
* Trajectory_ID: Unique trajectory ID
* User_ID: User ID
* #Photo: Number of photos in the trajectory
* Start_Time: When the first photo was taken
* Travel_Distance(km): Sum of the distances between sequentially consecutive photos (Euclidean distance)
* Total_Time(min): The time gap between the first photo and the last photo
* Average_Speed(km/h): Travel_Distances(km)/Total_Time(h)
We read these files using the pandas library for further processing: | traj = pd.read_csv(photo_table, parse_dates=[3], skipinitialspace=True)
traj_stats = pd.read_csv(traj_table, parse_dates=[3], skipinitialspace=True) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
2.1. Basic statistics about the candidate trajectories
Here are the basic statistics about the candidate trajectories produced by src/generate_tables.py: | num_photo = traj['Photo_ID'].shape[0]
num_user = traj_stats['User_ID'].unique().shape[0]
num_traj = traj_stats['Trajectory_ID'].shape[0]
print('Number of photos:', num_photo)
print('Number of users: ', num_user)
print('Number of trajectories:', num_traj)
print('Average number of photos per user:', num_photo / num_user)
print('Average number of trajectories per user:', num_traj / num_user)
traj_stats[['#Photo', 'Travel_Distance(km)', 'Total_Time(min)', 'Average_Speed(km/h)']].describe() | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
2.2. Scatter plot of points in all candidate trajectories
We plot the locations of the extracted photos in the candidate trajectories. The high-density areas represent places where many photos have been taken. | plt.figure(figsize=[8, 8])
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.scatter(traj['Longitude'], traj['Latitude']) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
2.3. Histograms of number of photos in trajectories, total time/distance and average speed of trajectories | plt.figure(figsize=[18, 10])
plt.subplot(2,2,1)
plt.xlabel('#Photo')
plt.ylabel('#Trajectory')
plt.title('Histogram of #Photo in trajectories')
ax0 = traj_stats['#Photo'].hist(bins=50)
ax0.set_yscale('log')
plt.subplot(2,2,2)
plt.xlabel('Travel Distance (km)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Travel Distance of Trajectories')
ax1 = traj_stats['Travel_Distance(km)'].hist(bins=50)
ax1.set_yscale('log')
plt.subplot(2,2,3)
plt.xlabel('Total Time (minute)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Total Time of Trajectories')
ax2 = traj_stats['Total_Time(min)'].hist(bins=50)
ax2.set_yscale('log')
plt.subplot(2,2,4)
plt.xlabel('Average Speed (km/h)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Average Speed of Trajectories')
ax3 = traj_stats['Average_Speed(km/h)'].hist(bins=50)
ax3.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
As these histograms indicate, there are several abnormal trajectories in this dataset. For example, some trajectories span several days, some show improbably high speeds (3,500,000 km/h), and the travel distance of some trajectories is almost zero, which is not interesting as a travel trajectory.
In the following section, we will provide guidelines to filter out these abnormal trajectories.
3. Filter Trajectory
After getting an initial list of trajectories, we further filter out improbable trajectories with various criteria.
We use three different criteria, as follows:
Travel time: Some suspicious trajectories span several days. We remove trajectories that span more than several days or last only a few minutes (maximum_duration, minimum_duration).
Travel distance: Trajectories consisting of photos taken at a single location are not meaningful as trajectories, so we remove them (minimum_distance).
Travel speed: Due to GPS errors, some trajectories show a user moving at an unbelievably fast speed. We remove these trajectories, but try to recover as much information as possible from some of them.
The list of arguments we used to generate final trajectories are available at the top of the notebook.
3.1. Filter by travel time
First, we filter out trajectories which have suspiciously long or short travel times. We want to see users' day-long travel trajectories, and also want to avoid trajectories that are captured in a very short time.
In this step, we filter out the trajectories whose travel time is greater than maximum_duration or less than minimum_duration. | traj_stats1 = traj_stats[traj_stats['Total_Time(min)'] < maximum_duration]
traj_stats1 = traj_stats1[traj_stats1['Total_Time(min)'] > minimum_duration]
traj1 = traj[traj['Trajectory_ID'].isin(traj_stats1['Trajectory_ID'])] | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.1.1. Histogram of travel time
Here's the histogram of travel time before and after filtering. We removed the trajectories whose travel time is less than 30 minutes or greater than 24 hours. | plt.figure(figsize=[18, 5])
plt.subplot(1,2,1)
plt.xlabel('Travel_Time(min)')
plt.ylabel('#traj')
plt.title('Before Filtering')
ax0 = traj_stats['Total_Time(min)'].hist(bins=50)
ax0.set_yscale('log')
plt.subplot(1,2,2)
plt.xlabel('Travel_Time(min)')
plt.ylabel('#traj')
plt.title('After filtering')
ax1 = traj_stats1['Total_Time(min)'].hist(bins=50)
ax1.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.2. Filter by travel distance
To be meaningful, a trajectory should span at least several hundred meters; an extremely short travel distance only indicates an interesting spot where photos were taken, not a travel trajectory.
We therefore filter out the trajectories whose travel distance is less than minimum_distance. | traj_stats2 = traj_stats1[traj_stats1['Travel_Distance(km)'] > minimum_distance]
traj2 = traj[traj['Trajectory_ID'].isin(traj_stats2['Trajectory_ID'])] | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.2.1. Histogram of trajectory length
Here's the histogram of travel distances before and after filtering. Trajectories with a very short travel distance have been removed from our dataset. | plt.figure(figsize=[18, 5])
plt.subplot(1,2,1)
plt.xlabel('Travel_Distance(km)')
plt.ylabel('#traj')
plt.title('Before Filtering')
ax1 = traj_stats1['Travel_Distance(km)'].hist(bins=50)
ax1.set_yscale('log')
plt.subplot(1,2,2)
plt.xlabel('Travel_Distance(km)')
plt.ylabel('#traj')
plt.title('After filtering')
ax2 = traj_stats2['Travel_Distance(km)'].hist(bins=50)
ax2.set_yscale('log')
traj_stats_new = traj_stats2
traj_new = traj2 | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.3. Filter by travel speed
Some trajectories have suspiciously high speeds. This may be caused by various factors; for example, errors in the GPS system or errors in the timestamps might yield supersonic users.
There are two (or more) alternative ways to filter out trajectories with suspiciously high speeds.
Here, we provide two filtering methods (the switch to select one of them can be set at the top of the notebook):
Filtering by average speed
Filtering by the speed of adjacent points
3.3.1. Drop trajectory by average speed
We check the average speed of every trajectory, and then throw out all trajectories whose average speed is greater than the predefined maximum_speed. | if speed_filter == 0:
traj_stats_new = traj_stats_new[traj_stats_new['Average_Speed(km/h)'] < maximum_speed]
traj_new = traj_new[traj_new['Trajectory_ID'].isin(traj_stats_new['Trajectory_ID'])] | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.3.1.1. Histogram of trajectory speed | if speed_filter == 0:
plt.figure(figsize=[18, 5])
plt.subplot(1,2,1)
plt.xlabel('Average_Speed(km/h)')
plt.ylabel('#traj')
ax = traj_stats_new['Average_Speed(km/h)'].hist(bins=50)
ax.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
3.3.2. Drop trajectory by point-to-point speed
The first approach might be inefficient when the improbable speed is caused by a GPS calibration error. To keep as much information as possible, we propose a more sophisticated method to recover information from abnormal trajectories.
There are four cases in which an improbably fast trajectory can arise:
* The first point of the trajectory is far away from the rest of the trajectory (GPS calibrating, entering a building, etc.)
* The last point of the trajectory is far away from the rest of the trajectory
* One or more middle points of the trajectory are far away from the rest of the trajectory (GPS error)
* A mixture of the previous three cases
The first and second cases are easy to recover from by cutting the corresponding point, but we cannot easily decide which point(s) should be cut in the third and fourth cases, so we have decided to remove the trajectories in cases 3 and 4.
Compute point-to-point speed before filtering | speeds = []
if speed_filter == 1:
for tid in traj_stats_new['Trajectory_ID']:
photos = traj_new[traj_new['Trajectory_ID'] == tid]
if photos.shape[0] < 2: continue
for i in range(len(photos.index)-1):
idx1 = photos.index[i]
idx2 = photos.index[i+1]
dist = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \
photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])
seconds = (photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds()
if seconds == 0: continue
speed = dist * 60. * 60. / abs(seconds)
speeds.append(speed) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Histogram of point-to-point speed before filtering | #S = [100, 150, 200, 250, 500, 1000, 1236, 100000]
S = [100, 150, 200, 250]
if speed_filter == 1:
p2pspeeds = pd.Series(speeds)
plt.figure(figsize=[18,20])
for it in range(len(S)):
plt.subplot(4,2,it+1)
plt.xlabel('Point-to-Point Speed (km/h)')
plt.ylabel('#Point-pair')
plt.title('Speed < ' + str(S[it]) + ' km/h')
ax = p2pspeeds[p2pspeeds < S[it]].hist(bins=50)
ax.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Drop the first/last point in a trajectories for case1/case2, drop enter trajectories for case3 and case4 | if speed_filter == 1:
# raise an exception if assigning value to a copy (instead of the original data) of DataFrame
pd.set_option('mode.chained_assignment','raise')
traj_stats_new = traj_stats_new.copy()
traj_new = traj_new.copy()
indicator_traj = pd.Series(data=np.ones(traj_stats_new.shape[0], dtype=np.bool), index=traj_stats_new.index)
indicator_photo = pd.Series(data=np.ones(traj_new.shape[0], dtype=np.bool), index=traj_new.index)
cnt1 = 0
cnt2 = 0
cnt34 = 0
for i in traj_stats_new['Trajectory_ID'].index:
tid = traj_stats_new.loc[i, 'Trajectory_ID']
photos = traj_new[traj_new['Trajectory_ID'] == tid]
if photos.shape[0] <= 2:
if traj_stats_new.loc[i, 'Average_Speed(km/h)'] > maximum_speed: # drop the trajectory
indicator_traj.loc[i] = False
indicator_photo.loc[photos.index] = False
continue
# trajectory: 1-->2-->...-->3-->4, 2 and 3 could be the same
idx1 = photos.index[0]
idx2 = photos.index[1]
idx3 = photos.index[-2]
idx4 = photos.index[-1]
d12 = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \
photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])
d24 = traj_stats_new.loc[i, 'Travel_Distance(km)'] - d12
t12 = abs((photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds())
t24 = abs((photos.loc[idx2, 'Timestamp'] - photos.loc[idx4, 'Timestamp']).total_seconds())
# check case 1
if t12 == 0 or (d12 * 60. * 60. / t12) > maximum_speed: #photo1-->photo2, inf speed or large speed
if t24 == 0 or abs(d24) < 1e-3 or (d24 * 60. * 60. / t24) > maximum_speed: # drop the trajectory
indicator_traj.loc[i] = False
indicator_photo.loc[photos.index] = False
continue
else: # case 1, drop the first photo, update trajectory statistics
assert(d24 > 0.)
#traj_stats.ix[i]['Start_Time'] = photos.ix[idx2]['Timestamp'] # SettingWithCopyWarning
indicator_photo.loc[idx1] = False
traj_stats_new.loc[i, 'Start_Time'] = photos.loc[idx2, 'Timestamp']
traj_stats_new.loc[i, 'Travel_Distance(km)'] = d24
traj_stats_new.loc[i, 'Total_Time(min)'] = t24 / 60.
traj_stats_new.loc[i, 'Average_Speed(km/h)'] = d24 * 60. * 60. / t24
cnt1 += 1
continue
# check case 2
d34 = generate_tables.calc_dist(photos.loc[idx3, 'Longitude'], photos.loc[idx3, 'Latitude'], \
photos.loc[idx4, 'Longitude'], photos.loc[idx4, 'Latitude'])
d13 = traj_stats_new.loc[i, 'Travel_Distance(km)'] - d34
t34 = abs((photos.loc[idx3, 'Timestamp'] - photos.loc[idx4, 'Timestamp']).total_seconds())
t13 = abs((photos.loc[idx1, 'Timestamp'] - photos.loc[idx3, 'Timestamp']).total_seconds())
if t34 == 0 or (d34 * 60. * 60. / t34) > maximum_speed: #photo3-->photo4, inf speed or large speed
if t13 == 0 or abs(d13) < 1e-3 or (d13 * 60. * 60. / t13) > maximum_speed: # drop the trajectory
indicator_traj.loc[i] = False
indicator_photo.loc[photos.index] = False
continue
else: # case 2, drop the last photo, update trajectory statistics
assert(d13 > 0.)
#traj_stats.ix[i]['Travel_Distance(km)'] = d13 # SettingWithCopyWarning
indicator_photo.loc[idx4] = False
traj_stats_new.loc[i, 'Travel_Distance(km)'] = d13
traj_stats_new.loc[i, 'Total_Time(min)'] = t13 / 60.
traj_stats_new.loc[i, 'Average_Speed(km/h)'] = d13 * 60. * 60. / t13
cnt2 += 1
continue
# case 3 or 4, drop trajectory
if traj_stats_new.loc[i, 'Average_Speed(km/h)'] > maximum_speed:
indicator_traj.loc[i] = False
indicator_photo.loc[photos.index] = False
cnt34 += 1
print('Number of trajectories in case 1:', cnt1)
print('Number of trajectories in case 2:', cnt2)
print('Number of trajectories in case 3 & 4:', cnt34)
traj_new = traj_new[indicator_photo]
traj_stats_new = traj_stats_new[indicator_traj] | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Compute point-to-point speed after filtering | speeds_new = []
if speed_filter == 1:
for tid in traj_stats_new['Trajectory_ID']:
photos = traj_new[traj_new['Trajectory_ID'] == tid]
if photos.shape[0] < 2: continue
for i in range(len(photos.index)-1):
idx1 = photos.index[i]
idx2 = photos.index[i+1]
dist = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \
photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])
seconds = (photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds()
if seconds == 0: continue
speed = dist * 60. * 60. / abs(seconds)
speeds_new.append(speed) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Histogram of point-to-point speed after filtering | #S = [100, 150, 200, 250, 500, 1000, 1236, 100000]
S = [100, 150, 200, 250]
if speed_filter == 1:
p2pspeeds_new = pd.Series(speeds_new)
plt.figure(figsize=[18,20])
for it in range(len(S)):
plt.subplot(4,2,it+1)
plt.xlabel('Point-to-Point Speed(km/h)')
plt.ylabel('#Point-pair')
plt.title('Speed < ' + str(S[it]) + ' km/h')
ax = p2pspeeds_new[p2pspeeds_new < S[it]].hist(bins=50)
ax.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
4. Final Trajectory
In this section, we will show some basic statistics about our final trajectory data.
4.1. Basic Stats
More detailed analysis will be included in filckr_analysis.ipynb and the slides. Here we show simple stats from the final result. | num_photo = traj_new['Photo_ID'].shape[0]
num_user = traj_stats_new['User_ID'].unique().shape[0]
num_traj = traj_stats_new['Trajectory_ID'].shape[0]
print('Number of photos:', num_photo)
print('Number of users: ', num_user)
print('Number of trajectories:', num_traj)
print('Average number of photos per user:', num_photo / num_user)
print('Average number of trajectories per user:', num_traj / num_user)
traj_stats_new[['#Photo', 'Travel_Distance(km)', 'Total_Time(min)', 'Average_Speed(km/h)']].describe() | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Histograms of number of photos in trajectories, total time/distance and average speed of trajectories | plt.figure(figsize=[18, 10])
plt.subplot(2,2,1)
plt.xlabel('#Photo')
plt.ylabel('#Trajectory')
plt.title('Histogram of #Photo in trajectories after Filtering')
ax0 = traj_stats_new['#Photo'].hist(bins=50)
ax0.set_yscale('log')
plt.subplot(2,2,2)
plt.xlabel('Travel Distance (km)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Travel Distance of Trajectories after Filtering')
ax1 = traj_stats_new['Travel_Distance(km)'].hist(bins=50)
ax1.set_yscale('log')
plt.subplot(2,2,3)
plt.xlabel('Total Time (minute)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Total Time of Trajectories after Filtering')
ax2 = traj_stats_new['Total_Time(min)'].hist(bins=50)
ax2.set_yscale('log')
plt.subplot(2,2,4)
plt.xlabel('Average Speed (km/h)')
plt.ylabel('#Trajectory')
plt.title('Histogram of Average Speed of Trajectories after Filtering')
ax3 = traj_stats_new['Average_Speed(km/h)'].hist(bins=50)
ax3.set_yscale('log') | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Save final trajectories to the data folder | file1 = os.path.join(data_dir + table1)
file2 = os.path.join(data_dir + table2)
traj_new.to_csv(file1, index=False)
traj_stats_new.to_csv(file2, index=False) | src/trajectory_construction.ipynb | cdawei/flickr-photo | gpl-2.0 |
Class 12: Stochastic Time Series Processes
Simulating normal random variables with Numpy
The numpy.random module has a bunch of functions for generating random variables and evaluating probability and cumulative density functions for a wide variety of probability distributions. Learn more about the module here:
https://docs.scipy.org/doc/numpy/reference/routines.random.html
We're going to make use of the numpy.random.normal() function to create arrays of random draws from the normal distribution. The function takes three arguments:
* loc: the mean of the distribution (default=0)
* scale: the standard deviation of the distribution (default=1)
* size: how many numbers to draw (default=None)
Evidently the default is to draw numbers from the standard normal distribution. | # Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5) | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
Computers by definition cannot generate truly random numbers. The Mersenne Twister is a widely-used algorithm for generating pseudo-random numbers from a deterministic process. That is, while the numbers generated by the algorithm are not random in the literal sense, they exhibit distributional qualities that make them indistinguishable from truly random numbers.
A nice feature of pseudo random numbers is that they can be replicated by specifying the seed, or starting point, for the random number generating algorithm. | # Set the seed for the random number generator
np.random.seed(129)
# Create an array with 5 draws from the normal(0,1) distribution and print
np.random.normal(size=5) | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
Example
Draw 500 values each from the $\mathcal{N}(0,1)$ and $\mathcal{N}(0,2^2)$ distributions. Plot. | # Set the seed for the random number generator
np.random.seed(129)
# Create two arrays:
# x: 500 draws from the normal(0,1) distribution
# y: 500 draws from the normal(0,2) distribution
x = np.random.normal(loc=0,scale=1,size=500)
y = np.random.normal(loc=0,scale=2,size=500)
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.plot(y,lw=3,alpha = 0.6,label='$\sigma=2$')
plt.grid(linestyle=':')
plt.legend(ncol=2,loc='lower right') | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
The white noise process
In the previous example, we created two variables that stored draws from normal distributions with means of zero but with different standard deviations. Both of the variables were simulations of white noise processes. A white noise process is a random variable $\epsilon_t$ with constant mean and constant variance. We are concerned only with zero-mean white noise processes and we'll often denote that a variable is a zero-mean white noise process with the following shorthand notation:
\begin{align}
\epsilon_t & \sim \text{WN}(0,\sigma^2),
\end{align}
where $\sigma^2$ is the variance of the process. Strictly speaking, a white noise process can follow any distribution as long as the mean and variance are constant, but we'll concentrate exclusively on white noise processes drawn from the normal distribution.
The AR(1) process
A random variable $X_t$ is an autoregressive process of order 1, or AR(1) process, if it can be written in the following form:
\begin{align}
X_t &= (1-\rho)\mu + \rho X_{t-1} + \epsilon_t,
\end{align}
where $\rho$ and $\mu$ are constants and $\epsilon \sim \text{WN}(0,\sigma^2)$. The AR(1) process is the stochastic analog of the first-order difference equation.
Example
Simulate an AR(1) process for 51 periods using the following parameter values:
\begin{align}
\rho & = 0.5\
\mu & = 1 \
\sigma & = 1
\end{align} | # Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=0.5
sigma=1
x = np.zeros(T)
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':') | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
Example
Simulate an AR(1) process for 51 periods using the following parameter values:
\begin{align}
\rho & = 1.5\
\mu & = 1 \
\sigma & = 1
\end{align} | # Simulate an AR(1) process for 51 periods. Set the RNG seed to 129
np.random.seed(129)
T = 51
x0=0
mu=1
rho=1.5
sigma=1
import time
# Wait for 5 seconds
x = np.zeros(T)
x[:] = np.NAN
x[0] = x0
# draw random numbers for white noise process
eps= np.random.normal(loc=0,scale=sigma,size=T-1)
for t in range(T-1):
x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
# Plot
plt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')
plt.grid(linestyle=':') | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
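The plotting cells below call an ar1 helper that is presumably defined earlier in the notebook and not shown here; a minimal sketch consistent with the simulation loop above (same meaning of mu, rho, sigma, x0 and T) would be:

```python
def ar1(mu=0, rho=0, sigma=1, x0=0, T=51):
    """Simulate X_t = (1 - rho)*mu + rho*X_{t-1} + eps_t for T periods."""
    x = np.zeros(T)
    x[0] = x0
    eps = np.random.normal(loc=0, scale=sigma, size=T-1)
    for t in range(T-1):
        x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]
    return x
```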
Example
Construct a $2\times2$ grid of AR(1) processes simulated for 51 periods with $\sigma = 1$ and $\mu = 0$.
Use the following values for $\rho$:
* Top-left: $\rho=0$
* Top-right: $\rho=0.5$
* Lower-left: $\rho=0.9$
* Lower-right: $\rho=-0.5$
Be sure to use the same seed for each simulation so you can see how changing $\rho$ affects the output | fig = plt.figure(figsize=(12,8))
np.random.seed(129)
y = ar1(mu=0,rho=0,sigma=1,x0=0,T=51)
ax1 = fig.add_subplot(2,2,1)
ax1.plot(y,lw=3,alpha=0.7)
ax1.set_title('$X_t = \epsilon_t$')
ax1.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.5,sigma=1,x0=0,T=51)
ax2 = fig.add_subplot(2,2,2)
ax2.plot(y,lw=3,alpha=0.7)
ax2.set_title('$X_t = 0.5\cdot X_{t-1} + \epsilon_t$')
ax2.grid()
np.random.seed(129)
y = ar1(mu=0,rho=0.9,sigma=1,x0=0,T=51)
ax3 = fig.add_subplot(2,2,3)
ax3.plot(y,lw=3,alpha=0.7)
ax3.set_title('$X_t = 0.9\cdot X_{t-1} + \epsilon_t$')
ax3.grid()
np.random.seed(129)
y = ar1(mu=0,rho=-0.5,sigma=1,x0=0,T=51)
ax4 = fig.add_subplot(2,2,4)
ax4.plot(y,lw=3,alpha=0.7)
ax4.set_title('$X_t = -0.5\cdot X_{t-1} + \epsilon_t$')
ax4.grid() | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
The random walk process
The random walk process is an AR(1) process with $\rho=1$:
\begin{align}
X_t = X_{t-1} + \epsilon_t
\end{align}
The random walk process has an important place in finance since the evidence suggests that stock prices follow a random walk process.
Example
Simulate 7 random walk processes for 501 periods. Set $\sigma = 1$. Plot all 7 simulated processes on the same axes. | np.random.seed(129)
for i in range(7):
plt.plot(ar1(rho=1,T=501))
plt.grid()
plt.title('Seven random walk processes') | winter2017/econ129/python/Econ129_Class_12_Complete.ipynb | letsgoexploring/teaching | mit |
Make the catalog:
For all high-K stars, classify each as unimodal or not based on the TheJoker samples. Then do the same for the MCMC samples and combine (AND) the selections: | unimodal_thejoker = []
with h5py.File(samples_file, 'r') as f:
for star in tqdm.tqdm(high_K_stars):
samples = JokerSamples.from_hdf5(f[star.apogee_id])
data = star.apogeervdata()
unimodal_thejoker.append(unimodal_P(samples, data))
unimodal_thejoker = np.array(unimodal_thejoker)
unimodal_thejoker.sum()
unimodal_mcmc = []
converged_mcmc = []
with h5py.File(mcmc_samples_file, 'r') as f:
for star in tqdm.tqdm(high_K_stars):
if star.apogee_id not in f:
unimodal_mcmc.append(False)
converged_mcmc.append(True)
continue
R = f[star.apogee_id]['chain-stats/gelman_rubin'][:]
converged_mcmc.append(np.mean(R) <= 1.1)
samples = JokerSamples.from_hdf5(f[star.apogee_id])
data = star.apogeervdata()
unimodal_mcmc.append(unimodal_P(samples, data))
unimodal_mcmc = np.array(unimodal_mcmc)
converged_mcmc = np.array(converged_mcmc)
unimodal_mcmc.sum(), converged_mcmc.sum()
unimodal_mask = unimodal_thejoker | unimodal_mcmc
unimodal_converged_mask = unimodal_thejoker & (unimodal_mcmc & converged_mcmc)
unimodal_converged_idx = np.where(unimodal_converged_mask)[0]
unimodal_mask.sum(), unimodal_converged_mask.sum()
unimodal_stars = np.array(high_K_stars)[unimodal_mask]
unimodal_converged = converged_mcmc[unimodal_mask]
rows = dict()
rows['APOGEE_ID'] = []
for k in JokerSamples._valid_keys:
rows[k] = []
rows[k + '_err'] = []
rows['t0'] = []
rows['converged'] = []
rows['Gelman-Rubin'] = []
with h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as joker_f:
for i, star in tqdm.tqdm(enumerate(unimodal_stars)):
data = star.apogeervdata()
if star.apogee_id in mcmc_f: # and unimodal_converged[i]:
samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])
R = mcmc_f[star.apogee_id]['chain-stats/gelman_rubin'][:]
else:
samples = JokerSamples.from_hdf5(joker_f[star.apogee_id])
R = np.full(7, np.nan)
rows['APOGEE_ID'].append(star.apogee_id)
MAP = MAP_sample(data, samples, joker_pars)
for k in samples.keys():
rows[k].append(MAP[k])
# if unimodal_converged[i]:
# rows[k+'_err'].append(1.5 * median_absolute_deviation(samples[k]))
# else:
# rows[k+'_err'].append(np.nan * samples[k].unit)
rows[k+'_err'].append(1.5 * median_absolute_deviation(samples[k]))
rows['t0'].append(data.t0.tcb.mjd)
rows['converged'].append(unimodal_converged[i])
rows['Gelman-Rubin'].append(R)
for k in rows:
if hasattr(rows[k][0], 'unit'):
rows[k] = u.Quantity(rows[k])
rows['t0'] = Time(rows['t0'], format='mjd', scale='tcb')
tbl = Table(rows, masked=True) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Add Ness masses to table: | ness_tbl = Table.read('../../data/NessRG.fits')
ness_tbl.rename_column('2MASS', 'APOGEE_ID')
ness_tbl = ness_tbl[np.isin(ness_tbl['APOGEE_ID'], tbl['APOGEE_ID'])]
# trim the duplicates...
_, unq_idx = np.unique(ness_tbl['APOGEE_ID'], return_index=True)
ness_tbl = ness_tbl[unq_idx] | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Compute m2_min, a2sini, R1 using Ness mass | def stddev(vals):
return 1.5 * median_absolute_deviation(vals, ignore_nan=True)
rnd = np.random.RandomState(seed=42)
N = rnd.normal
tbl['M1'] = np.full(len(tbl), np.nan) * u.Msun
tbl['M1_err'] = np.full(len(tbl), np.nan) * u.Msun
tbl['M2_min'] = np.full(len(tbl), np.nan) * u.Msun
tbl['M2_min_err'] = np.full(len(tbl), np.nan) * u.Msun
tbl['q_min'] = np.full(len(tbl), np.nan)
tbl['q_min_err'] = np.full(len(tbl), np.nan)
tbl['R1'] = np.full(len(tbl), np.nan) * u.Rsun
tbl['R1_err'] = np.full(len(tbl), np.nan) * u.Rsun
tbl['a_sini'] = np.full(len(tbl), np.nan) * u.au
tbl['a_sini_err'] = np.full(len(tbl), np.nan) * u.au
tbl['a2_sini'] = np.full(len(tbl), np.nan) * u.au
tbl['a2_sini_err'] = np.full(len(tbl), np.nan) * u.au
n_samples = 8192
for i, row in tqdm.tqdm(enumerate(tbl)):
ness_row = ness_tbl[ness_tbl['APOGEE_ID'] == row['APOGEE_ID']]
if len(ness_row) == 0:
continue
star = AllStar.get_apogee_id(session, row['APOGEE_ID'])
m1_samples = np.exp(N(ness_row['lnM'], ness_row['e_logM'], size=n_samples)) * u.Msun
loggs = N(star.logg, star.logg_err, n_samples)
Ps = N(row['P'], row['P_err'], n_samples) * tbl['P'].unit
Ks = N(row['K'], row['K_err'], n_samples) * tbl['K'].unit
es = N(row['e'], row['e_err'], n_samples)
# else:
# Ps = ([row['P']] * n_samples) * tbl['P'].unit
# Ks = ([row['K']] * n_samples) * tbl['K'].unit
# es = np.array([row['e']] * n_samples)
mass_func = mf(P=Ps, K=Ks, e=es)
m2_mins = get_m2_min(m1_samples, mass_func)
asinis = asini(Ps, es, Ks, m1_samples, m2_mins)
a2sinis = a2sini(Ps, es, Ks, m1_samples, m2_mins)
R1s = stellar_radius(loggs, m1_samples).to(u.Rsun)
tbl['M1'][i] = np.median(m1_samples).to(u.Msun).value
tbl['M2_min'][i] = np.nanmedian(m2_mins).to(u.Msun).value
tbl['a_sini'][i] = np.nanmedian(asinis).to(u.au).value
tbl['a2_sini'][i] = np.nanmedian(a2sinis).to(u.au).value
tbl['R1'][i] = np.nanmedian(R1s).to(u.Rsun).value
tbl['M1_err'][i] = stddev(m1_samples).to(u.Msun).value
tbl['M2_min_err'][i] = stddev(m2_mins).to(u.Msun).value
tbl['a_sini_err'][i] = stddev(asinis).to(u.au).value
tbl['a2_sini_err'][i] = stddev(a2sinis).to(u.au).value
tbl['R1_err'][i] = stddev(R1s).to(u.Rsun).value
tbl['q_min'] = (u.Quantity(tbl['M2_min']) / u.Quantity(tbl['M1'])).decompose()
tbl['q_min_err'] = tbl['q_min'] * \
np.sqrt((tbl['M2_min_err']/tbl['M2_min'])**2 +
(tbl['M1_err']/tbl['M1'])**2)
mask_ = np.isnan(tbl['M1']) | np.isnan(tbl['M2_min'])
tbl['M1'].mask = mask_
tbl['M1_err'].mask = mask_
tbl['M2_min'].mask = mask_
tbl['M2_min_err'].mask = mask_ | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
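For reference, the mf helper used above evaluates the standard binary mass function; a standalone sketch of that textbook formula (not the project's own implementation) is:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G

def mass_function(P, K, e):
    """Binary mass function f(M) = P * K**3 * (1 - e**2)**1.5 / (2 * pi * G)."""
    return (P * K**3 * (1 - e**2)**1.5 / (2 * np.pi * G)).to(u.Msun)
```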
Add Ness columns following our columns: | tbl_with_ness = join(tbl, ness_tbl, keys='APOGEE_ID', join_type='outer')
assert len(tbl_with_ness) == len(tbl) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Now we load the APOGEE AllStar table to join the APOGEE data with our orbits: | allstar_tbl = fits.getdata('/Users/adrian/data/APOGEE_DR14/allStar-l31c.2.fits')
allstar_tbl = allstar_tbl[np.isin(allstar_tbl['APOGEE_ID'], tbl['APOGEE_ID'])]
# trim the duplicates...
_, unq_idx = np.unique(allstar_tbl['APOGEE_ID'], return_index=True)
allstar_tbl = allstar_tbl[unq_idx]
assert len(allstar_tbl) == len(tbl)
allstar_tbl = Table(allstar_tbl)
allstar_tbl.rename_column('K', 'KS')
allstar_tbl.rename_column('K_ERR', 'KS_ERR')
full_catalog = join(tbl_with_ness, allstar_tbl, keys='APOGEE_ID')
full_catalog[:1] | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Add binary flags "DR14RC" if in DR14 RC catalog, "TINGRC" if in Yuan-Sen's recent paper: | from astropy.io import ascii
rcdr14 = Table.read('/Users/adrian/data/APOGEE_DR14/apogee-rc-DR14.fits')
rcting = ascii.read('../../data/ting-2018.txt')
(rcting['Classification'] == 'RC_Pristine').sum()
full_catalog['DR14RC'] = np.isin(full_catalog['APOGEE_ID'], rcdr14['APOGEE_ID'])
full_catalog['TINGRC'] = np.isin(full_catalog['APOGEE_ID'], rcting[rcting['Classification'] == 'RC_Pristine']['Designation'])
# full_catalog['TINGRC'] = np.isin(full_catalog['APOGEE_ID'], rcting['Designation'])
len(full_catalog), full_catalog['DR14RC'].sum(), full_catalog['TINGRC'].sum()
full_catalog['M1'][full_catalog['M1'].mask] = np.nan
full_catalog['M2_min'][full_catalog['M2_min'].mask] = np.nan
for name in full_catalog.colnames[:30]:
c1 = '\\texttt{{{0}}}'.format(name.replace('_', '\\_'))
try:
c2 = '{0:latex_inline}'.format(full_catalog[name].unit)
except TypeError:
c2 = ''
except AttributeError:
c2 = ''
if len(c1) < 26:
c1 = c1 + ' '*(26 - len(c1))
if len(c2) < 24:
c2 = c2 + ' '*(24 - len(c2))
print('{0} & {1} & <description> \\\\'.format(c1, c2)) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
TODO: describe in README with data to use QTable.read('', astropy_native=True)
By-eye vetting:
Plot all of the stars and note which orbits look like bad (2) or questionable (1) fits: | # _path = '../../plots/unimodal/'
# os.makedirs(_path, exist_ok=True)
# units = dict()
# for c in full_catalog.colnames:
# if full_catalog[c].unit is not None:
# units[c] = full_catalog[c].unit
# else:
# units[c] = 1.
# for row in full_catalog:
# apogee_id = row['APOGEE_ID']
# star = AllStar.get_apogee_id(session, apogee_id)
# data = star.apogeervdata()
# row = row[JokerSamples._valid_keys]
# sample = JokerSamples(**{c: row[c]*units[c] for c in row.colnames})
# sample.t0 = data.t0
# fig, axes = plt.subplots(1, 2, figsize=(12, 5), sharey=True)
# plot_data_orbits(data, sample[None], highlight_P_extrema=False,
# ax=axes[0], plot_kwargs=dict(alpha=1., linewidth=1.))
# plot_phase_fold(data, sample, ax=axes[1], label=False)
# axes[1].set_xlabel('phase')
# axes[0].set_title(apogee_id)
# fig.tight_layout()
# fig.savefig(path.join(_path, '{0}.png'.format(apogee_id)), dpi=200)
# plt.close(fig)
# unimodal:
check = np.array([
'2M05224382+4300425',
'2M08505498+1156503',
'2M08510723+1153019',
'2M08512530+1202563',
'2M09522871+3811487',
'2M10264342+1340172',
'2M10513288-0250550',
'2M13011859+2844170',
'2M13162279+1739074',
'2M13175687+7151180',
'2M13484871+1913474',
'2M14574438+2106271',
'2M15054553+2220325',
'2M15101168+6708289',
'2M16342938-1248117',
'2M18012240-0920302',
'2M18343302+1949166',
'2M18481414-0251133',
'2M17223366+4850318',
'2M15184139+0206004',
'2M21260907+1100178',
'2M17105698+4301117'
])
# Suspect:
# SUSPECT_BROAD_LINES, or SUSPECT_RV_COMBINATIONS
suspect = full_catalog['APOGEE_ID'][(full_catalog['STARFLAG'] & np.sum(2**np.array([16]))) != 0]
check = check[~np.isin(check, suspect)]
print(len(suspect), len(check))
clean_flag = np.zeros(len(full_catalog), dtype=int)
clean_flag[np.isin(full_catalog['APOGEE_ID'], check)] = 1
clean_flag[np.isin(full_catalog['APOGEE_ID'], suspect)] = 2
full_catalog['clean_flag'] = clean_flag
(full_catalog['clean_flag'] == 0).sum()
full_catalog.write(path.join(table_path, 'highK-unimodal.fits'), overwrite=True)
test = QTable.read(path.join(table_path, 'highK-unimodal.fits'),
astropy_native=True, character_as_bytes=False) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Make paper figure: | full_catalog = Table.read(path.join(table_path, 'highK-unimodal.fits'))
arr = np.array(full_catalog[full_catalog['converged'] & np.isfinite(full_catalog['Gelman-Rubin'][:, 0])]['APOGEE_ID'],
dtype='U20')
np.random.seed(42)
rc = {
'axes.labelsize': 18,
'xtick.labelsize': 14,
'ytick.labelsize': 14
}
subset = full_catalog[full_catalog['converged'] & np.isfinite(full_catalog['Gelman-Rubin'][:, 0])]
rand_subset = np.random.choice(len(subset), size=8, replace=False)
rand_subset = rand_subset[np.argsort(subset['e'][rand_subset])]
with h5py.File(samples_file, 'r') as jok_f, h5py.File(mcmc_samples_file, 'r') as mcmc_f:
with mpl.rc_context(rc):
fig, axes = plt.subplots(4, 2, figsize=(8, 10), sharex=True)
for i, idx in enumerate(rand_subset):
ax = axes.flat[i]
apogee_id = subset[idx]['APOGEE_ID']
star = AllStar.get_apogee_id(session, apogee_id)
data = star.apogeervdata()
if apogee_id in mcmc_f:
f = mcmc_f
print('mcmc')
else:
f = jok_f
print('thejoker')
samples = JokerSamples.from_hdf5(f[star.apogee_id])
samples.t0 = data.t0
if len(samples) > 1:
sample = MAP_sample(data, samples, joker_pars)
else:
sample = samples[0]
fig = plot_phase_fold(data, sample, ax=ax,
jitter_errorbar=True, label=False)
xlim = ax.get_xlim()
ylim = (data.rv.value.min(), data.rv.value.max())
yspan = ylim[1]-ylim[0]
ylim = ax.set_ylim(ylim[0]-0.35*yspan, ylim[1]+0.35*yspan)
text = ('{0}, '.format(star.apogee_id) +
'$P = {0.value:.2f}$ {0.unit:latex}, '.format(sample['P']) +
'$e = {0:.2f}$'.format(sample['e']))
ax.text(xlim[0] + (xlim[1]-xlim[0])/15,
ylim[1] - (ylim[1]-ylim[0])/20,
text, fontsize=10, va='top', ha='left')
# _ = plot_two_panel(data, samples)
ax.set_xlim(-0.02, 1.02)
for i in [0,1]:
axes[-1, i].set_xlabel(r'phase, $\frac{M-M_0}{2\pi}$')
for i in range(4):
axes[i, 0].set_ylabel(_RV_LBL.format(u.km/u.s))
fig.suptitle('High-$K$, unimodal',
x=0.55, y=0.96, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.92)
fig.savefig(path.join(plot_path, 'highK-unimodal.pdf')) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
For my own sake, make the same for unconverged stars: | np.random.seed(123)
rc = {
'axes.labelsize': 18,
'xtick.labelsize': 14,
'ytick.labelsize': 14
}
subset = full_catalog[np.logical_not(full_catalog['converged'])]
rand_subset = np.random.choice(len(subset), size=8, replace=False)
rand_subset = rand_subset[np.argsort(subset['e'][rand_subset])]
with h5py.File(samples_file, 'r') as jok_f, h5py.File(mcmc_samples_file, 'r') as mcmc_f:
with mpl.rc_context(rc):
fig, axes = plt.subplots(4, 2, figsize=(8, 10), sharex=True)
for i, idx in enumerate(rand_subset):
ax = axes.flat[i]
star = AllStar.get_apogee_id(session, subset[idx]['APOGEE_ID'])
data = star.apogeervdata()
if apogee_id in mcmc_f:
f = mcmc_f
print('mcmc')
else:
f = jok_f
print('thejoker')
samples = JokerSamples.from_hdf5(jok_f[star.apogee_id])
samples.t0 = data.t0
if len(samples) > 1:
sample = MAP_sample(data, samples, joker_pars)
else:
sample = samples[0]
fig = plot_phase_fold(data, sample, ax=ax,
jitter_errorbar=True, label=False)
xlim = ax.get_xlim()
ylim = (data.rv.value.min(), data.rv.value.max())
yspan = ylim[1]-ylim[0]
ylim = ax.set_ylim(ylim[0]-0.35*yspan, ylim[1]+0.35*yspan)
text = ('{0}, '.format(star.apogee_id) +
'$P = {0.value:.2f}$ {0.unit:latex}, '.format(sample['P']) +
'$e = {0:.2f}$'.format(sample['e']))
ax.text(xlim[0] + (xlim[1]-xlim[0])/15,
ylim[1] - (ylim[1]-ylim[0])/20,
text, fontsize=10, va='top', ha='left')
# _ = plot_two_panel(data, samples)
ax.set_xlim(-0.02, 1.02)
for i in [0,1]:
axes[-1, i].set_xlabel(r'phase, $\frac{M-M_0}{2\pi}$')
for i in range(4):
axes[i, 0].set_ylabel(_RV_LBL.format(u.km/u.s))
fig.suptitle('Example stars from the high-$K$, unimodal sample',
x=0.55, y=0.96, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.92) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Bulk properties | full_catalog['converged'].sum(), len(full_catalog)-full_catalog['converged'].sum()
# plt.hist(full_catalog['e'][~full_catalog['converged']], bins='auto');
plt.hist(full_catalog['e'], bins='auto'); | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
emcee_converged = full_catalog[full_catalog['emcee_converged']]
_path = '../../plots/emcee_converged'
os.makedirs(_path, exist_ok=True)
with h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as f:
for row in emcee_converged:
star = AllStar.get_apogee_id(session, row['APOGEE_ID'])
data = star.apogeervdata()
if star.apogee_id in mcmc_f:
samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])
print('mcmc')
else:
samples = JokerSamples.from_hdf5(f[star.apogee_id])
print('thejoker')
samples.t0 = data.t0
fig = plot_two_panel(data, samples,
plot_data_orbits_kw=dict(n_times=16384,
highlight_P_extrema=False))
fig.axes[0].set_title(star.apogee_id)
fig.tight_layout()
fig.savefig(path.join(_path, '{0}.png'.format(star.apogee_id)), dpi=200)
plt.close(fig) | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
|
By-eye vetting: these ones are suspicious | suspicious_ids = ['2M05224382+4300425',
'2M08505498+1156503',
'2M10264342+1340172',
'2M10513288-0250550',
'2M14574438+2106271',
'2M16131259+5043080',
'2M17121495+3211467',
'2M17212080+6003296',
'2M18571262-0328064',
'2M21260907+1100178',
'2M21374395+4304268']
derp = emcee_converged[~np.isin(emcee_converged['APOGEE_ID'], suspicious_ids)]
derp = full_catalog
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.errorbar(derp['P'], derp['LOGG'],
xerr=derp['P_err'], yerr=derp['LOGG_ERR'],
marker='o', linestyle='none', alpha=0.8)
ax.set_xscale('log')
ax.set_xlim(0.8, 2000)
ax.set_ylim(4., 0)
ax.set_xlabel('P')
ax.set_ylabel('logg')
# -----
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.errorbar(derp['P'], derp['e'],
xerr=derp['P_err'], yerr=derp['e_err'],
marker='o', linestyle='none', alpha=0.8)
ax.set_xscale('log')
ax.set_xlim(0.8, 2000)
ax.set_ylim(0, 1)
ax.set_xlabel('P')
ax.set_ylabel('e')
# -----
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
ax = axes[0]
ax.errorbar(derp['M1'], derp['M2_min']/derp['M1'],
xerr=derp['M1_err'], yerr=np.sqrt(derp['M1_err']**2+derp['M2_min_err']**2),
marker='o', linestyle='none', alpha=0.8)
ax.set_xlabel('M1')
ax.set_ylabel('M2/M1')
ax = axes[1]
mass_ratio = derp['M2_min']/derp['M1']
ax.hist(mass_ratio[np.isfinite(mass_ratio)], bins='auto')
ax.set_xlabel('M2/M1')
with h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as f:
for row in derp[rc_mask & (derp['P'] < 20)]:
star = AllStar.get_apogee_id(session, row['APOGEE_ID'])
data = star.apogeervdata()
if star.apogee_id in mcmc_f:
samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])
print('mcmc')
else:
samples = JokerSamples.from_hdf5(f[star.apogee_id])
print('thejoker')
samples.t0 = data.t0
fig = plot_two_panel(data, samples,
plot_data_orbits_kw=dict(n_times=16384,
highlight_P_extrema=False))
fig.axes[0].set_title('P = {0:.2f}'.format(samples['P'][0]))
fig.tight_layout()
derp[rc_mask & (derp['P'] < 20)] | notebooks/figures/HighK-unimodal.ipynb | adrn/TwoFace | mit |
Be aware of version compatibility. This notebook uses functions from the TensorFlow package, version 1.3.0 and higher.
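As a quick sanity check (this snippet is an optional addition, not part of the original notebook, and assumes TensorFlow is already installed), the installed version can be verified before running the cells below:
import tensorflow as tf
from distutils.version import LooseVersion
# Abort early if the installed TensorFlow is older than the version this notebook targets.
assert LooseVersion(tf.__version__) >= LooseVersion('1.3.0'), 'TensorFlow 1.3.0 or newer is required'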
Imports | # Some important imports
import math
import tensorflow as tf  # used later for tf.image / tf.contrib.image operations
import numpy as np
import colorsys
import matplotlib.pyplot as plt
%matplotlib inline
import random
import pickle | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Load data | # If your files are named differently or placed in a different folder, please update lines below.
training_file ="./raw_data/train.p"
validation_file = "./raw_data/valid.p"
testing_file = "./raw_data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Make sure that the number of features equals the number of labels
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test)) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Basic Summary | # Number of training examples
n_train = X_train.shape[0]
# Number of training labels
n_train_labels = y_train.shape[0]
# Number of validation examples
n_validation = X_valid.shape[0]
# Number of validation labels
n_validation_labels = y_valid.shape[0]
# Number of testing examples
n_test = X_test.shape[0]
# Number of test labels
n_test_labels = y_test.shape[0]
# The shape of an traffic sign image
train_image_shape = [X_train.shape[1], X_train.shape[2], X_train.shape[3]]
valid_image_shape = [X_valid.shape[1], X_valid.shape[2], X_valid.shape[3]]
test_image_shape = [X_test.shape[1], X_test.shape[2], X_test.shape[3]]
# Number of unique classes/labels in the dataset.
n_classes = len(set(train['labels']))
print("Number of training examples =", n_train)
print("Number of training labels =", n_train_lables)
print()
print("Number of validation examples =", n_validation)
print("Number of validation labels =", n_validation)
print()
print("Number of testing examples =", n_test)
print("Number of testing labels =", n_test)
print()
print("Training image data shape =", train_image_shape)
print("Validation image data shape =", valid_image_shape)
print("Test image data shape =", test_image_shape)
print()
print("Number of classes =", n_classes) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Some exploratory visualizations | n_pics_row = 5
n_pic_col = 10
plots = []
for i in range(n_pics_row):
for j in range(n_pic_col):
ax = plt.subplot2grid((n_pics_row,n_pic_col), (i,j))
        ax.imshow(X_train[random.randint(0, n_train - 1)][:][:][:], cmap='gray')  # randint is inclusive, so cap at n_train - 1
ax.set_xticks([])
ax.set_yticks([])
plt.show()
# Frequencies of training data per class
plt.hist(y_train, bins = np.arange(n_classes + 1)) # one bin per class; arguments are passed to np.histogram
plt.title("Frequencies of classes in training set")
plt.show()
# Frequencies of validation data per class
plt.hist(y_valid, bins = np.arange(n_classes + 1)) # one bin per class; arguments are passed to np.histogram
plt.title("Frequencies of classes in validation set")
plt.show()
# Frequencies of test data per class
plt.hist(y_test, bins = np.arange(n_classes + 1)) # one bin per class; arguments are passed to np.histogram
plt.title("Frequencies of classes in testing set")
plt.show() | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Note: in terms of frequencies, it can be confirmed that the dataset was divided correctly. Training, validation and testing data have similar histograms of class frequencies.
Normalize all data | def normalize_img(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9],
:param image_data: The image data to be normalized,
:return: Normalized image data.
"""
a = 0.1
b = 0.9
scale_min = 0
scale_max = 255
return a + (((image_data - scale_min)*(b - a))/(scale_max - scale_min))
X_train_norm = normalize_img(X_train)
X_valid_norm = normalize_img(X_valid)
X_test_norm = normalize_img(X_test) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
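A quick check (an optional addition, assuming 8-bit images with pixel values in [0, 255]) confirms that the scaled features fall inside the intended [0.1, 0.9] range:
# Both bounds should lie within [0.1, 0.9] after Min-Max scaling.
print('train range:', X_train_norm.min(), X_train_norm.max())
print('valid range:', X_valid_norm.min(), X_valid_norm.max())
print('test range: ', X_test_norm.min(), X_test_norm.max())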
Transform normalized RGB image to grayscale | tf.reset_default_graph()
X_train2gray = tf.image.rgb_to_grayscale(X_train_norm)
with tf.Session() as sess:
X_train_gray = sess.run(X_train2gray) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Create rotated images from normalized original data
At this point, the training data will be extended with rotated images (-15, +15 deg). | tf.reset_default_graph()
X_train_rotated_ccw = tf.contrib.image.rotate(X_train_norm, 15 * math.pi / 180, interpolation='BILINEAR')
X_train_rotated_cw = tf.contrib.image.rotate(X_train_norm, -15 * math.pi / 180, interpolation='BILINEAR')
with tf.Session() as sess:
rotated_images_ccw = sess.run(X_train_rotated_ccw)
rotated_images_cw = sess.run(X_train_rotated_cw)
tf.reset_default_graph()
rotated_ccw2gray = tf.image.rgb_to_grayscale(rotated_images_ccw) # Ready to export
rotated_cw2gray = tf.image.rgb_to_grayscale(rotated_images_cw) # Ready to export
with tf.Session() as sess:
rotated_images_ccw_gray = sess.run(rotated_ccw2gray)
rotated_images_cw_gray = sess.run(rotated_cw2gray)
# Copy labels for rotated images
rotated_ccw_labels = y_train
rotated_cw_labels = y_train | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Modify brightness randomly
Make a copy of the training data and randomly modify the brightness of each image. | # Time-consuming task! The function is sequential. TODO: optimize it.
def random_brightness(image):
"""
    Modify image brightness with the following formula: brightness = 0.2 + np.random.uniform(),
:param image: The image data to be processed,
:return: Modified image data
"""
    result = np.copy(image)  # work on a copy so the input array is not modified in place
    for i in range(image.shape[0]):
        one_image = result[i]
brightness = 0.2 + np.random.uniform()
for x in range(one_image.shape[0]):
for y in range(one_image.shape[1]):
h, s, v = colorsys.rgb_to_hsv(one_image[x][y][0], one_image[x][y][1], one_image[x][y][2])
v = v * brightness
one_image[x][y][0], one_image[x][y][1], one_image[x][y][2] = colorsys.hsv_to_rgb(h, s, v)
result[i][:][:][:] = one_image[:][:][:]
return result
## Create a copy of the original dataset and modify images' brightness
X_train_bright = random_brightness(X_train_norm)
y_train_bright = y_train | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
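The per-pixel loop above is easy to read but slow. A vectorized sketch (an illustrative assumption using matplotlib's colour-space helpers, not code from the original notebook) could look like this:
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
def random_brightness_vectorized(images):
    # images: float array of shape (N, H, W, 3) with values in [0, 1]
    hsv = rgb_to_hsv(images)
    factors = 0.2 + np.random.uniform(size=(images.shape[0], 1, 1))
    hsv[..., 2] = hsv[..., 2] * factors  # scale the V (brightness) channel per image
    # note: like the loop above, this can push V slightly above 1
    return hsv_to_rgb(hsv)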
Convert processed images to grayscale. | tf.reset_default_graph()
X_train_bright2gray = tf.image.rgb_to_grayscale(X_train_bright)
with tf.Session() as sess:
X_train_bright_gray = sess.run(X_train_bright2gray) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Add random noise | # Time consuming task! Function is sequential. TODO: optimize it.
def random_noise(image):
    result = np.copy(image)  # work on a copy so the input array is not modified in place
    for i in range(image.shape[0]):
        one_image = result[i]
for x in range(one_image.shape[0]):
for y in range(one_image.shape[1]):
brightness = np.random.uniform(low=0.0, high=0.3) # be careful with upper limit -> impact validation
h, s, v = colorsys.rgb_to_hsv(one_image[x][y][0], one_image[x][y][1], one_image[x][y][2])
v = v * brightness
one_image[x][y][0], one_image[x][y][1], one_image[x][y][2] = colorsys.hsv_to_rgb(h, s, v)
result[i][:][:][:] = one_image[:][:][:]
return result
X_train_noise = random_noise(X_train_norm)
y_train_noise = y_train
tf.reset_default_graph()
X_train_noise2gray = tf.image.rgb_to_grayscale(X_train_noise)
with tf.Session() as sess:
X_train_noise_gray = sess.run(X_train_noise2gray) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Concatenate all training data together | X_train_ready = X_train_gray
y_train_ready = y_train
X_train_ready = np.append(X_train_ready, rotated_images_ccw_gray, axis=0)
y_train_ready = np.append(y_train_ready, rotated_ccw_labels, axis=0)
X_train_ready = np.append(X_train_ready, rotated_images_cw_gray, axis=0)
y_train_ready = np.append(y_train_ready, rotated_cw_labels, axis=0)
X_train_ready = np.append(X_train_ready, X_train_bright_gray, axis=0)
y_train_ready = np.append(y_train_ready, y_train_bright, axis=0)
X_train_ready = np.append(X_train_ready, X_train_noise_gray, axis=0)
y_train_ready = np.append(y_train_ready, y_train_noise, axis=0) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Convert to grayscale validation and test data | tf.reset_default_graph()
X_valid_gray = tf.image.rgb_to_grayscale(X_valid_norm) # Ready to export
X_test_gray = tf.image.rgb_to_grayscale(X_test_norm) # Ready to export
with tf.Session() as sess:
X_valid_ready = sess.run(X_valid_gray)
X_test_ready = sess.run(X_test_gray)
# Propagate their labels
y_valid_ready = y_valid
y_test_ready = y_test
print("Training dataset shape: ", X_train_ready.shape)
print("Validation dataset shape: ", X_valid_ready.shape)
print("Test dataset shape: ", X_test_ready.shape)
# Make sure that the number of features equals the number of labels
assert(len(X_train_ready) == len(y_train_ready))
assert(len(X_valid_ready) == len(y_valid_ready))
assert(len(X_test_ready) == len(y_test_ready))
with open('./train_data/aug_train_features_ready2.pickle', 'wb') as output:
pickle.dump(X_train_ready, output)
with open('./train_data/aug_train_labels_ready2.pickle', 'wb') as output:
pickle.dump(y_train_ready, output)
with open('./train_data/aug_valid_features_ready2.pickle', 'wb') as output:
pickle.dump(X_valid_ready, output)
with open('./train_data/aug_valid_labels_ready2.pickle', 'wb') as output:
pickle.dump(y_valid_ready, output)
with open('./train_data/aug_test_features_ready2.pickle', 'wb') as output:
pickle.dump(X_test_ready, output)
with open('./train_data/aug_test_labels_ready2.pickle', 'wb') as output:
pickle.dump(y_test_ready, output) | datasetProcessor.ipynb | antonpavlov/traffic-sign-recognition | mit |
Set up problem | # options: Bananas, GermanCredit, Brownian
problem_name = 'Bananas'
if (problem_name == 'Bananas'):
target = gym.targets.VectorModel(gym.targets.Banana(),
flatten_sample_transformations=True)
num_dimensions = target.event_shape[0]
init_step_size = 1.
if (problem_name == 'GermanCredit'):
# This problem seems to require that we load TF datasets first.
import tensorflow_datasets
target = gym.targets.VectorModel(gym.targets.GermanCreditNumericSparseLogisticRegression(),
flatten_sample_transformations=True)
num_dimensions = target.event_shape[0]
init_step_size = 0.02
if (problem_name == 'Brownian'):
target = gym.targets.BrownianMotionMissingMiddleObservations()
target = gym.targets.VectorModel(target,
flatten_sample_transformations = True)
num_dimensions = target.event_shape[0]
init_step_size = 0.01
def target_log_prob_fn(x):
"""Unnormalized, unconstrained target density.
This is a thin wrapper that applies the default bijectors so that we can
ignore any constraints.
"""
y = target.default_event_space_bijector(x)
fldj = target.default_event_space_bijector.forward_log_det_jacobian(x)
return target.unnormalized_log_prob(y) + fldj
# NOTE: use a large factor to get overdispered initializations.
# NOTE: don't set offset to 0 when the target mean is 0.
# CHECK: what scale should we use? Poor inits can make the problem much more
# difficult.
# NOTE: we probably want inits that allow us to get decent estimates
# in the long regime
# if (problem_name == 'Bananas'):
if (problem_name == 'Bananas'):
offset = 2
def initialize (shape, key = random.PRNGKey(37272709)):
return 3 * random.normal(key, shape + (num_dimensions,)) + offset
if (problem_name == 'GermanCredit'):
offset = 0.1
def initialize (shape, key = random.PRNGKey(37272709)):
return 0.5 * random.normal(key, shape + (num_dimensions,)) + offset
# offset = 0.5
# def initialize (shape, key = random.PRNGKey(37272709)):
# return 0.01 * random.normal(key, shape + (num_dimensions,)) + offset
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Run MCMC
We consider two regimes: the "long" regime, in which a few chains are run for many warmup and sampling iterations, and the "short" regime, in which many chains are run for a few warmup and sampling iterations. Note that in the short regime we are willing to not warm up our chains (i.e., adapt the step size, trajectory length, and mass matrix) as thoroughly as in the long regime, the hope being that the variance decreases enough because we are running many chains. | # Transition kernel for long regime
num_chains_long = 4
if (problem_name == 'GermanCredit'):
num_warmup_long, num_sampling_long = 500, 1000
if (problem_name == 'Bananas'):
num_warmup_long, num_sampling_long = 200, 1000
total_samples_long = num_warmup_long + num_sampling_long
# CHECK: is this the transition kernel we want to use?
# REMARK: the step size is picked based on the model we're fitting
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)
kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_long, num_warmup_long, target_accept_prob = 0.75,
reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)
# Follow the inference gym tutorial
# NOTE: transition kernel below is untested.
if (problem_name == 'Brownian'):
kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
# Adapt step size.
kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_long, num_warmup_long, # int(num_samples // 2 * 0.8),
target_accept_prob = 0.9)
# Adapt trajectory length.
kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(
kernel_long,
num_adaptation_steps = num_warmup_long) # int(num_steps // 2 * 0.8))
# TODO: work out what an appropriate transition kernel for this problem would be.
# if (problem_name == 'GermanCredit'):
# kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
# kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)
# kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
# kernel_long, num_warmup_long, target_accept_prob = 0.75,
# reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)
initial_state = initialize((num_chains_long,))
# initial_state = initialize((num_chains_long,))
result_long = tfp.mcmc.sample_chain(
total_samples_long, initial_state, kernel = kernel_long, seed = random.PRNGKey(1954))
# Transition kernel for short regime
# CHECK: how many warmup iterations should we use here?
# Suggested options: 512, 1024, 2048, 2500
num_chains_short = 512
num_super_chains = 4
if (problem_name == 'GermanCredit'):
num_warmup_short, num_sampling_short = 1000, 1000
if (problem_name == 'Bananas'):
num_warmup_short, num_sampling_short = 100, 1000 # 100, 1000
total_samples_short = num_warmup_short + num_sampling_short
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_short = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_short = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_short, num_warmup_short)
kernel_short = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_short, num_warmup_short, target_accept_prob = 0.75, #0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
different_location = False
if (different_location):
# initialize each chain at a different location
initial_state = initialize((num_chains_short,))
else:
# Chains within a super chain are all initialized at the same location
# Here we use the same initial points as in the long regime.
initial_state = initial_state # initialize((num_super_chains,))
initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,
axis = 0)
result_short = tfp.mcmc.sample_chain(
total_samples_short, initial_state, kernel = kernel_short,
seed = random.PRNGKey(1954))
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Analyze results
Squared error for Monte Carlo estimate of the mean and variance | # Get some estimates of the mean and variance.
try:
mean_est = target.sample_transformations['identity'].ground_truth_mean
except:
print('no ground truth mean')
mean_est = (result.all_states[num_warmup:, :]).mean(0).mean(0)
try:
var_est = target.sample_transformations['identity'].ground_truth_standard_deviation**2
except:
print('no ground truth std dev')
var_est = ((result.all_states[num_warmup:, :]**2).mean(0).mean(0) -
mean_est**2)
jnp.linalg.norm(var_est[0] / 100) | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
As a first step, plot the squared error based on a Monte Carlo estimator that discards the first half of the samples and does not discriminate between warmup and sampling iterations. We also plot the target precision with the "true" variance -- when available, for instance via the inference gym -- divided by 100. This is the precision we expect our Monte Carlo estimates to reach with an effective sample size of 100. |
# Map MCMC samples from the unconstrained space to the original space
# CHECK: does this mess up the banana example?
result_state_long = target.default_event_space_bijector(result_long.all_states)
result_state_short = target.default_event_space_bijector(result_short.all_states)
def mc_est(x, axis = 0):
"""Computes the running sample mean based on sampling iterations, with
warmup iterations discarded.
By default, we focus on the first parameter.
"""
# NOTE: why discard half of the samples?
cum_x = np.cumsum(x, axis)
return ((cum_x[1::2] - cum_x[:cum_x.shape[0]//2]) /
np.arange(1, cum_x.shape[0] // 2 + 1).reshape([-1] + [1] * (len(cum_x.shape) - 1)))
long_error = mc_est(result_state_long.mean(1) - mean_est)
short_error = mc_est(result_state_short.mean(1) - mean_est)
true_var_available = True
if (true_var_available):
target_precision = jnp.linalg.norm(var_est[0] / 100)
else:
target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')
semilogy(jnp.linalg.norm(short_error, axis = -1), label = '512 chains')
hlines(target_precision, 0, total_samples_long / 2,
linestyles = '--',
label = 'Target: Var / 100')
ylabel("Squared error for Mean estimate")
xlabel("Iterations (excluding warmup)")
legend(loc = 'best')
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
I don't think the variance of the variance is stored in the inference gym, although it's probably possible to access this information using the error in the variance estimate. For now, we'll use the final result reported by the long chain as the target precision. | long_var_error = mc_est(result_state_long.var(1)) - var_est
short_var_error = mc_est(result_state_short.var(1)) - var_est
long_var_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')
semilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')
hlines(long_var_estimate, 0, total_samples_long / 2,
linestyles = '--',
label = 'long var estimate')
ylabel("Squared error for Variance estimate")
legend(loc = 'best')
show()
# NOTE: why are the estimates in the long regime so poor?? | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Repeat the above, using a Monte Carlo estimator based on sampling iterations, with only the warmup samples discarded. | if (False):
print(result_state_long[num_warmup_long:, :, :].mean(0)[0][0])
print(result_state_short[num_warmup_short:, :, :].mean(0)[0][0])
print(mean_est[0])
print(long_error[len(long_error) - 1][0])
print(short_error[len(short_error) - 1][0])
result_state_long[num_warmup_long:, :, :].mean(1).shape
def mc_est_warm(x, axis = 0):
""" compute running average without discarding half of the samples."""
return np.cumsum(x, axis) / np.arange(1, x.shape[0] + 1).reshape([-1] + [1] * (len(x.shape) - 1))
discard_warmup = True
if (discard_warmup):
long_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].mean(1)) - mean_est
short_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].mean(1)) - mean_est
else:
long_error = result_state_long[num_warmup_long:, :, :].mean(1) - mean_est
short_error = result_state_short[num_warmup_short:, :, :].mean(1) - mean_est
true_var_available = True
if (true_var_available):
target_precision = jnp.linalg.norm(var_est[0] / 100)
else:
target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')
semilogy(jnp.linalg.norm(short_error, axis = -1), label = '512 chains')
hlines(target_precision, 0, num_sampling_long,
linestyles = '--',
label = 'target: var / 100')
ylabel("Squared error for Mean estimate")
xlabel("Sampling iterations (i.e. warmup excluded)")
legend(loc = 'best')
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Remark: if after one iteration we are below the target precision, than we're probably running a warmup which is too long and / or running too many chains. | long_var_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].var(1)) - var_est
short_var_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].var(1)) - var_est
long_var_mc_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')
semilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')
hlines(long_var_mc_estimate, 0, num_sampling_long,
linestyles = '--',
label = 'long MC estimate')
ylabel("Squared error for Variance estimate")
legend(loc = 'best')
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Staring at the plot above it's clear that the short regime reaches a reasonable precision in fewer iterations than the long regime, even though the long regime warms up chains for many more iterations. The dotted line represent the Monte Carlo estimate using all the samples from the long regime. We'll use this as our target precision. | if (False):
print(long_mc_estimate)
print(jnp.linalg.norm(short_error, axis = -1)[0:10])
print(long_var_estimate)
print(jnp.linalg.norm(short_var_error, axis = -1)[0:10])
# Identify the number of iterations after which the short regime matches
# the precision of the long regime.
# TODO: find a better criterion
item_index = np.where(jnp.linalg.norm(short_error, axis = -1) <= target_precision)
target_iter_mean = item_index[0][0]
print("Reasonable precision for mean reached in", target_iter_mean + 1, "iteration(s).")
item_index = np.where(jnp.linalg.norm(short_var_error, axis = -1) <= long_var_estimate)
target_iter_var = item_index[0][0]
print("Reasonable precision for variance reached in", target_iter_var + 1, "iteration(s).") | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Check for convergence
Let's first examine whether we're past the transient-bias regime (we should be, since we're discarding the warmup phase). | # Plot last-sample estimators
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(result_state_long.mean(1) - var_est, axis=-1),
label='Long mean Error')
semilogy(jnp.linalg.norm(result_state_short.mean(1) - mean_est, axis=-1),
label='Short Mean Error')
hlines(jnp.sqrt(var_est.sum() / 100), 0, total_samples_long, label='Norm of Posterior Scales / 10')
legend(loc='best')
xlabel('Iteration')
ylabel('Norm of Error of Estimate')
title(target.name)
xlim([0, 200])
show()
# NOTE: Not sure what's going on here.
Making due diligence, let's look at the samples returned by both methods, after discarding the warmup iterations. | plot(result_long.all_states[num_warmup_short:, :, 0].flatten(),
result_long.all_states[num_warmup_short:, :, 1].flatten(), '.', alpha = 0.2)
title('Long regime')
show()
plot(result_long.all_states[num_warmup_long:total_samples_long, :10, 1])
show()
# NOTE: (for Banana problem) With 4 samples after warmup we already samples spread
# out accross the parameter space.
num_samples_plot = 4 # target_iter_mean
plot(result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 0].flatten(),
result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 1].flatten(), '.', alpha = 0.2)
title('Short regime')
show()
plot(result_short.all_states[num_warmup_short:100 + num_warmup_short, [10, 20, 100, 500, 1000], 1])
show()
# REMARK: the mixing for the banana problem is slow. This is obvious if we
# only plot the first few samples of each chain.
num_samples_plot = 4 # target_iter_mean
plot(result_short.all_states[num_warmup_short:, :, 0].flatten(),
result_short.all_states[num_warmup_short:, :, 1].flatten(), '.', alpha = 0.2)
title('Short regime')
show()
plot(result_short.all_states[:, [1, 200, 400, 600, 800, 1000], 1])
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Let's compute $\hat R$ as a function of iteration and pay attention to how quickly $\hat R$ goes to 1 in both regimes. | # NOTE: the warmup is not stored.
# NOTE: compute rhat for the samples on the original space, since these are
# the quantities of interest.
def compute_rhat(result_state, num_samples, num_warmup = 0):
return tfp.mcmc.potential_scale_reduction(result_state[num_warmup:num_warmup + num_samples + 1],
independent_chain_ndims = 1).T
# TODO: do this without a for loop
# WARNING: this cell takes a minute to run
# TODO: use a single variable num_sampling, instead of num_sampling_long and
# num_sampling_var.
rhat_long = np.array([])
rhat_short = np.array([])
range_iter = range(2, num_sampling_long, 10) # range(2, num_samples, 8)
# NOTE: depending on the problem, it can be interesting to look at both.
# However, to be consistent with earlier analysis, the warmup samples should
# be discarded.
discard_warmup = True
for i in range_iter:
if (discard_warmup):
discard_long = num_warmup_long
discard_short = num_warmup_short
else:
discard_long = 0
discard_short = 0
rhat_long = np.append(rhat_long,
compute_rhat(result_state_long, i, discard_long)[0, ])
rhat_short = np.append(rhat_short,
compute_rhat(result_state_short, i, discard_short)[0, ])
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Remark: the $\hat R$ estimate can be quite noisy, especially when computed with a small number of samples. One manifestation of this is the fact that $\hat R < 1$. In the German credit score model, $\hat R$ is as low as 0.6!! When this is the case, $\hat R$ will typically be large for other parameters. Hence, inspecting many parameters (presumably all of interest) can safeguard us against crying "victory" too early.
This type of noise can explain why the change in $\hat R$ isn't always quite monotone, sometimes with an increase at first, and then the expected decrease. | result_snip = result_state_long[num_warmup_long:num_warmup_long + 2]
tfp.mcmc.potential_scale_reduction(result_snip, independent_chain_ndims = 1).T
# Plot result
figure(figsize = [6, 6])
semilogy(np.array(range_iter), rhat_long - 1, label = '4 chains')
semilogy(np.array(range_iter), rhat_short - 1, label = '512 chains')
legend(loc = 'best')
xlim([0, 500])
ylabel("Rhat - 1")
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
(Banana example) As expected, $\hat R$ decreases with the number of iterations per chain, although crucially not with the total number of samples! As one might suspect, the short regime produces a less noisy estimate of $\hat R$. To be more precise, we expect $\hat R$ to decrease with the effective sample size per chain. Since the long regime benefits from a longer warmup, the effective sample size per iteration should be better, although it might not make a difference in this example.
Crucially, $\hat R$ as a convergence diagnostic isn't sensitive to the fact we are running many chains (although the estimator does become less noisy...). | # Compare Rhat at the point where both methods have reached a comparable squared
# error.
# NOTE: not super reliable -- sometimes rhat is noisy and goes to 1 (or below)
# before jumping back up...
index = np.where(range_iter > target_iter_mean)[0][0]
print("Rhat for short regime after hitting target precision:", rhat_short[index])
print("Rhat for long regime after hitting target precision:", rhat_long[len(rhat_long) - 1])
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Proposition: Concerned with how noisy $\hat R$ might be, let's use a bootstrap scheme to get a standard deviation on the estimator. The short regime should be amenable to this, since we can resample chains. Unfortunately, if we sample with replacement, we underestimate the between-chain variance, because some of the chains are identical. One idea is to randomly sample a subset of the chains without replacement and compute $\hat R$.
This will overestimate the uncertainty in our calculations, since we have reduced the sample size. | n_bootstrap_samples = 64 # 64
rhat_estimates = np.array([])
n_sampling_iter = max(range_iter[index], 2) # max(target_iter_mean, 2) # range_iter[index]
for i in range(1, n_bootstrap_samples):
choose_samples_randomly = True
if (choose_samples_randomly):
bootstrap_sample = np.random.choice(np.array(range(1, num_chains_short + 1)),
n_bootstrap_samples, replace = False)
# num_chains_short // 16, replace = False)
else:
bootstrap_sample = np.array(range(1 + (i - 1) * n_bootstrap_samples, i * n_bootstrap_samples))
# print(bootstrap_sample)
# print(result_state_short[:, bootstrap_sample, :].shape)
rhat_estimates = np.append(rhat_estimates,
compute_rhat(result_state_short[:, bootstrap_sample, :], n_sampling_iter, num_warmup_short)[0, ])
print("Mean rhat (short) = ", rhat_estimates.mean(), "+/-", rhat_estimates.std())
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Nested $\hat R$
To remedy the identified issue, we propose to pool chains together in the short regime, thereby building super-chains, and then checking that the super chains are mixing.
We index each sample by $n$ the iteration, $m$ the chain, and $k$ the cluster of chains, and write $\theta^{(n, m, k)}$. The within-chain variance is estimated by
$$
s^2_{km} = \frac{1}{N - 1} \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2.
$$
Next the between-chain variance, or within super chain variance is
\begin{eqnarray}
s^2_{k.} & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2,
\end{eqnarray}
and the total variance for a super chain is
\begin{eqnarray}
S^2_k & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2 + \frac{1}{M (N - 1)} \sum_{m = 1}^M \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2 \
& = & s^2_{k.} + \frac{1}{M} \sum_{m = 1}^M s^2_{km}
\end{eqnarray}
Notice that this calculation accounts for the fact the super-chain is made up of multiple chains.
Finally the within-super-chain variance is estimated as
$$
W = \frac{1}{K} \sum_{k = 1}^K S^2_k.
$$
Now it remains to compute the between super-chain variance
$$
B = \frac{1}{K - 1} \sum_{k = 1}^K \left (\bar \theta^{(..k)} - \bar \theta^{(...)} \right)^2,
$$
yielding an estimate of the posterior variance
$$
\widehat{\mathrm{var}}^+(\theta) = B + W,
$$
which very much looks like the posterior variance estimate used in the long regime, except that I've been a bit more consistent about making the estimator unbiased. We then compute
$$
\hat R = \sqrt{\frac{\widehat{\mathrm{var}}^+(\theta)}{W}}.
$$
Remark. The $\theta$ can be replaced by the rank-normalized $z$ as prescribed by Vehtari et al 2020.
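For concreteness, the formulas above translate almost line by line into NumPy (a minimal illustrative sketch, separate from the TensorFlow implementation used below):
def nested_rhat_numpy(theta):
    # theta has shape (iterations N, super chains K, chains per super chain M)
    chain_mean = theta.mean(axis=0)        # \bar{theta}^{(.mk)}, shape (K, M)
    super_mean = chain_mean.mean(axis=1)   # \bar{theta}^{(..k)}, shape (K,)
    s2_km = theta.var(axis=0, ddof=1)      # within-chain variances s^2_{km}
    s2_k = chain_mean.var(axis=1, ddof=1)  # between-chain, within-super-chain variance s^2_{k.}
    S2_k = s2_k + s2_km.mean(axis=1)       # total variance of super chain k
    W = S2_k.mean()                        # within-super-chain variance
    B = super_mean.var(ddof=1)             # between-super-chain variance
    return np.sqrt((B + W) / W)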
Implementation of nested-$\hat R$ using TensorFlow. | # Remark: eager execution is disabled and would have to be enabled at the
# start of the program. I however suspect this would interfere with
# TensorFlow probability.
tf.executing_eagerly()
# Follow procedure described in source code for potential scale reduction.
# NOTE: some of the tf argument need to be adjusted (e.g. keepdims = False,
# instead of True). Not quite sure why.
# QUESTION: can these be accessed as internal functions of tf?
# TODO: following Pavel's example, rewrite this without using tf.
# TODO: add error message when the number of samples is less than 2.
# REMARK: this function doesn't seem to work, returns NaN.
# As a result, can only use _reduce_variance with biased = False.
def _axis_size(x, axis = None):
"""Get number of elements of `x` in `axis`, as type `x.dtype`."""
if axis is None:
return ps.cast(ps.size(x), x.dtype)
return ps.cast(
ps.reduce_prod(
ps.gather(ps.shape(x), axis)), x.dtype)
def _reduce_variance(x, axis=None, biased=True, keepdims=False):
with tf.name_scope('reduce_variance'):
x = tf.convert_to_tensor(x, name='x')
mean = tf.reduce_mean(x, axis=axis, keepdims=True)
biased_var = tf.reduce_mean(
tf.math.squared_difference(x, mean), axis=axis, keepdims=keepdims)
if biased:
return biased_var
n = _axis_size(x, axis)
return (n / (n - 1.)) * biased_var
def nested_rhat(result_state, num_super_chain):
used_samples = result_state.shape[0]
  num_sub_chains = result_state.shape[1] // num_super_chain  # use the function argument, not the global
num_dimensions = result_state.shape[2]
chain_states = result_state.reshape(used_samples, -1, num_sub_chains,
num_dimensions)
state = tf.convert_to_tensor(chain_states, name = 'state')
mean_chain = tf.reduce_mean(state, axis = 0)
mean_super_chain = tf.reduce_mean(state, axis = [0, 2])
variance_chain = _reduce_variance(state, axis = 0, biased = False)
variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \
+ tf.reduce_mean(variance_chain, axis = 1)
W = tf.reduce_mean(variance_super_chain, axis = 0)
B = _reduce_variance(mean_super_chain, axis = 0, biased = False)
return tf.sqrt((W + B) / W)
| nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
CASE 1 (sanity check): $\hat R$ after a few iterations
The super chains are such that they have the same number of samples as the chains in the long regime. Because of the slow mixing, 4 iterations per chain is not enough to overcome the transient bias and the nested Rhat is high, even though each super chain has many iterations. Note we're looking at the first warmup iterations. | # num_super_chains = 4
# super_chain_size = num_chains_short // num_super_chains # 250
used_samples = 4 # total_samples_long // super_chain_size # 4
result_state = result_short.all_states[0:used_samples, :, :]
print("short rhat: ", nested_rhat(result_state, num_super_chains)) | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
CASE 2: $\hat R$ after "enough" iterations
The number of iterations in each chain corresponds to the number of samples required by the short regime to match the precision for the mean attained by the long regime after 1000 sampling iterations (meaning we've discarded the warmup iterations). The diagnostic is quite happy, even though there are only two iterations per chain. | result_state.shape
target_iter_mean
used_samples = max(target_iter_mean, 2)
result_state = result_short.all_states[num_warmup_short:num_warmup_short + used_samples, :, :]
print("short nested-rhat: ", nested_rhat(result_state, num_super_chains)[0])
print("short rhat: ", rhat_short[index])
print("long rhat: ", rhat_long[len(rhat_long) - 1])
print(range_iter)
# Let's find out how quickly nested-rhat compared to traditional rhat goes down.
nested_rhat_short = np.array([])
for i in range_iter:
nested_rhat_short = np.append(nested_rhat_short,
nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + i, :, :],
num_super_chains).numpy()[0])
figure(figsize = [6, 6])
semilogy(np.array(range_iter), rhat_long - 1, label = '$\hat R$, 4 chains')
semilogy(np.array(range_iter), rhat_short - 1, label = '$\hat R$, 512 chains')
semilogy(np.array(range_iter), nested_rhat_short - 1, label = '$n \hat R$, 512 chains')
legend(loc = 'best')
xlim([0, 1000])
ylabel("Rhat - 1")
xlabel("Post-warmup sampling iterations")
show()
threshold = 1.1
index_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))
if (len(index_classic[0]) > 0):
print("Rhat =", threshold, "after",range_iter[index_classic[0][0]], "iterations.")
else:
print("Rhat doesn't hit the target threshold = ", threshold, ".")
index_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))
if (len(index_short[0]) > 0):
print("Nested Rhat =", threshold, "after", range_iter[index_short[0][0]], "iterations.")
else:
print("Nested Rhat doesn't hit the target threshold = ", threshold, ".")
threshold = 1.01
index_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))
if (len(index_classic[0]) > 0):
print("Rhat =", threshold, "after",range_iter[index_classic[0][0]], "iterations.")
else:
print("Rhat doesn't hit the target threshold = ", threshold, ".")
index_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))
if (len(index_short[0]) > 0):
print("Nested Rhat =", threshold, "after", range_iter[index_short[0][0]], "iterations.")
else:
print("Nested Rhat doesn't hit the target threshold = ", threshold, ".") | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Effective sample size
We'll now compute the effective sample size. We might in fact expect the classic diagnostic to work relatively well. | ess_long = np.sum(tfp.mcmc.effective_sample_size(
result_state_long[num_warmup_long:, : , :]), axis = 0)
ess_short = np.sum(tfp.mcmc.effective_sample_size(
result_state_short[num_warmup_short:, :, :]), axis = 0)
ess_short_target = np.sum(tfp.mcmc.effective_sample_size(
result_state_short[num_warmup_short:num_warmup_short + 3, :, :]), axis = 0)
# NOTE: it seems we need at least 3 samples to compute the ess estimate...
print("Ess long (discarding warmup): ", ess_long[0])
print("Ess short (discarding warmup): ", ess_short[0])
print("Ess short (when hitting target precision): ", ess_short_target[0]) | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |
Adaptive warmup length
Playing around a little, we find that once the algorithm is properly warmed up, the short regime can reach good precision in very few iterations. The primary limitation hence becomes the warmup time.
Proper warmup means (i) we've overcome the transient bias and have already moved across the "typical set" -- it isn't enough to be in the "typical set" if where we are is determined by our starting point -- and (ii) our algorithm is tuned well enough that it can explore every part of the parameter space in a reasonable time and has a relatively short relaxation time. The first item is essential to both sampling regimes, though intuitively, it seems we might be able to compromise on the second item in the short regime.
In many cases, the number of warmup samples is determined ahead of time when calling the algorithm. Ideally we'd stop the warmup once we have suitable tuning parameters and then move to the sampling phase. Zhang et al (2020) propose to run warmups over short windows of $w = 100$ iterations and compute $\hat R$ and the ESS at the end of each window to check whether we should continue warming up. Once both diagnostic estimates pass a certain threshold, the warmup ends and the sampling begins. In theory, this scheme can be adapted to the short regime by replacing $\hat R$ with the nested $\hat R$.
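Schematically, the windowed warmup loop could look as follows (a simplified sketch; run_warmup_window is a hypothetical helper, not something defined in this notebook, and the diagnostics mirror the calls used elsewhere here):
def windowed_warmup(state, kernel_state, w=100, target_rhat=1.01, target_ess=100, max_windows=50):
    # Keep warming up in windows of w iterations until both diagnostics clear their thresholds.
    for _ in range(max_windows):
        draws, state, kernel_state = run_warmup_window(state, kernel_state, w)  # hypothetical helper
        rhat = max(tfp.mcmc.potential_scale_reduction(draws))   # or nested_rhat(draws, num_super_chains)
        ess = min(np.sum(tfp.mcmc.effective_sample_size(draws), axis=0))
        if rhat < target_rhat and ess > target_ess:
            break
    return state, kernel_state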
My guess is that by using nested $\hat R$ and the classic ESS (computed using many independent chains) we'll implicitly compromise on item (ii) -- so a priori, the described warmup method requires little adjustment. | # Define function to extract the adapted parameters
# (Follow what's done in the inference gym tutorial)
# REMARK: if we pass only initial step size, only one step size is adapted for
# the whole transition kernel (as opposed to one step size per chain).
# REMARK: we won't use this scheme. Instead, we'll pass the whole transition.
from tensorflow_probability.python.internal.unnest import get_innermost
# NOTE: presumably we're not going to use this, and instead get the full
# kernel result back.
def trace_fn(_, pkr):
return (
get_innermost(pkr, 'step_size'),
get_innermost(pkr, 'num_leapfrog_steps')
# get_innermost(pkr, 'max_trajectory_length')
)
def forge_chain (target_rhat, warmup_window_size, kernel_cold, initial_state,
max_num_steps, seed, monitor = False,
use_nested_rhat = True, use_log_joint = False,
num_super_chains = 4):
# store certain variables
rhat_forge = np.array([])
warmup_is_acceptable = False
store_results = []
warmup_iteration = 0
current_state = initial_state
final_kernel_args = None
while (not warmup_is_acceptable and warmup_iteration <= max_num_steps):
warmup_iteration += 1
# 1) Run MCMC on short warmup window
result_cold, target_log_prob, final_kernel_args = tfp.mcmc.sample_chain(
num_results = warmup_window_size,
current_state = current_state,
kernel = kernel_cold,
previous_kernel_results = final_kernel_args,
seed = kernel_seed,
trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'),
return_final_kernel_results = True)
if (warmup_iteration == 1) :
store_results = result_cold
else :
store_results = np.append(store_results, result_cold, axis = 0)
current_state = result_cold[-1]
# 2) Check if warmup is acceptable
        if (use_nested_rhat):  # use the function argument rather than the outer-scope variable
if (use_log_joint):
shape_lp = target_log_prob.shape
rhat_warmup = nested_rhat(target_log_prob.reshape(shape_lp[0], shape_lp[1], 1),
num_super_chains)
else:
rhat_warmup = max(nested_rhat(result_cold, num_super_chains))
else:
if (use_log_joint):
rhat_warmup = tfp.mcmc.potential_scale_reduction(target_log_prob)
else:
rhat_warmup = max(tfp.mcmc.potential_scale_reduction(result_cold))
# ess_warmup = np.sum(tfp.mcmc.effective_sample_size(result_cold), axis = 0)
# print(rhat_warmup)
if (rhat_warmup < target_rhat): warmup_is_acceptable = True
# if (max(rhat_warmup) < 1.01 and min(ess_warmup) > 100): warmup_is_acceptable = True
if (monitor):
print("step:", final_kernel_args.step)
# print("max rhat:", max(rhat_warmup))
# print("min ess warmup:" , min(ess_warmup))
# print("step size:", step_size)
# print("number of leapfrog steps:", num_leapfrog_steps)
save_values = True
if (save_values):
rhat_forge = np.append(rhat_forge, rhat_warmup)
# While loop ends
return store_results, final_kernel_args, rhat_forge
# Set up adaptive warmup scheme
warmup_window_size = 5
target_rhat = 1.01
target_ess = 100
max_num_steps = 1000 // warmup_window_size
current_state = initial_state
num_leapfrog_steps = 1
warmup_iteration = 0
kernel_seed = random.PRNGKey(1957)
used_nested_rhat = True
# define kernel using most recent step size
kernel_cold = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_cold = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_cold, warmup_window_size)
kernel_cold = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_cold, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, 0)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_cold, final_kernel_args, rhat_forge = \
forge_chain(target_rhat = target_rhat,
warmup_window_size = warmup_window_size,
kernel_cold = kernel_cold,
initial_state = initial_state,
max_num_steps = max_num_steps,
seed = random.PRNGKey(1954), monitor = False,
use_nested_rhat = True,
use_log_joint = True)
print("iterations:", len(rhat_forge) * warmup_window_size)
print(rhat_forge)
print(target_rhat)
# print(tfp.mcmc.potential_scale_reduction(result_cold[-50]))
# print(nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + 5, :, :], num_super_chains))
# Run sampling iterations
# def trace_fn(_, pkr):
# return (
# get_innermost(pkr, 'unnormalized_log_prob'))
current_state = result_cold[-1]
result_warm, target_log_prob, final_kernel_args_warm = tfp.mcmc.sample_chain(
num_results = 5,
current_state = current_state,
kernel = kernel_warm, # kernel_cold
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(100001),
return_final_kernel_results = True,
trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'))
print(tfp.mcmc.potential_scale_reduction(target_log_prob))
# print(nested_rhat(target_log_prob, num_super_chains))
shape_lp = target_log_prob.shape
lp__ = target_log_prob.reshape(shape_lp[0], shape_lp[1], 1)
lp__.shape
print(nested_rhat(lp__, num_super_chains))
print(tfp.mcmc.potential_scale_reduction(result_warm))
nested_rhat(result_warm, num_super_chain = num_super_chains)
# options: result_cold[result_cold.shape[0] - 30:], result_state_short, result_warm, store_results
states_to_read = result_warm
print("mean estimate:", np.mean(states_to_read.mean(0), axis = 0))
print("variance estimate:", np.mean(states_to_read.var(1), axis = 0))
print(nested_rhat(states_to_read, num_super_chain = 4))
print(tfp.mcmc.potential_scale_reduction(states_to_read))
print(mean_est)
print(var_est)
# Check output of the last run
plot(result_warm[:, :, 0].flatten(),
result_warm[:, :, 1].flatten(), '.', alpha = 0.2)
title('Long regime')
show()
plot(result_warm[:, :30, 1])
show()
# Compare to output we get with uninterrupted run.
# (Examine the iterations before the warmup ends)
chain_state_short = result_short.all_states[num_warmup_short - 10:num_warmup_short - 10 + warmup_window_size, :, :]
plot(chain_state_short[:, :, 0].flatten(),
chain_state_short[:, :, 1].flatten(), '.', alpha = 0.2)
show()
plot(chain_state_short[:, :30, 1])
show() | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 |