Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: get_layers, build_model, and train_and_evaluate. We will build the structure of our model in get_layers and compile it in build_model. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate. TODO 1: Define the Keras layers for a DNN model TODO 2: Define the Keras layers for a dropout model TODO 3: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance.
%%writefile mnist_models/trainer/model.py import os import shutil import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.layers import ( Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax) from . import util # Image Variables WIDTH = 28 HEIGHT = 28 def get_layers( model_type, nclasses=10, hidden_layer_1_neurons=400, hidden_layer_2_neurons=100, dropout_rate=0.25, num_filters_1=64, kernel_size_1=3, pooling_size_1=2, num_filters_2=32, kernel_size_2=3, pooling_size_2=2): """Constructs layers for a keras model based on a dict of model types.""" model_layers = { 'linear': [ Flatten(), Dense(nclasses), Softmax() ], 'dnn': [ Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dense(nclasses), Softmax() ], 'dnn_dropout': [ Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ], 'cnn': [ Conv2D(num_filters_1, kernel_size=kernel_size_1, activation='relu', input_shape=(WIDTH, HEIGHT, 1)), MaxPooling2D(pooling_size_1), Conv2D(num_filters_2, kernel_size=kernel_size_2, activation='relu'), MaxPooling2D(pooling_size_2), Flatten(), Dense(hidden_layer_1_neurons, activation='relu'), Dense(hidden_layer_2_neurons, activation='relu'), Dropout(dropout_rate), Dense(nclasses), Softmax() ] } return model_layers[model_type] def build_model(layers, output_dir): """Compiles keras model for image classification.""" model = Sequential(layers) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) return model def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir): """Compiles keras model and loads data into it for training.""" mnist = tf.keras.datasets.mnist.load_data() train_data = util.load_dataset(mnist) validation_data = util.load_dataset(mnist, training=False) callbacks = [] if output_dir: tensorboard_callback = TensorBoard(log_dir=output_dir) callbacks = [tensorboard_callback] history = model.fit( train_data, validation_data=validation_data, epochs=num_epochs, steps_per_epoch=steps_per_epoch, verbose=2, callbacks=callbacks) if output_dir: export_path = os.path.join(output_dir, 'keras_export') model.save(export_path, save_format='tf') return history
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Local Training With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, mnist_models/trainer/test.py, to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. Lines 14 and 15 set the number of epochs and steps per epoch, respectively. Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
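For orientation, here is a minimal sketch of the kind of check such a test script might contain. The class and method names, model types, epochs, and steps below are illustrative stand-ins, not the actual contents of mnist_models/trainer/test.py:

```python
# Illustrative sketch only -- the real mnist_models/trainer/test.py ships with the lab.
import unittest

from mnist_models.trainer import model


class TestModelTraining(unittest.TestCase):
    model_types = ["linear", "dnn", "dnn_dropout", "cnn"]  # roughly what line 13 controls
    epochs = 1                                             # roughly what line 14 controls
    steps_per_epoch = 1                                    # roughly what line 15 controls

    def test_models_train_without_error(self):
        for model_type in self.model_types:
            layers = model.get_layers(model_type)
            keras_model = model.build_model(layers, output_dir=None)
            history = model.train_and_evaluate(
                keras_model, self.epochs, self.steps_per_epoch, output_dir=None)
            # After one short epoch the model should at least report a finite accuracy.
            self.assertGreaterEqual(history.history["accuracy"][-1], 0.0)


if __name__ == "__main__":
    unittest.main()
```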
!python3 -m mnist_models.trainer.test
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Now that we know that our models are working as expected, let's run them on Google Cloud AI Platform. We can run the code as a Python module locally first using the command line. The cell below transfers some of our variables to the command line and creates a job directory whose name includes a timestamp.
current_time = datetime.now().strftime("%Y%m%d_%H%M%S") model_type = "cnn" os.environ["MODEL_TYPE"] = model_type os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format( model_type, current_time )
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our mnist_models/trainer/task.py file.
%%bash python3 -m mnist_models.trainer.task \ --job-dir=$JOB_DIR \ --epochs=5 \ --steps_per_epoch=50 \ --model_type=$MODEL_TYPE
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Training on the cloud We will use a Deep Learning Container to train this model on AI Platform. Below is a simple Dockerfile which copies our code into a TensorFlow 2.3 environment.
%%writefile mnist_models/Dockerfile FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3 COPY mnist_models/trainer /mnist_models/trainer ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
The command below builds the image and pushes it to Google Cloud so it can be used for AI Platform. Once built, it will show up in your project's Container Registry with the name mnist_models. (You may need to enable Cloud Build in your project first.)
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./ !docker push $IMAGE_URI
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Finally, we can kick off the AI Platform training job. We pass in our Docker image using the master-image-uri flag.
current_time = datetime.now().strftime("%Y%m%d_%H%M%S") model_type = "cnn" os.environ["MODEL_TYPE"] = model_type os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format( BUCKET, model_type, current_time ) os.environ["JOB_NAME"] = f"mnist_{model_type}_{current_time}" %%bash echo $JOB_DIR $REGION $JOB_NAME gcloud ai-platform jobs submit training $JOB_NAME \ --staging-bucket=gs://$BUCKET \ --region=$REGION \ --master-image-uri=$IMAGE_URI \ --scale-tier=BASIC_GPU \ --job-dir=$JOB_DIR \ -- \ --model_type=$MODEL_TYPE
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Deploying and predicting with the model Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The cell below uses the Keras export path from the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") MODEL_NAME = f"mnist_{TIMESTAMP}" %env MODEL_NAME = $MODEL_NAME %%bash MODEL_VERSION=${MODEL_TYPE} MODEL_LOCATION=${JOB_DIR}keras_export/ echo "Deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" gcloud ai-platform models create ${MODEL_NAME} --region $REGION gcloud ai-platform versions create ${MODEL_VERSION} \ --model ${MODEL_NAME} \ --origin ${MODEL_LOCATION} \ --framework tensorflow \ --runtime-version=2.3
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
To predict with the model, let's take one of the example images. TODO 4: Write a .json file with image data to send to an AI Platform deployed model
import codecs import json import matplotlib.pyplot as plt import tensorflow as tf HEIGHT = 28 WIDTH = 28 IMGNO = 12 mnist = tf.keras.datasets.mnist.load_data() (x_train, y_train), (x_test, y_test) = mnist test_image = x_test[IMGNO] jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist() json.dump(jsondata, codecs.open("test.json", "w", encoding="utf-8")) plt.imshow(test_image.reshape(HEIGHT, WIDTH));
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Finally, we can send it to the prediction service. The output is a probability vector with a value close to 1 at the index of the digit the model is predicting. Congrats! You've completed the lab!
%%bash gcloud ai-platform predict \ --model=${MODEL_NAME} \ --version=${MODEL_TYPE} \ --json-instances=./test.json
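If you instead call the prediction service programmatically and get back the 10-element softmax vector, the predicted digit is simply its argmax. A minimal sketch with a made-up response (the exact response structure depends on the model's serving signature):

```python
import numpy as np

# Hypothetical response for one instance: a 10-element softmax vector.
probabilities = [0.01, 0.01, 0.90, 0.01, 0.01, 0.01, 0.01, 0.02, 0.01, 0.01]
predicted_digit = int(np.argmax(probabilities))
print(predicted_digit)  # -> 2
```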
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Now we import a few general packages that we need to start with. The following imports basic numerics and algebra routines (numpy) and plotting routines (matplotlib), and makes sure that all plots are shown inside the notebook rather than in a separate window (nicer that way).
import matplotlib.pylab as plt import numpy as np %pylab inline
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
Now we import the pyEMMA package that we will be using in the beginning: the coordinates package. This package contains functions and classes for reading and writing trajectory files, extracting order parameters from them (such as distances or angles), as well as various methods for dimensionality reduction and clustering. The shortcuts module is a bunch of functions specific to this workshop - they help us to visualize some of our results. Some of them might become part of the pyemma package once they are more mature.
import pyemma.coordinates as coor import pyemma.msm as msm import pyemma.plots as mplt from pyemma import config # some helper funcs def average_by_state(dtraj, x, nstates): assert(len(dtraj) == len(x)) N = len(dtraj) res = np.zeros((nstates)) for i in range(nstates): I = np.argwhere(dtraj == i)[:,0] res[i] = np.mean(x[I]) return res def avg_by_set(x, sets): # compute mean positions of sets. This is important because of some technical points the set order # in the coarse-grained TPT object can be different from the input order. avg = np.zeros(len(sets)) for i in range(len(sets)): I = list(sets[i]) avg[i] = np.mean(x[I]) return avg shortcuts = {'average_by_state': average_by_state, 'avg_by_set': avg_by_set} import glob trajfiles = sorted(glob.glob('./*/05*nc')) for file in trajfiles: print("%s\n" % file) topfile = "./test.pdb" feat = coor.featurizer(topfile) feat.add_backbone_torsions(cossin=True) feat.add_chi1_torsions(cossin=True) inp = coor.source(trajfiles, feat) print("Number of trajectories: %s" % inp.number_of_trajectories()) print("Aggregate simulation time: %.2f ns" % (inp.n_frames_total() * 0.02)) print("Number of dimensions: %s" % inp.dimension())
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
TICA and clustering So we would like to first reduce our dimension by throwing out the ‘uninteresting’ ones and only keeping the ‘relevant’ ones. But how do we do that? It turns out that a really good way to do that, if you are interested in the slow kinetics of the molecule - e.g. for constructing a Markov model - is to use the time-lagged independent component analysis (TICA) [2]. Amongst linear methods, TICA is optimal in its ability to approximate the relevant slow coordinates / reaction coordinates from MD simulation [3], and it is therefore ideal for constructing Markov models.
tica_obj = coor.tica(inp, lag=100) Y = tica_obj.get_output()[0]
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
By default, TICA will choose a number of output dimensions to cover 95% of the kinetic variance and scale the output to produce a kinetic map. In this case we retain 575 dimensions, which is a lot, but note that they are scaled by eigenvalue, so it is mostly the first dimensions that contribute.
print("Projected data shape: (%s,%s)" % (Y.shape[0], Y.shape[1])) print('Retained dimensions: %s' % tica_obj.dimension()) plot(tica_obj.cumvar, linewidth=2) plot([tica_obj.dimension(), tica_obj.dimension()], [0, 1], color='black', linewidth=2) plot([0, Y.shape[0]], [0.95, 0.95], color='black', linewidth=2) xlabel('Number of dimensions'); ylabel('Cum. kinetic variance fraction')
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
The TICA object has a number of properties that we can extract and work with. We have already obtained the projected trajectory and stored it in a variable Y that is a matrix of size (103125 x 2). The rows are the MD steps, and the 2 columns are the coordinates projected onto the independent components. So each column is a trajectory. Let us plot them:
mplt.plot_free_energy(np.vstack(Y)[:, 0], np.vstack(Y)[:, 1]) xlabel('independent component 1'); ylabel('independent component 2')
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
A particular property of the ICs is that they have zero mean and unit variance. We can easily check that:
print("Mean values: %s" % np.mean(Y[0], axis = 0)) print("Variances: %s" % np.var(Y[0], axis = 0))
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
The small deviations from 0 and 1 come from statistical and numerical issues. That’s not a problem. Note that if we had set kinetic_map=True when doing TICA, then the variances would not be 1 but rather the square of the corresponding TICA eigenvalue. TICA is a special transformation because it will project the data such that the autocorrelation along the independent components is as slow as possible. The eigenvalues of the TICA transform are the values of these autocorrelations at the chosen lag time (here 100). We can even interpret them in terms of relaxation timescales:
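Concretely, the conversion in the next cell is the standard implied-timescale formula, with $\lambda_i(\tau)$ the $i$-th TICA eigenvalue at lag time $\tau$ (here $\tau = 100$ steps):

$$ t_i = -\frac{\tau}{\ln \lambda_i(\tau)} $$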
print(-100/np.log(tica_obj.eigenvalues[:5]))
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
We will see more timescales later when we estimate a Markov model, and there will be some differences. For now you should treat these numbers as a rough guess of your molecule’s timescales, and we will see later that this guess is actually a bit too fast. The timescales are relative to the 10 ns saving interval, so we have to multiply them by 10 ns to obtain physical times.
subplot2grid((2,1),(0,0)) plot(Y[:,0]) ylabel('ind. comp. 1') subplot2grid((2,1),(1,0)) plot(Y[:,1]) ylabel('ind. comp. 2') xlabel('time (10 ns)') tica_obj.chunksize mplt.plot_implied_timescales(tica_obj)
python/markov_analysis/PyEMMA-API.ipynb
jeiros/Jupyter_notebooks
mit
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
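For reference, here is a hypothetical sketch of the kind of discriminator this cell assumes was defined earlier in the notebook. The layer sizes and names are illustrative, not the notebook's actual solution, but the (output, logits) return signature and the reuse flag match how it is called below:

```python
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    """Sketch of a one-hidden-layer discriminator; illustrative only."""
    with tf.variable_scope('discriminator', reuse=reuse):
        h1 = tf.layers.dense(x, n_units, activation=None)
        h1 = tf.maximum(alpha * h1, h1)                  # leaky ReLU
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        return out, logits
```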
tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Build the model g_model = generator(input_z, input_size) # g_model is the generator output d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
gan_mnist/Intro_to_GANs_Solution.ipynb
mdiaz236/DeepLearningFoundations
mit
Training
!mkdir checkpoints batch_size = 100 epochs = 100 samples = [] losses = [] # Only save generator variables saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f)
gan_mnist/Intro_to_GANs_Solution.ipynb
mdiaz236/DeepLearningFoundations
mit
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) _ = view_samples(0, [gen_samples])
gan_mnist/Intro_to_GANs_Solution.ipynb
mdiaz236/DeepLearningFoundations
mit
Spark Configuration and Preparation Edit the variables in the cell below. If you are running Spark in local mode, please set the local flag to true and adjust the resources you wish to use on your local machine. Similarly, set the using_spark_2 flag to true if you are running Spark 2.0 or higher.
# Modify these variables according to your needs. application_name = "Distributed Deep Learning: Analysis" using_spark_2 = False local = False if local: # Tell master to use local resources. master = "local[*]" num_cores = 3 num_executors = 1 else: # Tell master to use YARN. master = "yarn-client" num_executors = 8 num_cores = 2 # This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers. num_workers = num_executors * num_cores print("Number of desired executors: " + `num_executors`) print("Number of desired cores / executor: " + `num_cores`) print("Total number of workers: " + `num_workers`) conf = SparkConf() conf.set("spark.app.name", application_name) conf.set("spark.master", master) conf.set("spark.executor.cores", `num_cores`) conf.set("spark.executor.instances", `num_executors`) conf.set("spark.executor.memory","2g") conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"); # Check if the user is running Spark 2.0 + if using_spark_2: sc = SparkSession.builder.config(conf=conf) \ .appName(application_name) \ .getOrCreate() else: # Create the Spark context. sc = SparkContext(conf=conf) # Add the missing imports from pyspark import SQLContext sqlContext = SQLContext(sc)
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Data Preparation After the Spark Context (or Spark Session if you are using Spark 2.0) has been set up, we can start reading the preprocessed dataset from storage.
# Check if we are using Spark 2.0 if using_spark_2: reader = sc else: reader = sqlContext # Read the dataset. raw_dataset = reader.read.parquet("data/processed.parquet") # Check the schema. raw_dataset.printSchema()
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
After reading the dataset from storage, we will extract several metrics such as nb_features, which basically is the number of input neurons, and nb_classes, which is the number of classes (signal and background).
nb_features = len(raw_dataset.select("features_normalized").take(1)[0]["features_normalized"]) nb_classes = len(raw_dataset.select("label").take(1)[0]["label"]) print("Number of features: " + str(nb_features)) print("Number of classes: " + str(nb_classes))
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Finally, we split up the dataset for training and testing purposes, and fetch some additional statistics on the number of training and testing instances.
# Finally, we create a trainingset and a testset. (training_set, test_set) = raw_dataset.randomSplit([0.7, 0.3]) training_set.cache() test_set.cache() # Distribute the training and test set to the workers. test_set = test_set.repartition(num_workers) training_set = training_set.repartition(num_workers) num_test_set = test_set.count() num_training_set = training_set.count() print("Number of testset instances: " + str(num_test_set)) print("Number of trainingset instances: " + str(num_training_set)) print("Total number of instances: " + str(num_test_set + num_training_set))
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Model construction
model = Sequential() model.add(Dense(500, input_shape=(nb_features,))) model.add(Activation('relu')) model.add(Dropout(0.4)) model.add(Dense(500)) model.add(Activation('relu')) model.add(Dropout(0.6)) model.add(Dense(500)) model.add(Activation('relu')) model.add(Dense(nb_classes)) model.add(Activation('softmax')) # Summarize the model. model.summary() optimizer = 'adagrad' loss = 'categorical_crossentropy'
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Model evaluation
def evaluate(model): global test_set metric_name = "f1" evaluator = MulticlassClassificationEvaluator(metricName=metric_name, predictionCol="prediction_index", labelCol="label_index") # Clear the prediction column from the testset. test_set = test_set.select("features_normalized", "label", "label_index") # Apply a prediction from a trained model. predictor = ModelPredictor(keras_model=trained_model, features_col="features_normalized") test_set = predictor.predict(test_set) # Transform the prediction vector to an indexed label. index_transformer = LabelIndexTransformer(output_dim=nb_classes) test_set = index_transformer.transform(test_set) # Store the F1 score of the SingleTrainer. score = evaluator.evaluate(test_set) return score results = {} time_spent = {}
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Model training and evaluation In the next sections we train and evaluate the models trained by different (distributed) optimizers. Single Trainer
trainer = SingleTrainer(keras_model=model, loss=loss, worker_optimizer=optimizer, features_col="features_normalized", num_epoch=1, batch_size=64) trained_model = trainer.train(training_set) # Fetch the training time. dt = trainer.get_training_time() print("Time spent (SingleTrainer): " + `dt` + " seconds.") # Evaluate the model. score = evaluate(trained_model) print("F1 (SingleTrainer): " + `score`) # Store the training metrics. results['single'] = score time_spent['single'] = dt
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Asynchronous EASGD
trainer = AEASGD(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers, batch_size=64, features_col="features_normalized", num_epoch=1, communication_window=32, rho=5.0, learning_rate=0.1) trainer.set_parallelism_factor(1) trained_model = trainer.train(training_set) # Fetch the training time. dt = trainer.get_training_time() print("Time spent (AEASGD): " + `dt` + " seconds.") # Evaluate the model. score = evaluate(trained_model) print("F1 (AEASGD): " + `score`) # Store the training metrics. results['aeasgd'] = score time_spent['aeasgd'] = dt
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
DOWNPOUR
trainer = DOWNPOUR(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers, batch_size=64, communication_window=5, learning_rate=0.1, num_epoch=1, features_col="features_normalized") trainer.set_parallelism_factor(1) trained_model = trainer.train(training_set) # Fetch the training time. dt = trainer.get_training_time() print("Time spent (DOWNPOUR): " + `dt` + " seconds.") # Evaluate the model. score = evaluate(trained_model) print("F1 (DOWNPOUR): " + `score`) # Store the training metrics. results['downpour'] = score time_spent['downpour'] = dt
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
Results As we can see from the plots below, the distributed optimizers finish a single epoch roughly 7 times faster. However, to achieve this, the distributed optimizers use 16 times the amount of resources. This is not a very descriptive measure, though, since some of the jobs are scheduled on the same machines, some machines have a higher load, etc. Nevertheless, the statistical performance of the optimizers is within a 1% error margin, which means that the classifiers would have near-identical performance. Furthermore, it is our guess that the statistical performance of the distributed optimizers can be improved by adding adaptive learning rates.
# Plot the time. fig = plt.figure() st = fig.suptitle("Lower is better.", fontsize="x-small") plt.bar(range(len(time_spent)), time_spent.values(), align='center') plt.xticks(range(len(time_spent)), time_spent.keys()) plt.xlabel("Optimizers") plt.ylabel("Seconds") plt.ylim([0, 7000]) plt.show() # Plot the statistical performanc of the optimizers. fig = plt.figure() st = fig.suptitle("Higer is better.", fontsize="x-small") plt.bar(range(len(results)), results.values(), align='center') plt.xticks(range(len(results)), results.keys()) plt.xlabel("Optimizers") plt.ylabel("F1") plt.ylim([0.83,0.85]) plt.show()
examples/example_1_analysis.ipynb
ad960009/dist-keras
gpl-3.0
A different view of logistic regression Consider a schematic reframing of the LR model above. This time we'll treat the inputs as nodes, which connect to other nodes via edges that represent the weight coefficients. <img src="img/NN-2.jpeg"> The diagram above is a (simplified form of a) single-neuron model in biology. <img src="img/neuron.gif"> As a result, this is the same model that is used to demonstrate a computational neural network. So that's great. Logistic regression works, so why do we need something like a neural network? To start, consider an example where the LR model breaks down:
rng = np.random.RandomState(1) X = rng.randn(samples, 2) y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int) clf = LogisticRegression().fit(X,y) plot_decision_regions(X=X, y=y, clf=clf, res=0.02, legend=2) plt.xlabel('x1'); plt.ylabel('x2'); plt.title('LR (XOR)')
neural-networks-101/Neural Networks - Part 1.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Why does this matter? Well... Neural Networks Some history In the 1960s, when the concept of neural networks was first gaining steam, this type of data was a show-stopper. In particular, the reason our model fails to be effective with this data is that it's not linearly separable; it has interaction terms. This is a specific type of data that is representative of an XOR logic gate. It's not magic, just well-known, and a fundamental type of logic in computing. We can say it in words, approximately: "label is 1 if either x1 or x2 is 1, but not if both are 1." At the time, this led to an interesting split in computational work in the field: on the one hand, some people set off on efforts to design very custom data and feature engineering tactics so that existing models would still work. On the other hand, people set out to solve the challenge of designing new algorithms; for example, this is approximately the era when the support vector machine was developed. Since progress on neural network models slowed significantly in this era (remember that computers were entire rooms!), this is often referred to as the first "AI winter." Even though the multi-layer network was designed a few years later, and solved the XOR problem, the attention on the field of AI and neural networks had faded. Today, you might (sensibly) suggest something like an 'rbf-kernel SVM' to solve this problem, and that would totally work! But that's not where we're going today. With the acceleration of computational power in the last decade, there has been a resurgence in the interest in (and capability of) neural network computation. So what does a neural network look like? What is a multi-layer model, and how does it help solve this problem? Non-linearity and feature mixing lead to new features that we don't have to encode by hand. In particular, we no longer depend just on combinations of input features. We combine input features, apply non-linearities, then combine all of those as new features, apply additional non-linearities, and so on until basically forever. It sounds like a mess, and it pretty much can be. But first, we'll start simply. Imagine that we put just a single layer of "neurons" between our input data and output. How would that change the evaluation approach we looked at earlier? <img src="img/NN-3.jpeg"> DIY neural network! Reminder: manually writing out algorithms is a terrible idea for using them, but a great idea for learning how they work. To get a sense for how the diagram above works, let's first write out the "single-layer" version (which we saw above is equivalent to logistic regression and doesn't work!). We just want to see how it looks in the form of forward- and backward-propagation. Remember, we have a (samples x 2) input matrix, so we need a (2x1) matrix of weights. And to save space, we won't use the fully-accurate and correct implementation of backprop and SGD; instead, we'll use a simplified version that's easier to read but has very similar results.
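For reference, the activation used in the code below is the logistic sigmoid, whose derivative has a convenient closed form: $$ \sigma(z) = \frac{1}{1+e^{-z}}, \qquad \sigma'(z) = \sigma(z)\bigl(1-\sigma(z)\bigr). $$ Note that the activate helper in the next cell passes the already-activated value into its derivative branch, which is why that branch computes x*(1-x) directly.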
# make the same data as above (just a little closer so it's easier to find) rng = np.random.RandomState(1) X = rng.randn(samples, 2) y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int) def activate(x, deriv=False): """sigmoid activation function and its derivative wrt the argument""" if deriv is True: return x*(1-x) return 1/(1+np.exp(-x)) # initialize synapse0 weights randomly with mean 0 syn0 = 2*np.random.random((2,1)) - 1 # nothing to see here... just some numpy vector hijinks for the next code y = y[None].T
neural-networks-101/Neural Networks - Part 1.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
This is the iterative phase. We propagate the input data forward through the synapse (weights), calculate the errors, and then back-propagate those errors through the synapses (weights) according to the proper gradients. Note that the number of iterations is arbitrary at this point. We'll come back to that.
for i in range(10000): # first "layer" is the input data l0 = X # forward propagation l1 = activate(np.dot(l0, syn0)) ### # this is an oversimplified version of backprop + gradient descent # # how much did we miss? l1_error = y - l1 # # how much should we scale the adjustments? # (how much we missed by) * (gradient at l1 value) # ~an "error-weighted derivative" l1_delta = l1_error * activate(l1,True) ### # how much should we update the weight matrix (synapse)? syn0 += np.dot(l0.T,l1_delta) # some insight into the update progress if (i% 2000) == 0: print("Mean error @ iteration {}: {}".format(i, np.mean(np.abs(l1_error))))
neural-networks-101/Neural Networks - Part 1.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
As expected, this basically didn't work at all! Even though we aren't looking at the actual output data, we can use it to look at the accuracy; it never got much better than random guessing, even after thousands of iterations! But remember, we knew that would be the case, because this single-layer network is functionally the same as vanilla logistic regression, which we saw fail on the XOR data above! But, now that we have the framework and understanding for how to optimize backpropagation, we can add an additional layer to the network (a so-called "hidden" layer of neurons), which will introduce the kind of mixing we need to represent this data. As we saw above in the diagram (and talked about), introduction of a new layer means that we get an extra step in both the forward- and backward-propagation steps. This new step means we need an additional weight (synapse) matrix, and an additional derivative calculation. Other than that, the code looks pretty much the same.
# hold tight, we'll come back to choosing this number hidden_layer_width = 3 # initialize synapse (weight) matrices randomly with mean 0 syn0 = 2*np.random.random((2,hidden_layer_width)) - 1 syn1 = 2*np.random.random((hidden_layer_width,1)) - 1 for i in range(60000): # forward propagation through layers 0, 1, and 2 l0 = X l1 = activate(np.dot(l0,syn0)) l2 = activate(np.dot(l1,syn1)) # how much did we miss the final target value? l2_error = y - l2 # how much should we scale the adjustments? l2_delta = l2_error*activate(l2,deriv=True) # project l2 error back onto l1 values according to weights l1_error = l2_delta.dot(syn1.T) # how much should we scale the adjustments? l1_delta = l1_error * activate(l1,deriv=True) # how much should we update the weight matrices (synapses)? syn1 += l1.T.dot(l2_delta) syn0 += l0.T.dot(l1_delta) if (i % 10000) == 0: print("Error @ iteration {}: {}".format(i, np.mean(np.abs(l2_error))))
neural-networks-101/Neural Networks - Part 1.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Ok, this time we started at random guessing (sensible), but notice that we quickly reduced our overall error! That's excellent! Note: I didn't have time to debug the case where the full XOR data only trained to label one quadrant correctly. To get a sense for how it can look with a smaller set, change the "fall-back data" cell to code, and run the cells starting there! Knowing that the error is lower is great, but we can also inspect the results of the fit network by looking at the forward propagation results from the trained synapses (weights).
def forward_prop(X): """forward-propagate data X through the pre-fit network""" l1 = activate(np.dot(X,syn0)) l2 = activate(np.dot(l1,syn1)) return l2 # numpy and plotting shenanigans come from: # http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html # mesh step size h = .02 # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # calculate the surface (by forward-propagating) Z = forward_prop(np.c_[xx.ravel(), yy.ravel()]) # reshape the result into a grid Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # we can use this to inspect the smaller dataset #plt.plot(X[:, 0], X[:, 1], 'o')
neural-networks-101/Neural Networks - Part 1.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Executes with mpiexec
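The file hellompi.py itself is not shown in this notebook; here is a minimal sketch of what such a script typically looks like (the exact contents are an assumption):

```python
# hellompi.py -- illustrative sketch; the actual file is assumed to exist alongside the notebook.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))
```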
!mpiexec -n 4 python2.7 hellompi.py
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
Coding for multiple "personalities" (nodes, actually) Point to point communication
%%file mpipt2pt.py from mpi4py import MPI comm = MPI.COMM_WORLD rank, size = comm.Get_rank(), comm.Get_size() if rank == 0: data = range(10) more = range(0,20,2) print 'rank %i sends data:' % rank, data comm.send(data, dest=1, tag=1337) print 'rank %i sends data:' % rank, more comm.send(more, dest=2 ,tag=1456) elif rank == 1: data = comm.recv(source=0, tag=1337) print 'rank %i got data:' % rank, data elif rank == 2: more = comm.recv(source=0, tag=1456) print 'rank %i got data:' % rank, more !mpiexec -n 4 python2.7 mpipt2pt.py %%file mpipt2pt2.py '''nonblocking communication ''' from mpi4py import MPI import numpy as np import time comm = MPI.COMM_WORLD rank, size = comm.Get_rank(), comm.Get_size() pair = {0:1, 1:0} # rank 0 sends to 1 and vice versa sendbuf = np.zeros(5) + rank recvbuf = np.empty_like(sendbuf) print 'rank %i sends data:' % rank, sendbuf sreq = comm.Isend(sendbuf, dest=pair[rank], tag=1337) rreq = comm.Irecv(recvbuf, source=pair[rank], tag=1337) # rreq.Wait(); sreq.Wait() MPI.Request.Waitall([rreq, sreq]) if rank == 1: time.sleep(0.001) # delay slightly for better printing print 'rank %i got data:' % rank, recvbuf !mpiexec -n 2 python2.7 mpipt2pt2.py
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
Collective communication
%%file mpiscattered.py '''mpi scatter ''' from mpi4py import MPI import numpy as np import time comm = MPI.COMM_WORLD rank, size = comm.Get_rank(), comm.Get_size() if rank == 0: data = np.arange(10) print 'rank %i has data' % rank, data data_split_list = np.array_split(data, size) else: data_split_list = None data_split = comm.scatter(data_split_list, root=0) # some delays for printing purposes if rank == 1: time.sleep(0.001) elif rank == 2: time.sleep(0.002) print 'rank %i got data' % rank, data_split !mpiexec -n 3 python2.7 mpiscattered.py %%file mpibroadcasted.py '''mpi broadcast ''' from mpi4py import MPI import numpy as np import time comm = MPI.COMM_WORLD rank, size = comm.Get_rank(), comm.Get_size() N = 10. data = np.arange(N) if rank == 0 else np.zeros(N) if rank == 1: time.sleep(0.001) elif rank == 2: time.sleep(0.002) print 'rank %i has data' % rank, data comm.Bcast(data, root=0) if rank == 1: time.sleep(0.001) elif rank == 2: time.sleep(0.002) print 'rank %i got data' % rank, data !mpiexec -n 3 python2.7 mpibroadcasted.py
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
Not covered: shared memory and shared objects Better serialization
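A quick, MPI-independent illustration of why dill is swapped in below: the standard pickle module cannot serialize lambdas or interactively defined functions, while dill can.

```python
import pickle
import dill

f = lambda x: x + 1

try:
    pickle.dumps(f)              # plain pickle refuses to serialize a lambda
except Exception as err:
    print("pickle failed: %r" % err)

g = dill.loads(dill.dumps(f))    # dill round-trips it without trouble
print(g(1))                      # -> 2
```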
from mpi4py import MPI try: import dill MPI._p_pickle.dumps = dill.dumps MPI._p_pickle.loads = dill.loads except (ImportError, AttributeError): pass
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
Working with cluster schedulers, the JOB file
%%file jobscript.sh #!/bin/sh #PBS -l nodes=1:ppn=4 #PBS -l walltime=00:03:00 cd ${PBS_O_WORKDIR} || exit 2 mpiexec -np 4 python hellompi.py
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
Beyond mpi4py The task Pool: pyina and emcee.utils
%%file pyinapool.py def test_pool(obj): from pyina.launchers import Mpi x = range(6) p = Mpi(8) # worker pool strategy + dill p.scatter = False print p.map(obj, x) # worker pool strategy + dill.source p.source = True print p.map(obj, x) # scatter-gather strategy + dill.source p.scatter = True print p.map(obj, x) # scatter-gather strategy + dill p.source = False print p.map(obj, x) if __name__ == '__main__': from math import sin f = lambda x:x+1 def g(x): return x+2 for func in [g, f, abs, sin]: test_pool(func) !python2.7 pyinapool.py
Untitled5.ipynb
PepSalehi/tuthpc
bsd-3-clause
5.1 Log-Normal Chain-Ladder This corresponds to Section 5.1 in the paper. The data are taken from Verrall et al. (2010). Kuang et al. (2015) fitted a log-normal chain-ladder model to this data. The model is given by $$ M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2). $$ They found that the largest residuals could be found within the first five accident years. Consequently, they raised the question whether the model is misspecified. Here, we investigate this question. Full model We set up and estimate the full, most restrictive, model $M^{LN}_{\mu, \sigma^2}$. We begin by setting up a model class.
model_VNJ = apc.Model()
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Next, we attach the data for the model. The data come pre-formatted in the package.
model_VNJ.data_from_df(apc.loss_VNJ(), data_format='CL')
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
We fit a log-normal chain-ladder model to the full data.
model_VNJ.fit('log_normal_response', 'AC')
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
and confirm that we get the same result as in the paper for the log-data variance estimate $\hat{\sigma}^{2,LN}$ and the degrees of freedom $df$. This should correspond to the values for $\mathcal{I}$ in Figure 2(b).
print('log-data variance full model: {:.3f}'.format(model_VNJ.s2)) print('degrees of freedom full model: {:.0f}'.format(model_VNJ.df_resid))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
This matches the results in the paper. Sub-models We move on to split the data into sub-samples. The sub-samples $\mathcal{I}_1$ and $\mathcal{I}_2$ contain the first and the last five accident years, respectively. Accident years correspond to "cohorts" in age-period-cohort terminology. Rather than first splitting the sample and the generating a new model and fitting it, we make use of the "sub_model" functionality of the package which does all that for us. Combined, the sub-models correspond to $M^{LN}$.
sub_model_VNJ_1 = model_VNJ.sub_model(coh_from_to=(1,5), fit=True) sub_model_VNJ_2 = model_VNJ.sub_model(coh_from_to=(6,10), fit=True)
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
We can check that this generated the estimates $\hat{\sigma}^{2, LN}_\ell$ and degrees of freedom $df_\ell$ from the paper.
print('First five accident years (I_1)') print('-------------------------------') print('log-data variance: {:.3f}'.format(sub_model_VNJ_1.s2)) print('degrees of freedom: {:.0f}\n'.format(sub_model_VNJ_1.df_resid)) print('Last five accident years (I_2)') print('------------------------------') print('log-data variance: {:.3f}'.format(sub_model_VNJ_2.s2)) print('degrees of freedom: {:.0f}'.format(sub_model_VNJ_2.df_resid))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Reassuringly, it does. We can then also compute the weighted average predictor $\bar{\sigma}^{2,LN}$
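The computation in the next cell is the degrees-of-freedom weighted average of the two sub-sample estimates: $$ \bar{\sigma}^{2,LN} = \frac{df_1 \, \hat{\sigma}^{2,LN}_1 + df_2 \, \hat{\sigma}^{2,LN}_2}{df_1 + df_2}. $$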
s2_bar_VNJ = ((sub_model_VNJ_1.s2 * sub_model_VNJ_1.df_resid + sub_model_VNJ_2.s2 * sub_model_VNJ_2.df_resid) /(sub_model_VNJ_1.df_resid + sub_model_VNJ_2.df_resid)) print('Weighted avg of log-data variance: {:.3f}'.format(s2_bar_VNJ))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Check! Testing for common variances Now we can move on to test the hypothesis of common variances $$ H_{\sigma^2}: \sigma^2_1 = \sigma^2_2. $$ This corresponds to testing for a reduction from $M^{LN}$ to $M^{LN}_{\sigma^2}$. First, we can conduct a Bartlett test. This functionality is pre-implemented in the package.
bartlett_VNJ = apc.bartlett_test([sub_model_VNJ_1, sub_model_VNJ_2])
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
The test statistic $B^{LN}$ is computed as the ratio of $LR^{LN}$ to the Bartlett correction factor $C$. The p-value is computed by the $\chi^2$ approximation to the distribution of $B^{LN}$. The number of sub-samples is given by $m$.
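For orientation, in the classical formulation of Bartlett's test (which the paper adapts to this setting), the quantities reported below are, for $m$ sub-samples, $$ LR = \Bigl(\sum_\ell df_\ell\Bigr) \log \bar{\sigma}^2 - \sum_\ell df_\ell \log \hat{\sigma}^2_\ell, \qquad C = 1 + \frac{1}{3(m-1)}\Bigl(\sum_\ell \frac{1}{df_\ell} - \frac{1}{\sum_\ell df_\ell}\Bigr), \qquad B = \frac{LR}{C}, $$ with $B$ compared to a $\chi^2_{m-1}$ distribution; the exact definitions used by the package are given in the paper.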
for key, value in bartlett_VNJ.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
We get the same results as in the paper. Specifically, we get a p-value of $0.09$ for the hypothesis so that the Bartlett test does not arm us with strong evidence against the null hypothesis. In the paper, we also conduct an $F$-test for the same hypothesis. The statistic is computed as $$ F_{\sigma^2}^{LN} = \frac{\hat\sigma^{2,LN}_2}{\hat\sigma^{2,LN}_1} $$ which, under the null, is distributed as $\mathrm{F}_{df_2, df_1}$. This is not directly implemented in the package but still easily computed. First we compute the test statistic
F_VNJ_sigma2 = sub_model_VNJ_2.s2/sub_model_VNJ_1.s2 print('F statistic for common variances: {:.2f}'.format(F_VNJ_sigma2))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Now we can compute p-values in one-sided and two-sided tests. For an (equal-tailed) two-sided test, we first find the percentile $P(F_{\sigma^2}^{LN} \leq \mathrm{F}_{df_2, df_1})$. This is given by
from scipy import stats F_VNJ_sigma2_percentile = stats.f.cdf( F_VNJ_sigma2, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid ) print('Percentile of F statistic: {:.2f}'.format(F_VNJ_sigma2_percentile))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
If this is below the 50th percentile, the p-value is simply twice the percentile, otherwise we subtract the percentile from unity and multiply that by two. For intuition, we can look at the plot below. The green areas in the lower and upper tail of the distribution contain the same probability mass, namely $P(F_{\sigma^2}^{LN} \leq \mathrm{F}_{df_2, df_1})$. The two-sided p-value corresponds to the sum of the two areas.
import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.linspace(0.01,5,1000) y = stats.f.pdf(x, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.figure() plt.plot(x, y, label='$\mathrm{F}_{df_2, df_1}$ density') plt.axvline(F_VNJ_sigma2, color='black', linewidth=1, label='$F^{LN}_{\sigma^2}$') tmp = stats.f.cdf(F_VNJ_sigma2, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.fill_between(x[x < F_VNJ_sigma2], y[x < F_VNJ_sigma2], color='green', alpha=0.3) tmp = stats.f.ppf(1-tmp, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid) plt.fill_between(x[x > tmp], y[x > tmp], color='green', alpha=0.3) plt.annotate('Area 0.06', xy=(0.15, 0.1), xytext=(0.75, 0.15), arrowprops=dict(facecolor='black')) plt.annotate('Area 0.06', xy=(2.75, 0.025), xytext=(3, 0.2), arrowprops=dict(facecolor='black')) plt.legend() plt.title('Two-sided F-test') plt.show()
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Since $F_{\sigma^2}^{LN}$ is below the 50th percentile, the two-sided equal tailed p-value is in our case given by
print('F test two-sided p-value: {:.2f}'.format( 2*np.min([F_VNJ_sigma2_percentile, 1-F_VNJ_sigma2_percentile]) ) )
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
The one-sided p-value for the hypothesis $H_{\sigma^2}: \sigma^2_1 \leq \sigma^2_2$ simply corresponds to the area in the lower tail of the distribution. This is because the statistic is $\hat\sigma^{2,LN}_2/\hat\sigma^{2,LN}_1$ so that smaller values work against our hypothesis. Thus, the rejection region is the lower tail. Remark: in the paper, the one-sided hypothesis is given as $H_{\sigma^2}: \sigma^2_1 > \sigma^2_2$. This is a mistake as this corresponds to the alternative.
print('F statistic one-sided p-value: {:.2f}'.format(F_VNJ_sigma2_percentile))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Testing for common linear predictors We can move on to test for common linear predictors: $$ H_{\mu, \sigma^2}: \sigma^2_1 = \sigma^2_2 \quad \text{and} \quad \alpha_{i,\ell} + \beta_{j,\ell} + \delta_\ell = \alpha_i + \beta_j + \delta. $$ If we are happy to accept the hypothesis of common variances $H_{\sigma^2}: \sigma^2_1 = \sigma^2_2$, we can test $H_{\mu, \sigma^2}$ with a simple $F$-test, corresponding to a reduction from $M^{LN}_{\sigma^2}$ to $M^{LN}_{\mu, \sigma^2}$. The test is implemented in the package.
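Presumably, by analogy with the explicit $F$-statistic written out for the reduction in Section 5.3 below, the statistic has the usual nested-model form $$ F_\mu^{LN} = \frac{\bigl(RSS - \sum_\ell RSS_\ell\bigr)/\bigl(df - \sum_\ell df_\ell\bigr)}{\sum_\ell RSS_\ell / \sum_\ell df_\ell}, $$ where $RSS$ and $df$ refer to the full model and the sums run over the sub-samples; the authoritative definition is in the paper.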
f_linpred_VNJ = apc.f_test(model_VNJ, [sub_model_VNJ_1, sub_model_VNJ_2])
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
This returns the test statistic $F_\mu^{LN}$ along with the p-value.
for key, value in f_linpred_VNJ.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
These results, too, match those from the paper. 5.2 Over-dispersed Poisson Chain-Ladder This corresponds to Section 5.2 in the paper. The data are taken from Taylor and Ashe (1983). For these data, the desired full model is an over-dispersed Poisson model given by $$ M^{ODP}_{\mu, \sigma^2}: \quad E(Y_{ij}) = \exp(\alpha_i + \beta_j + \delta), \quad \frac{\mathrm{var}(Y_{ij})}{E(Y_{ij})} = \sigma^2. $$ We proceed just as we did above. First, we set up and estimate the full model and the sub-models. Second, we compute the Bartlett test for common over-dispersion. Third, we test for common linear predictors. Finally, we repeat the testing procedure for different sub-sample structures. Full model We set up and estimate the model $M^{ODP}_{\mu, \sigma^2}$ on the full data set.
model_TA = apc.Model() model_TA.data_from_df(apc.data.pre_formatted.loss_TA(), data_format='CL') model_TA.fit('od_poisson_response', 'AC') print('log-data variance full model: {:.0f}'.format(model_TA.s2)) print('degrees of freedom full model: {:.0f}'.format(model_TA.df_resid))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Sub-models We set up and estimate the models on the four sub-samples. Combined, these models correspond to $M^{ODP}$.
sub_model_TA_1 = model_TA.sub_model(per_from_to=(1,5), fit=True) sub_model_TA_2 = model_TA.sub_model(coh_from_to=(1,5), age_from_to=(1,5), per_from_to=(6,10), fit=True) sub_model_TA_3 = model_TA.sub_model(age_from_to=(6,10), fit=True) sub_model_TA_4 = model_TA.sub_model(coh_from_to=(6,10), fit=True) sub_models_TA = [sub_model_TA_1, sub_model_TA_2, sub_model_TA_3, sub_model_TA_4] for i, sm in enumerate(sub_models_TA): print('Sub-sample I_{}'.format(i+1)) print('--------------') print('over-dispersion: {:.0f}'.format(sm.s2)) print('degrees of freedom: {:.0f}\n'.format(sm.df_resid)) s2_bar_TA = np.array([sm.s2 for sm in sub_models_TA]).dot( np.array([sm.df_resid for sm in sub_models_TA]) )/np.sum([sm.df_resid for sm in sub_models_TA]) print('Weighted avg of over-dispersion: {:.0f}'.format(s2_bar_TA))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Testing for common over-dispersion We perform a Bartlett test for the hypothesis of common over-dispersion across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$. This corresponds to testing a reduction from $M^{ODP}$ to $M^{ODP}_{\sigma^2}$.
bartlett_TA = apc.bartlett_test(sub_models_TA) for key, value in bartlett_TA.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
These results match those in the paper. The Bartlett test yields a p-value of 0.08. Testing for common linear predictors If we are happy to impose common over-dispersion, we can test for common linear predictors across sub-samples. This corresponds to a reduction from $M^{ODP}_{\sigma^2}$ to $M^{ODP}_{\mu, \sigma^2}$.
f_linpred_TA = apc.f_test(model_TA, sub_models_TA) for key, value in f_linpred_TA.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Repeated testing In the paper, we also suggest a procedure to repeat the tests for different sub-sample structures, using a Bonferroni correction for size-control.
sub_models_TA_2 = [model_TA.sub_model(coh_from_to=(1,5), fit=True), model_TA.sub_model(coh_from_to=(6,10), fit=True)] sub_models_TA_3 = [model_TA.sub_model(per_from_to=(1,4), fit=True), model_TA.sub_model(per_from_to=(5,7), fit=True), model_TA.sub_model(per_from_to=(8,10), fit=True)] print('Two sub-samples') print('---------------') print('Bartlett') print('--------') for key, value in apc.bartlett_test(sub_models_TA_2).items(): print('{}: {:.2f}'.format(key, value)) print('\nF-test') print('------') for key, value in apc.f_test(model_TA, sub_models_TA_2).items(): print('{}: {:.2f}'.format(key, value)) print('\nThree sub-samples') print('-----------------') print('Bartlett') print('--------') for key, value in apc.bartlett_test(sub_models_TA_3).items(): print('{}: {:.2f}'.format(key, value)) print('\nF-test') print('------') for key, value in apc.f_test(model_TA, sub_models_TA_3).items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
The test results match those in the paper. For a quick refresher on the Bonferroni correction we turn to Wikipedia. The idea is to control the family-wise error rate: the probability of rejecting at least one null hypothesis when the null is true. In our scenario, we repeat testing three times. Each individual repetition is comprised of two sequential tests: a Bartlett and an $F$-test. Under the null hypothesis (so the true model is $M_{\mu, \sigma^2}^{ODP}$), the two tests are independent so $$P(\text{reject $F$-test } | \text{ not-reject Bartlett test}) = P(\text{reject $F$-test}).$$ Thus, if we test at level $\alpha$, the probability to reject at least once within a repetition is not $\alpha$ but $1-(1-\alpha)^2 \approx 2\alpha$: $$ P(\text{Reject Bartlett or F-test at level }\alpha \text{ for a given split}) \approx 2 \alpha .$$ For thrice repeated testing, we replace $\alpha$ by $\alpha/3$. Then, we bound the probability to reject when the null is true with $$ P\left\{\cup_{i=1}^3\left(\text{Reject Bartlett or F-test at level } \frac{\alpha}{3} \text{ for split }i\right)\right\} \leq 2\alpha \quad \text{(approximately)} .$$ 5.3 Log-Normal (Extended) Chain-Ladder This corresponds to Section 5.3 in the paper. The data are taken from Barnett and Zehnwirth (2000). These data are commonly modeled with a calendar effect. We consider misspecification tests both for a model without a calendar effect $\gamma$, $M^{LN}$, and for a model with one, $M^{LNe}$. The models are given by $$ M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2)$$ and $$ M^{LNe}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \gamma_k + \delta, \sigma^2). $$ No calendar effect We set up and estimate the model $M^{LN}_{\mu, \sigma^2}$ on the full data set.
model_BZ = apc.Model() model_BZ.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL') model_BZ.fit('log_normal_response', 'AC') print('log-data variance full model: {:.4f}'.format(model_BZ.s2)) print('degrees of freedom full model: {:.0f}'.format(model_BZ.df_resid))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Next, the models for the sub-samples.
sub_models_BZ = [model_BZ.sub_model(per_from_to=(1977,1981), fit=True), model_BZ.sub_model(per_from_to=(1982,1984), fit=True), model_BZ.sub_model(per_from_to=(1985,1987), fit=True)] for i, sm in enumerate(sub_models_BZ): print('Sub-sample I_{}'.format(i+1)) print('--------------') print('over-dispersion: {:.4f}'.format(sm.s2)) print('degrees of freedom: {:.0f}\n'.format(sm.df_resid)) s2_bar_BZ = np.array([sm.s2 for sm in sub_models_BZ]).dot( np.array([sm.df_resid for sm in sub_models_BZ]) )/np.sum([sm.df_resid for sm in sub_models_BZ]) print('Weighted avg of over-dispersion: {:.4f}'.format(s2_bar_BZ))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
We move on to the Bartlett test for the hypothesis of common log-data variances across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$.
bartlett_BZ = apc.bartlett_test(sub_models_BZ) for key, value in bartlett_BZ.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
The Bartlett test yields a p-value of 0.05 as in the paper. We test for common linear predictors across sub-samples.
f_linpred_BZ = apc.f_test(model_BZ, sub_models_BZ) for key, value in f_linpred_BZ.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Calendar effect Now we redo the same for the model with calendar effect.
model_BZe = apc.Model() model_BZe.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL') model_BZe.fit('log_normal_response', 'APC') # The only change is in this line. print('log-data variance full model: {:.4f}'.format(model_BZe.s2)) print('degrees of freedom full model: {:.0f}'.format(model_BZe.df_resid)) sub_models_BZe = [model_BZe.sub_model(per_from_to=(1977,1981), fit=True), model_BZe.sub_model(per_from_to=(1982,1984), fit=True), model_BZe.sub_model(per_from_to=(1985,1987), fit=True)] for i, sm in enumerate(sub_models_BZe): print('Sub-sample I_{}'.format(i+1)) print('--------------') print('over-dispersion: {:.4f}'.format(sm.s2)) print('degrees of freedom: {:.0f}\n'.format(sm.df_resid)) s2_bar_BZe = np.array([sm.s2 for sm in sub_models_BZe]).dot( np.array([sm.df_resid for sm in sub_models_BZe]) )/np.sum([sm.df_resid for sm in sub_models_BZe]) print('Weighted avg of log-data variances: {:.4f}'.format(s2_bar_BZe)) bartlett_BZe = apc.bartlett_test(sub_models_BZe) print('\nBartlett test') print('-------------') for key, value in bartlett_BZe.items(): print('{}: {:.2f}'.format(key, value)) print('\nF-test') print('------') f_linpred_BZe = apc.f_test(model_BZe, sub_models_BZe) for key, value in f_linpred_BZe.items(): print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
With this, we replicated Figure 4b. Closer look at the effect of dropping the calendar effect In the paper, we move on to take a closer look at the effect of dropping the calendar effect. We do so in two ways, starting with $$M^{LNe}_{\sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_{i, \ell} + \beta_{j, \ell} + \gamma_{k, \ell} + \delta_\ell, \sigma^2).$$ We want to test for a reduction to $$M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2).$$ In the figure below, we illustrate two different testing procedures that would get us there. <center> <img src="https://user-images.githubusercontent.com/25103918/41599423-27d94fec-73a1-11e8-9fe1-3f3a1a9e184a.png" alt="Two ways to test for reduction to the same model" width="400px"/> </center> We can either move down, testing $H^{LNe}_{\sigma^2, \mu}$, and then right, testing $H_\gamma: \gamma_k = 0$; or we can move right, testing $H_{\gamma_{k, \ell}}: \gamma_{k, \ell} = 0$, and then down, testing $H^{LN}_{\sigma^2, \mu}$. Looking at the first way, we already saw that $H^{LNe}_{\sigma^2, \mu}$ cannot be rejected. To test for the absence of a calendar effect, we can do an (exact) $F$ test.
model_BZe.fit_table(attach_to_self=False).loc[['AC']]
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
We see that the p-value (P>F) is close to zero. Next, we consider the second way. We first test $H_{\gamma_{k, \ell}}$. Since $\sigma^2$ is common across the array from the outset, we can do this with a simple $F$-test: $$ \frac{(RSS_.^{LN} - RSS_.^{LNe})/(df_.^{LN} - df_.^{LNe})}{RSS_.^{LNe}/df_.^{LNe}} \stackrel{D}{=} F_{df_.^{LN} - df_.^{LNe}, df_.^{LNe}} $$
rss_BZe_dot = np.sum([sub.rss for sub in sub_models_BZe])
rss_BZ_dot = np.sum([sub.rss for sub in sub_models_BZ])
df_BZe_dot = np.sum([sub.df_resid for sub in sub_models_BZe])
df_BZ_dot = np.sum([sub.df_resid for sub in sub_models_BZ])

F_BZ = ((rss_BZ_dot - rss_BZe_dot)/(df_BZ_dot - df_BZe_dot)) / (rss_BZe_dot/df_BZe_dot)
p_F_BZ = stats.f.sf(F_BZ, dfn=df_BZ_dot - df_BZe_dot, dfd=df_BZe_dot)

print('p-value of F-test: {:.2f}'.format(p_F_BZ))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
Thus this is not rejected. However, we already saw that a reduction from $M^{LN}_{\sigma^2}$ to $M^{LN}_{\mu, \sigma^2}$ is rejected. Repeated testing Just as for the Taylor and Ashe (1983) data, we repeat testing for different splits.
sub_models_BZe_2 = [model_BZe.sub_model(coh_from_to=(1977,1981), fit=True),
                    model_BZe.sub_model(coh_from_to=(1982,1987), fit=True)]

sub_models_BZe_4 = [model_BZe.sub_model(per_from_to=(1977,1981), fit=True),
                    model_BZe.sub_model(coh_from_to=(1977,1982), age_from_to=(1,5),
                                        per_from_to=(1982,1987), fit=True),
                    model_BZe.sub_model(age_from_to=(6,11), fit=True),
                    model_BZe.sub_model(coh_from_to=(1983,1987), fit=True)]

print('Two sub-samples')
print('---------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_BZe_2).items():
    print('{}: {:.3f}'.format(key, value))
print('\nF-test')
print('------')
for key, value in apc.f_test(model_BZe, sub_models_BZe_2).items():
    print('{}: {:.3f}'.format(key, value))

print('\nFour sub-samples')
print('----------------')
print('Bartlett')
print('--------')
for key, value in apc.bartlett_test(sub_models_BZe_4).items():
    print('{}: {:.2f}'.format(key, value))
print('\nF-test')
print('------')
for key, value in apc.f_test(model_BZe, sub_models_BZe_4).items():
    print('{}: {:.2f}'.format(key, value))
apc/vignettes/vignette_misspecification.ipynb
JonasHarnau/apc
gpl-3.0
To create an animation we need to do two things: Create the initial visualization, with handles on the figure and axes object. Write a function that will get called for each frame that updates the data and returns the next frame.
duration = 10.0  # this is the total time
N = 500

# Make the initial plot outside the animation function
fig_mpl, ax = plt.subplots(1, figsize=(5,3), facecolor='white')
x = np.random.normal(0.0, 1.0, size=N)
y = np.random.normal(0.0, 1.0, size=N)
plt.sca(ax)
plt.xlim(-3,3)
plt.ylim(-3,3)
scat = ax.scatter(x, y)

def make_frame_mpl(t):
    # t is the current time between [0,duration]
    newy = y*np.cos(4.0*t/duration)
    # Just update the data on each frame
    # set_offsets takes a Nx2 dimensional array of positions
    scat.set_offsets(np.transpose(np.vstack([x, newy])))
    # The mplfig_to_npimage function converts the matplotlib figure to an
    # image that moviepy can work with:
    return mplfig_to_npimage(fig_mpl)

animation = mpy.VideoClip(make_frame_mpl, duration=duration)
days/day20/MoviePy.ipynb
AaronCWong/phys202-2015-work
mit
Use the following call to generate and display the animation in the notebook:
animation.ipython_display(fps=24)
days/day20/MoviePy.ipynb
AaronCWong/phys202-2015-work
mit
Use the following to save the animation to a file that can be uploaded to YouTube:
animation.write_videofile("scatter_animation.mp4", fps=20)
days/day20/MoviePy.ipynb
AaronCWong/phys202-2015-work
mit
Problem 1) Download and Examine the Data The images for this exercise can be downloaded from here: https://northwestern.box.com/s/x6nzuqtdys3jo1nufvswkx62o44ifa11. Be sure to place the images in the same directory as this notebook (but do not add them to your git repo!). Before we dive in, here is some background information on the images we will be analyzing: the imaging data and the group information all come from the Galaxy And Mass Assembly (GAMA) survey, and more specifically from its panchromatic data release. Many of the difficult steps associated with multiband galaxy photometry have already been done for you: GAMA constructs large mosaics of co-added FITS images in 20 bands to measure photometry. The images we will use today are from the g, r, and i mosaics that I (MA) built $\sim$7 years ago. They are built from SDSS observations in those bands, and have all been convolved to a seeing of approximately 2”, background subtracted, and renormalized to a common zeropoint of 30 magnitudes. The group catalogue was done by Aaron Robotham (see https://arxiv.org/abs/1106.1994). In the downloaded directory there are g, r, and i images of 36 galaxies that all belong to the same cluster. These image cutouts have been centered on the galaxy position, are $\sim$80.7" on a side, and have a pixel scale of 0.339"/pix. To begin we will focus on a single galaxy, before eventually working on the entire cluster. Problem 1a Display the $r$-band image of the galaxy 85698. Use an asinh stretch.
r_filename = "galaxy_images/85698_sdss_r.fits" r_data = fits.getdata( # complete plt.imshow( # complete plt.colorbar() plt.tight_layout()
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
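Here is a hedged sketch of one possible way to fill in the cell above; it is not the official solution. It uses astropy.visualization.simple_norm to apply the asinh stretch and assumes the downloaded images sit in a galaxy_images/ directory as described in the problem statement.

# One possible completion (a sketch, not the official solution).
# Assumes the FITS files live in galaxy_images/ next to this notebook.
from astropy.io import fits
from astropy.visualization import simple_norm
import matplotlib.pyplot as plt

r_filename = "galaxy_images/85698_sdss_r.fits"
r_data = fits.getdata(r_filename)

norm = simple_norm(r_data, stretch='asinh')  # asinh stretch
plt.imshow(r_data, norm=norm, origin='lower', cmap='Greys_r')
plt.colorbar()
plt.tight_layout()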
Problem 1b Roughly how many sources are present in the image? Hint - an exact count is not required here. Solution 1b Write your answer here Problem 2) Source Detection Prior to measuring any properties of sources in the image, we must first determine the number of sources present in the image. Source detection is challenging, and there are many different thresholding approaches. Today, we will streamline this step in order to spend more time focusing on the issues associated with matching photometric measurements across different images. We will use the detect_sources function in photutils to identify objects in our image. The simplest model assumes that the background is constant over the entire image. Once the background is determined, it can be subtracted from the image to determine high significance "peaks" corresponding to sources. After this week, we have learned that the background isn't so simple, nevertheless we will use the detect_threshold convenience function to estimate a constant background for our images. detect_threshold produces a "detection image" that can be used to estimate the significance of the flux detected in any individual pixel. Problem 2a Create a detection threshold image using the detect_threshold function, set the snr parameter to 3.
threshold = detect_threshold( # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
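As a hedged sketch, the completed call might look like the line below. Note that the snr keyword named in the text was renamed to nsigma in later photutils releases, so the exact spelling depends on the installed version.

# Possible completion (sketch); use nsigma=3 instead of snr=3 on newer photutils.
threshold = detect_threshold(r_data, snr=3)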
Problem 2b Develop better intuition for the detection image by plotting it side-by-side with the actual image of the field. Do you notice anything interesting about the threshold image?
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7,4))

ax1.imshow( # complete
ax2.imshow( # complete

fig.tight_layout()
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Following this measurement of the background, we can find sources using the detect_sources function. Briefly, this function uses image segmentation to define and assign pixels to sources, which are defined as objects with $N$ connected pixels that are $s$ times brighter than the background (we already set $s = 3$). Read the docs for further details. Problem 2c Generate a segmentation image using detect_sources. Keep only sources with $N = 7$ pixels, which is keyword arg npixels in detect_sources. If you have extra time Come back to this problem and see how changing $N$ affects your results.
segm = detect_sources( # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
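A hedged sketch of the completed call, keeping sources with at least 7 connected pixels above the threshold image from Problem 2a:

# Possible completion (sketch).
segm = detect_sources(r_data, threshold, npixels=7)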
Problem 2d Plot the segmentation image side-by-side with the actual image of the field. Are you concerned or happy with the results? Hint - no stretch should be applied to the segmentation image.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7,4))

ax1.imshow(# complete
ax2.imshow(# complete

fig.tight_layout()
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3) Source Centroids and Shapes Now that we have defined all of the sources in the image, we must determine the centroid for each source (in order to ultimately make some form of photometric measurement). As Dora mentioned earlier in the week, there are many ways to determine the centroid of a given source (e.g., fitting a model, finding the max of the marginalized 1-d distribution, etc). Today we will use the centroid_com function, which calculates the "center of mass" of the 2d image moments to determine the source centroids. To measure the centroid we want to isolate the source in question, thus we have generated a convenience function to return the extent of each source from its corresponding segmentation image.
def get_source_extent(segm_data, source_num):
    """
    Determine extent of sources for centroid measurements

    Parameters
    ----------
    segm_data : array-like
        Segmentation image produced by photutils.segmentation.detect_sources

    source_num : int
        The source number from the segmentation image

    Returns
    -------
    source_extent : list
        The minimum y, maximum y, minimum x, and maximum x pixel values
        over which a source is detected
    """
    source_pix = np.where(segm_data == source_num)
    source_extent = [np.min(source_pix[0]), np.max(source_pix[0]),
                     np.min(source_pix[1]), np.max(source_pix[1])]

    return source_extent
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3a Measure the centroid for each source detected in the image using the centroid_com function. Hint - you'll want to start with a subset of pixels containing the source. Hint 2 - centroids are measured relative to the provided data, you'll need to convert back to "global" pixel values.
xcentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")
ycentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")

for source_num in np.unique(segm.data)[1:]:
    source_extent = get_source_extent( # complete
    xc, yc = centroid_com( # complete
    xcentroid[source_num-1], ycentroid[source_num-1] = # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
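One hedged way to complete the centroid loop is sketched below. It measures centroid_com on the rectangular cutout returned by get_source_extent and adds the cutout offsets back to get image coordinates; it does not mask neighbouring sources that fall inside the same cutout, which a more careful solution might do.

# Possible completion (sketch); assumes centroid_com returns (x, y) order.
xcentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")
ycentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")

for source_num in np.unique(segm.data)[1:]:
    ymin, ymax, xmin, xmax = get_source_extent(segm.data, source_num)
    xc, yc = centroid_com(r_data[ymin:ymax+1, xmin:xmax+1])
    # convert from cutout coordinates back to "global" pixel values
    xcentroid[source_num-1], ycentroid[source_num-1] = xc + xmin, yc + ymin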
Problem 3b Overplot the derived centroids on the image data as a sanity check for your methodology.
fig, ax1 = plt.subplots()

ax1.imshow( # complete
ax1.plot( # complete

fig.tight_layout()
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
With an estimate of the centroid of every source in hand, we now need to determine the ellipse that best describes the galaxies in order to measure their flux. Fortunately, this can be done using the source_properties function within the photutils.morphology package. Briefly, source_properties takes both the data array and the segmentation image as inputs, and then calculates properties for every source. The list of properties is long (see the attributes list), and for now we only care about the semi-major and semi-minor axes as well as the orientation of the source, all of which are needed to measure the flux in an elliptical aperture [this is a lot easier than trying to fit concentric ellipses, no?]. Problem 3c Use source_properties to determine $a$, $b$, and the orientation of each source.
cat = source_properties( # complete
tbl = cat.to_table(columns=['id', 'semimajor_axis_sigma',
                            'semiminor_axis_sigma', 'orientation'])
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
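A hedged sketch of the call, following the older photutils API named in the notebook (newer releases replace source_properties with SourceCatalog):

# Possible completion (sketch).
cat = source_properties(r_data, segm)
tbl = cat.to_table(columns=['id', 'semimajor_axis_sigma',
                            'semiminor_axis_sigma', 'orientation'])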
Problem 4) Photometry We now have all the necessary information to measure the flux in elliptical apertures. The EllipticalAperture function in photutils defines apertures on an image based on input centroids, $a$, $b$, and orientation values. Problem 4a Define apertures for the sources that are detected in the image. Note - the semimajor_axis_sigma reported by source_properties() is "the 1-sigma standard deviation along the semimajor axis of the 2D Gaussian function that has the same second-order central moments as the source" according to the docs. Thus, be sure to multiply $a$ and $b$ by a factor of 3 in order to capture $\sim$3$\sigma$ of the source flux. Note to the note - this isn't well motivated, but for the sake of argument assume that this adjustment creates a reasonable aperture.
positions = # complete
apertures = [EllipticalAperture( # complete
# complete
# complete
# complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
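Below is a hedged sketch of one way to build the apertures. It scales the 1-sigma axes by 3 as instructed and assumes the orientation attribute is an angle from the x axis in the units EllipticalAperture expects (radians); depending on your photutils version you may need to convert from degrees.

# Possible completion (sketch). .value strips the astropy units that the
# older source_properties output attaches to these attributes.
positions = [(xc, yc) for xc, yc in zip(xcentroid, ycentroid)]
apertures = []
for pos, obj in zip(positions, cat):
    apertures.append(EllipticalAperture(pos,
                                        a=3*obj.semimajor_axis_sigma.value,
                                        b=3*obj.semiminor_axis_sigma.value,
                                        theta=obj.orientation.value))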
Problem 4b Overplot your apertures on the sources that have been detected. Hint - each aperture object has a plot() attribute that can be used to show the aperture for each source.
fig, ax1 = plt.subplots()

ax1.imshow( # complete
# complete
# complete

fig.tight_layout()
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
With apertures now defined, we can finally measure the flux of each source. The aperture_photometry function returns the flux (actually counts) in an image for the provided apertures. It takes the image, apertures, and background image as arguments. Note - the background has already been subtracted from these images so we currently do not have an estimate of the full background for these sources. We will create a background image that is approximately correct (we know this because we know the properties of the SDSS survey and detector). In this case what we are doing is not only incorrect, it's entirely made up and should not be repeated in your own work. Nevertheless, this (bad) approximation is necessary to produce uncertainty estimates. Execute the cell below to create an uncertainty image to use with the aperture_photometry function.
bkg = np.random.normal(100, 35, r_data.shape)
uncertainty_img = calc_total_error(r_data, bkg - np.mean(bkg), 1)
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4c Measure the counts and uncertainty detected from each source within the apertures defined in 4a. Hint - you will need to loop over each aperture as aperture_photometry does not take multiple apertures of different shapes as a single argument.
source_cnts = # complete
source_cnts_unc = # complete

for source_num, ap in enumerate(apertures):
    phot = # complete
    source_cnts[source_num] = # complete
    source_cnts_unc[source_num] = # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
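A hedged sketch of the photometry loop, passing the made-up uncertainty image through the error keyword so that aperture_photometry also returns an 'aperture_sum_err' column:

# Possible completion (sketch).
source_cnts = np.zeros(len(apertures))
source_cnts_unc = np.zeros(len(apertures))

for source_num, ap in enumerate(apertures):
    phot = aperture_photometry(r_data, ap, error=uncertainty_img)
    source_cnts[source_num] = phot['aperture_sum'][0]
    source_cnts_unc[source_num] = phot['aperture_sum_err'][0]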
The images have been normalized to a zero point of 30. Thus, we can convert from counts to magnitudes via the following equation: $$m = 30 - 2.5 \log (\mathrm{counts}).$$ Recall from Dora's talk that the uncertainty of the magnitude measurements can be calculated as: $$\frac{2.5}{\ln(10)} \frac{\sigma_\mathrm{counts}}{\mathrm{counts}}.$$ Problem 4d Calculate the magnitude of each source in the image.
source_mag = # complete
source_mag_unc = # complete

for source_num, (mag, mag_unc) in enumerate(zip(source_mag, source_mag_unc)):
    print("Source {:d} has m = {:.3f} +/- {:.3f} mag".format( # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
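A hedged sketch applying the two formulas above to the counts measured in Problem 4c:

# Possible completion (sketch): zeropoint-30 conversion plus error propagation.
source_mag = 30 - 2.5*np.log10(source_cnts)
source_mag_unc = 2.5/np.log(10) * source_cnts_unc/source_cnts

for source_num, (mag, mag_unc) in enumerate(zip(source_mag, source_mag_unc)):
    print("Source {:d} has m = {:.3f} +/- {:.3f} mag".format(source_num + 1,
                                                             mag, mag_unc))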
That's it! You've measured the magnitude for every source in the image. As previously noted, the images provided for this dataset are centered are galaxies within a cluster, and ultimately, these galaxies are all that we care about. For this first image, that means we care about the galaxy centered at $(x,y) \approx (118, 118)$. Problem 4e What is the magnitude of the galaxy we care about for this image? [We will need this moving forward]
# complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5) Multiwavelength Photometry Ultimately we want to measure colors for these galaxies. We now know the $r$-band magnitude for galaxy 85698; we still need to measure the $g$ and $i$ band magnitudes as well. Problem 5a Using the various pieces described above, write a function to measure the magnitude of the galaxy at the center of the image. You should create a new background image for every field. Hint - creating an actual function is essential as we will eventually run this on every image. Hint 2 - source_properties directly measures source centroids, so use this, as it will be faster.
def cluster_galaxy_photometry(data):
    '''
    Determine the magnitude of the galaxy at the center of the image

    Parameters
    ----------
    data : array-like
        Background subtracted 2D image centered on the galaxy
        of interest

    Returns
    -------
    mag : float
        Magnitude of the galaxy

    mag_unc : float
        Uncertainty of the magnitude measurement
    '''

    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete

    return mag, mag_unc
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
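One hedged sketch of such a function is given below; it is not the official solution and is named with a _sketch suffix to avoid clashing with your own cluster_galaxy_photometry. It simply chains the steps from Problems 2-4 and picks the segment that covers the central pixel of the cutout, assuming the labels from detect_sources run consecutively from 1 and that the central pixel actually falls inside a detected segment; a robust solution would guard against both.

# Possible sketch, not the official solution.
def cluster_galaxy_photometry_sketch(data):
    threshold = detect_threshold(data, snr=3)        # snr -> nsigma on newer photutils
    segm = detect_sources(data, threshold, npixels=7)
    cat = source_properties(data, segm)

    # the galaxy of interest sits at the centre of the cutout
    yc_pix, xc_pix = data.shape[0]//2, data.shape[1]//2
    central_label = segm.data[yc_pix, xc_pix]
    obj = cat[central_label - 1]                     # assumes consecutive labels

    ap = EllipticalAperture((obj.xcentroid.value, obj.ycentroid.value),
                            a=3*obj.semimajor_axis_sigma.value,
                            b=3*obj.semiminor_axis_sigma.value,
                            theta=obj.orientation.value)

    bkg = np.random.normal(100, 35, data.shape)      # same made-up background as above
    uncertainty_img = calc_total_error(data, bkg - np.mean(bkg), 1)

    phot = aperture_photometry(data, ap, error=uncertainty_img)
    cnts = phot['aperture_sum'][0]
    cnts_unc = phot['aperture_sum_err'][0]

    mag = 30 - 2.5*np.log10(cnts)
    mag_unc = 2.5/np.log(10) * cnts_unc/cnts
    return mag, mag_unc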
Problem 5b Confirm that the function calculates the same $r$-band mag that was calculated in Problem 4.
# complete

print("""Previously, we found m = {:.3f} mag.
This new function finds m = {:.3f} mag.""".format( # complete
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5c Use this new function to calculate the galaxy magnitude in the $g$ and the $i$ band, and determine the $g - r$ and $r - i$ colors of the galaxy.
g_data = fits.getdata( # complete
i_data = fits.getdata( # complete

# complete
# complete
# complete

print("""The g-r color = {:.3f} +/- {:.3f} mag.
The r-i color = {:.3f} +/- {:.3f} mag""".format(g_mag - r_mag, np.hypot(g_mag_unc, r_mag_unc),
                                                r_mag - i_mag, np.hypot(r_mag_unc, i_mag_unc)))
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
But wait! Problem 5d Was this calculation "fair"? Hint - this is a relatively red galaxy. Solution 5d This calculation was not "fair" because identical apertures were not used in all 3 filters. Problem 5e [Assuming your calculation was not fair] Calculate the $g - r$ and $r - i$ colors of the galaxy in a consistent fashion. Hint - split your initial function into two functions, one to determine an aperture and another to measure photometry. Use the $r$-band image (where the signal-to-noise ratio of the data is highest) to define the aperture for all 3 images.
def cluster_galaxy_aperture(data):
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    # complete
    return aperture

def cluster_galaxy_phot(data, aperture):
    # complete
    # complete
    # complete
    # complete
    # complete
    return mag, mag_unc

r_ap = # complete

# complete
# complete
# complete

print("""The g-r color = {:.3f} +/- {:.3f} mag.
The r-i color = {:.3f} +/- {:.3f} mag""".format(g_mag - r_mag, np.hypot(g_mag_unc, r_mag_unc),
                                                r_mag - i_mag, np.hypot(r_mag_unc, i_mag_unc)))
Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Using Iris Dataset
# import some data to play with
iris = datasets.load_iris()

# look at individual aspects by uncommenting the below
#iris.data
#iris.feature_names
#iris.target
#iris.target_names
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
The original author converted the data to Pandas DataFrames. Note that we have separated out the inputs (x) and the outputs/labels (y).
# Store the inputs as a Pandas Dataframe and set the column names
x = pd.DataFrame(iris.data)
x.columns = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']

y = pd.DataFrame(iris.target)
y.columns = ['Targets']
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
Visualise the data It is always important to have a look at the data. We will do this by plotting two scatter plots: one looking at the Sepal values and another looking at the Petal values. We will also use some colours so the plots are clearer.
# Set the size of the plot
plt.figure(figsize=(14,7))

# Create a colormap
colormap = np.array(['red', 'lime', 'black'])

# Plot Sepal
plt.subplot(1, 2, 1)
plt.scatter(x.Sepal_Length, x.Sepal_Width, c=colormap[y.Targets], s=40)
plt.title('Sepal')

plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Petal');
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
Build the K Means Model - non-Spark example This is the easy part, provided you have the data in the correct format (which we do). Here we only need two lines. First we create the model and specify the number of clusters the model should find (n_clusters=3), and then we fit the model to the data.
# K Means Cluster
model = KMeans(n_clusters=3)
model.fit(x)

# This is what KMeans thought
model.labels_
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
Visualise the classifier results Let's plot the actual classes against the predicted classes from the K Means model. Here we are plotting the Petal Length and Width; however, each plot changes the colors of the points, using c=colormap[y.Targets] for the original classes and c=colormap[model.labels_] for the predicted classes.
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))

# Create a colormap
colormap = np.array(['red', 'lime', 'black'])

# Plot the Original Classifications
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')

# Plot the Models Classifications
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[model.labels_], s=40)
plt.title('K Mean Classification');
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
Fixing the coloring Here we are going to change the class labels; we are not changing any of the classification groups, we are simply giving each group the correct number. We need to do this to measure the performance. In the code below we use np.choose() to assign new values: basically, we change the 1s in the predicted values to 0s and the 0s to 1s. Class 2 matched, so we can leave it. By running the two print functions you can see that all we have done is swap the values. NOTE: your results might be different from mine; if so, you will have to figure out which class matches which and adjust the order of the values in the np.choose() function (a quick cross-tabulation, sketched after the next code cell, can help with this).
# The fix, we convert all the 1s to 0s and 0s to 1s.
predY = np.choose(model.labels_, [1, 0, 2]).astype(np.int64)

print (model.labels_)
print (predY)
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0
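If the cluster-to-class matching is not obvious, one hedged way to check it is a quick cross-tabulation of the true targets against the raw cluster labels:

# Sketch: rows are the true classes, columns the raw KMeans labels; the
# largest count in each row suggests the remapping to pass to np.choose.
import pandas as pd
print(pd.crosstab(y.Targets, model.labels_))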
Re-plot Now we can re-plot the data as before, but using predY instead of model.labels_.
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))

# Create a colormap
colormap = np.array(['red', 'lime', 'black'])

# Plot Original
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')

# Plot Predicted with corrected values
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[predY], s=40)
plt.title('K Mean Classification');
notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb
gregoryg/cdh-projects
apache-2.0