markdown
stringlengths 0
37k
| code
stringlengths 1
33.3k
| path
stringlengths 8
215
| repo_name
stringlengths 6
77
| license
stringclasses 15
values |
---|---|---|---|---|
Example output:
Serving function input: bytes_inputs
projects.locations.models.upload
Request | model = {
"display_name": "custom_job_TF" + TIMESTAMP,
"metadata_schema_uri": "",
"artifact_uri": model_artifact_dir,
"container_spec": {
"image_uri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest"
},
}
print(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__["_pb"])) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "custom_job_TF20210227173057",
"containerSpec": {
"imageUri": "gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest"
},
"artifactUri": "gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057"
}
}
```
Call | request = clients["model"].upload_model(parent=PARENT, model=model) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264"
} | # The full unique ID for the model
model_id = result.model
print(model_id) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make batch predictions
Make the batch input file
Let's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be JSONL. | import base64
import json
import cv2
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
cv2.imwrite("tmp1.jpg", (test_image_1).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2).astype(np.uint8))
gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file("tmp1.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
f.write(json.dumps({input_name: {"b64": b64str}}) + "\n")
bytes = tf.io.read_file("tmp2.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
f.write(json.dumps({input_name: {"b64": b64str}}) + "\n")
! gsutil cat $gcs_input_uri | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z"}}
{"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z"}}
projects.locations.batchPredictionJobs.create
Request | batch_prediction_job = aip.BatchPredictionJob(
display_name="custom_job_TF" + TIMESTAMP,
model=model_id,
input_config={
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
model_parameters=ParseDict(
{"confidenceThreshold": 0.5, "maxPredictions": 2}, Value()
),
output_config={
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
dedicated_resources={
"machine_spec": {"machine_type": "n1-standard-2", "accelerator_type": 0},
"starting_replica_count": 1,
"max_replica_count": 1,
},
)
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"maxPredictions": 10000.0,
"confidenceThreshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
Call | request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248",
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"maxPredictions": 10000.0,
"confidenceThreshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-27T18:00:30.887438Z",
"updateTime": "2021-02-27T18:00:30.887438Z"
} | # The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
projects.locations.batchPredictionJobs.get
Call | request = clients["job"].get_batch_prediction_job(name=batch_job_id) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248",
"displayName": "custom_job_TF_TF20210227173057",
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl"
]
}
},
"modelParameters": {
"confidenceThreshold": 0.5,
"maxPredictions": 10000.0
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210227173057/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_RUNNING",
"createTime": "2021-02-27T18:00:30.887438Z",
"startTime": "2021-02-27T18:00:30.938444Z",
"updateTime": "2021-02-27T18:00:30.938444Z"
} | def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*
! gsutil cat $folder/prediction*
break
time.sleep(60) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.errors_stats-00000-of-00001
gs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.results-00000-of-00001
{"instance": {"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z"}}, "prediction": [0.0407731421, 0.125140116, 0.118551917, 0.100501947, 0.128865793, 0.089787662, 0.157575116, 0.121281914, 0.0312845968, 0.0862377882]}
{"instance": {"bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z"}}, "prediction": [0.0406896845, 0.125281364, 0.118567884, 0.100639313, 0.12864624, 0.0898737088, 0.157521054, 0.121037535, 0.0313298739, 0.0864133239]}
Make online predictions
projects.locations.endpoints.create
Request | endpoint = {"display_name": "custom_job_TF" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "custom_job_TF_TF20210227173057"
}
}
Call | request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"name": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400"
} | # The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
projects.locations.endpoints.deployModel
Request | deployed_model = {
"model": model_id,
"display_name": "custom_job_TF" + TIMESTAMP,
"dedicated_resources": {
"min_replica_count": 1,
"machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0},
},
}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split={"0": 100},
).__dict__["_pb"]
)
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/8844102097923211264",
"displayName": "custom_job_TF_TF20210227173057",
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-4"
},
"minReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
Call | request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100}
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"deployedModel": {
"id": "2064302294823862272"
}
} | # The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
projects.locations.endpoints.predict
Prepare file for online prediction
Request | import base64
import cv2
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image, test_label = x_test[0], y_test[0]
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
instances_list = [{"bytes_inputs": {"b64": b64str}}]
prediction_request = aip.PredictRequest(endpoint=endpoint_id)
prediction_request.instances.append(instances_list)
print(MessageToJson(prediction_request.__dict__["_pb"])) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
```
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/6810814827095654400",
"instances": [
[
{
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"
}
}
]
]
}
```
Call | request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances_list) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{
"predictions": [
[
0.0406113081,
0.125313938,
0.118626907,
0.100714684,
0.128500372,
0.0899592042,
0.157601,
0.121072263,
0.0312432405,
0.0863570943
]
],
"deployedModelId": "2064302294823862272"
}
projects.locations.endpoints.undeployModel
Call | request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
) | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial. | delete_model = True
delete_endpoint = True
delete_custom_job = True
delete_batchjob = True
delete_bucket = True
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the custom training using the Vertex AI fully qualified identifier for the custom training
try:
if delete_custom_job:
clients["job"].delete_custom_job(name=custom_training_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Import SHL Prediction Module: shl_pm | import shl_pm | rnd03/shl_sm_NoOCR_v010.ipynb | telescopeuser/uat_shl | mit |
shl_sm parameters:
shl_sm simulated real time per second price ata, fetch from csv: | # which month to predictsimulate?
# shl_sm_parm_ccyy_mm = '2017-04'
# shl_sm_parm_ccyy_mm_offset = 1647
# shl_sm_parm_ccyy_mm = '2017-05'
# shl_sm_parm_ccyy_mm_offset = 1708
# shl_sm_parm_ccyy_mm = '2017-06'
# shl_sm_parm_ccyy_mm_offset = 1769
shl_sm_parm_ccyy_mm = '2017-07'
shl_sm_parm_ccyy_mm_offset = 1830
#----------------------------------
shl_sm_data = pd.read_csv('shl_sm_data/history_ts.csv')
shl_sm_data | rnd03/shl_sm_NoOCR_v010.ipynb | telescopeuser/uat_shl | mit |
shl_pm Initialization | shl_pm.shl_initialize(shl_sm_parm_ccyy_mm)
# Upon receiving 11:29:00 second price, to predict till 11:29:49 <- one-step forward price forecasting
for i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_offset+50): # use csv data as simulatino
# for i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_offset+55): # use csv data as simulatino
print('\n<<<< Record No.: %5d >>>>' % i)
print(shl_sm_data['ccyy-mm'][i]) # format: ccyy-mm
print(shl_sm_data['time'][i]) # format: hh:mm:ss
print(shl_sm_data['bid-price'][i]) # format: integer
######################################################################################################################
# call prediction function, returned result is in 'list' format, i.e. [89400]
shl_sm_prediction_list_local_1 = shl_pm.shl_predict_price_k_step(shl_sm_data['time'][i], shl_sm_data['bid-price'][i],1) # <- one-step forward price forecasting
print(shl_sm_prediction_list_local_1)
######################################################################################################################
# Upon receiving 11:29:50 second price, to predict till 11:30:00 <- ten-step forward price forecasting
for i in range(shl_sm_parm_ccyy_mm_offset+50, shl_sm_parm_ccyy_mm_offset+51): # use csv data as simulation
print('\n<<<< Record No.: %5d >>>>' % i)
print(shl_sm_data['ccyy-mm'][i]) # format: ccyy-mm
print(shl_sm_data['time'][i]) # format: hh:mm:ss
print(shl_sm_data['bid-price'][i]) # format: integer/boost-trap-float
######################################################################################################################
# call prediction function, returned result is in 'list' format, i.e. [89400, 89400, 89400, 89500, 89500, 89500, 89500, 89600, 89600, 89600]
shl_sm_prediction_list_local_k = shl_pm.shl_predict_price_k_step(shl_sm_data['time'][i], shl_sm_data['bid-price'][i],10) # <- ten-step forward price forecasting
print(shl_sm_prediction_list_local_k)
######################################################################################################################
shl_pm.shl_data_pm_1_step
shl_pm.shl_data_pm_k_step
print(shl_sm_prediction_list_local_1)
print(shl_sm_prediction_list_local_k)
shl_pm.shl_data_pm_1_step.tail(11)
shl_pm.shl_data_pm_k_step.tail(20) | rnd03/shl_sm_NoOCR_v010.ipynb | telescopeuser/uat_shl | mit |
MISC - Validation | %matplotlib inline
import matplotlib.pyplot as plt
shl_data_pm_k_step_local = shl_pm.shl_data_pm_k_step.copy()
shl_data_pm_k_step_local.index = shl_data_pm_k_step_local.index + 1
shl_data_pm_k_step_local
# bid is predicted bid-price from shl_pm
plt.figure(figsize=(12,6))
plt.plot(shl_pm.shl_data_pm_k_step['f_current_bid'])
# plt.plot(shl_data_pm_1_step_k_step['f_1_step_pred_price'].shift(1))
plt.plot(shl_data_pm_k_step_local['f_1_step_pred_price'])
# bid is actual bid-price from raw dataset
shl_data_actual_bid_local = shl_sm_data[shl_sm_parm_ccyy_mm_offset:shl_sm_parm_ccyy_mm_offset+61].copy()
shl_data_actual_bid_local.reset_index(inplace=True)
plt.figure(figsize=(12,6))
plt.plot(shl_data_actual_bid_local['bid-price'])
plt.plot(shl_data_pm_k_step_local['f_1_step_pred_price'])
plt.figure(figsize=(12,6))
plt.plot(shl_data_actual_bid_local['bid-price'])
plt.plot(shl_data_pm_k_step_local['f_1_step_pred_price_rounded'])
# pd.concat([shl_data_actual_bid_local['bid-price'], shl_data_pm_k_step_local['f_1_step_pred_price'], shl_data_pm_k_step_local['f_1_step_pred_price'] - shl_data_actual_bid_local['bid-price']], axis=1, join='inner')
pd.concat([shl_data_actual_bid_local['bid-price'].tail(11), shl_data_pm_k_step_local['f_1_step_pred_price'].tail(11), shl_data_pm_k_step_local['f_1_step_pred_price'].tail(11) - shl_data_actual_bid_local['bid-price'].tail(11)], axis=1, join='inner')
| rnd03/shl_sm_NoOCR_v010.ipynb | telescopeuser/uat_shl | mit |
<h2>Reproducible Research</h2> | %%python
import os
os.system('python -V')
os.system('python ../helper_modules/Package_Versions.py')
SEED = 7
np.random.seed(SEED)
CURR_DIR = os.getcwd()
DATA_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/ALL_IMGS/'
AUG_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/AUG_DIAGNOSIS_IMGS/'
meta_file = '../../Meta_Data_Files/meta_data_all.csv'
PATHO_INX = 6 # Column number of pathology label in meta_file
FILE_INX = 1 # Column number of File name in meta_file
meta_data, _ = tqdm( bc.load_meta(meta_file, patho_idx=PATHO_INX, file_idx=FILE_INX,
balanceByRemoval=False, verbose=False) )
# Minor addition to reserve records in meta data for which we actually have images:
meta_data = bc.clean_meta(meta_data, DATA_DIR)
# Only work with benign and malignant classes:
for k,v in meta_data.items():
if v not in ['benign', 'malignant']:
del meta_data[k]
bc.pprint('Loading data')
cats = bc.bcLabels(['benign', 'malignant'])
# For smaller images supply tuple argument for a parameter 'imgResize':
# X_data, Y_data = bc.load_data(meta_data, DATA_DIR, cats, imgResize=(150,150))
X_data, Y_data = tqdm( bc.load_data(meta_data, DATA_DIR, cats) )
cls_cnts = bc.get_clsCnts(Y_data, cats)
bc.pprint('Before Balancing')
for k in cls_cnts:
print '{0:10}: {1}'.format(k, cls_cnts[k]) | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
Class Balancing
Here - I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation, until there is a balance among the categories: | datagen = ImageDataGenerator(rotation_range=5, width_shift_range=.01, height_shift_range=0.01,
data_format='channels_first')
X_data, Y_data = bc.balanceViaSmote(cls_cnts, meta_data, DATA_DIR, AUG_DIR, cats,
datagen, X_data, Y_data, seed=SEED, verbose=True) | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
Create the Training and Test Datasets | X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data,
test_size=0.20, # deviation given small data set
random_state=SEED,
stratify=zip(*Y_data)[0])
print 'Size of X_train: {:>5}'.format(len(X_train))
print 'Size of X_test: {:>5}'.format(len(X_test))
print 'Size of Y_train: {:>5}'.format(len(Y_train))
print 'Size of Y_test: {:>5}'.format(len(Y_test))
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
data = [X_train, X_test, Y_train, Y_test] | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
<h2>Support Vector Machine Model</h2> | X_train_svm = X_train.reshape( (X_train.shape[0], -1))
X_test_svm = X_test.reshape( (X_test.shape[0], -1))
SVM_model = SVC(gamma=0.001)
SVM_model.fit( X_train_svm, Y_train)
predictOutput = SVM_model.predict(X_test_svm)
svm_acc = metrics.accuracy_score(y_true=Y_test, y_pred=predictOutput)
print 'SVM Accuracy: {: >7.2f}%'.format(svm_acc * 100)
print 'SVM Error: {: >10.2f}%'.format(100 - svm_acc * 100)
svm_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)
numBC = bc.reverseDict(cats)
class_names = numBC.values()
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='SVM Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/jn_SVM_Diagnosis_CM_Threshold_20170609.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=False,
title='SVM Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(svm_matrix) | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
<h2>CNN Modelling Using VGG16 in Transfer Learning</h2> | def VGG_Prep(img_data):
"""
:param img_data: training or test images of shape [#images, height, width]
:return: the array transformed to the correct shape for the VGG network
shape = [#images, height, width, 3] transforms to rgb and reshapes
"""
images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])
for i in range(0, len(img_data)):
im = (img_data[i] * 255) # Original imagenet images were not rescaled
im = color.gray2rgb(im)
images[i] = im
return(images)
def vgg16_bottleneck(data, modelPath, fn_train_feats, fn_train_lbls, fn_test_feats, fn_test_lbls):
# Loading data
X_train, X_test, Y_train, Y_test = data
print('Preparing the Training Data for the VGG_16 Model.')
X_train = VGG_Prep(X_train)
print('Preparing the Test Data for the VGG_16 Model')
X_test = VGG_Prep(X_test)
print('Loading the VGG_16 Model')
# "model" excludes top layer of VGG16:
model = applications.VGG16(include_top=False, weights='imagenet')
# Generating the bottleneck features for the training data
print('Evaluating the VGG_16 Model on the Training Data')
bottleneck_features_train = model.predict(X_train)
# Saving the bottleneck features for the training data
featuresTrain = os.path.join(modelPath, fn_train_feats)
labelsTrain = os.path.join(modelPath, fn_train_lbls)
print('Saving the Training Data Bottleneck Features.')
np.save(open(featuresTrain, 'wb'), bottleneck_features_train)
np.save(open(labelsTrain, 'wb'), Y_train)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Test Data')
bottleneck_features_test = model.predict(X_test)
# Saving the bottleneck features for the test data
featuresTest = os.path.join(modelPath, fn_test_feats)
labelsTest = os.path.join(modelPath, fn_test_lbls)
print('Saving the Test Data Bottleneck Feaures.')
np.save(open(featuresTest, 'wb'), bottleneck_features_test)
np.save(open(labelsTest, 'wb'), Y_test)
# Locations for the bottleneck and labels files that we need
train_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_train_threshold.npy'
train_labels = '2Class_Lesions_VGG16_labels_train_threshold.npy'
test_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_test_threshold.npy'
test_labels = '2Class_Lesions_VGG16_labels_test_threshold.npy'
modelPath = os.getcwd()
top_model_weights_path = './weights/'
np.random.seed(SEED)
vgg16_bottleneck(data, modelPath, train_bottleneck, train_labels, test_bottleneck, test_labels)
def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):
start_time = time.time()
train_bottleneck = os.path.join(model_path, train_feats)
train_labels = os.path.join(model_path, train_lab)
test_bottleneck = os.path.join(model_path, test_feats)
test_labels = os.path.join(model_path, test_lab)
history = bc.LossHistory()
X_train = np.load(train_bottleneck)
Y_train = np.load(train_labels)
Y_train = np_utils.to_categorical(Y_train, num_classes=2)
X_test = np.load(test_bottleneck)
Y_test = np.load(test_labels)
Y_test = np_utils.to_categorical(Y_test, num_classes=2)
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add( Dropout(0.7))
model.add( Dense(256, activation='relu', kernel_constraint= maxnorm(3.)) )
model.add( Dropout(0.5))
# Softmax for probabilities for each class at the output layer
model.add( Dense(2, activation='softmax'))
model.compile(optimizer='rmsprop', # adadelta
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_train, Y_train,
epochs=epoch,
batch_size=batch,
callbacks=[history],
validation_data=(X_test, Y_test),
verbose=2)
print "Training duration : {0}".format(time.time() - start_time)
score = model.evaluate(X_test, Y_test, batch_size=16, verbose=2)
print "Network's test score [loss, accuracy]: {0}".format(score)
print 'CNN Error: {:.2f}%'.format(100 - score[1] * 100)
bc.save_model(model_save, model, "jn_VGG16_Diagnosis_top_weights_threshold.h5")
return model, history.losses, history.acc, score
np.random.seed(SEED)
(trans_model, loss_cnn, acc_cnn, test_score_cnn) = train_top_model(train_feats=train_bottleneck,
train_lab=train_labels,
test_feats=test_bottleneck,
test_lab=test_labels,
model_path=modelPath,
model_save=top_model_weights_path,
epoch=100)
plt.figure(figsize=(10,10))
bc.plot_losses(loss_cnn, acc_cnn)
plt.savefig('../../figures/epoch_figures/jn_Transfer_Diagnosis_Threshold_20170609.png', dpi=100)
print 'Transfer Learning CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)
print 'Transfer Learning CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)
predictOutput = bc.predict(trans_model, np.load(test_bottleneck))
trans_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=True,
title='Transfer CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/TMP_jn_Transfer_Diagnosis_CM_Threshold_20170609.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=False,
title='Transfer CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(trans_matrix) | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
<h2>Core CNN Modelling</h2>
Prep and package the data for Keras processing: | data = [X_train, X_test, Y_train, Y_test]
X_train, X_test, Y_train, Y_test = bc.prep_data(data, cats)
data = [X_train, X_test, Y_train, Y_test]
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
Heavy Regularization | def diff_model_v7_reg(numClasses, input_shape=(3, 150,150), add_noise=False, noise=0.01, verbose=False):
model = Sequential()
if (add_noise):
model.add( GaussianNoise(noise, input_shape=input_shape))
model.add( Convolution2D(filters=16,
kernel_size=(5,5),
data_format='channels_first',
padding='same',
activation='relu'))
else:
model.add( Convolution2D(filters=16,
kernel_size=(5,5),
data_format='channels_first',
padding='same',
activation='relu',
input_shape=input_shape))
# model.add( Dropout(0.7))
model.add( Dropout(0.5))
model.add( Convolution2D(filters=32, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu'))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Convolution2D(filters=32, kernel_size=(3,3),
data_format='channels_first', activation='relu'))
model.add( Convolution2D(filters=64, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
model.add( Convolution2D(filters=64, kernel_size=(3,3),
data_format='channels_first', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
#model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Convolution2D(filters=128, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
model.add( Convolution2D(filters=128, kernel_size=(3,3),
data_format='channels_first', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
#model.add(Dropout(0.4))
model.add( Dropout(0.25))
model.add( Flatten())
model.add( Dense(128, activation='relu', kernel_constraint= maxnorm(3.)) )
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Dense(64, activation='relu', kernel_constraint= maxnorm(3.)) )
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
# Softmax for probabilities for each class at the output layer
model.add( Dense(numClasses, activation='softmax'))
if verbose:
print( model.summary() )
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
return model
diff_model7_noise_reg = diff_model_v7_reg(len(cats),
input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]),
add_noise=True, verbose=True)
np.random.seed(SEED)
(cnn_model, loss_cnn, acc_cnn, test_score_cnn) = bc.run_network(model=diff_model7_noise_reg, earlyStop=True,
data=data,
epochs=50, batch=64)
plt.figure(figsize=(10,10))
bc.plot_losses(loss_cnn, acc_cnn)
plt.savefig('../../figures/epoch_figures/jn_Core_CNN_Diagnosis_Threshold_20170609.png', dpi=100)
bc.save_model(dir_path='./weights/', model=cnn_model, name='jn_Core_CNN_Diagnosis_Threshold_20170609')
print 'Core CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)
print 'Core CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)
predictOutput = bc.predict(cnn_model, X_test)
cnn_matrix = skm.confusion_matrix(y_true=[val.argmax() for val in Y_test], y_pred=predictOutput)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=True,
title='CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/jn_Core_CNN_Diagnosis_Threshold_201706090.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=False,
title='CNN Raw Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(cnn_matrix) | src/models/JN_BC_Threshold_Diagnosis.ipynb | jnarhan/Breast_Cancer | mit |
Licensing
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Program Description | import radnlp.rules as rules
import radnlp.schema as schema
import radnlp.utils as utils
import radnlp.classifier as classifier
import radnlp.split as split
from IPython.display import clear_output, display, HTML
from IPython.html.widgets import interact, interactive, fixed
import io
from IPython.html import widgets # Widget definitions
import pyConTextNLP.itemData as itemData
from pyConTextNLP.display.html import mark_document_with_html | notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
Example Data
Below are two example radiology reports pulled from the MIMIC2 demo data set. | reports = ["""1. Pulmonary embolism with filling defects noted within the upper and lower
lobar branches of the right main pulmonary artery.
2. Bilateral pleural effusions, greater on the left.
3. Ascites.
4. There is edema of the gallbladder wall, without any evidence of
distention, intra- or extra-hepatic biliary dilatation. This, along with
stranding within the mesentery, likely represents third spacing of fluid.
5. There are several wedge shaped areas of decreased perfusion within the
spleen, which may represent splenic infarcts.
Results were discussed with Dr. [**First Name8 (NamePattern2) 15561**] [**Last Name (NamePattern1) 13459**]
at 8 pm on [**3099-11-6**].""",
"""1. Filling defects within the subsegmental arteries in the region
of the left lower lobe and lingula and within the right lower lobe consistent
with pulmonary emboli.
2. Small bilateral pleural effusions with associated bibasilar atelectasis.
3. Left anterior pneumothorax.
4. No change in the size of the thoracoabdominal aortic aneurysm.
5. Endotracheal tube 1.8 cm above the carina. NG tube within the stomach,
although the tip is pointed superiorly toward the fundus.""",
"""1. There are no pulmonary emboli observed.
2. Small bilateral pleural effusions with associated bibasilar atelectasis.
3. Left anterior pneumothorax.
4. No change in the size of the thoracoabdominal aortic aneurysm.
5. Endotracheal tube 1.8 cm above the carina. NG tube within the stomach,
although the tip is pointed superiorly toward the fundus."""
]
#!python -m textblob.download_corpora | notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
Define locations of knowledge, schema, and rules files | def getOptions():
"""Generates arguments for specifying database and other parameters"""
options = {}
options['lexical_kb'] = ["https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/lexical_kb_04292013.tsv",
"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/criticalfinder_generalized_modifiers.tsv"]
options['domain_kb'] = ["https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/pe_kb.tsv"]#[os.path.join(DATADIR2,"pe_kb.tsv")]
options["schema"] = "https://raw.githubusercontent.com/chapmanbe/RadNLP/master/KBs/schema2.csv"#"file specifying schema"
options["rules"] = "https://raw.githubusercontent.com/chapmanbe/RadNLP/master/KBs/classificationRules3.csv" # "file specifying sentence level rules")
return options
| notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
Define report analysis
For every report we do two steps
Markup all the sentences in the report based on the provided targets and modifiers
Given this markup we apply our rules and schema to generate a document classification.
radnlp provides functions to do both of these steps:
radnlp.utils.mark_report takes lists of modifiers and targets and generates a pyConTextNLP document graph
radnlp.classify.classify_document_targets takes the document graph, rules, and schema and generates document classification for each identified concept.
Because pyConTextNLP operates on sentences we split the report into sentences. In this function we use radnlp.split.get_sentences which is simply a wrapper around textblob for splitting the sentences. | def analyze_report(report, modifiers, targets, rules, schema):
"""
given an individual radiology report, creates a pyConTextGraph
object that contains the context markup
report: a text string containing the radiology reports
"""
markup = utils.mark_report(split.get_sentences(report),
modifiers,
targets)
return classifier.classify_document_targets(markup,
rules[0],
rules[1],
rules[2],
schema)
def process_report(report):
options = getOptions()
_radnlp_rules = rules.read_rules(options["rules"])
_schema = schema.read_schema(options["schema"])
#_schema = readSchema(options["schema"])
modifiers = itemData.itemData()
targets = itemData.itemData()
for kb in options['lexical_kb']:
modifiers.extend( itemData.instantiateFromCSVtoitemData(kb) )
for kb in options['domain_kb']:
targets.extend( itemData.instantiateFromCSVtoitemData(kb) )
return analyze_report(report, modifiers, targets, _radnlp_rules, _schema)
rslt_0 = process_report(reports[0]) | notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
radnlp.classifier.classify_document_targets returns a dictionary with keys equal to the target category (e.g. pulmonary_embolism) and the values a 3-tuple with the following values:
The schema category (e.g. 8 or 2).
The XML representation of the maximal schema node
A list (usually empty (not really implemented yet)) of severity values. | for key, value in rslt_0.items():
print(("%s"%key).center(42,"-"))
for v in value:
print(v)
rslt_1 = main(reports[1])
for key, value in rslt_1.items():
print(("%s"%key).center(42,"-"))
for v in value:
print(v) | notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
Negative Report
For the third report I simply rewrote one of the findings to be negative for PE. We now see a change in the schema classification. | rslt_2 = main(reports[2])
for key, value in rslt_2.items():
print(("%s"%key).center(42,"-"))
for v in value:
print(v)
keys = list(pec.markups.keys())
keys.sort()
pec.reports.insert(pec.reports.columns.get_loc(u'markup')+1,
"ConText Coding",
[codingKey.get(pec.markups[k][1].get("pulmonary_embolism",[None])[0],"NA") for k in keys]) | notebooks/radnlp_demo.ipynb | chapmanbe/RadNLP | apache-2.0 |
🔪 Pure functions
JAX transformation and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, all the results are output through the function results. A pure function will always return the same result if invoked with the same inputs.
Here are some examples of functions that are not functionally pure for which JAX behaves differently than the Python interpreter. Note that these behaviors are not guaranteed by the JAX system; the proper way to use JAX is to use it only on functionally pure Python functions. | def impure_print_side_effect(x):
print("Executing function") # This is a side-effect
return x
# The side-effects appear during the first run
print ("First call: ", jit(impure_print_side_effect)(4.))
# Subsequent runs with parameters of same type and shape may not show the side-effect
# This is because JAX now invokes a cached compilation of the function
print ("Second call: ", jit(impure_print_side_effect)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
print ("Third call, different type: ", jit(impure_print_side_effect)(jnp.array([5.])))
g = 0.
def impure_uses_globals(x):
return x + g
# JAX captures the value of the global during the first run
print ("First call: ", jit(impure_uses_globals)(4.))
g = 10. # Update the global
# Subsequent runs may silently use the cached value of the globals
print ("Second call: ", jit(impure_uses_globals)(5.))
# JAX re-runs the Python function when the type or shape of the argument changes
# This will end up reading the latest value of the global
print ("Third call, different type: ", jit(impure_uses_globals)(jnp.array([4.])))
g = 0.
def impure_saves_global(x):
global g
g = x
return x
# JAX runs once the transformed function with special Traced values for arguments
print ("First call: ", jit(impure_saves_global)(4.))
print ("Saved global: ", g) # Saved global has an internal JAX value | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state: | def pure_uses_internal_state(x):
state = dict(even=0, odd=0)
for i in range(10):
state['even' if i % 2 == 0 else 'odd'] += x
return state['even'] + state['odd']
print(jit(pure_uses_internal_state)(5.)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
It is not recommended to use iterators in any JAX function you want to jit or in any control-flow primitive. The reason is that an iterator is a python object which introduces state to retrieve the next element. Therefore, it is incompatible with JAX functional programming model. In the code below, there are some examples of incorrect attempts to use iterators with JAX. Most of them return an error, but some give unexpected results. | import jax.numpy as jnp
import jax.lax as lax
from jax import make_jaxpr
# lax.fori_loop
array = jnp.arange(10)
print(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45
iterator = iter(range(10))
print(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0
# lax.scan
def func11(arr, extra):
ones = jnp.ones(arr.shape)
def body(carry, aelems):
ae1, ae2 = aelems
return (carry + ae1 * ae2 + extra, carry)
return lax.scan(body, 0., (arr, ones))
make_jaxpr(func11)(jnp.arange(16), 5.)
# make_jaxpr(func11)(iter(range(16)), 5.) # throws error
# lax.cond
array_operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, array_operand)
iter_operand = iter(range(10))
# lax.cond(True, lambda x: next(x)+1, lambda x: next(x)-1, iter_operand) # throws error | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
🔪 In-Place Updates
In Numpy you're used to doing this: | numpy_array = np.zeros((3,3), dtype=np.float32)
print("original array:")
print(numpy_array)
# In place, mutating update
numpy_array[1, :] = 1.0
print("updated array:")
print(numpy_array) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
If we try to update a JAX device array in-place, however, we get an error! (☉_☉) | jax_array = jnp.zeros((3,3), dtype=jnp.float32)
# In place update of JAX's array will yield an error!
try:
jax_array[1, :] = 1.0
except Exception as e:
print("Exception {}".format(e)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions.
Instead, JAX offers a functional array update using the .at property on JAX arrays.
️⚠️ inside jit'd code and lax.while_loop or lax.fori_loop the size of slices can't be functions of argument values but only functions of argument shapes -- the slice start indices have no such restriction. See the below Control Flow Section for more information on this limitation.
Array updates: x.at[idx].set(y)
For example, the update above can be written as: | updated_array = jax_array.at[1, :].set(1.0)
print("updated array:\n", updated_array) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
JAX's array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update. | print("original array unchanged:\n", jax_array) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place.
Array updates with other operations
Indexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows: | print("original array:")
jax_array = jnp.ones((5, 6))
print(jax_array)
new_jax_array = jax_array.at[::2, 3:].add(7.)
print("new array post-addition:")
print(new_jax_array) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
For more details on indexed array updates, see the documentation for the .at property.
🔪 Out-of-Bounds Indexing
In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this: | try:
np.arange(10)[11]
except Exception as e:
print("Exception {}".format(e)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in NaN). When the indexing operation is an array index update (e.g. index_add or scatter-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or gather-like primitives) the index is clamped to the bounds of the array since something must be returned. For example, the last value of the array will be returned from this indexing operation: | jnp.arange(10)[11] | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Note that due to this behavior for index retrieval, functions like jnp.nanargmin and jnp.nanargmax return -1 for slices consisting of NaNs whereas Numpy would throw an error.
Note also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) will not preserve the semantics of out of bounds indexing. Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of undefined behavior.
🔪 Non-array inputs: NumPy vs. JAX
NumPy is generally happy accepting Python lists or tuples as inputs to its API functions: | np.sum([1, 2, 3]) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
JAX departs from this, generally returning a helpful error: | try:
jnp.sum([1, 2, 3])
except TypeError as e:
print(f"TypeError: {e}") | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.
For example, consider the following permissive version of jnp.sum that allows list inputs: | def permissive_sum(x):
return jnp.sum(jnp.array(x))
x = list(range(10))
permissive_sum(x) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the permissive_sum function above: | make_jaxpr(permissive_sum)(x) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.
If you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array: | jnp.sum(jnp.array(x)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
🔪 Random Numbers
If all scientific papers whose results are in doubt because of bad
rand()s were to disappear from library shelves, there would be a
gap on each shelf about as big as your fist. - Numerical Recipes
RNGs and State
You're used to stateful pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness: | print(np.random.random())
print(np.random.random())
print(np.random.random()) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Underneath the hood, numpy uses the Mersenne Twister PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by 624 32bit unsigned ints and a position indicating how much of this "entropy" has been used up. | np.random.seed(0)
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044,
# 2481403966, 4042607538, 337614300, ... 614 more numbers...,
# 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, "consuming" 2 of the uint32s in the Mersenne twister state vector: | _ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0)
# Let's exhaust the entropy in this PRNG statevector
for i in range(311):
_ = np.random.uniform()
rng_state = np.random.get_state()
#print(rng_state)
# --> ('MT19937', array([2443250962, 1093594115, 1878467924,
# ..., 2648828502, 1678096082], dtype=uint32), 624, 0, 0.0)
# Next call iterates the RNG state for a new batch of fake "entropy".
_ = np.random.uniform()
rng_state = np.random.get_state()
# print(rng_state)
# --> ('MT19937', array([1499117434, 2949980591, 2242547484,
# 4162027047, 3277342478], dtype=uint32), 2, 0, 0.0) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's very easy to screw up when the details of entropy production and consumption are hidden from the end user.
The Mersenne Twister PRNG is also known to have a number of problems, it has a large 2.5Kb state size, which leads to problematic initialization issues. It fails modern BigCrush tests, and is generally slow.
JAX PRNG
JAX instead implements an explicit PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern Threefry counter-based PRNG that's splittable. That is, its design allows us to fork the PRNG state into new PRNGs for use with parallel stochastic generation.
The random state is described by two unsigned-int32s that we call a key: | from jax import random
key = random.PRNGKey(0)
key | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
JAX's random functions produce pseudorandom numbers from the PRNG state, but do not change the state!
Reusing the same state will cause sadness and monotony, depriving the end user of lifegiving chaos: | print(random.normal(key, shape=(1,)))
print(key)
# No no no!
print(random.normal(key, shape=(1,)))
print(key) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Instead, we split the PRNG to get usable subkeys every time we need a new pseudorandom number: | print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
We propagate the key and make new subkeys whenever we need a new random number: | print("old key", key)
key, subkey = random.split(key)
normal_pseudorandom = random.normal(subkey, shape=(1,))
print(" \---SPLIT --> new key ", key)
print(" \--> new subkey", subkey, "--> normal", normal_pseudorandom) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
We can generate more than one subkey at a time: | key, *subkeys = random.split(key, 4)
for subkey in subkeys:
print(random.normal(subkey, shape=(1,))) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
🔪 Control Flow
✔ python control_flow + autodiff ✔
If you just want to apply grad to your python functions, you can use regular python control-flow constructs with no problems, as if you were using Autograd (or Pytorch or TF Eager). | def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
print(grad(f)(2.)) # ok!
print(grad(f)(4.)) # ok! | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
python control flow + JIT
Using control flow with jit is more complicated, and by default it has more constraints.
This works: | @jit
def f(x):
for i in range(3):
x = 2 * x
return x
print(f(3)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
So does this: | @jit
def g(x):
y = 0.
for i in range(x.shape[0]):
y = y + x[i]
return y
print(g(jnp.array([1., 2., 3.]))) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
But this doesn't, at least by default: | @jit
def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
# This will fail!
try:
f(2)
except Exception as e:
print("Exception {}".format(e)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
What gives!?
When we jit-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation.
For example, if we evaluate an @jit function on the array jnp.array([1., 2., 3.], jnp.float32), we might want to compile code that we can reuse to evaluate the function on jnp.array([4., 5., 6.], jnp.float32) to save on compile time.
To get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels.
By default, jit traces your code on the ShapedArray abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value ShapedArray((3,), jnp.float32), we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.
But there's a tradeoff here: if we trace a Python function on a ShapedArray((), jnp.float32) that isn't committed to a specific concrete value, when we hit a line like if x < 3, the expression x < 3 evaluates to an abstract ShapedArray((), jnp.bool_) that represents the set {True, False}. When Python attempts to coerce that to a concrete True or False, we get an error: we don't know which branch to take, and can't continue tracing! The tradeoff is that with higher levels of abstraction we gain a more general view of the Python code (and thus save on re-compilations), but we require more constraints on the Python code to complete the trace.
The good news is that you can control this tradeoff yourself. By having jit trace on more refined abstract values, you can relax the traceability constraints. For example, using the static_argnums argument to jit, we can specify to trace on concrete values of some arguments. Here's that example function again: | def f(x):
if x < 3:
return 3. * x ** 2
else:
return -4 * x
f = jit(f, static_argnums=(0,))
print(f(2.)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Here's another example, this time involving a loop: | def f(x, n):
y = 0.
for i in range(n):
y = y + x[i]
return y
f = jit(f, static_argnums=(1,))
f(jnp.array([2., 3., 4.]), 2) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
In effect, the loop gets statically unrolled. JAX can also trace at higher levels of abstraction, like Unshaped, but that's not currently the default for any transformation
️⚠️ functions with argument-value dependent shapes
These control-flow issues also come up in a more subtle way: numerical functions we want to jit can't specialize the shapes of internal arrays on argument values (specializing on argument shapes is ok). As a trivial example, let's make a function whose output happens to depend on the input variable length. | def example_fun(length, val):
return jnp.ones((length,)) * val
# un-jit'd works fine
print(example_fun(5, 4))
bad_example_jit = jit(example_fun)
# this will fail:
try:
print(bad_example_jit(10, 4))
except Exception as e:
print("Exception {}".format(e))
# static_argnums tells JAX to recompile on changes at these argument positions:
good_example_jit = jit(example_fun, static_argnums=(0,))
# first compile
print(good_example_jit(10, 4))
# recompiles
print(good_example_jit(5, 4)) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
static_argnums can be handy if length in our example rarely changes, but it would be disastrous if it changed a lot!
Lastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside jit'd functions: | @jit
def f(x):
print(x)
y = 2 * x
print(y)
return y
f(2) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Structured control flow primitives
There are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:
lax.cond differentiable
lax.while_loop fwd-mode-differentiable
lax.fori_loop fwd-mode-differentiable in general; fwd and rev-mode differentiable if endpoints are static.
lax.scan differentiable
cond
python equivalent:
python
def cond(pred, true_fun, false_fun, operand):
if pred:
return true_fun(operand)
else:
return false_fun(operand) | from jax import lax
operand = jnp.array([0.])
lax.cond(True, lambda x: x+1, lambda x: x-1, operand)
# --> array([1.], dtype=float32)
lax.cond(False, lambda x: x+1, lambda x: x-1, operand)
# --> array([-1.], dtype=float32) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
while_loop
python equivalent:
def while_loop(cond_fun, body_fun, init_val):
val = init_val
while cond_fun(val):
val = body_fun(val)
return val | init_val = 0
cond_fun = lambda x: x<10
body_fun = lambda x: x+1
lax.while_loop(cond_fun, body_fun, init_val)
# --> array(10, dtype=int32) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
fori_loop
python equivalent:
def fori_loop(start, stop, body_fun, init_val):
val = init_val
for i in range(start, stop):
val = body_fun(i, val)
return val | init_val = 0
start = 0
stop = 10
body_fun = lambda i,x: x+i
lax.fori_loop(start, stop, body_fun, init_val)
# --> array(45, dtype=int32) | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
Summary
$$
\begin{array} {r|rr}
\hline \
\textrm{construct}
& \textrm{jit}
& \textrm{grad} \
\hline \
\textrm{if} & ❌ & ✔ \
\textrm{for} & ✔ & ✔\
\textrm{while} & ✔ & ✔\
\textrm{lax.cond} & ✔ & ✔\
\textrm{lax.while_loop} & ✔ & \textrm{fwd}\
\textrm{lax.fori_loop} & ✔ & \textrm{fwd}\
\textrm{lax.scan} & ✔ & ✔\
\hline
\end{array}
$$
<center>
$\ast$ = argument-<b>value</b>-independent loop condition - unrolls the loop
</center>
🔪 NaNs
Debugging NaNs
If you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:
setting the JAX_DEBUG_NANS=True environment variable;
adding from jax.config import config and config.update("jax_debug_nans", True) near the top of your main file;
adding from jax.config import config and config.parse_flags_with_absl() to your main file, then set the option using a command-line flag like --jax_debug_nans=True;
This will cause computations to error-out immediately on production of a NaN. Switching this option on adds a nan check to every floating point type value produced by XLA. That means values are pulled back to the host and checked as ndarrays for every primitive operation not under an @jit. For code under an @jit, the output of every @jit function is checked and if a nan is present it will re-run the function in de-optimized op-by-op mode, effectively removing one level of @jit at a time.
There could be tricky situations that arise, like nans that only occur under a @jit but don't get produced in de-optimized mode. In that case you'll see a warning message print out but your code will continue to execute.
If the nans are being produced in the backward pass of a gradient evaluation, when an exception is raised several frames up in the stack trace you will be in the backward_pass function, which is essentially a simple jaxpr interpreter that walks the sequence of primitive operations in reverse. In the example below, we started an ipython repl with the command line env JAX_DEBUG_NANS=True ipython, then ran this:
```
In [1]: import jax.numpy as jnp
In [2]: jnp.divide(0., 0.)
FloatingPointError Traceback (most recent call last)
<ipython-input-2-f2e2c413b437> in <module>()
----> 1 jnp.divide(0., 0.)
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:x \over y."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
.../jax/jax/interpreters/xla.pyc in handle_result(device_buffer)
103 py_val = device_buffer.to_py()
104 if np.any(np.isnan(py_val)):
--> 105 raise FloatingPointError("invalid value")
106 else:
107 return DeviceArray(device_buffer, *result_shape)
FloatingPointError: invalid value
```
The nan generated was caught. By running %debug, we can get a post-mortem debugger. This also works with functions under @jit, as the example below shows.
```
In [4]: from jax import jit
In [5]: @jit
...: def f(x, y):
...: a = x * y
...: b = (x + y) / (x - y)
...: c = a + 2
...: return a + b * c
...:
In [6]: x = jnp.array([2., 0.])
In [7]: y = jnp.array([3., 0.])
In [8]: f(x, y)
Invalid value encountered in the output of a jit function. Calling the de-optimized version.
FloatingPointError Traceback (most recent call last)
<ipython-input-8-811b7ddb3300> in <module>()
----> 1 f(x, y)
... stack trace ...
<ipython-input-5-619b39acbaac> in f(x, y)
2 def f(x, y):
3 a = x * y
----> 4 b = (x + y) / (x - y)
5 c = a + 2
6 return a + b * c
.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)
343 return floor_divide(x1, x2)
344 else:
--> 345 return true_divide(x1, x2)
346
347
.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)
332 x1, x2 = _promote_shapes(x1, x2)
333 return lax.div(lax.convert_element_type(x1, result_dtype),
--> 334 lax.convert_element_type(x2, result_dtype))
335
336
.../jax/jax/lax.pyc in div(x, y)
244 def div(x, y):
245 r"""Elementwise division: :math:x \over y."""
--> 246 return div_p.bind(x, y)
247
248 def rem(x, y):
... stack trace ...
```
When this code sees a nan in the output of an @jit function, it calls into the de-optimized code, so we still get a clear stack trace. And we can run a post-mortem debugger with %debug to inspect all the values to figure out the error.
⚠️ You shouldn't have the NaN-checker on if you're not debugging, as it can introduce lots of device-host round-trips and performance regressions!
⚠️ The NaN-checker doesn't work with pmap. To debug nans in pmap code, one thing to try is replacing pmap with vmap.
🔪 Double (64bit) precision
At the moment, JAX by default enforces single-precision numbers to mitigate the Numpy API's tendency to aggressively promote operands to double. This is the desired behavior for many machine-learning applications, but it may catch you by surprise! | x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
To use double-precision numbers, you need to set the jax_enable_x64 configuration variable at startup.
There are a few ways to do this:
You can enable 64bit mode by setting the environment variable JAX_ENABLE_X64=True.
You can manually set the jax_enable_x64 configuration flag at startup:
python
# again, this only works on startup!
from jax.config import config
config.update("jax_enable_x64", True)
You can parse command-line flags with absl.app.run(main)
python
from jax.config import config
config.config_with_absl()
If you want JAX to run absl parsing for you, i.e. you don't want to do absl.app.run(main), you can instead use
python
from jax.config import config
if __name__ == '__main__':
# calls config.config_with_absl() *and* runs absl parsing
config.parse_flags_with_absl()
Note that #2-#4 work for any of JAX's configuration options.
We can then confirm that x64 mode is enabled: | import jax.numpy as jnp
from jax import random
x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)
x.dtype # --> dtype('float64') | docs/notebooks/Common_Gotchas_in_JAX.ipynb | google/jax | apache-2.0 |
loading different datasets :
- Publicy known adresses
- Features dataframe from the graph features generators | known = pd.read_csv('../data/known.csv')
rogues = pd.read_csv('../data/rogues.csv')
transactions = pd.read_csv('../data/edges.csv').drop('Unnamed: 0',1)
#Dropping features and fill na with 0
df = pd.read_csv('../data/features_full.csv').drop('Unnamed: 0',1).fillna(0)
df = df.set_index(['nodes'])
#build normalize values
data = scale(df.values)
n_sample = 10000 | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
I - Clustering Nodes
<hr>
Exploring clustering methods on the nodes featured dataset
A - k-means
First a very simple kmeans method | #Define estimator / by default clusters = 6 an init = 10
kmeans = KMeans(init='k-means++', n_clusters=6, n_init=10)
kmeans.fit(data) | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
1 - Parameters Optimization
a - Finding the best k
code from http://www.slideshare.net/SarahGuido/kmeans-clustering-with-scikitlearn#notes-panel) | %%time
#Determine your k range
k_range = range(1,14)
# Fit the kmeans model for each n_clusters = k
k_means_var = [KMeans(n_clusters=k).fit(data) for k in k_range]
# Pull out the centroids for each model
centroids = [X.cluster_centers_ for X in k_means_var]
%%time
# Caluculate the Euclidean distance from each pont to each centroid
k_euclid=[cdist(data, cent, 'euclidean') for cent in centroids]
dist = [np.min(ke,axis=1) for ke in k_euclid]
# Total within-cluster sum of squares
wcss = [sum(d**2) for d in dist]
# The total sum of squares
tss = sum(pdist(data)**2)/data.shape[0]
#The between-cluster sum of squares
bss = tss - wcss
%%time
plt.plot(k_range,bss/tss,'-bo')
plt.xlabel('number of cluster')
plt.ylabel('% of variance explained')
plt.title('Variance explained vs k')
plt.grid(True)
plt.show() | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
Difficult to find an elbow criteria
Other heuristic criteria k = sqrt(n/2)
b - Other heuristic method
$k=\sqrt{\frac{n}{2}}$ | np.sqrt(data.shape[0]/2) | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
-> Weird
c - Silhouette Metrics for supervised ?
2 - Visualize with PCA reduction
code from scikit learn | ##############################################################################
# Generate sample data
batch_size = 10
#centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = 6
#X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
X = PCA(n_components=2).fit_transform(data)
##############################################################################
# Compute clustering with Means
k_means = KMeans(init='k-means++', n_clusters=6, n_init=10,random_state=2)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0
k_means_labels = k_means.labels_
k_means_cluster_centers = k_means.cluster_centers_
k_means_labels_unique = np.unique(k_means_labels)
##############################################################################
# Compute clustering with MiniBatchKMeans
mbk = MiniBatchKMeans(init='k-means++', n_clusters=6, batch_size=batch_size,
n_init=10, max_no_improvement=10, verbose=0,random_state=2)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0
mbk_means_labels = mbk.labels_
mbk_means_cluster_centers = mbk.cluster_centers_
mbk_means_labels_unique = np.unique(mbk_means_labels)
##############################################################################
# Plot result
fig = plt.figure(figsize=(15, 5))
colors = ['#4EACC5', '#FF9C34', '#4E9A06','#FF0000','#800000','purple']
#fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
order = pairwise_distances_argmin(k_means_cluster_centers,
mbk_means_cluster_centers)
# KMeans
ax = fig.add_subplot(1, 3, 1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], 'w',
markerfacecolor=col, marker='.',markersize=10)
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=6)
ax.set_title('KMeans')
ax.set_xticks(())
ax.set_yticks(())
#plt.text(10,10, 'train time: %.2fs\ninertia: %f' % (
#t_batch, k_means.inertia_))
# Plot result
# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
my_members = mbk_means_labels == order[k]
cluster_center = mbk_means_cluster_centers[order[k]]
ax.plot(X[my_members, 0], X[my_members, 1], 'w',
markerfacecolor=col, marker='.', markersize=10)
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=6)
ax.set_title('MiniBatchKMeans')
ax.set_xticks(())
ax.set_yticks(())
#plt.text(-5, 10, 'train time: %.2fs\ninertia: %f' %
#(t_mini_batch, mbk.inertia_))
# Plot result
# Initialise the different array to all False
different = (mbk_means_labels == 4)
ax = fig.add_subplot(1, 3, 3)
for l in range(n_clusters):
different += ((k_means_labels == k) != (mbk_means_labels == order[k]))
identic = np.logical_not(different)
ax.plot(X[identic, 0], X[identic, 1], 'w',
markerfacecolor='#bbbbbb', marker='.')
ax.plot(X[different, 0], X[different, 1], 'w',
markerfacecolor='m', marker='.')
ax.set_title('Difference')
ax.set_xticks(())
ax.set_yticks(())
plt.show() | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
B - Mini batch
II - Outlier Detection
<hr>
Objectives :
- Perform outlier detection on node data
- Test different methods (with perf metrics)
- Plot outlier detection
- Tag transaction
Explain : Mahalanobis Distance | X = PCA(n_components=2).fit_transform(data)
# compare estimators learnt from the full data set with true parameters
emp_cov = EmpiricalCovariance().fit(X)
robust_cov = MinCovDet().fit(X)
###############################################################################
# Display results
fig = plt.figure(figsize=(15, 8))
plt.subplots_adjust(hspace=-.1, wspace=.4, top=.95, bottom=.05)
# Show data set
subfig1 = plt.subplot(1, 1, 1)
inlier_plot = subfig1.scatter(X[:, 0], X[:, 1],
color='black', label='points')
subfig1.set_xlim(subfig1.get_xlim()[0], 11.)
subfig1.set_title("Mahalanobis distances of a contaminated data set:")
# Show contours of the distance functions
xx, yy = np.meshgrid(np.linspace(plt.xlim()[0], plt.xlim()[1], 100),
np.linspace(plt.ylim()[0], plt.ylim()[1], 100))
zz = np.c_[xx.ravel(), yy.ravel()]
mahal_emp_cov = emp_cov.mahalanobis(zz)
mahal_emp_cov = mahal_emp_cov.reshape(xx.shape)
emp_cov_contour = subfig1.contour(xx, yy, np.sqrt(mahal_emp_cov),
cmap=plt.cm.PuBu_r,
linestyles='dashed')
mahal_robust_cov = robust_cov.mahalanobis(zz)
mahal_robust_cov = mahal_robust_cov.reshape(xx.shape)
robust_contour = subfig1.contour(xx, yy, np.sqrt(mahal_robust_cov),
cmap=plt.cm.YlOrBr_r, linestyles='dotted')
plt.xticks(())
plt.yticks(())
plt.show() | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
<hr>
III - Look at the clusters | df.head(3)
k_means = KMeans(init='random', n_clusters=6, n_init=10, random_state=2)
clusters = k_means.fit_predict(data)
df['clusters'] = clusters
df.groupby('clusters').count()
tagged = pd.merge(known,df,left_on='id',how='inner',right_index=True)
tagged.groupby('clusters').count().apply(lambda x: 100*x/float(x.sum()))['id']
df.groupby('clusters').count().apply(lambda x: 100*x/float(x.sum()))['total_degree']
rogues_tag = pd.merge(rogues,df,left_on='id',how='inner',right_index=True)
rogues_tag.groupby('clusters').count()['total_degree'] | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
Rogues and tagged are overrepresnetated in cluster 1 | df.groupby('clusters').mean() | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
IV - Tag transactions | transactions.head(4)
df.head(20)
#write function
def get_cluster(node,df):
return df.loc[node].clusters
get_cluster('0x037dd056e7fdbd641db5b6bea2a8780a83fae180',df) | notebooks/chain-clustering.ipynb | jhamilius/chain | mit |
Set parameters: | base_depth = 22.0 # depth of aquifer base below ground level, m
initial_water_table_depth = 2.0 # starting depth to water table, m
dx = 100.0 # cell width, m
pumping_rate = 0.001 # pumping rate, m3/s
well_locations = [800, 1200]
K = 0.001 # hydraulic conductivity, (m/s)
n = 0.2 # porosity, (-)
dt = 3600.0 # time-step duration, s
background_recharge = 0.1 / (3600 * 24 * 365.25) # recharge rate from infiltration, m/s | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Create a grid and add fields: | # Raster grid with closed boundaries
# boundaries = {'top': 'closed','bottom': 'closed','right':'closed','left':'closed'}
grid = RasterModelGrid((41, 41), xy_spacing=dx) # , bc=boundaries)
# Topographic elevation field (meters)
elev = grid.add_zeros("topographic__elevation", at="node")
# Field for the elevation of the top of an impermeable geologic unit that forms
# the base of the aquifer (meters)
base = grid.add_zeros("aquifer_base__elevation", at="node")
base[:] = elev - base_depth
# Field for the elevation of the water table (meters)
wt = grid.add_zeros("water_table__elevation", at="node")
wt[:] = elev - initial_water_table_depth
# Field for the groundwater recharge rate (meters per second)
recharge = grid.add_zeros("recharge__rate", at="node")
recharge[:] = background_recharge
recharge[well_locations] -= pumping_rate / (
dx * dx
) # pumping rate, in terms of recharge | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Instantiate the component (note use of an array/field instead of a scalar constant for recharge_rate): | gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=n,
recharge_rate=recharge,
regularization_f=0.01,
) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Define a couple of handy functions to run the model for a day or a year: | def run_for_one_day(gdp, dt):
num_iter = int(3600.0 * 24 / dt)
for _ in range(num_iter):
gdp.run_one_step(dt)
def run_for_one_year(gdp, dt):
num_iter = int(365.25 * 3600.0 * 24 / dt)
for _ in range(num_iter):
gdp.run_one_step(dt) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Run for a year and plot the water table: | run_for_one_year(gdp, dt)
imshow_grid(grid, wt, colorbar_label="Water table elevation (m)") | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Aside: calculating a pumping rate in terms of recharge
The pumping rate at a particular grid cell (in volume per time, representing pumping from a well at that location) needs to be given in terms of a recharge rate (depth of water equivalent per time) in a given grid cell. Suppose for example you're pumping 16 gallons/minute (horrible units of course). That equates to:
16 gal/min x 0.00378541 m3/gal x (1/60) min/sec = | Qp = 16.0 * 0.00378541 / 60.0
print(Qp) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
...equals about 0.001 m$^3$/s. That's $Q_p$. The corresponding negative recharge in a cell of dimensions $\Delta x$ by $\Delta x$ would be
$R_p = Q_p / \Delta x^2$ | Rp = Qp / (dx * dx)
print(Rp) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
A very simple ABM with farmers who drill wells into the aquifer
For the sake of illustration, our ABM will be extremely simple. There are $N$ farmers, at random locations, who each pump at a rate $Q_p$ as long as the water table lies above the depth of their well, $d_w$. Once the water table drops below their well, the well runs dry and they switch from crops to pasture.
Check that Mesa is installed
For the next step, we must verify that Mesa is available. If it is not, use one of the installation commands below to install, then re-start the kernel (Kernel => Restart) and continue. | try:
from mesa import Model
except ModuleNotFoundError:
print(
"""
Mesa needs to be installed in order to run this notebook.
Normally Mesa should be pre-installed alongside the Landlab notebook collection.
But it appears that Mesa is not already installed on the system on which you are
running this notebook. You can install Mesa from a command prompt using either:
`conda install -c conda-forge mesa`
or
`pip install mesa`
"""
)
raise | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Defining the ABM
In Mesa, an ABM is created using a class for each Agent and a class for the Model. Here's the Agent class (a Farmer). Farmers have a grid location and an attribute: whether they are actively pumping their well or not. They also have a well depth: the depth to the bottom of their well. Their action consists of checking whether their well is wet or dry; if wet, they will pump, and if dry, they will not. | from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation
class FarmerAgent(Agent):
"""An agent who pumps from a well if it's not dry."""
def __init__(self, unique_id, model, well_depth=5.0):
super().__init__(unique_id, model)
self.pumping = True
self.well_depth = well_depth
def step(self):
x, y = self.pos
print(f"Farmer {self.unique_id}, ({x}, {y})")
print(f" Depth to the water table: {self.model.wt_depth_2d[x,y]}")
print(f" Depth to the bottom of the well: {self.well_depth}")
if self.model.wt_depth_2d[x, y] >= self.well_depth: # well is dry
print(" Well is dry.")
self.pumping = False
else:
print(" Well is pumping.")
self.pumping = True | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Next, define the model class. The model will take as a parameter a reference to a 2D array (with the same dimensions as the grid) that contains the depth to water table at each grid location. This allows the Farmer agents to check whether their well has run dry. | class FarmerModel(Model):
"""A model with several agents on a grid."""
def __init__(self, N, width, height, well_depth, depth_to_water_table):
self.num_agents = N
self.grid = MultiGrid(width, height, True)
self.depth_to_water_table = depth_to_water_table
self.schedule = RandomActivation(self)
# Create agents
for i in range(self.num_agents):
a = FarmerAgent(i, self, well_depth)
self.schedule.add(a)
# Add the agent to a random grid cell (excluding the perimeter)
x = self.random.randrange(self.grid.width - 2) + 1
y = self.random.randrange(self.grid.width - 2) + 1
self.grid.place_agent(a, (x, y))
def step(self):
self.wt_depth_2d = self.depth_to_water_table.reshape(
(self.grid.width, self.grid.height)
)
self.schedule.step() | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Setting up the Landlab grid, fields, and groundwater simulator | base_depth = 22.0 # depth of aquifer base below ground level, m
initial_water_table_depth = 2.8 # starting depth to water table, m
dx = 100.0 # cell width, m
pumping_rate = 0.004 # pumping rate, m3/s
well_depth = 3 # well depth, m
background_recharge = 0.002 / (365.25 * 24 * 3600) # recharge rate, m/s
K = 0.001 # hydraulic conductivity, (m/s)
n = 0.2 # porosity, (-)
dt = 3600.0 # time-step duration, s
num_agents = 12 # number of farmer agents
run_duration_yrs = 15 # run duration in years
grid = RasterModelGrid((41, 41), xy_spacing=dx)
elev = grid.add_zeros("topographic__elevation", at="node")
base = grid.add_zeros("aquifer_base__elevation", at="node")
base[:] = elev - base_depth
wt = grid.add_zeros("water_table__elevation", at="node")
wt[:] = elev - initial_water_table_depth
depth_to_wt = grid.add_zeros("water_table__depth_below_ground", at="node")
depth_to_wt[:] = elev - wt
recharge = grid.add_zeros("recharge__rate", at="node")
recharge[:] = background_recharge
recharge[well_locations] -= pumping_rate / (
dx * dx
) # pumping rate, in terms of recharge
gdp = GroundwaterDupuitPercolator(
grid,
hydraulic_conductivity=K,
porosity=n,
recharge_rate=recharge,
regularization_f=0.01,
) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Set up the Farmer model | nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
farmer_model = FarmerModel(
num_agents, nc, nr, well_depth, depth_to_wt.reshape((nr, nc))
) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Check the spatial distribution of wells: | import numpy as np
def get_well_count(model):
well_count = np.zeros((nr, nc), dtype=int)
pumping_well_count = np.zeros((nr, nc), dtype=int)
for cell in model.grid.coord_iter():
cell_content, x, y = cell
well_count[x][y] = len(cell_content)
for agent in cell_content:
if agent.pumping:
pumping_well_count[x][y] += 1
return well_count, pumping_well_count
well_count, p_well_count = get_well_count(farmer_model)
imshow_grid(grid, well_count.flatten()) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Set the initial recharge field | recharge[:] = -(pumping_rate / (dx * dx)) * p_well_count.flatten()
imshow_grid(grid, -recharge * 3600 * 24, colorbar_label="Pumping rate (m/day)") | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Run the model | for i in range(run_duration_yrs):
# Run the groundwater simulator for one year
run_for_one_year(gdp, dt)
# Update the depth to water table
depth_to_wt[:] = elev - wt
# Run the farmer model
farmer_model.step()
# Count the number of pumping wells
well_count, pumping_well_count = get_well_count(farmer_model)
total_pumping_wells = np.sum(pumping_well_count)
print(f"In year {i + 1} there are {total_pumping_wells} pumping wells")
print(f" and the greatest depth to water table is {np.amax(depth_to_wt)} meters.")
# Update the recharge field according to current pumping rate
recharge[:] = (
background_recharge - (pumping_rate / (dx * dx)) * pumping_well_count.flatten()
)
print(f"Total recharge: {np.sum(recharge)}")
print("")
plt.figure()
imshow_grid(grid, wt)
imshow_grid(grid, wt)
# Display the area of water table that lies below the well depth
depth_to_wt[:] = elev - wt
too_deep = depth_to_wt > well_depth
imshow_grid(grid, too_deep) | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | landlab/landlab | mit |
Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case. | def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x = np.linspace(-1, 1, size)
n = np.random.randn(size)
y = np.zeros(size)
for a in range(size):
y[a] = m*x[a] + b + (sigma * n[a])
# formula for normal sitribution found on SciPy.org
return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1) | assignments/assignment05/InteractEx04.ipynb | joshnsolomon/phys202-2015-work | mit |
Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful. | def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
x, y = random_line(m, b, sigma, size)
plt.scatter(x,y,color=color)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function | assignments/assignment05/InteractEx04.ipynb | joshnsolomon/phys202-2015-work | mit |
Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue. | interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,.1),sigma=(0.0,5.0,.01),size=(10,100,10),color = ['red','green','blue']);
#### assert True # use this cell to grade the plot_random_line interact | assignments/assignment05/InteractEx04.ipynb | joshnsolomon/phys202-2015-work | mit |
this matrix has $\mathcal{O}(1)$ elements in a row, therefore it is sparse.
Finite elements method is also likely to give you a system with a sparse matrix.
How to store a sparse matrix
Coordinate format (coo)
(i, j, value)
i.e. store two integer arrays and one real array.
Easy to add elements.
But how to multiply a matrix by a vector?
CSR format
A matrix is stored as 3 different arrays:
sa, ja, ia
where:
nnz is the total number of non-zeros for the matrix
sa is an real-value array of non-zeros for the matrix (length nnz)
ja is an integer array of column number of the non-zeros (length nnz)
ia is an integer array of locations of the first non-zero element in each row (length n+1)
(Blackboard figure)
Idea behind CSR
For each row i we store the column number of the non-zeros (and their) values
We stack this all together into ja and sa arrays
We save the location of the first non-zero element in each row
CSR helps for matrix-by-vector product as well
for i in xrange(n):
for k in xrange(ia(i):ia(i+1)-1):
y(i) += sa(k) * x(ja(k))
Let us do a short timing test | import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix, lil_matrix
A = csr_matrix([10,10])
B = lil_matrix([10,10])
A[0,0] = 1
#print A
B[0,0] = 1
#print B
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix
import matplotlib.pyplot as plt
import time
%matplotlib inline
n = 1000
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csr_matrix(A)
rhs = np.ones(n * n)
B = coo_matrix(A)
#t0 = time.time()
%timeit A.dot(rhs)
#print time.time() - t0
#t0 = time.time()
%timeit B.dot(rhs)
#print time.time() - t0 | lecture-7.ipynb | oseledets/fastpde | cc0-1.0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.