markdown<br><sub>stringlengths 0–37k</sub> | code<br><sub>stringlengths 1–33.3k</sub> | path<br><sub>stringlengths 8–215</sub> | repo_name<br><sub>stringlengths 6–77</sub> | license<br><sub>stringclasses 15 values</sub>
---|---|---|---|---|
The default scale estimate for Robust Linear Models is MAD;
another popular choice is Huber's proposal 2. | np.random.seed(12345)
fat_tails = stats.t(6).rvs(40)
kde = sm.nonparametric.KDEUnivariate(fat_tails)
kde.fit()
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.density)
print(fat_tails.mean(), fat_tails.std())
print(stats.norm.fit(fat_tails))
print(stats.t.fit(fat_tails, f0=6))
huber = sm.robust.scale.Huber()
loc, scale = huber(fat_tails)
print(loc, scale)
sm.robust.mad(fat_tails)
sm.robust.mad(fat_tails, c=stats.t(6).ppf(0.75))
sm.robust.scale.mad(fat_tails) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
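As a quick reference for what the scale estimates above are doing (standard definitions, stated here as a reminder rather than taken from the notebook), the normalized MAD divides the raw median absolute deviation by a consistency constant so that it estimates the standard deviation under a reference distribution:

$$
\hat{\sigma}_{\mathrm{MAD}} \;=\; \frac{\operatorname{median}_i \left| x_i - \operatorname{median}_j x_j \right|}{c},
\qquad c = \Phi^{-1}(0.75) \approx 0.6745 \ \text{for a Gaussian reference.}
$$

Passing c=stats.t(6).ppf(0.75), as in the cell above, rescales the MAD to be consistent for a $t_6$ reference instead, which is appropriate for this fat-tailed sample.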
Duncan's Occupational Prestige data - M-estimation for outliers | from statsmodels.graphics.api import abline_plot
from statsmodels.formula.api import ols, rlm
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
print(prestige.head(10))
fig = plt.figure(figsize=(12, 12))
ax1 = fig.add_subplot(211, xlabel="Income", ylabel="Prestige")
ax1.scatter(prestige.income, prestige.prestige)
xy_outlier = prestige.loc["minister", ["income", "prestige"]]
ax1.annotate("Minister", xy_outlier, xy_outlier + 1, fontsize=16)
ax2 = fig.add_subplot(212, xlabel="Education", ylabel="Prestige")
ax2.scatter(prestige.education, prestige.prestige)
ols_model = ols("prestige ~ income + education", prestige).fit()
print(ols_model.summary())
infl = ols_model.get_influence()
student = infl.summary_frame()["student_resid"]
print(student)
print(student.loc[np.abs(student) > 2])
print(infl.summary_frame().loc["minister"])
sidak = ols_model.outlier_test("sidak")
sidak.sort_values("unadj_p", inplace=True)
print(sidak)
fdr = ols_model.outlier_test("fdr_bh")
fdr.sort_values("unadj_p", inplace=True)
print(fdr)
rlm_model = rlm("prestige ~ income + education", prestige).fit()
print(rlm_model.summary())
print(rlm_model.weights) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
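For reference when reading the influence output above (these are standard OLS diagnostics rather than anything defined in this notebook), the externally studentized residuals take the form

$$
t_i \;=\; \frac{e_i}{\hat{\sigma}_{(i)} \sqrt{1 - h_{ii}}},
$$

where $e_i$ is the OLS residual, $h_{ii}$ the leverage (hat-matrix diagonal), and $\hat{\sigma}_{(i)}$ the residual scale estimated with observation $i$ left out; outlier_test then adjusts the corresponding p-values for testing all $n$ residuals at once (Sidak or Benjamini-Hochberg FDR above).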
Hertzsprung-Russell data for Star Cluster CYG OB1 - Leverage Points
Data is on the luminosity and temperature of 47 stars in the direction of Cygnus. | dta = sm.datasets.get_rdataset("starsCYG", "robustbase", cache=True).data
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(
111,
xlabel="log(Temp)",
ylabel="log(Light)",
title="Hertzsprung-Russell Diagram of Star Cluster CYG OB1",
)
ax.scatter(*dta.values.T)
# highlight outliers
e = Ellipse((3.5, 6), 0.2, 1, alpha=0.25, color="r")
ax.add_patch(e)
ax.annotate(
"Red giants",
xy=(3.6, 6),
xytext=(3.8, 6),
arrowprops=dict(facecolor="black", shrink=0.05, width=2),
horizontalalignment="left",
verticalalignment="bottom",
clip_on=True, # clip to the axes bounding box
fontsize=16,
)
# annotate these with their index
for i, row in dta.loc[dta["log.Te"] < 3.8].iterrows():
ax.annotate(i, row, row + 0.01, fontsize=14)
xlim, ylim = ax.get_xlim(), ax.get_ylim()
from IPython.display import Image
Image(filename="star_diagram.png")
y = dta["log.light"]
X = sm.add_constant(dta["log.Te"], prepend=True)
ols_model = sm.OLS(y, X).fit()
abline_plot(model_results=ols_model, ax=ax)
rlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(0.5)).fit()
abline_plot(model_results=rlm_mod, ax=ax, color="red") | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Why? Because M-estimators are not robust to leverage points. | infl = ols_model.get_influence()
h_bar = 2 * (ols_model.df_model + 1) / ols_model.nobs
hat_diag = infl.summary_frame()["hat_diag"]
hat_diag.loc[hat_diag > h_bar]
sidak2 = ols_model.outlier_test("sidak")
sidak2.sort_values("unadj_p", inplace=True)
print(sidak2)
fdr2 = ols_model.outlier_test("fdr_bh")
fdr2.sort_values("unadj_p", inplace=True)
print(fdr2) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
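The cutoff computed in the cell above follows the common rule of thumb for flagging high-leverage observations (a standard heuristic rather than anything specific to this notebook): flag observation $i$ when

$$
h_{ii} \;>\; \bar{h} \;=\; \frac{2\,(p + 1)}{n},
$$

i.e. twice the average leverage, where $p$ is the number of slope parameters (df_model) and $n$ the number of observations.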
Let's delete that line | l = ax.lines[-1]
l.remove()
del l
weights = np.ones(len(X))
weights[X[X["log.Te"] < 3.8].index.values - 1] = 0
wls_model = sm.WLS(y, X, weights=weights).fit()
abline_plot(model_results=wls_model, ax=ax, color="green") | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
MM-estimators are good for this type of problem; unfortunately, we do not have them yet.
They are being worked on, but this gives a good excuse to look at the R cell magics in the notebook. | yy = y.values[:, None]
xx = X["log.Te"].values[:, None] | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and robustbase 0.93.
```ipython
%load_ext rpy2.ipython
%R library(robustbase)
%Rpush yy xx
%R mod <- lmrob(yy ~ xx);
%R params <- mod$coefficients;
%Rpull params
```
```ipython
%R print(mod)

Call:
lmrob(formula = yy ~ xx)
 \--> method = "MM"

Coefficients:
(Intercept)           xx
     -4.969        2.253
```
| params = [-4.969387980288108, 2.2531613477892365]  # Computed using R
print(params[0], params[1])
abline_plot(intercept=params[0], slope=params[1], ax=ax, color="red") | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Exercise: Breakdown points of M-estimator | np.random.seed(12345)
nobs = 200
beta_true = np.array([3, 1, 2.5, 3, -4])
X = np.random.uniform(-20, 20, size=(nobs, len(beta_true) - 1))
# stack a constant in front
X = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]
mc_iter = 500
contaminate = 0.25  # fraction of response values to contaminate
all_betas = []
for i in range(mc_iter):
y = np.dot(X, beta_true) + np.random.normal(size=200)
random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
y[random_idx] = np.random.uniform(-750, 750)
beta_hat = sm.RLM(y, X).fit().params
all_betas.append(beta_hat)
all_betas = np.asarray(all_betas)
se_loss = lambda x: np.linalg.norm(x, ord=2) ** 2
se_beta = lmap(se_loss, all_betas - beta_true) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Squared error loss | np.array(se_beta).mean()
all_betas.mean(0)
beta_true
se_loss(all_betas.mean(0) - beta_true) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
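To make the breakdown comparison in the exercise concrete, here is a minimal sketch of the OLS counterpart of the Monte Carlo loop above; it is an illustrative addition (not part of the original notebook) and assumes X, beta_true, nobs, mc_iter, contaminate and se_loss are still in scope from the previous cells.

```python
# Illustrative OLS counterpart of the contamination experiment above,
# for comparison against the RLM squared-error loss.
ols_betas = []
for i in range(mc_iter):
    y = np.dot(X, beta_true) + np.random.normal(size=nobs)
    random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
    y[random_idx] = np.random.uniform(-750, 750)  # contaminate the responses
    ols_betas.append(sm.OLS(y, X).fit().params)
ols_betas = np.asarray(ols_betas)

# Mean squared-error loss of the contaminated OLS estimates
print(np.mean([se_loss(b - beta_true) for b in ols_betas]))
```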
<b>Import dependencies. </b> | # from apache_beam.options.pipeline_options import PipelineOptions
# from apache_beam.options.pipeline_options import GoogleCloudOptions
# from apache_beam.options.pipeline_options import StandardOptions
# import apache_beam as beam
from tensorflow.core.example import example_pb2
import tensorflow as tf
import time
from proto import version_config_pb2
from proto.stu3 import fhirproto_extensions_pb2
from proto.stu3 import resources_pb2
from google.protobuf import text_format
from py.google.fhir.labels import label
from py.google.fhir.labels import bundle_to_label
from py.google.fhir.seqex import bundle_to_seqex
from py.google.fhir.models import model
from py.google.fhir.models.model import make_estimator | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>Optionally, enable logging for debugging.</b> | import logging
logger = logging.getLogger()
#logger.setLevel(logging.INFO)
logger.setLevel(logging.ERROR) | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>The previous step saved Sequence Examples into GCS. Let's examine the file size and location of the Sequence Examples we will use for inference.</b> | %%bash
gsutil ls -l ${SEQEX_IN_GCS} | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<h2> 2. Deploy and Run ML Model on Cloud ML</h2>
<ul>
<li>A pre-trained ML Model which was exported to GCS in step 1 will be deployed to Cloud ML Serving.</li>
</ul>
<b>2a. Let's start by exporting our model for serving.</b> | from py.google.fhir.models.model import get_serving_input_fn
hparams = model.create_hparams()
time_crossed_features = [
cross.split(':') for cross in hparams.time_crossed_features if cross
]
LABEL_VALUES = ['less_or_equal_3', '3_7', '7_14', 'above_14']
estimator = make_estimator(hparams, LABEL_VALUES, MODEL_PATH)
serving_input_fn = get_serving_input_fn(hparams.dedup, hparams.time_windows, hparams.include_age, hparams.categorical_context_features, hparams.sequence_features, time_crossed_features)
export_dir = estimator.export_savedmodel(SAVED_MODEL_PATH, serving_input_fn)
os.putenv("MODEL_BINARY", export_dir) | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>2b. List all the models deployed currently in the Cloud ML Engine</b> | %%bash
gcloud ml-engine models list | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>2c. Optionally, run the following cell to delete a previously deployed model.</b> | %%bash
gcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q
gcloud ml-engine models delete $MODEL_NAME -q | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>2d. Run the following cell to create a new model if it does not exist.</b> | %%bash
gcloud ml-engine models create $MODEL_NAME --regions=$REGION | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b> 2e. List versions of the Model</b> | %%bash
gcloud ml-engine versions list --model ${MODEL_NAME} | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>2f. Run the following cell to create a new version of the model. Increment the version number like v1, v2, v3.</b> <br />
Optionally, you can delete a version using: <br />
gcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q | %%bash
#gcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q
gcloud ml-engine versions create v1 \
--model ${MODEL_NAME} \
--origin ${MODEL_BINARY} \
--runtime-version 1.12 | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b> 2g. Run an inference job on CloudML engine </b> | %%bash
INFER_JOB_NAME="job_inf_$(date +%Y%m%d_%H%M%S)"
gcloud ml-engine jobs submit prediction $INFER_JOB_NAME --model $MODEL_NAME --version v1 --data-format tf-record --region $REGION --input-paths $SERVING_DATASET --output-path $INFERENCE_PATH
| examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
<b>You can check the status of the job and other information on <a href="https://console.cloud.google.com/mlengine/jobs">GCP CloudML page</a> </b>
<b> 2h. View the prediction (output) generated by the inference job </b> | %%bash
gsutil cat ${INFERENCE_PATH}/prediction.results-00000-of-00001 | examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb | google/fhir | apache-2.0 |
Connect to Cloud SQL using the Cloud SQL Python Connector
This notebook demonstrates how to connect to and query data from a Cloud SQL database in an easy and efficient way, all from within a Jupyter-style notebook! Let's have some fun!
Using this interactive notebook
Click the run icon of each section within this notebook.
Alternatively, you can run the currently selected cell with Ctrl + Enter (or Cmd + Enter on a Mac).
To avoid any errors, wait for each section to finish in order before clicking the next "run" icon.
This sample must be connected to a Google Cloud project; nothing else is needed beyond your Google Cloud project.
You can use an existing project. Alternatively, you can create a new Cloud project with cloud credits for free.
Cloud SQL Python Connector
To connect and access our Cloud SQL database instance(s) we will leverage the Cloud SQL Python Connector.
The Cloud SQL Python Connector is a library that can be used alongside a database driver to allow users to easily connect to a Cloud SQL database without having to manually allowlist IP or manage SSL certificates.
Benefits of Using a Connector
Using a Cloud SQL connector provides the following benefits:
IAM Authorization: uses IAM permissions to control who/what can connect to your Cloud SQL instances.
Improved Security: uses robust, updated TLS 1.3 encryption and identity verification between the client connector and the server-side proxy, independent of the database protocol.
Convenience: removes the requirement to use and distribute SSL certificates, as well as manage firewalls or source/destination IP addresses.
IAM DB Authentication (optional): provides support for Cloud SQL's automatic IAM DB AuthN feature.
Supported Dialects/Drivers
Google Cloud SQL and the Python Connector currently support the following dialects of SQL: MySQL, PostgreSQL, and SQL Server.
Depending on which dialect you are using for your relational database(s) the Python Connector will utilize a different database driver.
SUPPORTED DRIVERS:
pymysql (MySQL)
pg8000 (PostgreSQL)
pytds (SQL Server)
Therefore, depending on the dialect of your database you will need to switch to the corresponding notebook!
MySQL Notebook
PostgreSQL Notebook (this notebook)
SQL Server Notebook
Getting Started
This notebook requires the following steps to be completed in order to successfully make Cloud SQL connections with the Cloud SQL Python Connector.
Authenticate to Google Cloud within Colab
Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project. | from google.colab import auth
auth.authenticate_user() | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Connect Your Google Cloud Project
Time to connect your Google Cloud Project to this notebook so that you can leverage Google Cloud from within Colab.
| #@markdown Please fill in the value below with your GCP project ID and then run the cell.
# Please fill in these values.
project_id = "" #@param {type:"string"}
# Quick input validations.
assert project_id, "Please provide a Google Cloud project ID"
# Configure gcloud.
!gcloud config set project {project_id} | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Configure Your Google Cloud Project
Configure the following in your Google Cloud Project.
IAM principal (user, service account, etc.) with the
Cloud SQL Client role.
The user logged into this notebook will be used as the IAM principal and will be granted the Cloud SQL Client role. | # grant Cloud SQL Client role to authenticated user
current_user = !gcloud auth list --filter=status:ACTIVE --format="value(account)"
!gcloud projects add-iam-policy-binding {project_id} \
--member=user:{current_user[0]} \
--role="roles/cloudsql.client" | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Enable the Cloud SQL Admin API within your project. | # enable Cloud SQL Admin API
!gcloud services enable sqladmin.googleapis.com | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Setting up Cloud SQL
A Postgres Cloud SQL instance is required for the following stages of this notebook.
Create a Postgres Instance
Running the below cell will verify the existence of a Cloud SQL instance or create a new one if one does not exist.
Creating a Cloud SQL instance may take a few minutes. | #@markdown Please fill in both the Google Cloud region and the name of your Cloud SQL instance. Once filled in, run the cell.
# Please fill in these values.
region = "us-central1" #@param {type:"string"}
instance_name = "" #@param {type:"string"}
# Quick input validations.
assert region, "Please provide a Google Cloud region"
assert instance_name, "Please provide the name of your instance"
# check if Cloud SQL instance exists in the provided region
database_version = !gcloud sql instances describe {instance_name} --format="value(databaseVersion)"
if database_version[0].startswith("POSTGRES"):
print("Found existing Postgres Cloud SQL Instance!")
else:
print("Creating new Cloud SQL instance...")
password = input("Please provide a password to be used for 'postgres' database user: ")
!gcloud sql instances create {instance_name} --database-version=POSTGRES_14 \
--region={region} --cpu=1 --memory=4GB --root-password={password} \
--database-flags=cloudsql.iam_authentication=On | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Create a Movies Database
A movies database will be used in later steps when connecting to and querying a Cloud SQL database.
To create a movies database within your Cloud SQL instance run the below command: | !gcloud sql databases create movies --instance={instance_name} | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Create Batman Database User
To create the batman database user that is used throughout the notebook, run the following gcloud command. | !gcloud sql users create batman \
--instance={instance_name} \
--password="robin" | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
<img src='https://i.pinimg.com/originals/12/64/dd/1264dd5ff31fbc65c5edbb5e1a71830e.gif' class="center"/>
Python Connector Usage
Let's now connect to Cloud SQL using the Python Connector!
Configuring Credentials
The Cloud SQL Python Connector uses the Application Default Credentials (ADC) strategy for resolving credentials.
Using the Python Connector in Cloud Run, App Engine, or Cloud Functions will automatically use the service account deployed with each service, allowing this step to be skipped.
Please see the google.auth package documentation for more information on how these credentials are sourced.
This means setting default credentials was previously done for you when you ran:
```python
from google.colab import auth
auth.authenticate_user()
```
Install Code Dependencies
It is recommended to use the Connector alongside a library that can create connection pools, such as SQLAlchemy.
This will allow connections to remain open and be reused, reducing connection overhead and the number of connections needed.
Let's pip install the Cloud SQL Python Connector as well as SQLAlchemy, using the below command. | # install dependencies
import sys
!{sys.executable} -m pip install cloud-sql-python-connector["pg8000"] SQLAlchemy | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Connect to a Postgres Instance
We are now ready to connect to a Postgres instance using the Cloud SQL Python Connector!
Let's set some parameters that are needed to connect properly to a Cloud SQL instance:
* INSTANCE_CONNECTION_NAME : The connection name to your Cloud SQL Instance, takes the form PROJECT_ID:REGION:INSTANCE_NAME.
* DB_USER : The user that the connector will use to connect to the database.
* DB_PASS : The password of the DB_USER.
* DB_NAME : The name of the database on the Cloud SQL instance to connect to. | # initialize parameters
INSTANCE_CONNECTION_NAME = f"{project_id}:{region}:{instance_name}" # i.e demo-project:us-central1:demo-instance
print(f"Your instance connection name is: {INSTANCE_CONNECTION_NAME}")
DB_USER = "batman"
DB_PASS = "robin"
DB_NAME = "movies" | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Basic Usage
To connect to Cloud SQL using the connector, initialize a Connector object and call its connect method with the proper input parameters.
The connect method takes in the parameters we previously defined, as well as a few additional parameters such as:
* driver: The name of the database driver to connect with.
* ip_type (optional): The IP type (public or private) used to connect. IP types can be either IPTypes.PUBLIC or IPTypes.PRIVATE. (Example)
* enable_iam_auth: (optional) Boolean enabling IAM based authentication. (Example)
Let's show an example! | from google.cloud.sql.connector import Connector
import sqlalchemy
# initialize Connector object
connector = Connector()
# function to return the database connection object
def getconn():
conn = connector.connect(
INSTANCE_CONNECTION_NAME,
"pg8000",
user=DB_USER,
password=DB_PASS,
db=DB_NAME
)
return conn
# create connection pool with 'creator' argument to our connection object function
pool = sqlalchemy.create_engine(
"postgresql+pg8000://",
creator=getconn,
) | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
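As an optional aside (not required by the connector and not part of the original notebook), the engine created above can also be tuned with standard SQLAlchemy pooling arguments; the values below are purely illustrative.

```python
# Optional: tune the SQLAlchemy connection pool (illustrative values)
pool = sqlalchemy.create_engine(
    "postgresql+pg8000://",
    creator=getconn,      # same connection factory as above
    pool_size=5,          # connections kept open in the pool
    max_overflow=2,       # extra connections allowed under load
    pool_timeout=30,      # seconds to wait for a free connection
    pool_recycle=1800,    # recycle connections after 30 minutes
)
```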
To use this connector with SQLAlchemy, we pass the creator argument to sqlalchemy.create_engine.
Now that we have established a connection pool, let's write a query! | # connect to connection pool
with pool.connect() as db_conn:
# create ratings table in our movies database
db_conn.execute(
"CREATE TABLE IF NOT EXISTS ratings "
"( id SERIAL NOT NULL, title VARCHAR(255) NOT NULL, "
"genre VARCHAR(255) NOT NULL, rating FLOAT NOT NULL, "
"PRIMARY KEY (id));"
)
# insert data into our ratings table
insert_stmt = sqlalchemy.text(
"INSERT INTO ratings (title, genre, rating) VALUES (:title, :genre, :rating)",
)
# insert entries into table
db_conn.execute(insert_stmt, title="Batman Begins", genre="Action", rating=8.5)
db_conn.execute(insert_stmt, title="Star Wars: Return of the Jedi", genre="Action", rating=9.1)
db_conn.execute(insert_stmt, title="The Breakfast Club", genre="Drama", rating=8.3)
# query and fetch ratings table
results = db_conn.execute("SELECT * FROM ratings").fetchall()
# show results
for row in results:
print(row) | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
You have successfully connected to a Cloud SQL instance from this notebook and made a query. YOU DID IT!
<img src=https://media.giphy.com/media/MtHGs1yo4FFKrIs55L/giphy.gif />
To close the Connector object's background resources, call its close() method at the end of your code as follows: | # cleanup connector object
connector.close() | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
IAM Database Authentication
Automatic IAM database authentication is supported for Postgres Cloud SQL instances.
This allows an IAM user to establish an authenticated connection to a Postgres database without having to set a password, by enabling the enable_iam_auth parameter in the connector's connect method.
If you are using a pre-existing Cloud SQL instance within this notebook, you may need to configure the Cloud SQL instance to allow IAM authentication by setting the cloudsql.iam_authentication database flag to On.
(Cloud SQL instances created within this notebook already have it enabled.)
IAM principals wanting to use IAM authentication to connect to a Cloud SQL instance require the Cloud SQL Instance User and Cloud SQL Client IAM roles.
Let's add the Cloud SQL Instance User role to the IAM account logged into this notebook. (Client role previously granted) | # add Cloud SQL Instance User role to current logged in IAM user
!gcloud projects add-iam-policy-binding {project_id} \
--member=user:{current_user[0]} \
--role="roles/cloudsql.instanceUser" | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Now the current IAM user can be added to the Cloud SQL instance as an IAM database user. | # add current logged in IAM user to database
!gcloud sql users create {current_user[0]} \
--instance={instance_name} \
--type=cloud_iam_user | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Finally, let's update our getconn function to connect to our Cloud SQL instance with IAM database authentication enabled.
The below sample is a limited example, as it only logs in to the Cloud SQL instance and outputs the current time. By default, new IAM database users have no permissions on a Cloud SQL instance. To connect to specific tables and perform more complex queries, permissions must be granted at the database level. (Grant Database Privileges to the IAM user) | from google.cloud.sql.connector import Connector
import sqlalchemy
# IAM database user parameter (IAM user's email)
IAM_USER = current_user[0]
# initialize connector
connector = Connector()
# getconn now using IAM user and requiring no password with IAM Auth enabled
def getconn():
conn = connector.connect(
INSTANCE_CONNECTION_NAME,
"pg8000",
user=IAM_USER,
db="postgres",
enable_iam_auth=True
)
return conn
# create connection pool
pool = sqlalchemy.create_engine(
"postgresql+pg8000://",
creator=getconn,
)
# connect to connection pool
with pool.connect() as db_conn:
# get current datetime from database
results = db_conn.execute("SELECT NOW()").fetchone()
# output time
print("Current time: ", results[0])
# cleanup connector
connector.close() | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Success! You were able to connect to Cloud SQL as an IAM-authenticated user using the Cloud SQL Python Connector!
<img src="https://media.giphy.com/media/YTbZzCkRQCEJa/giphy.gif" />
Clean Up Notebook Resources
Make sure to delete your Cloud SQL instance when you are finished with this notebook to avoid further costs. | # delete Cloud SQL instance
!gcloud sql instances delete {instance_name} | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
Appendix
Additional information provided for connecting to a Cloud SQL instance using private IP connections.
Using Private IP Connections
By default the connector connects to the Cloud SQL instance database using a Public IP address.
Private IP connections are also supported by the connector and can be easily enabled through the ip_type parameter in the connector's connect method.
To connect via Private IP, the Cloud SQL instance being connected to must have a Private IP address configured within a VPC Network. (How to Configure Private IP)
The below cell is a working sample, but it will not work within this notebook because the notebook is not within your VPC Network! The cell should be copied into an environment (Cloud Run, Cloud Functions, App Engine, etc.) that has access to the VPC Network.
Connecting Cloud Run to a VPC Network
Let's update our getconn function to connect to our Cloud SQL instance with Private IP. | from google.cloud.sql.connector import Connector, IPTypes
import sqlalchemy
# initialize connector
connector = Connector()
# getconn now set to private IP
def getconn():
conn = connector.connect(
INSTANCE_CONNECTION_NAME, # <PROJECT-ID>:<REGION>:<INSTANCE-NAME>
"pg8000",
user=DB_USER,
password=DB_PASS,
db=DB_NAME,
ip_type=IPTypes.PRIVATE
)
return conn
# create connection pool
pool = sqlalchemy.create_engine(
"postgresql+pg8000://",
creator=getconn,
)
# connect to connection pool
with pool.connect() as db_conn:
# query database and fetch results
results = db_conn.execute("SELECT * FROM ratings").fetchall()
# show results
for row in results:
print(row)
# cleanup connector
connector.close() | samples/notebooks/postgres_python_connector.ipynb | GoogleCloudPlatform/cloud-sql-python-connector | apache-2.0 |
There are two versions of each data set from PsychSignal: a simple version with fewer fields and a full version with more fields. This is the basic data set with fewer fields.
Let's go over the columns:
- asof_date: The date to which this data applies.
- symbol: stock ticker symbol of the affected company.
- source: the same value for all records in this data set
- bull_scored_messages: total count of bullish sentiment messages scored by PsychSignal's algorithm
- bear_scored_messages: total count of bearish sentiment messages scored by PsychSignal's algorithm
- bullish_intensity: score for each message's language for the strength of the bullishness present in the messages on a 0-4 scale. 0 indicates no bullish sentiment measured, 4 indicates strongest bullish sentiment measured. 4 is rare
- bearish_intensity: score for each message's language for the strength of the bearishness present in the messages on a 0-4 scale. 0 indicates no bearish sentiment measured, 4 indicates strongest bearish sentiment measured. 4 is rare
- total_scanned_messages: number of messages coming through PsychSignal's feeds and attributable to a symbol, regardless of whether the PsychSignal sentiment engine can score them for bullish or bearish intensity
- timestamp: this is our timestamp on when we registered the data.
- bull_minus_bear: subtracts the bearish intensity from the bullish intensity [BULL - BEAR] to provide an immediate net score.
- bull_bear_msg_ratio: the ratio between bull scored messages and bear scored messages.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all rows for Apple (sid 24) and explore the scores a bit with a chart. | # Filtering for AAPL
aapl = dataset[dataset.sid == 24]
aapl_df = odo(aapl.sort('asof_date'), pd.DataFrame)
plt.plot(aapl_df.asof_date, aapl_df.bull_scored_messages, marker='.', linestyle='None', color='r')
plt.plot(aapl_df.asof_date, pd.rolling_mean(aapl_df.bull_scored_messages, 30))
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Count of Bull Messages")
plt.title("Count of Bullish Messages for AAPL")
plt.legend(["Bull Messages - Single Day", "30 Day Rolling Average"], loc=2) | notebooks/data/psychsignal.stocktwits/notebook.ipynb | quantopian/research_public | apache-2.0 |
<a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently, but in the case of this data you can add it to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.psychsignal import (
stocktwits_free
)
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(stocktwits_free.total_scanned_messages.latest, 'total_scanned_messages') | # Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.psychsignal import stocktwits
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.psychsignal import stocktwits_free | notebooks/data/psychsignal.stocktwits/notebook.ipynb | quantopian/research_public | apache-2.0 |
Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields. | print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (stocktwits_free ,):
_print_fields(data)
print "---------------------------------------------------\n" | notebooks/data/psychsignal.stocktwits/notebook.ipynb | quantopian/research_public | apache-2.0 |
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters | # Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(stocktwits_free.total_scanned_messages.latest,
'total_scanned_messages')
pipe.add(stocktwits_free.bear_scored_messages .latest,
'bear_scored_messages ')
pipe.add(stocktwits_free.bull_scored_messages .latest,
'bull_scored_messages ')
pipe.add(stocktwits_free.bull_bear_msg_ratio .latest,
'bull_bear_msg_ratio ')
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid &
(stocktwits_free.total_scanned_messages.latest>20))
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output | notebooks/data/psychsignal.stocktwits/notebook.ipynb | quantopian/research_public | apache-2.0 |
Taking what we've seen from above, let's see how we'd move that into the backtester. | # This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.psychsignal import stocktwits
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.psychsignal import stocktwits_free
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add pipeline factors
pipe.add(stocktwits_free.total_scanned_messages.latest,
'total_scanned_messages')
pipe.add(stocktwits_free.bear_scored_messages .latest,
'bear_scored_messages ')
pipe.add(stocktwits_free.bull_scored_messages .latest,
'bull_scored_messages ')
pipe.add(stocktwits_free.bull_bear_msg_ratio .latest,
'bull_bear_msg_ratio ')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline') | notebooks/data/psychsignal.stocktwits/notebook.ipynb | quantopian/research_public | apache-2.0 |
Compute seed-based time-frequency connectivity in sensor space
Computes the connectivity between a seed-gradiometer close to the visual cortex
and all other gradiometers. The connectivity is computed in the time-frequency
domain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index
[1]_ is used as connectivity metric.
.. [1] Vinck et al. "An improved index of phase-synchronization for electro-
physiological data in the presence of volume-conduction, noise and
sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011. | # Author: Martin Luessi <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR
print(__doc__) | 0.14/_downloads/plot_cwt_sensor_connectivity.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
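As background for the metric used below (a summary of the estimator from Vinck et al. [1], stated here from memory rather than taken from this script), the debiased squared WPLI is computed from the imaginary parts of the single-epoch cross-spectra $X_j$ roughly as

$$
\widehat{\mathrm{WPLI}^2}_{\mathrm{debiased}} \;=\;
\frac{\sum_{j \neq k} \operatorname{Im}(X_j)\,\operatorname{Im}(X_k)}
     {\sum_{j \neq k} \left|\operatorname{Im}(X_j)\,\operatorname{Im}(X_k)\right|},
$$

where the exclusion of the $j = k$ terms removes the positive sample-size bias of the plain squared WPLI.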
Set parameters | data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for left-visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
# Use 'MEG 2343' as seed
seed_ch = 'MEG 2343'
picks_ch_names = [raw.ch_names[i] for i in picks]
# Create seed-target indices for connectivity computation
seed = picks_ch_names.index(seed_ch)
targets = np.arange(len(picks))
indices = seed_target_indices(seed, targets)
# Define wavelet frequencies and number of cycles
cwt_frequencies = np.arange(7, 30, 2)
cwt_n_cycles = cwt_frequencies / 7.
# Run the connectivity analysis using 2 parallel jobs
sfreq = raw.info['sfreq'] # the sampling frequency
con, freqs, times, _, _ = spectral_connectivity(
epochs, indices=indices,
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,
cwt_frequencies=cwt_frequencies, cwt_n_cycles=cwt_n_cycles, n_jobs=1)
# Mark the seed channel with a value of 1.0, so we can see it in the plot
con[np.where(indices[1] == seed)] = 1.0
# Show topography of connectivity from seed
title = 'WPLI2 - Visual - Seed %s' % seed_ch
layout = mne.find_layout(epochs.info, 'meg') # use full layout
tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))
tfr.plot_topo(fig_facecolor='w', font_color='k', border='k') | 0.14/_downloads/plot_cwt_sensor_connectivity.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Synthetic signals
Here, we make a synthetic measurement.
The synthetic signal $\mathrm{y}$ is simulated from the ground-truth solution $f$ and random Gaussian noise. | n = 30
N = 40
# radial coordinate
r = np.linspace(0, 1., n)
# synthetic latent function
f = np.exp(-(r-0.3)*(r-0.3)/0.1) + np.exp(-(r+0.3)*(r+0.3)/0.1)
# plotting the latent function
plt.figure(figsize=(5,3))
plt.plot(r, f)
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$f$: Function value') | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
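The forward model assumed in the rest of this notebook can be written compactly as follows (this is a restatement of what make_LosMatrix encodes, under the assumption that $A_{ij}$ holds the path length of line of sight $i$ through radial bin $j$):

$$
y_i \;=\; \sum_j A_{ij}\, f(r_j) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0, \sigma_{\mathrm{noise}}^2),
$$

i.e. each observation is a line-of-sight integral of the cylindrically symmetric profile $f(r)$ at height $z_i$, plus independent Gaussian noise (here with amplitude $0.1$).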
Prepare the synthetic signal. | # los height
z = np.linspace(-0.9,0.9, N)
# Los-matrix
A = make_LosMatrix.make_LosMatrix(r, z)
# noise amplitude
e_amp = 0.1
# synthetic observation
y = np.dot(A, f) + e_amp * np.random.randn(N)
plt.figure(figsize=(5,3))
plt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation') | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Inference
In order to carry out the inference, a custom likelihood that calculates $p(\mathbf{Y}|\mathbf{f})$ for a given $\mathbf{f}$ must be prepared according to the problem.
The method to be implemented is logp(F, Y), which calculates the log-likelihood of the data Y given F. | class AbelLikelihood(GPinv.likelihoods.Likelihood):
def __init__(self, Amat):
GPinv.likelihoods.Likelihood.__init__(self)
self.Amat = GPinv.param.DataHolder(Amat)
self.variance = GPinv.param.Param(np.ones(1), GPinv.transforms.positive)
def logp(self, F, Y):
Af = self.sample_F(F)
Y = tf.tile(tf.expand_dims(Y, 0), [tf.shape(F)[0],1,1])
return GPinv.densities.gaussian(Af, Y, self.variance)
def sample_F(self, F):
N = tf.shape(F)[0]
Amat = tf.tile(tf.expand_dims(self.Amat,0), [N, 1,1])
Af = tf.batch_matmul(Amat, tf.exp(F))
return Af
def sample_Y(self, F):
f_sample = self.sample_F(F)
return f_sample + tf.random_normal(tf.shape(f_sample)) * tf.sqrt(self.variance) | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Variational inference by StVGP
In StVGP, we approximate the posterior $p(\mathbf{f}|\mathbf{y},\theta)$ by a multivariate Gaussian distribution.
The hyperparameters $\theta$ are obtained by maximizing the evidence lower bound (ELBO) of $p(\mathbf{y}|\theta)$.
Kernel
The statistical properties of the latent function are encoded in the Gaussian process kernel.
In our example, since $f$ is a cylindrically symmetric function, we adopt the RBF_csym kernel.
MeanFunction
To make $f$ scale invariant, we add a constant mean_function to $f$. | model_stvgp = GPinv.stvgp.StVGP(r.reshape(-1,1), y.reshape(-1,1),
kern = GPinv.kernels.RBF_csym(1,1),
mean_function = GPinv.mean_functions.Constant(1),
likelihood=AbelLikelihood(A),
num_samples=10) | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
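As a reminder of the objective reported during training below (the standard variational bound; StVGP estimates the expectation term by Monte Carlo using num_samples draws), the ELBO is

$$
\mathcal{L}(\theta)
\;=\; \mathbb{E}_{q(\mathbf{f})}\big[\log p(\mathbf{y}\mid\mathbf{f})\big]
\;-\; \mathrm{KL}\big(q(\mathbf{f})\,\|\,p(\mathbf{f}\mid\theta)\big)
\;\le\; \log p(\mathbf{y}\mid\theta),
$$

where $q(\mathbf{f})$ is the multivariate Gaussian approximation to the posterior.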
Check the initial estimate | # Data Y should scatter around the transform F of the GP function f.
sample_F = model_stvgp.sample_F(100)
plt.figure(figsize=(5,3))
plt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)
for s in sample_F:
plt.plot(z, s, '-k', alpha=0.1, lw=1)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation') | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Iteration
Although the initial estimate is not very good, we start the iteration. | # This function is just for the visualization of the iteration
from IPython import display
logf = []
def logger(x):
if (logger.i % 10) == 0:
obj = -model_stvgp._objective(x)[0]
logf.append(obj)
# display
if (logger.i % 100) ==0:
plt.clf()
plt.plot(logf, '--ko', markersize=3, linewidth=1)
plt.ylabel('ELBO')
plt.xlabel('iteration')
display.display(plt.gcf())
display.clear_output(wait=True)
logger.i+=1
logger.i = 1
plt.figure(figsize=(5,3))
# Rough optimization by scipy.minimize
model_stvgp.optimize()
# Final optimization by tf.train
trainer = tf.train.AdamOptimizer(learning_rate=0.002)
_= model_stvgp.optimize(trainer, maxiter=5000, callback=logger)
display.clear_output(wait=True) | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Plot results | # Predict the latent function f, which follows Gaussian Process
r_new = np.linspace(0.,1.2, 40)
f_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))
# Data Y should scatter around the transform F of the GP function f.
sample_F = model_stvgp.sample_F(100)
plt.figure(figsize=(8,3))
plt.subplot(1,2,1)
f_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))
f_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))
plt.fill_between(r_new, f_plus, f_minus, alpha=0.2)
plt.plot(r_new, np.exp(f_pred.flatten()), label='StVGP',lw=1.5)
plt.plot(r, f, '-r', label='true',lw=1.5)# ground truth
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best')
plt.subplot(1,2,2)
for s in sample_F:
plt.plot(z, s, '-k', alpha=0.05, lw=1)
plt.plot(z, y, 'o', ms=5)
plt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
plt.legend(loc='best')
plt.tight_layout() | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
MCMC
MCMC provides fully Bayesian inference:
the hyperparameters are numerically marginalized out. | model_gpmc = GPinv.gpmc.GPMC(r.reshape(-1,1), y.reshape(-1,1),
kern = GPinv.kernels.RBF_csym(1,1),
mean_function = GPinv.mean_functions.Constant(1),
likelihood=AbelLikelihood(A)) | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
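In contrast to StVGP's point estimate of $\theta$, the GPMC samples drawn below target the hyperparameter-marginalized posterior (the standard fully Bayesian treatment):

$$
p(\mathbf{f}\mid\mathbf{y})
\;=\; \int p(\mathbf{f}\mid\mathbf{y},\theta)\, p(\theta\mid\mathbf{y})\, d\theta ,
$$

so each retained sample carries its own kernel, mean and likelihood hyperparameters rather than a single optimized value.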
Sample from the posterior | samples = model_gpmc.sample(300, thin=3, burn=500, verbose=True, epsilon=0.01, Lmax=15) | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Plot result | r_new = np.linspace(0.,1.2, 40)
plt.figure(figsize=(8,3))
# Latent function
plt.subplot(1,2,1)
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)
plt.plot(r, f, '-r', label='true',lw=1.5)# ground truth
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best')
#
plt.subplot(1,2,2)
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_sample = model_gpmc.sample_F()
plt.plot(z, f_sample[0], 'k',lw=1, alpha=0.1)
plt.plot(z, y, 'o', ms=5)
plt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
plt.legend(loc='best')
plt.tight_layout() | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Comparison between StVGP and GPMC
StVGP makes a point estimate of the hyperparameters (kernel variance and length-scale, mean function value, and likelihood variance),
while GPMC integrates them out.
Therefore, there is some difference between the two results.
Difference in the hyperparameter estimation | # make a histogram (posterior) of the hyperparameters estimated by GPMC
gpmc_hyp_samples = {
'k_variance' : [], # variance
'k_lengthscale': [], # kernel lengthscale
'mean' : [], # mean function values
'lik_variance' : [], # variance for the likelihood
}
for s in samples:
model_gpmc.set_state(s)
gpmc_hyp_samples['k_variance' ].append(model_gpmc.kern.variance.value[0])
gpmc_hyp_samples['k_lengthscale'].append(model_gpmc.kern.lengthscales.value[0])
gpmc_hyp_samples['mean'].append(model_gpmc.mean_function.c.value[0])
gpmc_hyp_samples['lik_variance'].append(model_gpmc.likelihood.variance.value[0])
plt.figure(figsize=(10,2))
# kernel variance
plt.subplot(1,4,1)
plt.title('k_variance')
_= plt.hist(gpmc_hyp_samples['k_variance'])
plt.plot([model_stvgp.kern.variance.value]*2, [0,100], '-r')
plt.subplot(1,4,2)
plt.title('k_lengthscale')
_= plt.hist(gpmc_hyp_samples['k_lengthscale'])
plt.plot([model_stvgp.kern.lengthscales.value]*2, [0,100], '-r')
plt.subplot(1,4,3)
plt.title('mean')
_= plt.hist(gpmc_hyp_samples['mean'])
plt.plot([model_stvgp.mean_function.c.value]*2, [0,100], '-r')
plt.subplot(1,4,4)
plt.title('lik_variance')
_= plt.hist(gpmc_hyp_samples['lik_variance'])
plt.plot([model_stvgp.likelihood.variance.value]*2, [0,100], '-r')
plt.tight_layout()
print('Here the red line shows the MAP estimate by StVGP') | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
Difference in the prediction. | r_new = np.linspace(0.,1.2, 40)
plt.figure(figsize=(4,3))
# StVGP
f_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))
f_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))
f_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))
plt.plot(r_new, np.exp(f_pred.flatten()), 'b', label='StVGP',lw=1.5)
plt.plot(r_new, f_plus, '--b', r_new, f_minus, '--b', lw=1.5)
# GPMC
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1, label='GPMC')
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best') | notebooks/Abel_inversion.ipynb | fujii-team/GPinv | apache-2.0 |
<br>
add_numbers is a function that takes two numbers and adds them together. | def add_numbers(x, y):
return x + y
add_numbers(1, 2) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
add_numbers updated to take an optional 3rd parameter. Using print allows printing of multiple expressions within a single cell. | def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3)) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
add_numbers updated to take an optional flag parameter. | def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True)) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Assign function add_numbers to variable a. | def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
The Python Programming Language: Types and Sequences
<br>
Use type to return the object's type. | type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Tuples are an immutable data structure (cannot be altered). | x = (1, 'a', 2, 'b')
type(x) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Lists are a mutable data structure. | x = [1, 'a', 2, 'b']
type(x) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use append to append an object to a list. | x.append(3.3)
print(x) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
This is an example of how to loop through each item in the list. | for item in x:
print(item) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Or using the indexing operator: | i=0
while( i != len(x) ):
print(x[i])
i = i + 1 | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use + to concatenate lists. | [1,2] + [3,4] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use * to repeat lists. | [1]*3 | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use the in operator to check if something is inside a list. | 1 in [1, 2, 3] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Now let's look at strings. Use bracket notation to slice a string. | x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
print(x[::-1]) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
This will return the last element of the string. | x[-1] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end. | x[-4:-2] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
This is a slice from the beginning of the string and stopping before the 3rd element. | x[:3] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
And this is a slice starting from the 3rd element of the string and going all the way to the end. | x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
| Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
split returns a list of all the words in a string, or a list split on a specific character. | firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Make sure you convert objects to strings before concatenating. | 'Chris' + 2
'Chris' + str(2) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Dictionaries associate keys with values. | x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = "Test Test"
x['Kevyn Collins-Thompson'] | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Iterate over all of the keys: | for name in x:
print(x[name]) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Iterate over all of the values: | for email in x.values():
print(email) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Iterate over all of the items in the list: | for name, email in x.items():
print(name)
print(email) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
You can unpack a sequence into different variables: | x = ('Christopher', 'Brooks', '[email protected]')
fname, lname, email = x
fname
lname | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Make sure the number of values you are unpacking matches the number of variables being assigned. | x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor')
fname, lname, email, location = x | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
The Python Programming Language: More on Strings | print("Chris" + 2)
print('Chris' + str(2)) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Python has a built in method for convenient string formatting. | sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
| Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Reading and Writing CSV files
<br>
Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.
mpg : miles per gallon
class : car classification
cty : city mpg
cyl : # of cylinders
displ : engine displacement in liters
drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd
fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)
hwy : highway mpg
manufacturer : automobile manufacturer
model : model of car
trans : type of transmission
year : model year | import csv
import pandas as pd
# Nice, sets decimple point
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
df = pd.read_csv('mpg.csv')
mpg[:3] # The first three dictionaries in our list.
df | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
csv.Dictreader has read in each row of our csv file as a dictionary. len shows that our list is comprised of 234 dictionaries. | len(mpg) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
keys gives us the column names of our csv. | mpg[0].keys() | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float. | sum(float(d['cty']) for d in mpg) / len(mpg) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Similarly this is how to find the average hwy fuel economy across all cars. | sum(float(d['hwy']) for d in mpg) / len(mpg) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use set to return the unique values for the number of cylinders the cars in our dataset have. | # set returns unique values
cylinders = set(d['cyl'] for d in mpg)
cylinders | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Here's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group. | CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Use set to return the unique values for the class types in our dataset. | vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset. | HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
if d['class'] == t: # if the cylinder amount type matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
The Python Programming Language: Dates and Times | import datetime as dt
import time as tm | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
time returns the current time in seconds since the Epoch. (January 1st, 1970) | tm.time() | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Convert the timestamp to datetime. | dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
Handy datetime attributes: | dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
timedelta is a duration expressing the difference between two dates. | delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
dt.date.today() | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
date.today returns the current local date. | today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |
<br>
The Python Programming Language: Objects and map()
<br>
An example of a class in python: | class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department)) | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | Z0m6ie/Zombie_Code | mit |