REGION - Used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.
MODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example: "my-models/fraud-detection/trial-4".
REPOSITORY - Name of the Artifact Registry repository to create or use.
IMAGE - Name of the container image that will be pushed.
MODEL_DISPLAY_NAME - Display name of the Vertex AI Model resource.
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment. To update your model artifacts without re-building the container, you must upload your model artifacts and any custom code to Cloud Storage. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
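The cell above sets only the bucket name; the other parameters described in this section also need values before the later cells will run. A minimal sketch with placeholder values (everything below is illustrative, not taken from the original notebook):

```python
# Placeholder values -- replace each of these with your own settings.
PROJECT_ID = "[your-project-id]"                          # your Google Cloud project
REGION = "us-central1"                                    # a region where Vertex AI is available
MODEL_ARTIFACT_DIR = "my-models/fraud-detection/trial-4"  # folder inside the bucket for artifacts
REPOSITORY = "custom-container-prediction"                # Artifact Registry repository name
IMAGE = "sklearn-fastapi-server"                          # container image name
MODEL_DISPLAY_NAME = "sklearn-custom-container"           # Vertex AI Model display name
```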
Write your pre-processor Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model. Create preprocess.py, which contains a class to do this scaling:
%mkdir app %%writefile app/preprocess.py import numpy as np class MySimpleScaler(object): def __init__(self): self._means = None self._stds = None def preprocess(self, data): if self._means is None: # during training only self._means = np.mean(data, axis=0) if self._stds is None: # during training only self._stds = np.std(data, axis=0) if not self._stds.all(): raise ValueError("At least one column has standard deviation of 0.") return (data - self._means) / self._stds
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train and store model with pre-processor Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn. At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:
%cd app/ import pickle import joblib from preprocess import MySimpleScaler from sklearn.datasets import load_iris from sklearn.ensemble import RandomForestClassifier iris = load_iris() scaler = MySimpleScaler() X = scaler.preprocess(iris.data) y = iris.target model = RandomForestClassifier() model.fit(X, y) joblib.dump(model, "model.joblib") with open("preprocessor.pkl", "wb") as f: pickle.dump(scaler, f)
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Upload model artifacts and custom code to Cloud Storage Before you can deploy your model for serving, Vertex AI needs access to the following files in Cloud Storage: model.joblib (model artifact) preprocessor.pkl (model artifact) Run the following commands to upload your files:
!gsutil cp model.joblib preprocessor.pkl {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}/ %cd ..
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Build a FastAPI server
%%writefile app/main.py from fastapi import FastAPI, Request import joblib import json import numpy as np import pickle import os from google.cloud import storage from preprocess import MySimpleScaler from sklearn.datasets import load_iris app = FastAPI() gcs_client = storage.Client() with open("preprocessor.pkl", 'wb') as preprocessor_f, open("model.joblib", 'wb') as model_f: gcs_client.download_blob_to_file( f"{os.environ['AIP_STORAGE_URI']}/preprocessor.pkl", preprocessor_f ) gcs_client.download_blob_to_file( f"{os.environ['AIP_STORAGE_URI']}/model.joblib", model_f ) with open("preprocessor.pkl", "rb") as f: preprocessor = pickle.load(f) _class_names = load_iris().target_names _model = joblib.load("model.joblib") _preprocessor = preprocessor @app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200) def health(): return {} @app.post(os.environ['AIP_PREDICT_ROUTE']) async def predict(request: Request): body = await request.json() instances = body["instances"] inputs = np.asarray(instances) preprocessed_inputs = _preprocessor.preprocess(inputs) outputs = _model.predict(preprocessed_inputs) return {"predictions": [_class_names[class_num] for class_num in outputs]}
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Add pre-start script FastAPI will execute this script before starting up the server. The PORT environment variable is set to equal AIP_HTTP_PORT in order to run FastAPI on the same port expected by Vertex AI.
%%writefile app/prestart.sh #!/bin/bash export PORT=$AIP_HTTP_PORT
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Store test instances to use later To learn more about formatting input instances in JSON, read the documentation.
%%writefile instances.json { "instances": [ [6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2] ] }
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Build and push container to Artifact Registry Build your container Optionally copy in your credentials to run the container locally.
# NOTE: Copy in credentials to run locally, this step can be skipped for deployment %cp $GOOGLE_APPLICATION_CREDENTIALS app/credentials.json
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker.
%%writefile Dockerfile FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 COPY ./app /app COPY requirements.txt requirements.txt RUN pip install -r requirements.txt
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
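The Dockerfile above copies a requirements.txt into the image, but that file isn't shown in this excerpt. A plausible minimal version, inferred from the imports in app/main.py (the package list and lack of version pins are assumptions; the FastAPI stack itself already comes with the base image, and you should pin versions for reproducible builds):

```python
%%writefile requirements.txt
numpy
scikit-learn
joblib
google-cloud-storage
```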
Build the image and tag the Artifact Registry path that you will push to.
!docker build \ --tag={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} \ .
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run and test the container locally (optional) Run the container locally in detached mode and provide the environment variables that the container requires. These env vars will be provided to the container by Vertex Prediction once deployed. Test the /health and /predict routes, then stop the running image.
!docker rm local-iris !docker run -d -p 80:8080 \ --name=local-iris \ -e AIP_HTTP_PORT=8080 \ -e AIP_HEALTH_ROUTE=/health \ -e AIP_PREDICT_ROUTE=/predict \ -e AIP_STORAGE_URI={BUCKET_NAME}/{MODEL_ARTIFACT_DIR} \ -e GOOGLE_APPLICATION_CREDENTIALS=credentials.json \ {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} !curl localhost/health !curl -X POST \ -d @instances.json \ -H "Content-Type: application/json; charset=utf-8" \ localhost/predict !docker stop local-iris
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Push the container to Artifact Registry Configure Docker to access Artifact Registry. Then push your container image to your Artifact Registry repository.
!gcloud beta artifacts repositories create {REPOSITORY} \ --repository-format=docker \ --location=$REGION !gcloud auth configure-docker {REGION}-docker.pkg.dev !docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy to Vertex AI Use the Python SDK to upload and deploy your model. Upload the custom container model
from google.cloud import aiplatform aiplatform.init(project=PROJECT_ID, location=REGION) model = aiplatform.Model.upload( display_name=MODEL_DISPLAY_NAME, artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}", serving_container_image_uri=f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}", )
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy the model on Vertex AI After this step completes, the model is deployed and ready for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send predictions Using Python SDK
endpoint.predict(instances=[[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]])
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Using REST
ENDPOINT_ID = endpoint.name ! curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ -d @instances.json \ https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Using gcloud CLI
!gcloud beta ai endpoints predict $ENDPOINT_ID \ --region=$REGION \ --json-request=instances.json
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
# Undeploy model and delete endpoint endpoint.delete(force=True) # Delete the model resource model.delete() # Delete the container image from Artifact Registry !gcloud artifacts docker images delete \ --quiet \ --delete-tags \ {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
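The cleanup cell above removes the endpoint, the model, and the container image, but leaves the model artifacts in Cloud Storage. If you also want to remove those (and, optionally, a bucket created only for this tutorial), a sketch of the extra steps — the delete_bucket flag is an assumption about your setup:

```python
# Delete the uploaded model artifacts from Cloud Storage
! gsutil -m rm -r {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}

# Optionally delete the bucket itself if it was created just for this tutorial
delete_bucket = False
if delete_bucket:
    ! gsutil rm -r {BUCKET_NAME}
```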
Solution: Numpy Numeric Python, or simply "numpy". An alternative to the Python list: the Numpy array. Calculations are performed over entire arrays (element-wise), which makes them easy and fast. Importing Numpy Syntax: import numpy
import numpy as np # import numpy under the np alias # Convert the following lists to numpy arrays height = [1.75, 1.65, 1.71, 1.89, 1.79] weight = [65.4, 59.2, 63.6, 88.4, 68.7] np_height = np.array( height ) np_weight = np.array( weight ) # Let's confirm these are numpy arrays type(np_height) type(np_weight) bmi = np_weight / np_height ** 2 bmi
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
Note: Numpy assumes that your array contains elements of the same type. If the array contains elements of different types, the resulting numpy array will be converted to type string. A Numpy array shouldn't be thought of as just another array: technically it is a new data type, just like int, string, float or boolean, and: Comes packaged with its own methods, i.e. it can behave differently than you'd expect.
# A numpy array with different types np.array( [1, 2.5, "are different", True ] )
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
Numpy : remarks
# a simple python list py_list = [ 1, 2, 3 ] # a numpy array numpy_array = np.array([1, 2, 3]) """ remarks: + If we add py_list to itself, it concatenates and produces a longer list. + Whereas, if we add numpy_array to itself, it performs element-wise addition. Warning: Again, be careful when mixing different python types in a numpy array. """ py_list + py_list numpy_array + numpy_array
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
Numpy Subsetting All the subsetting operations that work on a list also work on Numpy arrays, except for a few minor differences, which we look at now.
bmi # get the fourth element from the numpy array "bmi" print("The bmi of the fourth element is: " + str( bmi[3] ) ) # slice and dice print("\nThe bmi values of the 3rd and 4th elements are: " + str( bmi[2 : 4] ) ) """ Specifically for Numpy, there's another way to do subsetting via "booleans", here's how. """ print("\nBoolean array of bmi values larger than 23: " + str( bmi > 23 ) ) # Next, use this boolean array to do subsetting print("\nThe elements with bmi larger than 23 are: " + str(bmi[ bmi > 23 ]) )
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
Exercise:
RQ1: Which Numpy function do you use to create an array? Ans: array()
RQ2: Which two statements describe the advantage of the Numpy package over regular Python lists? Ans: The Numpy package provides the array, a data type that can be used to do element-wise calculations. Because Numpy arrays can only hold elements of a single type, calculations on Numpy arrays can be carried out way faster than on regular Python lists.
RQ3: What is the resulting Numpy array z after executing the following lines of code? import numpy as np; x = np.array([1, 2, 3]); y = np.array([3, 2, 1]); z = x + y Ans: array([4, 4, 4])
RQ4: What happens when you put an integer, a Boolean, and a string in the same Numpy array using the array() function? Ans: Every element is converted to a string.
Lab: Numpy
Objective: Practice with Numpy, perform calculations with it, and understand the subtle differences between Numpy arrays and Python lists.
List of lab exercises:
Your First Numpy Array -- 100xp, status: earned
Baseball player's height -- 100xp, status: earned
Lightweight baseball players -- 100xp, status: earned
Numpy Side Effects -- 50xp, status: earned
Subsetting Numpy Arrays -- 100xp, status: earned
1. Your First Numpy array
""" Instructions: + Import the "numpy" package as "np", so that you can refer to "numpy" with "np". + Use "np.array()" to create a Numpy array from "baseball". Name this array "np_baseball". + Print out the "type of np_baseball" to check that you got it right. """ # Create list baseball baseball = [180, 215, 210, 210, 188, 176, 209, 200] # Import the numpy package as np import numpy as np # Create a Numpy array from baseball: np_baseball np_baseball = np.array(baseball) print(np_baseball) # Print out type of np_baseball print(type( np_baseball) )
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
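The remaining exercises assume that height and weight lists with data on more than a thousand MLB players are already loaded in the workspace (see the SOCR link in the next cell). If you run this notebook standalone, a short made-up stand-in like the one below lets most cells execute — the numbers are hypothetical, and the index-based subsetting in exercise 6 (index 50 and 100-110) still needs the full dataset:

```python
# Hypothetical stand-in data so the following cells can run without the full MLB dataset.
height = [74, 72, 73, 69, 76, 71, 75, 70]           # heights in inches
weight = [180, 215, 210, 188, 229, 176, 209, 200]   # weights in pounds
```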
2. Baseball player's height Preface: You are a huge baseball fan. You decide to call the MLB (Major League Baseball) and ask around for some more statistics on the height of the main players. They pass along data on more than a thousand players, which is stored as a regular Python list: height. The height is expressed in inches. Can you make a Numpy array out of it and convert the units to meters?
""" Instructions: + Create a Numpy array from height. Name this new array np_height. + Print "np_height". + Multiply "np_height" with 0.0254 to convert all height measurements from inches to meters. - Store the new values in a new array, "np_height_m". + Print out np_height_m and check if the output makes sense. """ # height is available as a regular list # http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_MLB_HeightsWeights#References # Import numpy import numpy as np # Create a Numpy array from height: np_height np_height = np.array( height ) # Print out np_height print("The Height of the baseball players are: " + str( np_height ) ) # Convert np_height to m: np_height_m np_height_m = np_height * 0.0254 # a inch is 0.0245 meters # Print np_height_m print("\nThe Height of the baseball players in meters are: " + str( np_height_m ) )
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
3. Baseball player's BMI: Preface: The MLB also offers to let you analyze their weight data. Again, both are available as regular Python lists: height and weight. height is in inches and weight is in pounds. It's now possible to calculate the BMI of each baseball player. Python code to convert height to a Numpy array with the correct units is already available in the workspace. Follow the instructions step by step and finish the game!
""" Instructions: + Create a Numpy array from the weight list with the correct units. - Multiply by 0.453592 to go from pounds to kilograms. - Store the resulting Numpy array as np_weight_kg. + Use np_height_m and np_weight_kg to calculate the BMI of each player. - Use the following equation: BMI = weight( kg ) / height( m ) - Save the resulting numpy array as "bmi". + Print out "bmi". """ # height and weight are available as a regular lists # Import numpy import numpy as np # Create array from height with correct units: np_height_m np_height_m = np.array(height) * 0.0254 # Create array from weight with correct units: np_weight_kg np_weight_kg = np.array( weight ) * 0.453592 # Calculate the BMI: bmi bmi = np_weight_kg / np_height_m ** 2 # Print out bmi print("\nThe Bmi of all the baseball players are: " + str( bmi ) )
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
4. Lightweight baseball players: To subset both regular Python lists and Numpy arrays, you can use square brackets: x = [4, 9, 6, 3, 1] x[1] import numpy as np y = np.array(x) y[1] For Numpy specifically, you can also use boolean Numpy arrays: high = y > 5 y[high]
""" Instructions: + Create a boolean Numpy array: - the element of the array should be "True", - If the corresponding baseball player's BMI is below 21. - You can use the "<" operator for this - Name the array "light", Print the array "light". + Print out a Numpy array with the BMIs of all baseball players whose BMI is below 21. - Use "light" inside square brackets to do a selection on the bmi array. """ # height and weight are available as a regular lists # Import numpy import numpy as np # Calculate the BMI: bmi np_height_m = np.array(height) * 0.0254 np_weight_kg = np.array(weight) * 0.453592 bmi = np_weight_kg / (np_height_m ** 2) # Create the light array light = np.array( bmi < 21 ) # Print out light print("\nLightweight baseball players" + str( light ) ) # Print out BMIs of all baseball players whose BMI is below 21 print(bmi[ light < 21 ])
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
5. Numpy Side Effects: Preface: Numpy arrays cannot contain elements with different types. If you try to build such a list, some of the elements' types are changed to end up with a homogeneous list. This is known as type coercion. Second, the typical arithmetic operators, such as +, -, * and / have a different meaning for regular Python lists and Numpy arrays. Have a look at this line: ```In [1]: np.array([True, 1, 2]) + np.array([3, 4, False]) Out[1]: array([4, 5, 2])``` Here, the + operator is summing the Numpy arrays element-wise: the True element is coerced to the integer 1 and added to 3 to give 4, and the same happens with the other two pairs of numbers. Which code chunk builds the exact same Python data structure? Ans: np.array([4, 3, 0]) + np.array([0, 2, 2]). 6. Subsetting Numpy Arrays: Luckily, Python lists and Numpy arrays behave similarly when subsetting, wohoooo!
""" Instructions: + Subset np_weight: print out the element at index 50. + Print out a sub-array of np_height: It contains the elements at index 100 up to and including index 110 """ # height and weight are available as a regular lists # Import numpy import numpy as np # Store weight and height lists as numpy arrays np_weight = np.array(weight) np_height = np.array(height) # Print out the weight at index 50 # Ans: print(np_weight[50]) # Print out sub-array of np_height: index 100 up to and including index 110 # Ans: print(np_height[100 : 111])
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
dataDogma/Computer-Science
gpl-3.0
Run photon packets in parallel plane (film) medium This is an example code to run a Monte Carlo calculation for photon packets travelling in a scattering medium. Set random number seed. This is so that the code produces the same trajectories each time (for testing purposes). Comment this out or set the seed to None for real calculations.
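The cells below use the names np, sc, ri, mc and det without importing them, so the imports presumably happened earlier in the notebook. A sketch of the implied imports — the module paths follow the usual layout of the structcol package and may need adjusting for your installation:

```python
import numpy as np
import structcol as sc                         # provides sc.Quantity (pint-based units)
from structcol import refractive_index as ri   # refractive index helpers (ri.n, ri.n_eff)
from structcol import montecarlo as mc         # Monte Carlo scattering model
from structcol import detector as det          # reflectance/transmittance detection
```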
seed = 1 # Properties of system ntrajectories = 100 # number of trajectories nevents = 100 # number of scattering events in each trajectory wavelen = sc.Quantity('600 nm') # wavelength for scattering calculations radius = sc.Quantity('0.125 um') # particle radius volume_fraction = sc.Quantity(0.5, '') # volume fraction of particles n_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or n_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the n_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. # n_particle and n_matrix can have complex indices if absorption is desired n_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation n_matrix, volume_fraction) boundary = 'film' # geometry of sample, can be 'film' or 'sphere', see below for tutorial # on sphere case incidence_theta_min = sc.Quantity(0, 'rad') # min incidence angle of illumination (should be >=0 and < pi/2) incidence_theta_max = sc.Quantity(0, 'rad') # max incidence angle of illumination (should be >=0 and < pi/2) # (in this case, all trajectories hit the sample normally to the surface) incidence_phi_min = sc.Quantity(0, 'rad') # min incidence angle of illumination (should be >=0 and <= pi/2) incidence_phi_max = sc.Quantity(2*np.pi, 'rad') # max incidence angle of illumination (should be >=0 and <= pi/2) #%%timeit # Calculate the phase function and scattering and absorption coefficients from the single scattering model p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, mie_theory=False) # Initialize the trajectories r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max, incidence_theta_data = None, incidence_phi_data = None) # We can input specific incidence angles for each trajectory by setting # incidence_theta_data or incidence_phi_data to not None. This can be useful if we # have BRDF data on a specific material, and we want to model how light would reflect # off said material into a structurally colored film. The incidence angle data can be # Quantity arrays, but if they aren't, the values must be in radians. r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') # Generate a matrix of all the randomly sampled angles first sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Create step size distribution step = mc.sample_step(nevents, ntrajectories, mu_scat) # Create trajectories object trajectories = mc.Trajectory(r0, k0, W0) # Run photons trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step)
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Plot trajectories
trajectories.plot_coord(ntrajectories, three_dim=True)
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Calculate the fraction of trajectories that are reflected and transmitted
thickness = sc.Quantity('50 um') # thickness of the sample film reflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance)) print('Absorption coefficient = ' + str(mu_abs))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Add absorption to the system (in the particle and/or in the matrix) Having absorption in the particle or in the matrix implies that their refractive indices are complex (have a non-zero imaginary component). To include the effect of this absorption in the calculations, we just need to specify the complex refractive index in n_particle and/or n_matrix. Everything else remains the same as for the non-absorbing case.
# Properties of system n_particle = sc.Quantity(1.54 + 0.001j, '') n_matrix = ri.n('vacuum', wavelen) + 0.0001j n_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) # Calculate the phase function and scattering and absorption coefficients from the single scattering model p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen) # Initialize the trajectories r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max) r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') # Generate a matrix of all the randomly sampled angles first sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Create step size distribution step = mc.sample_step(nevents, ntrajectories, mu_scat) # Create trajectories object trajectories = mc.Trajectory(r0, k0, W0) # Run photons trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step) # Calculate the fraction of reflected and transmitted trajectories thickness = sc.Quantity('50 um') reflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
As expected, the reflected fraction decreases if the system is absorbing. Calculate the reflectance for a system of core-shell particles When the system is made of core-shell particles, we must specify the refractive index, radius, and volume fraction of each layer, from innermost to outermost. The reflectance is normalized, so it goes from 0 to 1.
# Properties of system ntrajectories = 100 # number of trajectories nevents = 100 # number of scattering events in each trajectory wavelen = sc.Quantity('600 nm') radius = sc.Quantity(np.array([0.125, 0.13]), 'um') # specify the radii from innermost to outermost layer n_particle = sc.Quantity(np.array([1.54,1.33]), '') # specify the index from innermost to outermost layer n_matrix = ri.n('vacuum', wavelen) n_medium = ri.n('vacuum', wavelen) volume_fraction = sc.Quantity(0.5, '') # this is the volume fraction of the core-shell particle as a whole boundary = 'film' # geometry of sample # Calculate the volume fractions of each layer vf_array = np.empty(len(radius)) r_array = np.array([0] + radius.magnitude.tolist()) for r in np.arange(len(r_array)-1): vf_array[r] = (r_array[r+1]**3-r_array[r]**3) / (r_array[-1:]**3) * volume_fraction.magnitude n_sample = ri.n_eff(n_particle, n_matrix, vf_array) #%%timeit # Calculate the phase function and scattering and absorption coefficients from the single scattering model # (this absorption coefficient is of the scatterer, not of an absorber added to the system) p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen) # Initialize the trajectories r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max) r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') # Generate a matrix of all the randomly sampled angles first sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Create step size distribution step = mc.sample_step(nevents, ntrajectories, mu_scat) # Create trajectories object trajectories = mc.Trajectory(r0, k0, W0) # Run photons trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step) # Calculate the reflection and transmission fractions thickness = sc.Quantity('50 um') reflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Calculate the reflectance for a polydisperse system We can calculate the reflectance of a polydisperse system with either one or two species of particles, meaning that there are one or two mean radii, and each species has its own size distribution. We then need to specify the mean radius, the polydispersity index (pdi), and the concentration of each species. For example, consider a bispecies system of 90$\%$ of 200 nm polystyrene particles and 10$\%$ of 300 nm particles, with each species having a polydispersity index of 1$\%$. In this case, the mean radii are [200, 300] nm, the pdi are [0.01, 0.01], and the concentrations are [0.9, 0.1]. If the system is monospecies, we still need to specify the polydispersity parameters in 2-element arrays. For example, the mean radii become [200, 200] nm, the pdi become [0.01, 0.01], and the concentrations become [1.0, 0.0]. To run the code for polydisperse systems, we just need to specify the parameters accounting for polydispersity when calling 'mc.calc_scat()'. To include absorption into the polydisperse system calculation, we just need to use the complex refractive index of the particle and/or the matrix. The reflectance is normalized, so it goes from 0 to 1. Note: the code currently does not handle polydispersity for systems of core-shell particles.
# Properties of system n_particle = sc.Quantity(1.54, '') n_matrix = ri.n('vacuum', wavelen) n_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) # define the parameters for polydispersity radius = sc.Quantity('125 nm') radius2 = sc.Quantity('150 nm') concentration = sc.Quantity(np.array([0.9,0.1]), '') pdi = sc.Quantity(np.array([0.01, 0.01]), '') # Calculate the phase function and scattering and absorption coefficients from the single scattering model # Need to specify extra parameters for the polydisperse (and bispecies) case p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, radius2=radius2, concentration=concentration, pdi=pdi, polydisperse=True) # Initialize the trajectories r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max) r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') # Generate a matrix of all the randomly sampled angles first sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Create step size distribution step = mc.sample_step(nevents, ntrajectories, mu_scat) # Create trajectories object trajectories = mc.Trajectory(r0, k0, W0) # Run photons trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step) # Calculate the reflection and transmission fractions thickness = sc.Quantity('50 um') reflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Calculate the reflectance for a sample with surface roughness Two classes of surface roughnesses are implemented in the model: 1) When the surface roughness is high compared to the wavelength of light, we assume that light “sees” a nanoparticle before “seeing” the sample as an effective medium. The photons take a step based on the scattering length determined by the nanoparticle Mie resonances, without including the structure factor. After this first step, the photons are inside the sample and proceed to get scattered by the sample as an effective medium. We call this type of roughness "fine", and we input a fine_roughness parameter that is the fraction of the surface covered by "fine" roughness. For example, a fine_roughness of 0.3 means that 30% of incident light will hit fine surface roughness (e.g. will "see" a Mie scatterer first). The rest of the light will see a smooth surface, which could be flat or have coarse roughness. The fine_roughness parameter must be between 0 and 1. 2) When the surface roughness is low relative to the wavelength, we can assume that light encounters a locally smooth surface with a slope relative to the z=0 plane. The model corrects the Fresnel reflection and refraction to account for the different angles of incidence due to the roughness. The coarse_roughness parameter is the rms slope of the surface and should be larger than 0. There is no upper bound, but when the coarse roughness tends to infinity, the surface becomes too "spiky" and light can no longer hit it, which reduces the reflectance down to 0. To run the code with either type of surface roughness, the following functions are called differently: calc_scat(): to include fine roughness, need to input fine_roughness > 0. In this case, it returns a 2-element mu_scat, with the first element being the scattering coefficient of the sample as a whole, and the second being the scattering coefficient from Mie theory. If fine_roughness=0, the function returns only the first scattering coefficient in a calculation without roughness. initialize(): to include coarse roughness, need to input coarse_roughness > 0, in which case the function returns kz0_rot and kz0_refl that are needed for calc_refl_trans(). sample_step(): to include fine roughness, need to input fine_roughness > 0. calc_refl_trans(): to include coarse roughness, need to input kz0_rot and kz0_refl from initialize(). To include fine roughness, need to input fine_roughness and n_matrix. $\textbf{Note 1:}$ to reiterate, fine_roughness + coarse_roughness can add up to more than 1. Coarse roughness is how much coarse roughness there is on the surface, and it can be larger than 1. The larger the value, the larger the slopes on the surface. Fine roughness is what fraction of the surface is covered by fine surface roughness so it must be between 0 and 1. Both types of roughnesses can be included together or separately into the calculation. $\textbf{Note 2:}$ Surface roughness has not yet been implemented to work with spherical boundary conditions.
# Properties of system ntrajectories = 100 # number of trajectories nevents = 100 # number of scattering events in each trajectory wavelen = sc.Quantity('600 nm') radius = sc.Quantity('0.125 um') volume_fraction = sc.Quantity(0.5, '') n_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or n_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the n_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. # n_particle and n_matrix can have complex indices if absorption is desired boundary = 'film' # geometry of sample, can be 'film' or 'sphere' n_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) # Need to specify fine_roughness and coarse_roughness fine_roughness = sc.Quantity(0.6, '') coarse_roughness = sc.Quantity(1.1, '') # Need to specify fine roughness parameter in this function p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, fine_roughness=fine_roughness, n_matrix=n_matrix) # The output of mc.initialize() depends on whether there is coarse roughness or not if coarse_roughness > 0.: r0, k0, W0, kz0_rotated, kz0_reflected = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max, coarse_roughness=coarse_roughness) else: r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max, coarse_roughness=coarse_roughness) kz0_rotated = None kz0_reflected = None r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Need to specify the fine roughness parameter in this function step = mc.sample_step(nevents, ntrajectories, mu_scat, fine_roughness=fine_roughness) trajectories = mc.Trajectory(r0, k0, W0) trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step) z_low = sc.Quantity('0.0 um') cutoff = sc.Quantity('50 um') # If there is coarse roughness, need to specify kz0_rotated and kz0_reflected. reflectance, transmittance = det.calc_refl_trans(trajectories, cutoff, n_medium, n_sample, boundary, kz0_rot=kz0_rotated, kz0_refl=kz0_reflected) print('R = '+ str(reflectance)) print('T = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Run photon packets in a medium with a spherical boundary Example code to run a Monte Carlo calculation for photon packets travelling in a sample with a spherical boundary. There are only a few subtle differences between running the basic Monte Carlo calculation for a sphere and a film: 1. Set boundary='sphere' instead of 'film'. 2. After initialization, multiply r0 by assembly_diameter/2. This corresponds to a spot size that is equal to the size of the sphere. 3. assembly_diameter is passed for a sphere where thickness is passed for a film. The sphere also has a few extra options for more complex Monte Carlo simulations, and more plotting options that allow you to visually check the results. initialize(): When the argument boundary='sphere', you can set plot_initial=True to see the initial positions of the trajectories on the sphere. The blue arrows show the original directions of the incident light, and the green arrows show the directions after correction for refraction. For sphere boundary, incidence angle currently must be 0. calc_refl_trans(): when argument plot_exits=True, the function plots the reflected and transmitted trajectory exits from the sphere. Blue dots mark the last trajectory position inside the sphere, before exiting. The red dots mark the intersection of the trajectory with the sphere surface. The green dots mark the trajectory position outside the sphere, just after exiting. Calculate reflectance for a sphere sample
# Properties of system ntrajectories = 100 # number of trajectories nevents = 100 # number of scattering events in each trajectory wavelen = sc.Quantity('600 nm') # wavelength for scattering calculations radius = sc.Quantity('0.125 um') # particle radius assembly_diameter = sc.Quantity('10 um')# diameter of sphere assembly volume_fraction = sc.Quantity(0.5, '') # volume fraction of particles n_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or n_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the n_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. # n_particle and n_matrix can have complex indices if absorption is desired n_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation n_matrix, volume_fraction) boundary = 'sphere' # geometry of sample, can be 'film' or 'sphere' # Calculate the phase function and scattering and absorption coefficients from the single scattering model # (this absorption coefficient is of the scatterer, not of an absorber added to the system) p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen) # Initialize the trajectories for a sphere # set plot_initial to True to see the initial positions of trajectories. The default value of plot_initial is False r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, plot_initial = True, sample_diameter = assembly_diameter, spot_size = assembly_diameter) # make positions, directions, and weights into quantities with units r0 = sc.Quantity(r0, 'um') k0 = sc.Quantity(k0, '') W0 = sc.Quantity(W0, '') # Generate a matrix of all the randomly sampled angles first sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p) # Create step size distribution step = mc.sample_step(nevents, ntrajectories, mu_scat) #print(step) # Create trajectories object trajectories = mc.Trajectory(r0, k0, W0) # Run photons trajectories.absorb(mu_abs, step) trajectories.scatter(sintheta, costheta, sinphi, cosphi) trajectories.move(step) # Calculate reflectance and transmittance # Set plot_exits to true to plot positions of trajectories just before (red) and after (green) exit. # The default value of plot_exits is False. # The default value of run_tir is True, so you must set it to False to exclude the fresnel reflected trajectories. reflectance, transmittance = det.calc_refl_trans(trajectories, assembly_diameter, n_medium, n_sample, boundary, plot_exits = True) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
For spherical boundaries, there tends to be more light reflected back into the sample upon an attempted exit, due to Fresnel reflection (this includes both total internal reflection and partial reflections). We've addressed this problem by including the option to re-run these Fresnel reflected trajectories as new Monte Carlo trajectories. To re-run these trajectory components as new Monte Carlo trajectories, there are a few extra arguments that you must include in calc_refl_trans(): run_fresnel_traj = True <br> This boolean tells calc_refl_trans() that we want to re-run the Fresnel reflected trajectories. mu_abs=mu_abs, mu_scat=mu_scat, p=p <br> These values are needed because when run_fresnel_traj=True, a new Monte Carlo simulation is calculated, which requires scattering calculations. Calculate reflectance for a sphere sample, re-running the Fresnel reflected components of trajectories
# Calculate reflectance and transmittance # The default value of plot_exits is False, so you need not set it to avoid plotting trajectories. # The default value of run_tir is True, so you need not set it to include fresnel reflected trajectories. reflectance, transmittance = det.calc_refl_trans(trajectories, assembly_diameter, n_medium, n_sample, boundary, run_fresnel_traj = True, mu_abs=mu_abs, mu_scat=mu_scat, p=p) print('Reflectance = '+ str(reflectance)) print('Transmittance = '+ str(transmittance))
montecarlo_tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Installing development tools Let's start by installing Java. We'll use the default-jdk, which uses OpenJDK. This will take a while, so feel free to go for a walk or do some stretching. Note: Alternatively, you could install the proprietary Oracle JDK instead.
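The cells in this notebook call a run() helper to execute shell commands; its definition isn't included in this excerpt. A minimal stand-in (an assumption about the original helper — it simply runs a command and prints its output):

```python
import subprocess

def run(cmd):
    # Run a shell command and print its combined output (stand-in for the notebook's helper).
    print('>>', cmd)
    result = subprocess.run(cmd, shell=True, check=False, text=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    print(result.stdout)
```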
# Update and upgrade the system before installing anything else. run('apt-get update > /dev/null') run('apt-get upgrade > /dev/null') # Install the Java JDK. run('apt-get install default-jdk > /dev/null') # Check the Java version to see if everything is working well. run('javac -version')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Now, let's install Gradle, which we'll need to automate the build and running processes for our application. Note: Alternatively, you could install and configure Maven instead.
import os # Download the gradle source. gradle_version = 'gradle-5.0' gradle_path = f"/opt/{gradle_version}" if not os.path.exists(gradle_path): run(f"wget -q -nc -O gradle.zip https://services.gradle.org/distributions/{gradle_version}-bin.zip") run('unzip -q -d /opt gradle.zip') run('rm -f gradle.zip') # We're choosing to use the absolute path instead of adding it to the $PATH environment variable. def gradle(args): run(f"{gradle_path}/bin/gradle --console=plain {args}") gradle('-v')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
build.gradle We'll also need a build.gradle file which will allow us to invoke some useful commands.
%%writefile build.gradle plugins { // id 'idea' // Uncomment for IntelliJ IDE // id 'eclipse' // Uncomment for Eclipse IDE // Apply java plugin and make it a runnable application. id 'java' id 'application' // 'shadow' allows us to embed all the dependencies into a fat jar. id 'com.github.johnrengelman.shadow' version '4.0.3' } // This is the path of the main class, stored within ./src/main/java/ mainClassName = 'samples.quickstart.WordCount' // Declare the sources from which to fetch dependencies. repositories { mavenCentral() } // Java version compatibility. sourceCompatibility = 1.8 targetCompatibility = 1.8 // Use the latest Apache Beam major version 2. // You can also lock into a minor version like '2.9.+'. ext.apacheBeamVersion = '2.+' // Declare the dependencies of the project. dependencies { shadow "org.apache.beam:beam-sdks-java-core:$apacheBeamVersion" runtime "org.apache.beam:beam-runners-direct-java:$apacheBeamVersion" runtime "org.slf4j:slf4j-api:1.+" runtime "org.slf4j:slf4j-jdk14:1.+" testCompile "junit:junit:4.+" } // Configure 'shadowJar' instead of 'jar' to set up the fat jar. shadowJar { baseName = 'WordCount' // Name of the fat jar file. classifier = null // Set to null, otherwise 'shadow' appends a '-all' to the jar file name. manifest { attributes('Main-Class': mainClassName) // Specify where the main class resides. } }
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Creating the directory structure Java and Gradle expect a specific directory structure. This helps organize large projects into a standard structure. For now, we only need a place where our quickstart code will reside. That has to go within ./src/main/java/.
run('mkdir -p src/main/java/samples/quickstart')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Minimal word count The following example is the "Hello, World!" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word. There are many scenarios where all the data does not fit in memory. Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments. WordCount.java
%%writefile src/main/java/samples/quickstart/WordCount.java package samples.quickstart; import org.apache.beam.sdk.Pipeline; import org.apache.beam.sdk.io.TextIO; import org.apache.beam.sdk.options.PipelineOptions; import org.apache.beam.sdk.options.PipelineOptionsFactory; import org.apache.beam.sdk.transforms.Count; import org.apache.beam.sdk.transforms.Filter; import org.apache.beam.sdk.transforms.FlatMapElements; import org.apache.beam.sdk.transforms.MapElements; import org.apache.beam.sdk.values.KV; import org.apache.beam.sdk.values.TypeDescriptors; import java.util.Arrays; public class WordCount { public static void main(String[] args) { String inputsDir = "data/*"; String outputsPrefix = "outputs/part"; PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create(); Pipeline pipeline = Pipeline.create(options); pipeline .apply("Read lines", TextIO.read().from(inputsDir)) .apply("Find words", FlatMapElements.into(TypeDescriptors.strings()) .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+")))) .apply("Filter empty words", Filter.by((String word) -> !word.isEmpty())) .apply("Count words", Count.perElement()) .apply("Write results", MapElements.into(TypeDescriptors.strings()) .via((KV<String, Long> wordCount) -> wordCount.getKey() + ": " + wordCount.getValue())) .apply(TextIO.write().to(outputsPrefix)); pipeline.run(); } }
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
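The build-and-run step below reads its input from data/*, and the file listing mentions data/kinglear.txt, but this excerpt doesn't show how that file gets there. A sketch for fetching it — the gs://apache-beam-samples/shakespeare/kinglear.txt path is the one commonly used in Beam examples, so treat it as an assumption; any plain-text file placed in data/ works just as well:

```python
# Create the input directory and download a public-domain text to count words in.
run('mkdir -p data')
run('gsutil cp gs://apache-beam-samples/shakespeare/kinglear.txt data/')
```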
Build and run
Let's first check what the final file system structure looks like. These are all the files required to build and run our application.
build.gradle - build configuration for Gradle
src/main/java/samples/quickstart/WordCount.java - application source code
data/kinglear.txt - input data, this could be any file or files
We are now ready to build the application using gradle build.
# Build the project. gradle('build') # Check the generated build files. run('ls -lh build/libs/')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
There are two files generated:
* The content.jar file, the application generated from the regular build command. It's only a few kilobytes in size.
* The WordCount.jar file, with the baseName we specified in the shadowJar section of the build.gradle file. It's several megabytes in size, with all the required libraries it needs to run embedded in it.
The file we're actually interested in is the fat JAR file WordCount.jar. To run the fat JAR, we'll use the gradle runShadow command.
# Run the shadow (fat jar) build. gradle('runShadow') # Sample the first 20 results, remember there are no ordering guarantees. run('head -n 20 outputs/part-00000-of-*')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Distributing your application We can run our fat JAR file as long as we have a Java Runtime Environment installed. To distribute, we copy the fat JAR file and run it with java -jar.
# You can now distribute and run your Java application as a standalone jar file. run('cp build/libs/WordCount.jar .') run('java -jar WordCount.jar') # Sample the first 20 results, remember there are no ordering guarantees. run('head -n 20 outputs/part-00000-of-*')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Word count with comments Below is mostly the same code as above, but with comments explaining every line in more detail.
%%writefile src/main/java/samples/quickstart/WordCount.java package samples.quickstart; import org.apache.beam.sdk.Pipeline; import org.apache.beam.sdk.io.TextIO; import org.apache.beam.sdk.options.PipelineOptions; import org.apache.beam.sdk.options.PipelineOptionsFactory; import org.apache.beam.sdk.transforms.Count; import org.apache.beam.sdk.transforms.Filter; import org.apache.beam.sdk.transforms.FlatMapElements; import org.apache.beam.sdk.transforms.MapElements; import org.apache.beam.sdk.values.KV; import org.apache.beam.sdk.values.PCollection; import org.apache.beam.sdk.values.TypeDescriptors; import java.util.Arrays; public class WordCount { public static void main(String[] args) { String inputsDir = "data/*"; String outputsPrefix = "outputs/part"; PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create(); Pipeline pipeline = Pipeline.create(options); // Store the word counts in a PCollection. // Each element is a KeyValue of (word, count) of types KV<String, Long>. PCollection<KV<String, Long>> wordCounts = // The input PCollection is an empty pipeline. pipeline // Read lines from a text file. .apply("Read lines", TextIO.read().from(inputsDir)) // Element type: String - text line // Use a regular expression to iterate over all words in the line. // FlatMapElements will yield an element for every element in an iterable. .apply("Find words", FlatMapElements.into(TypeDescriptors.strings()) .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+")))) // Element type: String - word // Keep only non-empty words. .apply("Filter empty words", Filter.by((String word) -> !word.isEmpty())) // Element type: String - word // Count each unique word. .apply("Count words", Count.perElement()); // Element type: KV<String, Long> - key: word, value: counts // We can process a PCollection through other pipelines, too. // The input PCollection are the wordCounts from the previous step. wordCounts // Format the results into a string so we can write them to a file. .apply("Write results", MapElements.into(TypeDescriptors.strings()) .via((KV<String, Long> wordCount) -> wordCount.getKey() + ": " + wordCount.getValue())) // Element type: str - text line // Finally, write the results to a file. .apply(TextIO.write().to(outputsPrefix)); // We have to explicitly run the pipeline, otherwise it's only a definition. pipeline.run(); } } # Build and run the project. The 'runShadow' task implicitly does a 'build'. gradle('runShadow') # Sample the first 20 results, remember there are no ordering guarantees. run('head -n 20 outputs/part-00000-of-*')
examples/notebooks/get-started/try-apache-beam-java.ipynb
iemejia/incubator-beam
apache-2.0
Setup Google Cloud project
PROJECT = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT = shell_output[0] print("Project ID:", PROJECT) print("Region:", REGION)
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Set configurations
VERSION = 'v01' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' ENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier' CICD_IMAGE_NAME = 'cicd:latest' CICD_IMAGE_URI = f"gcr.io/{PROJECT}/{CICD_IMAGE_NAME}"
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
1. Run CI/CD steps locally
import os os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME os.environ['ENDPOINT_DISPLAY_NAME'] = ENDPOINT_DISPLAY_NAME
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Run the model artifact testing
!py.test src/tests/model_deployment_tests.py::test_model_artifact -s
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Run create endpoint
!python build/utils.py \ --mode=create-endpoint\ --project={PROJECT}\ --region={REGION}\ --endpoint-display-name={ENDPOINT_DISPLAY_NAME}
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Run deploy model
!python build/utils.py \ --mode=deploy-model\ --project={PROJECT}\ --region={REGION}\ --endpoint-display-name={ENDPOINT_DISPLAY_NAME}\ --model-display-name={MODEL_DISPLAY_NAME}
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Test deployed model endpoint
!py.test src/tests/model_deployment_tests.py::test_model_endpoint
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
2. Execute the Model Deployment CI/CD routine in Cloud Build
The CI/CD routine is defined in the model-deployment.yaml file, and consists of the following steps:
1. Load and test the trained model interface.
2. Create an endpoint in Vertex AI if it doesn't exist.
3. Deploy the model to the endpoint.
4. Test the endpoint.
Build CI/CD container image for Cloud Build
This is the runtime environment where the steps of testing and deploying the model will be executed.
!echo $CICD_IMAGE_URI !gcloud builds submit --tag $CICD_IMAGE_URI build/. --timeout=15m
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Run CI/CD from model deployment using Cloud Build
REPO_URL = "https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai.git" # Change to your github repo. BRANCH = "main" SUBSTITUTIONS=f"""\ _REPO_URL='{REPO_URL}',\ _BRANCH={BRANCH},\ _CICD_IMAGE_URI={CICD_IMAGE_URI},\ _PROJECT={PROJECT},\ _REGION={REGION},\ _MODEL_DISPLAY_NAME={MODEL_DISPLAY_NAME},\ _ENDPOINT_DISPLAY_NAME={ENDPOINT_DISPLAY_NAME},\ """ !echo $SUBSTITUTIONS !gcloud builds submit --no-source --config build/model-deployment.yaml --substitutions {SUBSTITUTIONS} --timeout=30m
06-model-deployment.ipynb
GoogleCloudPlatform/mlops-with-vertex-ai
apache-2.0
Fetching a web page with Requests
# Fetch the gihyo.jp page data with Requests import requests r = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat') r.status_code # Get the status code r.text[:50] # Get the first 50 characters
4_scraping/4_2_scraping.ipynb
takanory/pymook-samplecode
mit
Making full use of Requests connpass API reference: https://connpass.com/about/api/
# Get an API response in JSON format r = requests.get('https://connpass.com/api/v1/event/?keyword=python') data = r.json() # Get the decoded JSON data for event in data['events']: print(event['title']) # The various HTTP methods are supported payload = {'key1': 'value1', 'key2': 'value2'} r = requests.post('http://httpbin.org/post', data=payload) r = requests.put('http://httpbin.org/put', data=payload) r = requests.delete('http://httpbin.org/delete') r = requests.head('http://httpbin.org/get') r = requests.options('http://httpbin.org/get') # Handy features of Requests r = requests.get('http://httpbin.org/get', params=payload) r.url r = requests.get('https://httpbin.org/basic-auth/user/passwd', auth=('user', 'passwd')) r.status_code
4_scraping/4_2_scraping.ipynb
takanory/pymook-samplecode
mit
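Two Requests features worth adding to the patterns above (not shown in the original cell, but part of the standard Requests API) are request timeouts and raise_for_status(), which turns HTTP error codes into exceptions:

```python
import requests

try:
    r = requests.get('https://connpass.com/api/v1/event/?keyword=python', timeout=10)
    r.raise_for_status()  # raises requests.HTTPError for 4xx/5xx responses
    data = r.json()
except requests.RequestException as e:
    print('request failed:', e)
```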
httpbin(1): HTTP Client Testing Service https://httpbin.org/
Parsing web pages with Beautiful Soup 4
# Fetch the "Gihyo Cat Club News" with Beautiful Soup 4
import requests
from bs4 import BeautifulSoup
r = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')
soup = BeautifulSoup(r.content, 'html.parser')
title = soup.title  # get the title tag
type(title)  # the object's type is Tag
print(title)  # check the contents of the title
print(title.text)  # get the text inside the title

# Get a single entry of the Gihyo Cat Club News
div = soup.find('div', class_='readingContent01')
li = div.find('li')  # get the first li tag inside the div
print(li.a['href'])  # get the href attribute of the a tag inside the li
print(li.a.text)  # get the string inside the a tag
li.a.text.split(maxsplit=1)  # split into date and title with str.split()

# Get all entries of the Gihyo Cat Club News
div = soup.find('div', class_='readingContent01')
for li in div.find_all('li'):  # get all li tags inside the div
    url = li.a['href']
    date, text = li.a.text.split(maxsplit=1)
    print('{},{},{}'.format(date, text, url))
4_scraping/4_2_scraping.ipynb
takanory/pymook-samplecode
mit
Mastering Beautiful Soup 4
# タグの情報を取得する div = soup.find('div', class_='readingContent01') type(div) # データの型はTag型 div.name div['class'] div.attrs # 全属性を取得 # さまざまな検索方法 a_tags = soup.find_all('a') # タグ名を指定 len(a_tags) import re for tag in soup.find_all(re.compile('^b')): # 正規表現で指定 print(tag.name) for tag in soup.find_all(['html', 'title']): # リストで指定 print(tag.name) # キーワード引数での属性指定 tag = soup.find(id='categoryNavigation') # id属性を指定して検索 tag.name, tag.attrs tags = soup.find_all(id=True) # id属性があるタグを全て検索 len(tags) div = soup.find('div', class_='readingContent01') # class属性はclass_と指定する div.attrs div = soup.find('div', {'class': 'readingContent01'}) # 辞書形式でも指定できる div.attrs # CSSセレクターを使用した検索 soup.select('title') # タグ名を指定 tags = soup.select('body a') # body タグの下のaタグ len(a_tags) a_tags = soup.select('p > a') # pタグの直下のaタグ len(a_tags) soup.select('body > a') # bodyタグの直下のaタグは存在しない div = soup.select('.readingContent01') # classを指定 div = soup.select('div.readingContent01') div = soup.select('#categoryNavigation') # idを指定 div = soup.select('div#categoryNavigation') a_tag = soup.select_one('div > a') # 最初のdivタグ直下のaタグを返す
4_scraping/4_2_scraping.ipynb
takanory/pymook-samplecode
mit
Contents
I. The crystal structure
 A. Download and visualize
 B. Try assigning a forcefield
II. Parameterizing a small molecule
 A. Isolate the ligand
 B. Assign bond orders and hydrogens
 C. Generate forcefield parameters
III. Prepping the protein
 A. Strip waters
 B. Histidine
IV. Prep for dynamics
 A. Assign the forcefield
 B. Attach and configure simulation methods
 C. Equilibrate the protein
I. The crystal structure
First, we'll download and investigate the 3AID crystal structure.
A. Download and visualize
protease = mdt.from_pdb('3AID')
protease
protease.draw()
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
B. Try assigning a forcefield This structure is not ready for MD - this command will raise a ParameterizationError Exception. After running this calculation, click on the Errors/Warnings tab to see why.
amber_ff = mdt.forcefields.DefaultAmber()
newmol = amber_ff.create_prepped_molecule(protease)
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
You should see 3 errors:
1. The residue name ARQ was not recognized
1. Atom HD1 in residue HIS69, chain A was not recognized
1. Atom HD1 in residue HIS69, chain B was not recognized
(There's also a warning about bond distances, but these can generally be fixed with an energy minimization before running dynamics.)
We'll start by tackling the small molecule "ARQ".
II. Parameterizing a small molecule
We'll use GAFF (the generalized Amber force field) to create force field parameters for the small ligand.
A. Isolate the ligand
Click on the ligand to select it, then we'll use that selection to create a new molecule.
sel = mdt.widgets.ResidueSelector(protease)
sel

drugres = mdt.Molecule(sel.selected_residues[0])
drugres.draw2d(width=700, show_hydrogens=True)
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
B. Assign bond orders and hydrogens
A PDB file provides only limited information; it often doesn't indicate bond orders, hydrogen locations, or formal charges. These can be added, however, with the set_hybridization_and_saturate tool:
drugmol = mdt.tools.set_hybridization_and_saturate(drugres)
drugmol.draw(width=500)
drugmol.draw2d(width=700, show_hydrogens=True)
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
C. Generate forcefield parameters We'll next generate forcefield parameters using this ready-to-simulate structure. NOTE: for computational speed, we use the gasteiger charge model. This is not advisable for production work! am1-bcc or esp are far likelier to produce sensible results.
drug_parameters = mdt.create_ff_parameters(drugmol, charges='gasteiger')
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
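For production-quality parameters, the same call can be made with the slower but more reliable charge models mentioned above, for example (this may take considerably longer to run):

```python
# Same workflow, but with AM1-BCC charges instead of Gasteiger
drug_parameters = mdt.create_ff_parameters(drugmol, charges='am1-bcc')
```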
III. Prepping the protein Section II. dealt with getting forcefield parameters for an unknown small molecule. Next, we'll prep the other part of the structure. A. Strip waters Waters in crystal structures are usually stripped from a simulation as artifacts of the crystallization process. Here, we'll remove the waters from the protein structure.
dehydrated = mdt.Molecule([atom for atom in protease.atoms if atom.residue.type != 'water'])
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
B. Histidine Histidine is notoriously tricky, because it exists in no less than three different protonation states at biological pH (7.4) - the "delta-protonated" form, referred to with residue name HID; the "epsilon-protonated" form aka HIE; and the doubly-protonated form HIP, which has a +1 charge. Unfortunately, crystallography isn't usually able to resolve the difference between these three. Luckily, these histidines are pretty far from the ligand binding site, so their protonation is unlikely to affect the dynamics. We'll therefore use the guess_histidine_states function to assign a reasonable starting guess.
mdt.guess_histidine_states(dehydrated)
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
IV. Prep for dynamics
With these problems fixed, we can successfully assign a forcefield and set up the simulation.
A. Assign the forcefield
Now that we have parameters for the drug and have dealt with histidine, the forcefield assignment will succeed:
amber_ff = mdt.forcefields.DefaultAmber()
amber_ff.add_ff(drug_parameters)
sim_mol = amber_ff.create_prepped_molecule(dehydrated)
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
B. Attach and configure simulation methods Armed with the forcefield parameters, we can connect an energy model to compute energies and forces, and an integrator to create trajectories:
sim_mol.set_energy_model(mdt.models.OpenMMPotential, implicit_solvent='obc', cutoff=8.0*u.angstrom)
sim_mol.set_integrator(mdt.integrators.OpenMMLangevin, timestep=2.0*u.fs)
sim_mol.configure_methods()
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
C. Equilibrate the protein
The next series of cells first minimizes the crystal structure to remove clashes, then heats the system to 300 K.
mintraj = sim_mol.minimize()
mintraj.draw()

traj = sim_mol.run(20*u.ps)
viewer = traj.draw(display=True)
viewer.autostyle()
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
Autodesk/molecular-design-toolkit
apache-2.0
Atmospheric drag
The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using our new module poliastro.twobody.perturbations!
R = Earth.R.to(u.km).value k = Earth.k.to(u.km**3 / u.s**2).value orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb')) # parameters of a body C_D = 2.2 # dimentionless (any value would do) A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2 m = 100 # kg B = C_D * A / m # parameters of the atmosphere rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3 H0 = Earth.H0.to(u.km).value tof = (100000 * u.s).to(u.day).value tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb') cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag, R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0) rr = orbit.sample(tr, method=cowell_with_ad) plt.ylabel('h(t)') plt.xlabel('t, days') plt.plot(tr.value, rr.data.norm() - Earth.R)
docs/source/examples/Natural and artificial perturbations.ipynb
newlawrence/poliastro
mit
Evolution of RAAN due to the J2 perturbation
We can also see how the J2 perturbation changes RAAN over time!
r0 = np.array([-2384.46, 5729.01, 3050.46])  # km
v0 = np.array([-7.36138, -2.98997, 1.64354])  # km/s
k = Earth.k.to(u.km**3 / u.s**2).value

orbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)

tof = (48.0 * u.h).to(u.s).value
rr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation,
                J2=Earth.J2.value, R=Earth.R.to(u.km).value)

raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]

plt.ylabel('RAAN(t)')
plt.xlabel('t, s')
plt.plot(np.linspace(0, tof, 2000), raans)
docs/source/examples/Natural and artificial perturbations.ipynb
newlawrence/poliastro
mit
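For reference, the secular RAAN drift produced by J2 — a useful sanity check on the numerical result above — is, to first order,

$$
\dot{\Omega} = -\frac{3}{2}\, J_2\, n \left(\frac{R_\oplus}{p}\right)^{2} \cos i,
\qquad p = a\,(1 - e^{2}), \quad n = \sqrt{\mu / a^{3}},
$$

so a prograde orbit (i < 90°) regresses westward, which is the trend the plot should show.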
3rd body
Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
# database keeping positions of bodies in Solar system over time solar_system_ephemeris.set('de432s') j_date = 2454283.0 * u.day # setting the exact event date is important tof = (60 * u.day).to(u.s).value # create interpolant of 3rd body coordinates (calling in on every iteration will be just too slow) body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2) epoch = Time(j_date, format='jd', scale='tdb') initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg, 0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch) # multiply Moon gravity by 400 so that effect is visible :) cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body, k_third=400 * Moon.k.to(u.km**3 / u.s**2).value, third_body=body_r) tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb') rr = initial.sample(tr, method=cowell_with_3rdbody) frame = OrbitPlotter3D() frame.set_attractor(Earth) frame.plot_trajectory(rr, label='orbit influenced by Moon') frame.show()
docs/source/examples/Natural and artificial perturbations.ipynb
newlawrence/poliastro
mit
Thrusts
Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such maneuver is a simultaneous change of eccentricity and inclination.
from poliastro.twobody.thrust import change_inc_ecc ecc_0, ecc_f = 0.4, 0.0 a = 42164 # km inc_0 = 0.0 # rad, baseline inc_f = (20.0 * u.deg).to(u.rad).value # rad argp = 0.0 # rad, the method is efficient for 0 and 180 f = 2.4e-6 # km / s2 k = Earth.k.to(u.km**3 / u.s**2).value s0 = Orbit.from_classical( Earth, a * u.km, ecc_0 * u.one, inc_0 * u.deg, 0 * u.deg, argp * u.deg, 0 * u.deg, epoch=Time(0, format='jd', scale='tdb') ) a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f) cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d) tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb') rr = s0.sample(tr, method=cowell_with_ad) frame = OrbitPlotter3D() frame.set_attractor(Earth) frame.plot_trajectory(rr, label='orbit with artificial thrust') frame.show()
docs/source/examples/Natural and artificial perturbations.ipynb
newlawrence/poliastro
mit
Units support all functionality that is supported by floats. Unit combinations are automatically taken care of.
dist = mg.Length(65, "mile")
time = mg.Time(30, "min")
speed = dist / time
print "The speed is {}".format(speed)

#Let's do a more sensible unit.
print "The speed is {}".format(speed.to("mile h^-1"))
notebooks/2013-01-01-Units.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
Note that complex units are specified as space-separated powers of units. Powers are specified using "^". E.g., "kg m s^-1". Only integer powers are supported. Now, let's do some basic science.
g = mg.FloatWithUnit(9.81, "m s^-2") #Acceleration due to gravity
m = mg.Mass(2, "kg")
h = mg.Length(10, "m")
print "The force is {}".format(m * g)
print "The potential energy is {}".format((m * g * h).to("J"))
notebooks/2013-01-01-Units.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
Some highly complex conversions are possible with this system. Let's do some made up units. We will also demonstrate pymatgen's internal unit consistency checks.
made_up = mg.FloatWithUnit(100, "Ha^3 bohr^-2")
print made_up.to("J^3 ang^-2")

try:
    made_up.to("J^2")
except mg.UnitError as ex:
    print ex
notebooks/2013-01-01-Units.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
For arrays, we have the equivalent EnergyArray, ... and ArrayWithUnit classes. All other functionality remains the same.
dists = mg.LengthArray([1, 2, 3], "mile")
times = mg.TimeArray([0.11, 0.12, 0.23], "h")
print "Speeds are {}".format(dists / times)
notebooks/2013-01-01-Units.ipynb
materialsvirtuallab/matgenb
bsd-3-clause
Merge CSV databases
from tools import get_psycinfo_database

words_df = get_psycinfo_database()
words_df.head()

#words_df.to_csv("data/PsycInfo/processed/psychinfo_combined.csv.bz2", encoding='utf-8',compression='bz2')
Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb
aboSamoor/compsocial
gpl-3.0
Load PsychINFO unified database
#psychinfo = pd.read_csv("data/PsycInfo/processed/psychinfo_combined.csv.bz2", encoding='utf-8', compression='bz2')
psychinfo = words_df
Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb
aboSamoor/compsocial
gpl-3.0
Term appearance in abstract and title
abstract_occurrence = [] for x,y in psychinfo[["Term", "Abstract"]].fillna("").values: if x.lower() in y.lower(): abstract_occurrence.append(1) else: abstract_occurrence.append(0) psychinfo["term_in_abstract"] = abstract_occurrence title_occurrence = [] for x,y in psychinfo[["Term", "Title"]].fillna("").values: if x.lower() in y.lower(): title_occurrence.append(1) else: title_occurrence.append(0) psychinfo["term_in_title"] = title_occurrence psychinfo_search = psychinfo.drop('Abstract', 1) psychinfo_search = psychinfo_search.drop('Title', 1) term_ID = {"multiculturalism": 1, "polyculturalism": 2, "cultural pluralism": 3, "monocultural": 4, "monoracial": 5, "bicultural": 6, "biracial": 7, "biethnic": 8, "interracial": 9, "multicultural": 10, "multiracial": 11, "polycultural": 12, "polyracial": 13, "polyethnic": 14, "mixed race": 15, "mixed ethnicity": 16, "other race": 17, "other ethnicity": 18} psychinfo_search["term_ID"] = psychinfo_search.Term.map(term_ID) psychinfo_search["Type of Book"].value_counts() type_of_book = { 'Handbook/Manual': 1, 'Textbook/Study Guide': 2, 'Conference Proceedings': 3, 'Reference Book': 2, 'Classic Book': 4,'Handbook/Manual\n\nTextbook/Study Guide': 5, 'Reference Book\n\nTextbook/Study Guide': 5,'Classic Book\n\nTextbook/Study Guide': 5, 'Handbook/Manual\n\nReference Book': 5,'Conference Proceedings\n\nTextbook/Study Guide': 5, 'Reference Book\r\rTextbook/Study Guide': 5,'Conference Proceedings\r\rTextbook/Study Guide': 5} psychinfo_search["type_of_book"] = psychinfo_search["Type of Book"].map(type_of_book) psychinfo_search["cited_references"] = psychinfo_search['Cited References'].map(lambda text:len(text.strip().split("\n")),"ignore") psychinfo_search['Document Type'].value_counts() document_type = {'Journal Article': 1, 'Dissertation': 2, 'Chapter': 3, 'Review-Book': 4, 'Comment/Reply': 6, 'Editorial': 6, 'Chapter\n\nReprint': 3, 'Erratum/Correction': 6, 'Review-Media': 6, 'Abstract Collection': 6, 'Letter': 6, 'Obituary': 6, 'Chapter\n\nComment/Reply': 3, 'Column/Opinion': 6, 'Reprint': 5, 'Bibliography': 5, 'Journal Article\n\nReprint': 1, 'Chapter\r\rReprint': 3, 'Chapter\n\nJournal Article\n\nReprint': 3, 'Bibliography\n\nChapter': 3, 'Encyclopedia Entry': 5, 'Chapter\r\rJournal Article\r\rReprint': 3, 'Review-Software & Other': 6, 'Publication Information': 6, 'Journal Article\r\rReprint': 1, 'Reprint\n\nReview-Book': 4} psychinfo_search['document_type'] = psychinfo_search['Document Type'].map(document_type) psychinfo_search["conference_dich"] = psychinfo_search["Conference"].fillna("").map(lambda x: int((len(x) > 0))) psychinfo_search['Publication Type'].value_counts() publication_type = {'Journal\n\nPeer Reviewed Journal': 1, 'Book\n\nEdited Book': 3, 'Dissertation Abstract': 2, 'Book\n\nAuthored Book': 3, 'Journal\r\rPeer Reviewed Journal': 1, 'Electronic Collection': 1, 'Journal\n\nPeer-Reviewed Status-Unknown': 1, 'Book\r\rEdited Book': 3, 'Book': 3, 'Journal\r\rPeer-Reviewed Status-Unknown': 1, 'Book\r\rAuthored Book': 3, 'Encyclopedia': 4} psychinfo_search['publication_type'] = psychinfo_search['Publication Type'].map(publication_type) (psychinfo_search["publication_type"] * psychinfo_search["conference_dich"]).value_counts() selection = (psychinfo_search["publication_type"] == 3) * (psychinfo_search["conference_dich"] == 1) psychinfo_search[selection][["Publication Type", "Conference"]] psychinfo_search['Language'].value_counts() language = {'English': 1, 'French': 2, 'Spanish': 3, 'Italian': 4, 'German': 5, 'Portuguese': 6, 'Dutch': 7, 
'Chinese': 8, 'Greek': 9, 'Hebrew': 10, 'Turkish': 10, 'Russian': 10, 'Serbo-Croatian': 10, 'Slovak': 10, 'Japanese': 10, 'Hungarian': 10, 'Czech': 10, 'Danish': 10, 'Romanian': 10, 'Polish': 10, 'Norwegian': 10, 'Swedish': 10, 'Finnish': 10, 'NonEnglish': 10, 'Arabic': 10, 'Afrikaans': 10} psychinfo_search['language'] = psychinfo_search['Language'].map(language) #psychinfo_search["PsycINFO Classification Code"].value_counts().to_csv("data/PsycInfo/processed/PsycINFO_Classification_Code.csv") #psychinfo_search["Tests & Measures"].value_counts().to_csv("data/PsycInfo/processed/Tests_&_Measures.csv") #psychinfo_search["Key Concepts"].value_counts().to_csv("data/PsycInfo/processed/Key_Concepts.csv") #psychinfo_search["Location"].value_counts().to_csv("data/PsycInfo/processed/Location.csv") #psychinfo_search["MeSH Subject Headings"].value_counts().to_csv("data/PsycInfo/processed/MeSH_Subject_Headings.csv") #psychinfo_search["Journal Name"].value_counts().to_csv("data/PsycInfo/processed/Journal_Name.csv") #psychinfo_search["Institution"].value_counts().to_csv("data/PsycInfo/processed/Institution.csv") len(psychinfo_search["Population Group"].value_counts()) #psychinfo_search["Methodology"].value_counts() def GetCats(text): pattern = re.compile("([0-9]+)") results = [100*(int(x)//100) for x in pattern.findall(text)] if len(set(results))>1: return 4300 else: return results[0] psychinfo_search["PsycINFO_Classification_Code"] = psychinfo_search["PsycINFO Classification Code"].map(GetCats, "ignore") lists = psychinfo["PsycINFO Classification Code"].map(GetCats, "ignore") len(set([x for x in lists.dropna()])) #Number of unique categories psychinfo_search["grants_sponsorship"] = psychinfo_search["Grant/Sponsorship"].fillna("").map(lambda x: int(len(x) > 0)) #psychinfo_search.to_csv("data/PsycInfo/processed/psychinfo_term_search.csv.bz2", encoding='utf-8', compression='bz2') #psychinfo_search = psychinfo_search.drop('Title', 1) #psychinfo_search["Methodology"].value_counts().to_csv("data/PsycInfo/Manual_Mapping/Methodology.csv") #psychinfo_search["Population Group"].value_counts().to_csv("data/PsycInfo/Manual_Mapping/Population_Group.csv")
Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb
aboSamoor/compsocial
gpl-3.0
PsycINFO Tasks Keep the current spreadsheet and add the following: 1. ~~Add Term in Abstract to spreadsheet~~ (word co-occurrence and control for the length of the abstract--lambda(len(abstract)) )do this for NSF/NIH data as well 1. ~~Add Term in Title to spreadsheet~~ 1. ~~Copy the word data into a new column (title it 'terms')--> code them as the following: 1 = multiculturalism, 2 = polyculturalism, 3 = cultural pluralism, 4 = monocultural, 5 = monoracial, 6 = bicultural, 7 = biracial, 8 = biethnic, 9 = interracial, 10 = multicultural, 11 = multiracial, 12 = polycultural, 13 = polyracial, 14 = polyethnic, 15 = mixed race, 16 = mixed ethnicity, 17 = other race, 18 = other ethnicity~~ 1. Search all options in set for the following categories: -- I will manually categorize them once you give all options in each set 1. ~~"Type of Book"~~ 1. ~~"PsycINFO Classification Code"~~ ~~1. (used the classification codes[recoded to most basic category levels] -- subcategories created by PsycInfo (22)-- multiple categories = 4300)~~ 1. ~~"Document Type"~~ 1. ~~"Grant/Scholarship"~~ 1. ~~(create a dichotomized variable 0/1)~~ 1. ~~"Tests & Measures"--> csv (no longer necessary)~~ 1. ~~(Too many categories---needs to be reviewed manually/carefully in excel)~~ 1. ~~"Publication Type"~~ 1. ~~"Publication Status"~~ 1. "Population Group" 1. (Need to be mapped manually and then recategorized) 1. We need: gender, age (abstract, years) 1. "Methodology" 1. (can make specific methods dichotomous--may remove if unnecessary) 1. "Conference" 1. ~~Right now, this is text (~699 entries)--> dichotomize variable.~~ ~~If it is a conference ie there is a text = 1, if there is NaN = 0.~~ 1. Then, I will incorporate this as a new category in "Publication Type" and remove this column).??? [not currently included as a category--overlaps with category 3 in Publication Type = Books] 1. "Key Concepts"--> csv 1. (word co-occurrence) 1. "Location"-->csv--> sent to Barbara 1. (categorized by region--multiple regions) 1. ~~"Language"~~ ~~1. I am not sure about my "other" language (10) category -- I put everything with less than 10 entries into one category.~~ 1. "MeSH Subject Headings"--> csv (may no longer be necessary?) 1. (word co-occurrence) 1. "Journal Name"-->csv--> sent to Jian Xin 1. (categorized by H-index in 2014) 1. "Institution"-->csv --> sent to Barbara 1. (categorized by state, region & country) 1. ~~Count the number of cited references for each entry~~ ***Once we extract the csv files for these columns, I will categorize them. Once all of these corrections have been made, make a new spreadsheet and delete the following information: 1. Volume 1. Publisher 1. Accession Number 1. Author(s) 1. Issue 1. Cited References 1. Publication Status (had no variance)--only first posting 1. Document Type???
len(psychinfo_search["Population Group"].value_counts())
Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb
aboSamoor/compsocial
gpl-3.0
Creating the map
Instead of a scatterplot with radii, the library only lets us make a heatmap (optionally weighted).
roma = pandas.read_csv("../data/Roma_towers.csv") coordinate = roma[['lat', 'lon']].values heatmap = gmaps.heatmap(coordinate) gmaps.display(heatmap) # TODO scrivere che dietro queste due semplici linee ci sta un pomeriggio intero di smadonnamenti colosseo = (41.890183, 12.492369) import gmplot from gmplot import GoogleMapPlotter # gmap = gmplot.from_geocode("San Francisco") mappa = gmplot.GoogleMapPlotter(41.890183, 12.492369, 11) #gmap.plot(latitudes, longitudes, 'cornflowerblue', edge_width=10) #gmap.plot((41.890183, 41.891183), (12.492369, 12.493369), 'cornflowerblue', edge_width=10) #gmap.scatter(more_lats, more_lngs, '#3B0B39', size=40, marker=False) #gmap.scatter(marker_lats, marker_lngs, 'k', marker=True) #gmap.heatmap(heat_lats, heat_lngs) #mappa.scatter((41.890183, 41.891183), (12.492369, 12.493369), color='#3B0B39', size=40, marker=False) #mappa.scatter(roma.lat.values, # roma.lon.values, # color='#3333ff', # size=0, # marker=False) mappa.heatmap(roma.lat.values,roma.lon.values) mappa.draw("../html/heatmap.html") #print a
src/heatmap_and_range.ipynb
FedericoMuciaccia/SistemiComplessi
mit
NOTES from looking at the map
There seem to be problems with the antenna positions: there are antennas on the Tiber, on Ponte Sisto, inside the little park at Castel Sant'Angelo, in the middle of the big lawn at Sapienza, on top of the Physics department... There also seems to be a strange clustering along the main traffic arteries. That is reasonable if the goal is to guarantee coverage in a city with heavy tourist flows like Rome, but probably not to the point of making 7 antennas around Piazza del Pantheon plausible. There are also pairs of isolated antennas that appear to be only a few meters apart. These are probably reconstruction artifacts. Mozilla's reconstruction algorithm probably has several problems. If this is the situation for the antennas, we don't dare imagine the situation for the Wi-Fi routers. These measurements and reconstructions need to be accurate, because their future geolocation service will be built on top of them. Someone should point this out to them (maybe they'll hire us :-) )
Analysis of the antennas' coverage radius
Since we will need to make a plot with logarithmic scales, we keep only the data with range != 0.
# condizioni di filtro raggioMin = 1 # raggioMax = 1000 raggiPositivi = roma.range >= raggioMin # raggiCorti = roma.range < raggioMax # query con le condizioni #romaFiltrato = roma[raggiPositivi & raggiCorti] romaFiltrato = roma[raggiPositivi] raggi = romaFiltrato.range print max(raggi) # logaritmic (base 2) binning in log-log (base 10) plots of integer histograms def logBinnedHist(histogramResults): """ histogramResults = numpy.histogram(...) OR matplotlib.pyplot.hist(...) returns x, y to be used with matplotlib.pyplot.step(x, y, where='post') """ # TODO così funziona solo con l'istogramma di pyplot; # quello di numpy restituisce solo la tupla (values, binEdges) values, binEdges, others = histogramResults # print binEdges # TODO # if 0 in binEdges: # return "error: log2(0) = ?" # print len(values), len(binEdges) # print binEdges # TODO vedere quando non si parte da 1 # int arrotonda all'intero inferiore linMin = min(binEdges) linMax = max(binEdges) # print linMin, linMax logStart = int(numpy.log2(linMin)) logStop = int(numpy.log2(linMax)) # print logStart, logStop nLogBins = logStop - logStart + 1 # print nLogBins logBins = numpy.logspace(logStart, logStop, num=nLogBins, base=2, dtype=int) # print logBins # 1,2,4,8,16,32,64,128,256,512,1024 ###################### linStart = 2**logStop + 1 linStop = linMax # print linStart, linStop nLinBins = linStop - linStart + 1 # print nLinBins linBins = numpy.linspace(linStart, linStop, num=nLinBins, dtype=int) # print linBins ###################### bins = numpy.append(logBins, linBins) # print bins # print len(bins) # TODO rendere generale questa funzione!!! totalValues, binEdges, otherBinNumbers = scipy.stats.binned_statistic(raggi.values, raggi.values, statistic='count', bins=bins) # print totalValues # print len(totalValues) # uso le proprietà dei logaritmi in base 2: # 2^(n+1) - 2^n = 2^n correzioniDatiCanalizzatiLog = numpy.delete(logBins, -1) # print correzioniDatiCanalizzatiLog # print len(correzioniDatiCanalizzatiLog) correzioniDatiCanalizzatiLin = numpy.ones(nLinBins, dtype=int) # print correzioniDatiCanalizzatiLin # print len(correzioniDatiCanalizzatiLin) correzioniDatiCanalizzati = numpy.append(correzioniDatiCanalizzatiLog, correzioniDatiCanalizzatiLin) # print correzioniDatiCanalizzati # print len(correzioniDatiCanalizzati) x = numpy.concatenate(([0], bins)) conteggi = totalValues/correzioniDatiCanalizzati # TODO caso speciale per il grafico di sotto # (per non fare vedere la parte oltre l'ultima potenza di 2) l = len(correzioniDatiCanalizzatiLin) conteggi[-l:] = numpy.zeros(l, dtype='int') y = numpy.concatenate(([0], conteggi, [0])) return x, y # creazione di un istogramma log-log per la distribuzione del raggio di copertura # TODO provare a raggruppare le code # esempio: con bins=100 # oppure con canalizzazione a logaritmo di 2, ma mediato # in modo che venga equispaziato nel grafico logaritmico # il programma vuole pesati i dati e non i canali # si potrebbe implementare una mappa che pesa i dati # secondo la funzione divisione intera per logaritmo di 2 # TODO mettere cerchietto che indica il range massimo oppure scritta in rosso "20341 m!" # TODO spiegare perché ci sono così tanti conteggi a 1,2,4,... metri # TODO ricavare il range dai dati grezzi, facendo un algoritmo di clustering # sulle varie osservazioni delle antenne. machine learning? 
# TODO scrivere funzione che fa grafici logaritmici con canali # equispaziati nel plot logaritmico (canali pesati) # impostazioni plot complessivo # pyplot.figure(figsize=(20,8)) # dimensioni in pollici pyplot.figure(figsize=(10,10)) matplotlib.pyplot.xlim(10**0,10**5) matplotlib.pyplot.ylim(10**-3,10**2) pyplot.title('Distribuzione del raggio di copertura') pyplot.ylabel("Numero di antenne") pyplot.xlabel("Copertura [m]") # pyplot.gca().set_xscale("log") # pyplot.gca().set_yscale("log") pyplot.xscale("log") pyplot.yscale("log") # lin binning distribuzioneRange = pyplot.hist(raggi.values, bins=max(raggi)-min(raggi), histtype='step', color='#3385ff', label='linear binning') # log_2 binning xLog2, yLog2 = logBinnedHist(distribuzioneRange) matplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', linewidth=2, label='log_2 weighted binning') #where = mid OR post # matplotlib.pyplot.plot(xLog2, yLog2) # linea verticale ad indicare il massimo grado pyplot.axvline(x=max(raggi), color='#808080', linestyle='dotted', label='max range (41832m)') # legenda e salvataggio pyplot.legend(loc='lower left', frameon=False) pyplot.savefig('../img/range/infinite_log_binning.svg', format='svg', dpi=600, transparent=True)
src/heatmap_and_range.ipynb
FedericoMuciaccia/SistemiComplessi
mit
Frequency-rank
# istogramma sugli interi unique, counts = numpy.unique(raggi.values, return_counts=True) # print numpy.asarray((unique, counts)).T rank = numpy.arange(1,len(unique)+1) frequency = numpy.array(sorted(counts, reverse=True)) pyplot.figure(figsize=(20,10)) pyplot.title('Distribuzione del raggio di copertura') pyplot.ylabel("Numero di antenne") pyplot.xlabel("Copertura [m] o ranking") pyplot.xscale("log") pyplot.yscale("log") matplotlib.pyplot.xlim(10**0,10**4) matplotlib.pyplot.ylim(10**0,10**2) matplotlib.pyplot.step(x=rank, y=frequency, where='post', label='frequency-rank', color='#00cc44') matplotlib.pyplot.scatter(x=unique, y=counts, marker='o', color='#3385ff', label='linear binning (scatter)') matplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', label='log_2 weighted binning') pyplot.legend(loc='lower left', frameon=False) pyplot.savefig('../img/range/range_distribution.svg', format='svg', dpi=600, transparent=True)
src/heatmap_and_range.ipynb
FedericoMuciaccia/SistemiComplessi
mit
Cumulative histogram
The cumulative distribution function cdf(x) is the probability that a real-valued random variable X will take a value less than or equal to x.
conteggi, binEdges = numpy.histogram(raggi.values, bins=max(raggi)-min(raggi)) conteggiCumulativi = numpy.cumsum(conteggi) valoriRaggi = numpy.delete(binEdges, -1) N = len(raggi.values) pyplot.figure(figsize=(12,10)) pyplot.title('Raggio di copertura') pyplot.ylabel("Numero di antenne") pyplot.xlabel("Copertura [m]") pyplot.xscale("log") pyplot.yscale("log") matplotlib.pyplot.xlim(10**0,10**5) matplotlib.pyplot.ylim(10**0,10**4) matplotlib.pyplot.step(x=valoriRaggi, y=conteggiCumulativi, where='post', label='Cumulata', color='#009999') matplotlib.pyplot.step(x=valoriRaggi, y=N-conteggiCumulativi, where='post', label='N - Cumulata', color='#ff0066') pyplot.axhline(y=N, color='#808080', linestyle='dotted', label='N_max = 6505') pyplot.legend(loc='lower left', frameon=False) pyplot.savefig('../img/range/range_cumulated_distribution.svg', format='svg', dpi=600, transparent=True) # TODO fare fit a mano e controllare le relazioni tra i vari esponenti
src/heatmap_and_range.ipynb
FedericoMuciaccia/SistemiComplessi
mit
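In formulas, what the cell above plots is the unnormalized empirical CDF and its complement,

$$
C(r) = \sum_{i=1}^{N} \mathbf{1}\,[\,r_i \le r\,], \qquad N - C(r) = \sum_{i=1}^{N} \mathbf{1}\,[\,r_i > r\,],
$$

where the $r_i$ are the coverage radii and $N = 6505$; dividing by $N$ would give the CDF proper, $\hat F(r) = C(r)/N$.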
<a id='step1a'></a>
A. Baseline Case: No Torque Tube
When torquetube is False, zgap is the distance from the axis of the torque tube to the module surface, but since we are rotating from the module's axis, this zgap doesn't matter for this baseline case.
#CASE 0 No torque tube # When torquetube is False, zgap is the distance from axis of torque tube to module surface, but since we are rotating from the module's axis, this Zgap doesn't matter. # zgap = 0.1 + diameter/2.0 torquetube = False customname = '_NoTT' module_NoTT = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels) module_NoTT.addTorquetube(visible=False, axisofrotation=False, diameter=0) trackerdict = demo.makeScene1axis(trackerdict, module_NoTT, sceneDict, cumulativesky = cumulativesky) trackerdict = demo.makeOct1axis(trackerdict) trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step1b'></a> B. ZGAP = 0.1
#ZGAP 0.1 zgap = 0.1 customname = '_zgap0.1' tubeParams = {'tubetype':tubetype, 'diameter':diameter, 'material':material, 'axisofrotation':False, 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube() module_zgap01 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams) trackerdict = demo.makeScene1axis(trackerdict, module_zgap01, sceneDict, cumulativesky = cumulativesky) trackerdict = demo.makeOct1axis(trackerdict) trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step1c'></a> C. ZGAP = 0.2
#ZGAP 0.2 zgap = 0.2 customname = '_zgap0.2' tubeParams = {'tubetype':tubetype, 'diameter':diameter, 'material':material, 'axisofrotation':False, 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube() module_zgap02 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels,zgap=zgap, tubeParams=tubeParams) trackerdict = demo.makeScene1axis(trackerdict, module_zgap02, sceneDict, cumulativesky = cumulativesky) trackerdict = demo.makeOct1axis(trackerdict) trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step1d'></a> D. ZGAP = 0.3
#ZGAP 0.3 zgap = 0.3 customname = '_zgap0.3' tubeParams = {'tubetype':tubetype, 'diameter':diameter, 'material':material, 'axisofrotation':False, 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube() module_zgap03 = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams) trackerdict = demo.makeScene1axis(trackerdict, module_zgap03, sceneDict, cumulativesky = cumulativesky) trackerdict = demo.makeOct1axis(trackerdict) trackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step2'></a>
2. Read back the values and tabulate cumulative values for the unshaded, 10 cm, 20 cm, and 30 cm gap cases
import glob import pandas as pd resultsfolder = os.path.join(testfolder, 'results') print (resultsfolder) filenames = glob.glob(os.path.join(resultsfolder,'*.csv')) noTTlist = [k for k in filenames if 'NoTT' in k] zgap10cmlist = [k for k in filenames if 'zgap0.1' in k] zgap20cmlist = [k for k in filenames if 'zgap0.2' in k] zgap30cmlist = [k for k in filenames if 'zgap0.3' in k] # sum across all hours for each case unsh_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in noTTlist]).sum(axis = 0) cm10_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap10cmlist]).sum(axis = 0) cm20_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap20cmlist]).sum(axis = 0) cm30_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap30cmlist]).sum(axis = 0) unsh_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in noTTlist]).sum(axis = 0) cm10_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap10cmlist]).sum(axis = 0) cm20_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap20cmlist]).sum(axis = 0) cm30_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap30cmlist]).sum(axis = 0)
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step3'></a>
3. Plot spatial loss values for the 10 cm, 20 cm, and 30 cm data
import matplotlib.pyplot as plt plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = ['Helvetica'] plt.rcParams['axes.linewidth'] = 0.2 #set the value globally fig = plt.figure() fig.set_size_inches(4, 2.5) ax = fig.add_axes((0.15,0.15,0.78,0.75)) #plt.rc('font', family='sans-serif') plt.rc('xtick',labelsize=8) plt.rc('ytick',labelsize=8) plt.rc('axes',labelsize=8) plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm30_back - unsh_back)/unsh_back*100, label = '30cm gap',color = 'black') #steelblue plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm20_back - unsh_back)/unsh_back*100, label = '20cm gap',color = 'steelblue', linestyle = '--') #steelblue plt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm10_back - unsh_back)/unsh_back*100, label = '10cm gap',color = 'darkorange') #steelblue #plt.ylabel('$G_{rear}$ vs unshaded [Wm-2]')#(r'$BG_E$ [%]') plt.ylabel('$G_{rear}$ / $G_{rear,tubeless}$ -1 [%]') plt.xlabel('Module X position [m]') plt.legend(fontsize = 8,frameon = False,loc='best') #plt.ylim([0, 15]) plt.title('Torque tube shading loss',fontsize=9) #plt.annotate('South',xy=(-10,9.5),fontsize = 8); plt.annotate('North',xy=(8,9.5),fontsize = 8) plt.show()
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step4'></a>
4. Overall Shading Loss Factor
To calculate the shading loss factor, we can use the following equation:
<img src="../images_wiki/AdvancedJournals/Equation_ShadingFactor.PNG">
ShadingFactor = (1 - cm30_back.sum() / unsh_back.sum())*100
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
NREL/bifacial_radiance
bsd-3-clause
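Spelled out (in case the image above does not render), the factor computed in the cell above is the relative reduction in cumulative rear irradiance caused by the torque tube, here for the 30 cm gap case relative to the tubeless baseline:

$$
\mathrm{ShadingFactor}\,[\%] = \left(1 - \frac{\sum G_{\text{rear, 30 cm}}}{\sum G_{\text{rear, tubeless}}}\right) \times 100
$$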
Reading an NCEP BUFR data set
NCEP BUFR (Binary Universal Form for the Representation of meteorological data) can be read in two ways:
1. Fortran code with BUFRLIB
2. py-ncepbufr, which is basically Python wrappers around BUFRLIB
In this example we'll use py-ncepbufr to read a snapshot of the Argo data tank from WCOSS, show how to navigate the BUFR structure, and how to extract and plot a profile. The py-ncepbufr library and installation instructions can be found at https://github.com/JCSDA/py-ncepbufr
We begin by importing the required libraries.
import matplotlib.pyplot as plt  # graphics library
import numpy as np
import ncepbufr  # python wrappers around BUFRLIB
test/Python_tutorial_bufr.ipynb
jswhit/py-ncepbufr
isc
For the purposes of this demo I've made a local copy of the Argo data tank on WCOSS located at /dcom/us007003/201808/b031/xx005 Begin by opening the file
bufr = ncepbufr.open('data/xx005')
test/Python_tutorial_bufr.ipynb
jswhit/py-ncepbufr
isc
Movement and data access within the BUFR file is through these methods:
bufr.advance()
bufr.load_subset()
bufr.read_subset()
bufr.rewind()
bufr.close()
There is a lot more functionality to ncepbufr, such as searching on multiple mnemonics, printing or saving the BUFR table included in the file, printing or saving the inventory and subsets, and setting and using checkpoints in the file. See the ncepbufr help for more details.
Important Note: py-ncepbufr is unforgiving of mistakes. A BUFRLIB fortran error will result in an immediate exit from the Python interpreter.
# move down to first message - a return code of 0 indicates success
bufr.advance()
# load the message subset -- a return code of 0 indicates success
bufr.load_subset()
test/Python_tutorial_bufr.ipynb
jswhit/py-ncepbufr
isc
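As an illustration of the navigation methods listed above, a typical pattern is to loop over messages and subsets until advance() and load_subset() stop returning 0. This is only a sketch; the mnemonics you read and any filtering depend on the data tank:

```python
import ncepbufr

bufr = ncepbufr.open('data/xx005')
nprofiles = 0
while bufr.advance() == 0:          # 0 means another message was read
    while bufr.load_subset() == 0:  # 0 means another subset was loaded
        nprofiles += 1
        # e.g. pull one value per subset; read_subset returns a masked array
        lat = bufr.read_subset('CLATH')[0][0]
bufr.rewind()  # go back to the start if a second pass is needed
bufr.close()
print(nprofiles, 'profiles found')
```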
You can print the subset and determine the parameter names. BUFR dumps can be very verbose, so I'll just copy in the header and the first subset replication from a bufr.dump_subset() command. I've highlighted in red the parameters I want to plot. <pre style="font-size: x-small"> MESSAGE TYPE NC031005 004001 YEAR 2018.0 YEAR YEAR 004002 MNTH 8.0 MONTH MONTH 004003 DAYS 1.0 DAY DAY 004004 HOUR 0.0 HOUR HOUR 004005 MINU 16.0 MINUTE MINUTE 035195 SEQNUM 317 ( 4)CCITT IA5 CHANNEL SEQUENCE NUMBER 035021 BUHD IOPX01 ( 6)CCITT IA5 BULLETIN BEING MONITORED (TTAAii) 035023 BORG KWBC ( 4)CCITT IA5 BULLETIN BEING MONITORED (CCCC) 035022 BULTIM 010029 ( 6)CCITT IA5 BULLETIN BEING MONITORED (YYGGgg) 035194 BBB MISSING ( 6)CCITT IA5 BULLETIN BEING MONITORED (BBB) 008202 RCTS 0.0 CODE TABLE RECEIPT TIME SIGNIFICANCE 004200 RCYR 2018.0 YEAR YEAR - TIME OF RECEIPT 004201 RCMO 8.0 MONTH MONTH - TIME OF RECEIPT 004202 RCDY 1.0 DAY DAY - TIME OF RECEIPT 004203 RCHR 0.0 HOUR HOUR - TIME OF RECEIPT 004204 RCMI 31.0 MINUTE MINUTE - TIME OF RECEIPT 033215 CORN 0.0 CODE TABLE CORRECTED REPORT INDICATOR 001087 WMOP 6903327.0 NUMERIC WMO marine observing platform extended identifie 001085 OPMM S2-X (20)CCITT IA5 Observing platform manufacturer's model 001086 OPMS 10151 ( 32)CCITT IA5 Observing platform manufacturer's serial number 002036 BUYTS 2.0 CODE TABLE Buoy type 002148 DCLS 8.0 CODE TABLE Data collection and/or location system 002149 BUYT 14.0 CODE TABLE Type of data buoy 022055 FCYN 28.0 NUMERIC Float cycle number 022056 DIPR 0.0 CODE TABLE Direction of profile 022067 IWTEMP 846.0 CODE TABLE INSTRUMENT TYPE FOR WATER TEMPERATURE PROFILE ME 005001 CLATH 59.34223 DEGREES LATITUDE (HIGH ACCURACY) 006001 CLONH -9.45180 DEGREES LONGITUDE (HIGH ACCURACY) 008080 QFQF 20.0 CODE TABLE Qualifier for GTSPP quality flag 033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag (GLPFDATA) 636 REPLICATIONS ++++++ GLPFDATA REPLICATION # 1 ++++++ <span style="color: red">007065 WPRES 10000.0 PA Water pressure</span> 008080 QFQF 10.0 CODE TABLE Qualifier for GTSPP quality flag 033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag <span style="color: red">022045 SSTH 285.683 K Sea/water temperature</span> 008080 QFQF 11.0 CODE TABLE Qualifier for GTSPP quality flag 033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag <span style="color: red">022064 SALNH 35.164 PART PER THOUSAND Salinity</span> 008080 QFQF 12.0 CODE TABLE Qualifier for GTSPP quality flag 033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag </pre> Now we can load the data for plotting
temp = bufr.read_subset('SSTH').squeeze()-273.15  # convert from Kelvin to Celsius
sal = bufr.read_subset('SALNH').squeeze()
depth = bufr.read_subset('WPRES').squeeze()/10000.  # convert from Pa to depth in meters

# observation location, date, and receipt time
lon = bufr.read_subset('CLONH')[0][0]
lat = bufr.read_subset('CLATH')[0][0]
date = bufr.msg_date
receipt = bufr.receipt_time

bufr.close()
test/Python_tutorial_bufr.ipynb
jswhit/py-ncepbufr
isc
Set up the plotting figure. But this time, just for fun, let's put both the temperature and salinity profiles on the same axes. This trick uses both the top and bottom axis for different parameters. As these are depth profiles, we need twin x-axes and a shared y-axis for the depth.
fig = plt.figure(figsize = (5,4)) ax1 = plt.axes() ax1.plot(temp, depth,'r-') ax1.grid(axis = 'y') ax1.invert_yaxis() # flip the y-axis for ocean depths ax2 = ax1.twiny() # here's the second x-axis definition ax2.plot(np.nan, 'r-', label = 'Temperature') ax2.plot(sal, depth, 'b-', label = 'Salinity') ax2.legend() ax1.set_xlabel('Temperature (C)', color = 'red') ax1.set_ylabel('Depth (m)') ax2.set_xlabel('Salinity (PSU)', color = 'blue') ttl='ARGO T,S Profiles at lon:{:6.2f}, lat:{:6.2f}\ntimestamp: {} received: {}\n'.format(lon,lat,date,receipt) fig.suptitle(ttl,x = 0.5,y = 1.1,fontsize = 'large');
test/Python_tutorial_bufr.ipynb
jswhit/py-ncepbufr
isc
Update time series for the symbols below. Time series will be fetched for any symbols not already cached.
pf.update_cache_symbols(symbols=['msft', 'orcl', 'tsla'])
examples/A00.update-cache-symbols/update-cache-symbols.ipynb
fja05680/pinkfish
mit
Remove the time series for TSLA
pf.remove_cache_symbols(symbols=['tsla'])
examples/A00.update-cache-symbols/update-cache-symbols.ipynb
fja05680/pinkfish
mit
Update time series for all symbols in the cache directory
pf.update_cache_symbols()
examples/A00.update-cache-symbols/update-cache-symbols.ipynb
fja05680/pinkfish
mit
Remove time series for all symbols in the cache directory
# WARNING!!! - if you uncomment the line below, you'll wipe out
# all the symbols in your cache directory

#pf.remove_cache_symbols()
examples/A00.update-cache-symbols/update-cache-symbols.ipynb
fja05680/pinkfish
mit