Dataset schema: each record below has the fields `markdown`, `code`, `output`, `license`, `path`, and `repo_name`.
That was easy! All the information required by ``ionize_box`` was given directly by the ``perturbed_field`` object. If we had _also_ passed a redshift explicitly, that redshift would have been checked against the one in the ``perturbed_field``, and an error raised if they were incompatible. Let's see the field names:
ionized_field.fieldnames
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Here the ``first_box`` field is just a flag that tells the C code whether this box has been *evolved* or not. In this case it hasn't been -- it is the "first box" of an evolutionary chain. Let's plot the neutral fraction:
plotting.coeval_sliceplot(ionized_field, "xH_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Brightness Temperature

Now we can use what we have to get the brightness temperature:
brightness_temp = p21c.brightness_temperature(ionized_box=ionized_field, perturbed_field=perturbed_field)
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This has only a single field, ``brightness_temp``:
plotting.coeval_sliceplot(brightness_temp);
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
The Problem

And there you have it -- you've computed each of the four steps (there's actually another, `spin_temperature`, that you require if you don't assume the saturated limit) individually. However, some problems quickly arise. What if you want the `perturb_field`, but don't care about the initial conditions? We know how to get the full `Coeval` object in one go, but it would seem that the sub-boxes have to _each_ be computed as the input to the next.

A perhaps more interesting problem is that some quantities require *evolution*: i.e. a whole string of simulations at successive redshifts must be performed in order to obtain the current redshift. This is true when not in the saturated limit, for example. That means you'd have to manually compute each redshift in turn and pass it to the computation at the next redshift. While this is definitely possible, it becomes difficult to set up manually when all you care about is the box at the final redshift.

`py21cmfast` solves this by making each of the functions recursive: if `perturb_field` is not passed the `init_boxes` that it needs, it will go and compute them, based on the parameters that you've passed it. If the previous `spin_temp` box required for the current redshift is not passed, it will be computed (and if *it* doesn't have a previous `spin_temp`, that will be computed too, and so on).

That's all good, but what if you now want to compute another `perturb_field`, with the same fundamental parameters (but at a different redshift)? Since you never saw the `init_boxes`, they'd have to be computed all over again. That's where the automatic caching comes in, which is where we turn now...

Using the Automatic Cache

To solve all this, ``21cmFAST`` uses an on-disk caching mechanism, where all boxes are saved in HDF5 format in a default location. The cache allows previously-calculated boxes to be read in automatically if they match the input parameters. The functions used at every step (in the previous section) will try to use a cached box instead of calculating a new one, unless explicitly asked *not* to. Thus, we could do this:
perturbed_field = p21c.perturb_field(
    redshift = 8.0,
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,367 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=12345).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Note that here we pass exactly the same parameters as were used in the previous section. It gives a message that the full box was found in the cache and immediately returns. However, if we change the redshift:
perturbed_field = p21c.perturb_field(
    redshift = 7.0,
    user_params = {"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,748 | INFO | Existing init_boxes found and read in (seed=12345).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Now it finds the initial conditions, but it must compute the perturbed field at the new redshift. If we had changed the initial parameters as well, it would have to calculate everything:
perturbed_field = p21c.perturb_field(
    redshift = 8.0,
    user_params = {"HII_DIM": 50, "BOX_LEN": 100},
    cosmo_params = p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This shows that we don't need to perform the *previous* step to do any of the steps; they will be calculated automatically.

Now, let's get an ionized box, but this time we won't assume the saturated limit, so we need to use the spin temperature. We can do this directly in the ``ionize_box`` function, but let's do it explicitly. We will use the auto-generation of the initial conditions and perturbed field. However, the spin temperature is an *evolved* field, i.e. to compute the field at $z$, we need to know the field at $z+\Delta z$. This continues up to some redshift, labelled ``z_heat_max``, above which the spin temperature can be defined directly from the perturbed field.

Thus, one option is to pass to the function a *previous* spin temperature box, to evolve to *this* redshift. However, we don't have a previous spin temperature box yet. Of course, the function itself will go and calculate that box if it's not given (or read it from cache if it's been calculated before!). When it tries to do that, it will go to the one before, and so on until it reaches ``z_heat_max``, at which point it will calculate it directly. To facilitate this recursive progression up the redshift ladder, there is a parameter, ``z_step_factor``, which is a multiplicative factor that determines the previous redshift at each step. We can also pass the dependent boxes explicitly, which provides the necessary parameters (a sketch of this appears after the next cell).

**WARNING: THIS IS THE MOST TIME-CONSUMING STEP OF THE CALCULATION!**
spin_temp = p21c.spin_temperature(
    perturbed_field = perturbed_field,
    zprime_step_factor=1.05,
)
plotting.coeval_sliceplot(spin_temp, "Ts_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
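As mentioned above, the dependent boxes can also be passed explicitly instead of letting the function recurse on its own. Here is a minimal sketch of stepping down the redshift ladder by hand; it assumes that ``spin_temperature`` accepts the earlier box through a ``previous_spin_temp`` keyword (check the API documentation for the exact name in your version):

```python
# A sketch only: manually evolve the spin temperature down to z=8.
# Assumes `previous_spin_temp` is the keyword for passing the earlier box.
redshifts = [12.0, 10.0, 8.0]  # a coarse ladder, purely for illustration
previous = None                # above z_heat_max the field is computed directly

for z in redshifts:
    previous = p21c.spin_temperature(
        redshift=z,
        previous_spin_temp=previous,
        user_params={"HII_DIM": 100, "BOX_LEN": 100},
        cosmo_params=p21c.CosmoParams(SIGMA_8=0.8),
    )

spin_temp_z8 = previous
```

In practice the recursive behaviour shown in the cell above does exactly this for you, using ``zprime_step_factor`` to choose the intermediate redshifts.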
Let's note here that each of the functions accepts a few of the same arguments that modify how the boxes are cached (a sketch of using them appears after the next cell). There is a ``write`` argument, which if set to ``False`` will disable writing that box to the cache (and it is passed through the recursive hierarchy). There is also ``regenerate``, which if ``True`` forces this box and all its predecessors to be re-calculated even if they exist in the cache. Then there is ``direc``, which we have seen before.

Finally, note that by default ``random_seed`` is set to ``None``. If this is the case, then any cached dataset matching all other parameters will be read in, and the ``random_seed`` will be set from the file that was read. If it is set to an integer, then the cached dataset must also match the seed. If it is ``None`` and no matching dataset is found, a random seed will be auto-generated.

Now, if we calculate the ionized box, ensuring that it uses the spin temperature, then it will also need to be evolved. However, because we cached each of the spin temperature steps, these should be read in accordingly:
ionized_box = p21c.ionize_box(
    spin_temp = spin_temp,
    zprime_step_factor=1.05,
)
plotting.coeval_sliceplot(ionized_box, "xH_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
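As a quick illustration of the cache-control arguments described above, here is a minimal sketch; the directory path is just a placeholder, and the combination of arguments is illustrative rather than taken from the tutorial:

```python
# A sketch only: control caching behaviour explicitly.
ionized_box_nocache = p21c.ionize_box(
    spin_temp=spin_temp,
    zprime_step_factor=1.05,
    write=False,                 # don't write this box (or its dependencies) to the cache
    regenerate=True,             # ignore any existing cached boxes and recompute
    direc="/path/to/my_cache",   # placeholder: a custom cache directory
    random_seed=12345,           # require the boxes to use this seed
)
```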
Great! So again, we can just get the brightness temp:
brightness_temp = p21c.brightness_temperature(
    ionized_box = ionized_box,
    perturbed_field = perturbed_field,
    spin_temp = spin_temp
)
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Now let's plot our brightness temperature, which has been evolved from high redshift with spin temperature fluctuations:
plotting.coeval_sliceplot(brightness_temp);
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
We can also check what the result would have been if we had limited the maximum redshift of heating. Note that this *recalculates* all previous spin temperature and ionized boxes, because they depend on both ``z_heat_max`` and ``zprime_step_factor``.
ionized_box = p21c.ionize_box(
    spin_temp = spin_temp,
    zprime_step_factor=1.05,
    z_heat_max = 20.0
)

brightness_temp = p21c.brightness_temperature(
    ionized_box = ionized_box,
    perturbed_field = perturbed_field,
    spin_temp = spin_temp
)

plotting.coeval_sliceplot(brightness_temp);
2020-02-29 15:13:08,824 | INFO | Existing init_boxes found and read in (seed=521414794440). 2020-02-29 15:13:08,840 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:11,438 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:11,447 | INFO | Existing z=19.62816486020931 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:14,041 | INFO | Existing z=17.71035361470232 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:14,050 | INFO | Existing z=18.645871295437438 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:16,667 | INFO | Existing z=16.81938439495459 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:16,675 | INFO | Existing z=17.71035361470232 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:19,213 | INFO | Existing z=15.970842280909132 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:19,222 | INFO | Existing z=16.81938439495459 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:21,756 | INFO | Existing z=15.162706934199171 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:21,764 | INFO | Existing z=15.970842280909132 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:24,409 | INFO | Existing z=14.393054223046828 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:24,417 | INFO | Existing z=15.162706934199171 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:26,938 | INFO | Existing z=13.66005164099698 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:26,947 | INFO | Existing z=14.393054223046828 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:29,504 | INFO | Existing z=12.961953943806646 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:29,517 | INFO | Existing z=13.66005164099698 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:32,163 | INFO | Existing z=12.297098994101567 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:32,171 | INFO | Existing z=12.961953943806646 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:34,704 | INFO | Existing z=11.663903803906255 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:34,712 | INFO | Existing z=12.297098994101567 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:37,257 | INFO | Existing z=11.060860765625003 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:37,266 | INFO | Existing z=11.663903803906255 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:39,809 | INFO | Existing z=10.486534062500002 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:39,817 | INFO | Existing z=11.060860765625003 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:42,378 | INFO | Existing z=9.939556250000003 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:42,387 | INFO | Existing z=10.486534062500002 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:44,941 | INFO | Existing z=9.418625000000002 perturb_field boxes found and read in (seed=521414794440). 
2020-02-29 15:13:44,950 | INFO | Existing z=9.939556250000003 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:47,518 | INFO | Existing z=8.922500000000001 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:47,528 | INFO | Existing z=9.418625000000002 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:50,077 | INFO | Existing z=8.450000000000001 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:50,086 | INFO | Existing z=8.922500000000001 spin_temp boxes found and read in (seed=521414794440). 2020-02-29 15:13:52,626 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:52,762 | INFO | Existing brightness_temp box found and read in (seed=521414794440).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following is a summary of the steps involved in creating and using a reusable component:

- Write the program that contains your component's logic. The program must use files and command-line arguments to pass data to and from the component.
- Containerize the program.
- Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
- Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline, and run that pipeline.

Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps:

- Train an MNIST model and export it to Google Cloud Storage.
- Deploy the exported TensorFlow model on the AI Platform Prediction service.
- Test the deployment by calling the endpoint with test data.

Note: If you want to build the image locally, ensure that you have Docker installed by running `which docker`. The result should be something like `/usr/bin/docker`.
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s

# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create client

If you run this notebook **outside** of a Kubeflow cluster, run the following command:

- `host`: The URL of your Kubeflow Pipelines instance, for example "https://``.endpoints.``.cloud.goog/pipeline"
- `client_id`: The client ID used by Identity-Aware Proxy
- `other_client_id`: The client ID used to obtain the auth codes and refresh tokens.
- `other_client_secret`: The client secret used to obtain the auth codes and refresh tokens.

```python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
```

If you run this notebook **within** a Kubeflow cluster, run the following command:

```python
client = kfp.Client()
```

You'll need to create OAuth client ID credentials of type `Other` to get `other_client_id` and `other_client_secret`. Learn more about [creating OAuth credentials](https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app)
# Optional Parameters, but required for running outside Kubeflow cluster

# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'

# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'

# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token

# Create kfp client
in_cluster = True
try:
    k8s.config.load_incluster_config()
except:
    in_cluster = False
    pass

if in_cluster:
    client = kfp.Client()
else:
    if HOST.endswith('googleusercontent.com'):
        CLIENT_ID = None
        OTHER_CLIENT_ID = None
        OTHER_CLIENT_SECRET = None

    client = kfp.Client(host=HOST,
                        client_id=CLIENT_ID,
                        other_client_id=OTHER_CLIENT_ID,
                        other_client_secret=OTHER_CLIENT_SECRET)
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Build reusable components

Writing the program code

The following cell creates a file `app.py` that contains a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log, and exports the trained model to Google Cloud Storage.

Your component can create outputs that downstream components can use as inputs. Each output must be a string, and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as `/output.txt`.
%%bash # Create folders if they don't exist. mkdir -p tmp/reuse_components_pipeline/mnist_training # Create the Python file that lists GCS blobs. cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE import argparse from datetime import datetime import tensorflow as tf parser = argparse.ArgumentParser() parser.add_argument( '--model_path', type=str, required=True, help='Name of the model file.') parser.add_argument( '--bucket', type=str, required=True, help='GCS bucket name.') args = parser.parse_args() bucket=args.bucket model_path=args.model_path model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) print(model.summary()) mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()), # Interrupt training if val_loss stops improving for over 2 epochs tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), ] model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks, validation_data=(x_test, y_test)) from tensorflow import gfile gcs_path = bucket + "/" + model_path # The export require the folder is new if gfile.Exists(gcs_path): gfile.DeleteRecursively(gcs_path) tf.keras.experimental.export_saved_model(model, gcs_path) with open('/output.txt', 'w') as f: f.write(gcs_path) HERE
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create a Docker container

Create your own container image that includes your program.

Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the Base Image from which you are building. `WORKDIR` sets the working directory. When you assemble the Docker image, `COPY` copies the required files and directories (for example, `app.py`) to the file system of the container. `RUN` executes a command (for example, install the dependencies) and commits the results.
%%bash

# Create Dockerfile.
# AI platform only support tensorflow 1.14
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Build docker image

Now that we have created our Dockerfile, we need to build the image and push it to a registry that hosts the image. There are three possible options:

- Use `kfp.containers.build_image_from_working_dir` to build the image and push it to the Container Registry (GCR). This requires [kaniko](https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access), which is auto-installed with a 'full Kubeflow deployment' but not with 'AI Platform Pipelines'.
- Use [Cloud Build](https://cloud.google.com/cloud-build), which requires a GCP project with the corresponding API enabled. If you are working with GCP 'AI Platform Pipelines' and have a GCP project running, Cloud Build is recommended.
- Use [Docker](https://www.docker.com/get-started) installed locally and push to e.g. GCR.

**Note**: If you run this notebook **within a Kubeflow cluster**, **with Kubeflow version >= 0.7**, and are exploring the **kaniko option**, you need to ensure that valid credentials are created within your notebook's namespace.

- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating a notebook through `Configurations`, which did not work properly at the time this notebook was created.
- You can also add credentials to the new namespace by either [copying credentials from an existing Kubeflow namespace, or by creating a new service account](https://www.kubeflow.org/docs/gke/authentication/kubeflow-v0-6-and-before-gcp-service-account-key-as-secret).
- The following cell demonstrates how to copy the default secret to your own namespace.

```bash
%%bash
NAMESPACE=
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}\.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format( PROJECT_ID=PROJECT_ID, IMAGE_NAME=IMAGE_NAME, TAG=TAG ) APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/' # In the following, for the purpose of demonstration # Cloud Build is choosen for 'AI Platform Pipelines' # kaniko is choosen for 'full Kubeflow deployment' if HOST.endswith('googleusercontent.com'): # kaniko is not pre-installed with 'AI Platform Pipelines' import subprocess # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER} cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER] build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) print(build_log) else: if kfp.__version__ <= '0.1.36': # kfp with version 0.1.36+ introduce broken change that will make the following code not working import subprocess builder = kfp.containers._container_builder.ContainerBuilder( gcs_staging=GCS_BUCKET + "/kfp_container_build_staging" ) kfp.containers.build_image_from_working_dir( image_name=GCR_IMAGE, working_dir=APP_FOLDER, builder=builder ) else: raise("Please build the docker image use either [Docker] or [Cloud Build]")
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
If you want to use docker to build the image

Run the following in a cell:

```bash
%%bash -s "{PROJECT_ID}"

IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE

cd tmp/components/mnist_training
bash build_image.sh
```
image_name = GCR_IMAGE
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Writing your component definition file

To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.

For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubeflow.org/docs/pipelines/reference/component-spec/). However, for this tutorial you don't need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.

Start writing the component definition (component.yaml) by specifying your container image in the component's implementation section:
%%bash -s "{image_name}" GCR_IMAGE="${1}" echo ${GCR_IMAGE} # Create Yaml # the image uri should be changed according to the above docker image push output cat > mnist_pipeline_component.yaml <<HERE name: Mnist training description: Train a mnist model and save to GCS inputs: - name: model_path description: 'Path of the tf model.' type: String - name: bucket description: 'GCS bucket name.' type: String outputs: - name: gcs_model_path description: 'Trained model path.' type: GCSPath implementation: container: image: ${GCR_IMAGE} command: [ python, /app/app.py, --model_path, {inputValue: model_path}, --bucket, {inputValue: bucket}, ] fileOutputs: gcs_model_path: /output.txt HERE import os mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml')) mnist_train_op.component_spec
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Define deployment operation on AI Platform
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml')

def deploy(
    project_id,
    model_uri,
    model_id,
    runtime_version,
    python_version):

    return mlengine_deploy_op(
        model_uri=model_uri,
        project_id=project_id,
        model_id=model_id,
        runtime_version=runtime_version,
        python_version=python_version,
        replace_existing_version=True,
        set_default=True)
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Kubeflow serving deployment component as an option. **Note that the deployed endpoint URI is not available as an output of this component.**

```python
kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml')

def deploy_kubeflow(
    model_dir,
    tf_server_name):

    return kubeflow_deploy_op(
        model_dir=model_dir,
        server_name=tf_server_name,
        cluster_name='kubeflow',
        namespace='kubeflow',
        pvc_name='',
        service_type='ClusterIP')
```

Create a lightweight component for testing the deployment
def deployment_test(project_id: str, model_name: str, version: str) -> str: model_name = model_name.split("/")[-1] version = version.split("/")[-1] import googleapiclient.discovery def predict(project, model, data, version=None): """Run predictions on a list of instances. Args: project: (str), project where the Cloud ML Engine Model is deployed. model: (str), model name. data: ([[any]]), list of input instances, where each input instance is a list of attributes. version: str, version of the model to target. Returns: Mapping[str: any]: dictionary of prediction results defined by the model. """ service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}'.format(project, model) if version is not None: name += '/versions/{}'.format(version) response = service.projects().predict( name=name, body={ 'instances': data }).execute() if 'error' in response: raise RuntimeError(response['error']) return response['predictions'] import tensorflow as tf import json mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 result = predict( project=project_id, model=model_name, data=x_test[0:2].tolist(), version=version) print(result) return json.dumps(result) # # Test the function with already deployed version # deployment_test( # project_id=PROJECT_ID, # model_name="mnist", # version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing # ) deployment_test_op = comp.func_to_container_op( func=deployment_test, base_image="tensorflow/tensorflow:1.15.0-py3", packages_to_install=["google-api-python-client==1.7.8"])
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create your workflow as a Python function

Define your pipeline as a Python function. `@kfp.dsl.pipeline` is a required decorator and must include `name` and `description` properties. Then compile the pipeline function; after compilation is completed, a pipeline file is created (a minimal compilation sketch follows the pipeline definition below).
# Define the pipeline
@dsl.pipeline(
    name='Mnist pipeline',
    description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
    project_id: str = PROJECT_ID,
    model_path: str = 'mnist_model',
    bucket: str = GCS_BUCKET
):
    train_task = mnist_train_op(
        model_path=model_path,
        bucket=bucket
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    deploy_task = deploy(
        project_id=project_id,
        model_uri=train_task.outputs['gcs_model_path'],
        model_id="mnist",
        runtime_version="1.14",
        python_version="3.5"
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    deploy_test_task = deployment_test_op(
        project_id=project_id,
        model_name=deploy_task.outputs["model_name"],
        version=deploy_task.outputs["version_name"],
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    return True
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
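As mentioned above, the pipeline function can also be compiled into a package file before being uploaded or submitted. A minimal sketch, using the `kfp.compiler` module imported earlier (the output filename is just a placeholder):

```python
# A sketch only: compile the pipeline function into a package file.
pipeline_filename = mnist_reuse_component_deploy_pipeline.__name__ + '.pipeline.zip'
compiler.Compiler().compile(mnist_reuse_component_deploy_pipeline, pipeline_filename)
```

The resulting file can then be uploaded through the Kubeflow Pipelines UI or submitted with the client, as shown in the next cell.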
Submit a pipeline run
pipeline_func = mnist_reuse_component_deploy_pipeline

experiment_name = 'minist_kubeflow'

arguments = {"model_path": "mnist_model",
             "bucket": GCS_BUCKET}

run_name = pipeline_func.__name__ + ' run'

# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
                                                  experiment_name=experiment_name,
                                                  run_name=run_name,
                                                  arguments=arguments)
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
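Once the run has been submitted, you may want to block until it finishes. A minimal sketch, assuming the standard `kfp.Client.wait_for_run_completion` method and the `run_id` attribute of the returned result (check your SDK version):

```python
# A sketch only: wait for the submitted run to finish (timeout in seconds).
completed = client.wait_for_run_completion(run_result.run_id, timeout=1800)
print(completed.run.status)  # e.g. 'Succeeded' or 'Failed'
```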
prepared by Abuzer Yakaryilmaz (QLatvia)

This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros.

$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $

Visualization of a (Real-Valued) Qubit

_We use certain tools from python library "matplotlib.pyplot" for drawing. Check the notebook [Python: Drawing](../python/Python06_Drawing.ipynb) for the list of these tools._

Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point on 2-dimensional space. It can also be represented as a vector from the origin to that point.

We start with the visual representation of the following quantum states:

$$ \ket{0} = \myvector{1\\0}, ~~ \ket{1} = \myvector{0\\1} , ~~ -\ket{0} = \myrvector{-1\\0}, ~~\mbox{and}~~ -\ket{1} = \myrvector{0\\-1}. $$

We draw these quantum states as points. We use one of our predefined functions for drawing axes: "draw_axes()". We include our predefined functions with the following line of code:

%run qlatvia.py
# import the drawing methods
from matplotlib.pyplot import plot, figure, show

# draw a figure
figure(figsize=(6,6), dpi=80)

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_axes()

# draw the origin
plot(0,0,'ro') # a point in red color

# draw these quantum states as points (in blue color)
plot(1,0,'bo')
plot(0,1,'bo')
plot(-1,0,'bo')
plot(0,-1,'bo')

show()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Now, we draw the quantum states as arrows (vectors):
# import the drawing methods
from matplotlib.pyplot import figure, arrow, show

# draw a figure
figure(figsize=(6,6), dpi=80)

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_axes()

# draw the quantum states as vectors (in blue color)
arrow(0,0,0.92,0,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,0,0.92,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,-0.92,0,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,0,-0.92,head_width=0.04, head_length=0.08, color="blue")

show()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Task 1

Write a function that returns a randomly created 2-dimensional (real-valued) quantum state.

_You can use your code written for [a task given in notebook "Quantum State"](B28_Quantum_State.ipynb#task2)._

Create 100 random quantum states by using your function and then draw all of them as points.

Create 1000 random quantum states by using your function and then draw all of them as points.

Different colors can be used when drawing the points ([matplotlib.colors](https://matplotlib.org/2.0.2/api/colors_api.html)).
# randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure # draw a figure figure(figsize=(6,6), dpi=60) # draw the origin plot(0,0,'ro') from random import randrange colors = ['ro','bo','go','yo','co','mo','ko'] for i in range(100): # create a random quantum state quantum_state = random_quantum_state(); # draw a blue point for the random quantum state x = quantum_state[0]; y = quantum_state[1]; plot(x,y,colors[randrange(len(colors))]) show() # randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-1000,1001) second_entry = randrange(-1000,1001) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure # draw a figure figure(figsize=(6,6), dpi=60) # draw the origin plot(0,0,'ro') from random import randrange colors = ['ro','bo','go','yo','co','mo','ko'] for i in range(1000): # create a random quantum state quantum_state = random_quantum_state(); # draw a blue point for the random quantum state x = quantum_state[0]; y = quantum_state[1]; plot(x,y,colors[randrange(len(colors))]) show()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
click for our solution

Task 2

Repeat the previous task by drawing the quantum states as vectors (arrows) instead of points.

Different colors can be used when drawing the arrows ([matplotlib.colors](https://matplotlib.org/2.0.2/api/colors_api.html)).

_Please keep the code below for drawing the axes, to get a better visual focus._
# randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-1000,1001) second_entry = randrange(-1000,1001) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure, arrow # draw a figure figure(figsize=(6,6), dpi=60) # include our predefined functions %run qlatvia.py # draw the axes draw_axes() # draw the origin plot(0,0,'ro') from random import randrange colors = ['r','b','g','y','b','c','m'] for i in range(500): quantum_state = random_quantum_state(); x = quantum_state[0]; y = quantum_state[1]; x = 0.92 * x y = 0.92 * y arrow(0,0,x,y,head_width=0.04,head_length=0.08,color=colors[randrange(len(colors))]) show()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
click for our solution

Unit circle

All quantum states of a qubit form the unit circle. The length of each quantum state is 1. All points that are 1 unit away from the origin form the circle with radius 1 unit.

We can draw the unit circle with python. We have a predefined function for drawing the unit circle: "draw_unit_circle()".
# import the drawing methods
from matplotlib.pyplot import figure

figure(figsize=(6,6), dpi=80) # size of the figure

# include our predefined functions
%run qlatvia.py

# draw axes
draw_axes()

# draw the unit circle
draw_unit_circle()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Quantum state of a qubit

Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point on 2-dimensional space. It can also be represented as a vector from the origin to that point.

We draw the quantum state $ \myvector{3/5 \\ 4/5} $ and its elements.

Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states. Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with name.

We include our predefined functions with the following line of code:

%run qlatvia.py
%run qlatvia.py

draw_qubit()

draw_quantum_state(3/5,4/5,"|v>")
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Now, we draw its angle with $ \ket{0} $-axis and its projections on both axes. For drawing the angle, we use the method "Arc" from library "matplotlib.patches".
%run qlatvia.py

draw_qubit()

draw_quantum_state(3/5,4/5,"|v>")

from matplotlib.pyplot import arrow, text, gca

# the projection on |0>-axis
arrow(0,0,3/5,0,color="blue",linewidth=1.5)
arrow(0,4/5,3/5,0,color="blue",linestyle='dotted')
text(0.1,-0.1,"cos(a)=3/5")

# the projection on |1>-axis
arrow(0,0,0,4/5,color="blue",linewidth=1.5)
arrow(3/5,0,0,4/5,color="blue",linestyle='dotted')
text(-0.1,0.55,"sin(a)=4/5",rotation="90")

# drawing the angle with |0>-axis
from matplotlib.patches import Arc
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=53) )
text(0.08,0.05,'.',fontsize=30)
text(0.21,0.09,'a')
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Observations:

- The angle of the quantum state with state $ \ket{0} $ is $a$.
- The amplitude of state $ \ket{0} $ is $ \cos(a) = \frac{3}{5} $.
- The probability of observing state $ \ket{0} $ is $ \cos^2(a) = \frac{9}{25} $.
- The amplitude of state $ \ket{1} $ is $ \sin(a) = \frac{4}{5} $.
- The probability of observing state $ \ket{1} $ is $ \sin^2(a) = \frac{16}{25} $.

(A small numeric check of these values appears after the next cell.)

The angle of a quantum state

The angle of a vector (in radians) on the unit circle is the length of the arc, in the counter-clockwise direction, that starts from $ (1,0) $ and ends at the point representing the vector.

We execute the following code a couple of times to see different examples, where the angle is picked randomly in each run. You can also set the value of "myangle" manually to see a specific angle.
# set the angle from random import randrange myangle = randrange(361) ################################################ from matplotlib.pyplot import figure,gca from matplotlib.patches import Arc from math import sin,cos,pi # draw a figure figure(figsize=(6,6), dpi=60) %run qlatvia.py draw_axes() print("the selected angle is",myangle,"degrees") ratio_of_arc = ((1000*myangle/360)//1)/1000 print("it is",ratio_of_arc,"of a full circle") print("its length is",ratio_of_arc,"x 2\u03C0","=",ratio_of_arc*2*pi) myangle_in_radian = 2*pi*(myangle/360) print("its radian value is",myangle_in_radian) gca().add_patch( Arc((0,0),0.2,0.2,angle=0,theta1=0,theta2=myangle,color="red",linewidth=2) ) gca().add_patch( Arc((0,0),2,2,angle=0,theta1=0,theta2=myangle,color="brown",linewidth=2) ) x = cos(myangle_in_radian) y = sin(myangle_in_radian) draw_quantum_state(x,y,"|v>") # the projection on |0>-axis arrow(0,0,x,0,color="blue",linewidth=1) arrow(0,y,x,0,color="blue",linestyle='dashed') # the projection on |1>-axis arrow(0,0,0,y,color="blue",linewidth=1) arrow(x,0,0,y,color="blue",linestyle='dashed') print() print("the amplitude of state |0> is",x) print("the amplitude of state |1> is",y) print() print("the probability of observing state |0> is",x*x) print("the probability of observing state |1> is",y*y) print("the total probability is",round(x*x+y*y,6))
the selected angle is 27 degrees it is 0.075 of a full circle its length is 0.075 x 2π = 0.47123889803846897 its radian value is 0.47123889803846897 the amplitude of state |0> is 0.8910065241883679 the amplitude of state |1> is 0.45399049973954675 the probability of observing state |0> is 0.7938926261462367 the probability of observing state |1> is 0.2061073738537634 the total probability is 1.0
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
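As flagged above, here is a tiny check of the observations for the state $\myvector{3/5 \\ 4/5}$, using only the standard library:

```python
from math import atan2, cos, sin

# the state |v> = (3/5, 4/5)
x, y = 3/5, 4/5
a = atan2(y, x)  # angle with the |0>-axis, in radians

print("amplitude of |0>:", cos(a))                   # 0.6  = 3/5
print("amplitude of |1>:", sin(a))                   # 0.8  = 4/5
print("prob. of |0>:", cos(a)**2)                    # 0.36 = 9/25
print("prob. of |1>:", sin(a)**2)                    # 0.64 = 16/25
print("total probability:", cos(a)**2 + sin(a)**2)   # 1.0
```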
Random quantum states

Any quantum state of a (real-valued) qubit is a point on the unit circle. We use this fact to create random quantum states by picking a random point on the unit circle. For this purpose, we randomly pick an angle between zero and 360 degrees and then find the amplitudes of the quantum state by using the basic trigonometric functions.

Task 3

Define a function randomly creating a quantum state based on this idea. Randomly create a quantum state by using this function. Draw the quantum state on the unit circle. Repeat the task a few times. Randomly create 100 quantum states and draw all of them.

You can save your function for later use: uncomment the first command, give an appropriate file name, and then run the cell.
# %%writefile FILENAME.py

# randomly creating a 2-dimensional quantum state
from random import randrange
from math import cos, sin, pi  # needed when this cell is run on its own

def random_quantum_state2():
    angle_degree = randrange(360)
    angle_radian = 2*pi*angle_degree/360  # fixed: use angle_degree, not an undefined name
    return [cos(angle_radian), sin(angle_radian)]
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states.Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with name.We include our predefined functions with the following line of code: %run qlatvia.py
# visually test your function

%run qlatvia.py

# draw the axes
draw_qubit()

from random import randrange
for i in range(6):
    [x,y] = random_quantum_state2()
    draw_quantum_state(x,y,"|v"+str(i)+">")

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_qubit()

for i in range(100):
    [x,y] = random_quantum_state2()
    draw_quantum_state(x,y,"")
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
'power_act_.csv'

- In total we have 18 columns and 64328 rows.
- Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....]
- Dates range from '2019-06-30' to '2021-04-30'.
- Data seems to be recorded every 15 minutes (a quick check of this is sketched after the next cell).
- All the columns contain missing values: only 1.5% for 'power_act_21', whereas it is >20% for the other features.
df1.tail(10)

df1.info()

df1.describe()

df1.isnull().sum().sort_values(ascending=False)/len(df1)*100

df1['dt_start_utc'] = df1['dt_start_utc'].apply(pd.to_datetime)
df1 = df1.set_index('dt_start_utc')

plt.figure(figsize=(10,6))
df1['power_act_21'].plot()

plt.figure(figsize=(10,6))
df1['power_act_47'].plot()

plt.figure(figsize=(10,6))
df1['power_act_196'].plot()

df2 = pd.read_csv('data/power_fc_.csv')
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
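As referenced above, a minimal sketch for verifying the 15-minute sampling interval and the share of missing values, assuming `df1` is the dataframe with its datetime index already set as in the cell above:

```python
# A sketch only: check the sampling interval and missing-value share.
# distribution of time gaps between consecutive timestamps (expect 15 minutes)
gaps = df1.index.to_series().diff().value_counts()
print(gaps.head())

# percentage of missing values per column
missing_pct = df1.isnull().mean().sort_values(ascending=False) * 100
print(missing_pct)
```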
'power_fc_.csv'

- In total we have 23 columns and 66020 rows.
- Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....]
- Dates range from '2019-06-13 07:00' to '2021-04-30 23:45'.
- Data seems to be recorded every 15 minutes.
- No null values for 'power_act_21', whereas the other features have >17% null values.
df2['dt_start_utc'].max()

df2.head()

df2.shape

df2.info()

df2.isnull().sum().sort_values(ascending=False)/len(df1)*100

df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime)

#df2 = df2.reset_index('dt_start_utc')

df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime)

df2.head()

df3 = pd.read_csv('data/regelleistung_aggr_results.csv')
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
'regelleistung_aggr_results.csv'

- In total we have 17 columns and 16068 rows.
- Column names: ['date_start', 'date_end', 'product', 'reserve_type', 'total_min_capacity_price_eur_mw', 'total_average_capacity_price_eur_mw', 'total_marginal_capacity_price_eur_mw', 'total_min_energy_price_eur_mwh', 'total_average_energy_price_eur_mwh', 'total_marginal_energy_price_eur_mwh', 'germany_min_capacity_price_eur_mw', 'germany_average_capacity_price_eur_mw', 'germany_marginal_capacity_price_eur_mw', 'germany_min_energy_price_eur_mwh', 'germany_average_energy_price_eur_mwh', 'germany_marginal_energy_price_eur_mwh', 'germany_import_export_mw']
- 2 unique reserve types: ['MRL', 'SRL']
- 12 unique product types: ['NEG_00_04', 'NEG_04_08', 'NEG_08_12', 'NEG_12_16', 'NEG_16_20', 'NEG_20_24', 'POS_00_04', 'POS_04_08', 'POS_08_12', 'POS_12_16', 'POS_16_20', 'POS_20_24']
- Dates range from '2019-01-01' to '2021-03-19'.
- Data seems to be recorded hourly (24 values for each day).
- A few columns contain missing values, about 37%.
df3.shape

df3.info()

df3.groupby(by='date_start').count().head(2)

df3['reserve_type'].unique()

df3['product'].unique()

#sns.pairplot(df3)

df3.info()

df3.isnull().sum().sort_values(ascending=False)/len(df1)*100

df3.shape

df4 = pd.read_csv('data/regelleistung_demand.csv')

df4.head()
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
'regelleistung_demand.csv'

- In total we have 6 columns and 16188 rows.
- Column names: ['date_start', 'date_end', 'product', 'total_demand_mw', 'germany_block_demand_mw', 'reserve_type']
- 2 unique reserve types: ['MRL', 'SRL']
- 12 unique product types: ['NEG_00_04', 'NEG_04_08', 'NEG_08_12', 'NEG_12_16', 'NEG_16_20', 'NEG_20_24', 'POS_00_04', 'POS_04_08', 'POS_08_12', 'POS_12_16', 'POS_16_20', 'POS_20_24'] (a sketch of splitting these codes appears after the next cell)
- Dates range from '2019-01-01' to '2021-03-18'.
- Data seems to be recorded hourly (24 values for each day).
- No missing values.
df4.isnull().sum().sort_values(ascending=False)

def check_unique(df):
    ''' check unique values for each column and print them if they are below 15'''
    for col in df.columns:
        n = df[col].nunique()
        print(f'{col} has {n} unique values')
        if n < 15:
            print(df[col].unique())

df4.info()

sns.pairplot(df4)
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
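As referenced above, the `product` codes such as 'NEG_08_12' combine a direction (NEG/POS) with a 4-hour time block. A minimal sketch for splitting them into separate columns, assuming `df4` as loaded above:

```python
# A sketch only: split product codes like 'NEG_08_12' into direction and time block.
parts = df4['product'].str.split('_', expand=True)
df4['direction'] = parts[0]                      # 'NEG' or 'POS'
df4['block_start_hour'] = parts[1].astype(int)   # e.g. 8
df4['block_end_hour'] = parts[2].astype(int)     # e.g. 12

print(df4[['product', 'direction', 'block_start_hour', 'block_end_hour']].head())
```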
2.1

A=[[5],[7],[0]]; A=Matrix(A); show("A=",A)
B=[[-4]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#2.2
A=[[2,7,2],[8,-6,6]]; A=Matrix(A); show("A=",A)
B=[[6,6],[-2,-3],[-6,8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)

#2.3
A=[[-4,-6],[0,6]]; A=Matrix(A); show("A=",A)
B=[[-2],[-2]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)

#2.4
A=[[7],[6]]; A=Matrix(A); show("A=",A)
B=[[-9,8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)

#2.5
A=[[-2],[-4]]; A=Matrix(A); show("A=",A)
B=[[-8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)

#2.6
A=[[-5,-8,-1]]; A=Matrix(A); show("A=",A)
B=[[8,-1],[5,4],[4,-3]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
_____no_output_____
MIT
latex/MD_SMC/MD03 Matrices.ipynb
asimovian-academy/optimizacion-combinatoria
Text Cleaning

From http://udallasclassics.org/wp-content/uploads/maurer_files/APPARATUSABBREVIATIONS.pdf

- [...] Square brackets, or in recent editions wavy brackets ʺ{...}ʺ, enclose words etc. that an editor thinks should be deleted (see ʺdel.ʺ) or marked as out of place (see ʺsecl.ʺ).
- [...] Square brackets in a papyrus text, or in an inscription, enclose places where words have been lost through physical damage. If this happens in mid-line, editors use ʺ[...]ʺ. If only the end of the line is missing, they use a single bracket ʺ[...ʺ. If the line's beginning is missing, they use ʺ...]ʺ. Within the brackets, often each dot represents one missing letter.
- [[...]] Double brackets enclose letters or words deleted by the medieval copyist himself.
- (...) Round brackets are used to supplement words abbreviated by the original copyist; e.g. in an inscription: ʺtrib(unus) mil(itum) leg(ionis) IIIʺ
- <...> Diamond ( = elbow = angular) brackets enclose words etc. that an editor has added (see ʺsuppl.ʺ).
- † An obelus (pl. obeli) means that the word(s etc.) is very plainly corrupt, but the editor cannot see how to emend. If only one word is corrupt, there is only one obelus, which precedes the word; if two or more words are corrupt, two obeli enclose them. (Such at least is the rule, but that rule is often broken, especially in older editions, which sometimes dagger several words using only one obelus.) To dagger words in this way is to ʺobelizeʺ them.

Load/Build Truecasing dictionary: count all cased tokens, and use them to normalize cases later (a small sketch of such a lookup appears after the code cells below).
truecase_file = 'truecase_counter.latin.pkl' if os.path.exists(truecase_file): with open(truecase_file, 'rb') as fin: case_counts = pickle.load(fin) else: tesserae = get_corpus_reader(corpus_name='latin_text_tesserae', language='latin') case_counts = Counter() jv_replacer = JVReplacer() aeoe_replacer = AEOEReplacer() toker = WordTokenizer('latin') sent_toker = SentenceTokenizer() lemmatizer = LemmaReplacer('latin') for file in tqdm(tesserae.fileids(), total=len(tesserae.fileids())): for sent in tesserae.sents(file): sent = aeoe_replacer.replace(jv_replacer.replace(drop_punct(sent))) sent = normalize_accents(sent) sent = accept_editorial(sent) for token in toker.tokenize(sent): case_counts.update({token:1}) with open(truecase_file, 'wb') as fout: pickle.dump(case_counts, fout) len(case_counts) # 344393, 322711 # 318451 # 316722 # 311399 # 310384 # 310567 # 309529 print(sample(list(case_counts.items()), 25)) def get_word_counts(files:List[str])->Tuple[Dict[str, int], Dict[str, int]]: """ Given a list of files, clean & tokenize the documents return Counters for: lemmatized words in the documents inflected words in the documents """ word_counter = Counter() inflected_word_counter = Counter() jv_replacer = JVReplacer() aeoe_replacer = AEOEReplacer() toker = WordTokenizer('latin') sent_toker = SentenceTokenizer() lemmatizer = LemmaReplacer('latin') for file in tqdm(files , total=len(files), unit='files'): with open(file, 'rt') as fin: text = fin.read() text = text.replace("-\n", "") text = text.replace("\n", " ") text = aeoe_replacer.replace(jv_replacer.replace( text)) for sent in sent_toker.tokenize(text): sent = dehyphenate(sent) # because it's Phi5 sent = swallow_braces(sent) sent = swallow_square_brackets(sent) sent = disappear_round_brackets(sent) sent = swallow_obelized_words(sent) sent = disappear_angle_brackets(sent) sent = drop_punct(sent) sent = normalize_accents(sent) # lemmatizer prefers lower # sent = lemmatizer.lemmatize(sent.lower(), return_string=True) for word in toker.tokenize(sent): if word.isnumeric(): continue inflected_word_counter.update({truecase(word, case_counts):1}) word = lemmatizer.lemmatize(word.lower(), return_string=True) # normalize capitals word_counter.update({truecase(word, case_counts) : 1}) return word_counter, inflected_word_counter def word_stats(author:str, lemma_counter:Counter, inflected_counter:Counter)->Tuple[float, float]: """ """ nw = sum(lemma_counter.values()) print(f"Total count of all tokens in {author} corpus: {nw:,}") print(f"Total number of distinct inflected words/tokens in {author} corpus: {len(inflected_counter):,}") print(f"Total number of lemmatized words/tokens in {author} corpus {len(lemma_counter):,}") ciw1 = sum([1 for key, val in inflected_counter.items() if val == 1]) print(f"Count of inflected tokens only occuring once {ciw1:,}") cw1 = sum([1 for key, val in lemma_counter.items() if val == 1]) print(f"Count of lemmatized tokens only occuring once {cw1:,}") Piu_one = ciw1 / nw print(f"Probability of a single count unigram occuring in the {author} corpus: {Piu_one:.3f}") Plu_one = cw1 / nw print(f"Probability of a single count unigram in the lemmatized {author} corpus: {Plu_one:.3f}") return (Piu_one, Plu_one) # Cicero works cicero_files = glob(f"{os.path.expanduser('~')}/cltk_data/latin/text/phi5/individual_works/LAT0474.TXT-0*.txt") len (cicero_files) cicero_lemmas, cicero_inflected_words = get_word_counts(cicero_files) word_stats(author='Cicero', lemma_counter=cicero_lemmas, inflected_counter=cicero_inflected_words) 
cicero_lemmas_counter_file = 'cicero_lemmas_counter.pkl' cicero_inflected_counter_file = 'cicero_inflected_counter.pkl' if not os.path.exists(cicero_lemmas_counter_file): with open(cicero_lemmas_counter_file, 'wb') as fout: pickle.dump(cicero_lemmas, fout) if not os.path.exists(cicero_inflected_counter_file): with open(cicero_inflected_counter_file, 'wb') as fout: pickle.dump(cicero_inflected_words, fout) author_index = {val:key for key,val in PHI5_INDEX.items() if val != 'Marcus Tullius Cicero, Cicero, Tully'} def get_phi5_author_files(author_name, author_index): stub = author_index[author_name] return glob(os.path.expanduser(f'~/cltk_data/latin/text/phi5/individual_works/{stub}*.txt'))
_____no_output_____
Apache-2.0
probablistic_language_modeling/cicero_corpus_counts.ipynb
todd-cook/ML-You-Can-Use
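The cells above rely on a `truecase(word, case_counts)` helper that is not shown here. A minimal sketch of what such a lookup might do, under the assumption that it simply picks the most frequently seen casing of a token from the `case_counts` Counter (the real helper in this repository may differ):

```python
from collections import Counter
from typing import Dict

def truecase_sketch(token: str, case_counts: Dict[str, int]) -> str:
    """Return the most frequently observed casing of `token`, else the token unchanged."""
    candidates = {token, token.lower(), token.capitalize(), token.upper()}
    seen = {cand: case_counts.get(cand, 0) for cand in candidates}
    best = max(seen, key=seen.get)
    return best if seen[best] > 0 else token

# Example with hypothetical counts:
counts = Counter({"Roma": 40, "roma": 2, "arma": 15})
print(truecase_sketch("roma", counts))  # -> "Roma"
print(truecase_sketch("ARMA", counts))  # -> "arma"
```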
Visualization of our corpus comparison: If you took one page from one author and placed it into Cicero, how surprising would it be? If the other author's vocabulary were substantially different, it would be noticeable. We can quantify this. As a result, since we want to predict as close as possible to the author, we should only train a language model where the underlying corpus vocabularies are within a reasonable window of surprise.
results = [] for author in author_index: files = get_phi5_author_files(author, author_index) # cicero_lemmas, cicero_inflected_words = get_word_counts(cicero_files) author_lemmas, author_inflected_words = get_word_counts(files) author_words = set(author_lemmas.keys()) cicero_words = set(cicero_lemmas.keys()) common = author_words & cicero_words author_uniq = author_words - common P_one_x_lemma_unigram = len(author_uniq) / sum(author_lemmas.values()) author_words = set(author_inflected_words.keys()) cicero_words = set(cicero_inflected_words.keys()) common = author_words & cicero_words author_uniq = author_words - common P_one_x_inflected_unigram = len(author_uniq) / sum(author_inflected_words.values()) results.append((author, P_one_x_lemma_unigram, P_one_x_inflected_unigram )) # sorted(results, key=lambda x:x[1]) results_map = {key: (val, val2) for key,val,val2 in results} for author in author_index: files = get_phi5_author_files(author, author_index) if len(files) >= 3: print(author, results_map[author]) # the values analogous to Cicero are: (0.02892407263780054, 0.008905886443261747) # grab prose authors # grab poets # consider individual files # Gaius Iulius Caesar, Caesar (0.016170899832329378, 0.0464137117307334) # Apuleius Madaurensis (0.039956560814859196, 0.12101183343319354) # Caelius Apicius (0.04383594547528974, 0.09950159130486999) # Anonymi Comici et Tragici (0.05979473449352968, 0.10397144132083891) # C. Iul. Caes. Augustus Octavianus (0.16793743890518084, 0.20527859237536658) # Publius Papinius Statius (0.03662215849687846, 0.1022791767482152) # Lucius Accius (0.0845518118245391, 0.16634880271243907) # Gaius Caesius Bassus (0.040359504832965916, 0.07953196540613872) # Publius Vergilius Maro, Virgil, Vergil (0.03315200072836527, 0.0929348568307006) # Publius Ovidius Naso (0.023965644822556705, 0.06525858344775079) # Gnaeus Naevius (0.11655300681959083, 0.20644761314321142) # Fragmenta Bobiensia (0.07398076042143839, 0.1385707741639945) # Scriptores Historiae Augustae (0.03177853760216489, 0.071072022819111) # Publius Terentius Afer, Terence (0.028577576089507863, 0.058641733823644474) # Aulus Cornelius Celsus (0.017332921313593843, 0.0558848592109822) # Gaius Suetonius Tranquillus (0.033629947836759745, 0.0958944461491255) # Marcus Terentius Varro, Varro (0.045866176600832524, 0.093891152245151) # Appendix Vergiliana (0.0500247341083354, 0.1418501113034875) # Annius Florus (0.038297569987210456, 0.09140969162995595) # Pomponius Porphyrio (0.04030915576694411, 0.09312987184568636) # Marcus Valerius Probus (0.03835521769177609, 0.08431237042156185) # Quintus Ennius (0.05652467883705206, 0.12021636240703178) # Didascaliae et Per. in Terentium (0.0782967032967033, 0.13598901098901098) # Cornelius Tacitus (0.02469418086200983, 0.07631488690859423) # Titus Livius, Livy (0.011407436246836674, 0.03913716547549524) # Lucius Annaeus Seneca senior (0.01619733327917297, 0.052095498258405856) # Quintus Horatius Flaccus, Horace (0.04486396446418656, 0.12253192670738479) # Gaius Asinius Pollio (0.03592814371257485, 0.08982035928143713) # Gaius Sallustius Crispus (0.020570966643975494, 0.059330326752893126) # C. 
Plinius Caecilius Secundus, Pliny (0.01694301397770358, 0.06551977816761927) # Marcus Fabius Quintilianus (0.009342494688624445, 0.0416682017463066) # Hyginus Gromaticus (0.0285692634131555, 0.08320703243407093) # Titus Lucretius Carus (0.022190184885737107, 0.06787585965048998) # Claudius Caesar Germanicus (0.04035804020100502, 0.12861180904522612) # Gaius, iur., Gaius (0.011268643689753487, 0.035144203727768185) # Quintus Terentius Scaurus (0.04715169618092597, 0.09174311926605505) # Lucius Livius Andronicus (0.14615384615384616, 0.25) # Marcus Cornelius Fronto (0.03605195520469984, 0.08350927115843583) # Didascaliae et Argum. in Plautum (0.07712590639419907, 0.14831905075807514) # Argum. Aen. et Tetrast. (0.07066381156316917, 0.1441827266238401) # Anonymi Epici et Lyrici (0.09684487291849254, 0.19237510955302367) # Marcus Porcius Cato, Cato (0.061287538049157236, 0.13079823724501385) # Sextus Iulius Frontinus (0.03041633518960488, 0.09337045876425351) # Lucius Annaeus Seneca iunior (0.012655345175352984, 0.05447654369184723) # Titus Maccius Plautus (0.02682148990105487, 0.062141513731995376) # Maurus Servius Honoratus, Servius (0.025347881711764008, 0.05923711189138313) # Quintus Asconius Pedianus (0.010382059800664452, 0.029663028001898434)
_____no_output_____
Apache-2.0
probablistic_language_modeling/cicero_corpus_counts.ipynb
todd-cook/ML-You-Can-Use
T81-558: Applications of Deep Neural Networks**Module 7: Generative Adversarial Networks*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 7 Material* Part 7.1: Introduction to GANS for Image and Data Generation [[Video]](https://www.youtube.com/watch?v=0QnCH6tlZgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_1_gan_intro.ipynb)* **Part 7.2: Implementing a GAN in Keras** [[Video]](https://www.youtube.com/watch?v=T-MCludVNn4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_2_Keras_gan.ipynb)* Part 7.3: Face Generation with StyleGAN and Python [[Video]](https://www.youtube.com/watch?v=Wwwyr7cOBlU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_3_style_gan.ipynb)* Part 7.4: GANS for Semi-Supervised Learning in Keras [[Video]](https://www.youtube.com/watch?v=ZPewmEu7644&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_4_gan_semi_supervised.ipynb)* Part 7.5: An Overview of GAN Research [[Video]](https://www.youtube.com/watch?v=cvCvZKvlvq4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_5_gan_research.ipynb)
# Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
Part 7.2: Implementing DCGANs in Keras The paper that describes the type of DCGAN we will create in this module is [[Cite:radford2015unsupervised]](https://arxiv.org/abs/1511.06434). That paper implements a DCGAN as follows:* No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1]. * All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128. * All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. * In the LeakyReLU, the slope of the leak was set to 0.2 in all models.* The Adam optimizer (Kingma & Ba, 2014) was used with tuned hyperparameters. The suggested learning rate of 0.001 proved too high, so 0.0002 was used instead. * Additionally, leaving the momentum term $\beta_{1}$ at the suggested value of 0.9 resulted in training oscillation and instability, while reducing it to 0.5 helped stabilize training.The paper also provides the following architecture guidelines for stable Deep Convolutional GANs:* Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).* Use batchnorm in both the generator and the discriminator.* Remove fully connected hidden layers for deeper architectures.* Use ReLU activation in the generator for all layers except the output, which uses Tanh.* Use LeakyReLU activation in the discriminator for all layers.While creating the material for this module I used a number of Internet resources; some of the most helpful were:* [Deep Convolutional Generative Adversarial Network (TensorFlow 2.0 example code)](https://www.tensorflow.org/tutorials/generative/dcgan)* [Keep Calm and train a GAN. Pitfalls and Tips on training Generative Adversarial Networks](https://medium.com/@utk.is.here/keep-calm-and-train-a-gan-pitfalls-and-tips-on-training-generative-adversarial-networks-edd529764aa9)* [Collection of Keras implementations of Generative Adversarial Networks GANs](https://github.com/eriklindernoren/Keras-GAN)* [dcgan-facegenerator](https://github.com/platonovsimeon/dcgan-facegenerator), [Semi-Paywalled Article by GitHub Author](https://medium.com/datadriveninvestor/generating-human-faces-with-keras-3ccd54c17f16)The program created next will generate faces similar to these. While these faces are not perfect, they demonstrate how we can construct and train a GAN on our own. Later we will see how to import very advanced weights from nVidia to produce high-resolution, realistic-looking faces. Figure 7.GAN-GRID shows images from GAN training.**Figure 7.GAN-GRID: GAN Neural Network Training**![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan-3.png "GAN Images")As discussed in the previous module, the GAN is made up of two different neural networks: the discriminator and the generator. The generator generates the images, while the discriminator detects whether a face is real or was generated. These two neural networks work as shown in Figure 7.GAN-EVAL:**Figure 7.GAN-EVAL: Evaluating GANs**![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_1.png "GAN")The discriminator accepts an image as its input and produces a number that is the probability of the input image being real. The generator accepts a random seed vector and generates an image from that seed. An unlimited number of new images can be created by providing additional seeds. 
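One of the paper's guidelines — zero-centered Normal initialization with standard deviation 0.02 — is not applied explicitly in the code below, which relies on Keras defaults. If you wanted to follow the paper on that point, a layer could be configured roughly like this (a sketch, not part of the original notebook):

```python
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import Conv2D

# Zero-centered Normal(0, 0.02) initialization, as suggested by Radford et al.
weight_init = RandomNormal(mean=0.0, stddev=0.02)
conv = Conv2D(64, kernel_size=3, padding="same", kernel_initializer=weight_init)
```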
I suggest running this code with a GPU; it will be very slow on a CPU alone. The following code mounts your Google Drive for use with Google CoLab. If you are not using CoLab, this code will not work.
try: from google.colab import drive drive.mount('/content/drive', force_remount=True) COLAB = True print("Note: using Google CoLab") %tensorflow_version 2.x except: print("Note: not using Google CoLab") COLAB = False
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly Enter your authorization code: ·········· Mounted at /content/drive Note: using Google CoLab TensorFlow 2.x selected.
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
The following packages will be used to implement a basic GAN system in Python/Keras.
import tensorflow as tf from tensorflow.keras.layers import Input, Reshape, Dropout, Dense from tensorflow.keras.layers import Flatten, BatchNormalization from tensorflow.keras.layers import Activation, ZeroPadding2D from tensorflow.keras.layers import LeakyReLU from tensorflow.keras.layers import UpSampling2D, Conv2D from tensorflow.keras.models import Sequential, Model, load_model from tensorflow.keras.optimizers import Adam import numpy as np from PIL import Image from tqdm import tqdm import os import time import matplotlib.pyplot as plt
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
These are the constants that define how the GANs will be created for this example. The higher the resolution, the more memory will be needed, and higher resolution will also result in longer run times. For Google CoLab (with GPU), 128x128 resolution is as high as can be used (due to memory). Note that the resolution is specified as a multiple of 32, so a **GENERATE_RES** of 1 is 32, 2 is 64, etc. To run this you will need training data. The training data can be any collection of images. I suggest using training data from the following two locations. Simply unzip and combine them into a common directory. This directory should be uploaded to Google Drive (if you are using CoLab). The constant **DATA_PATH** defines where these images are stored. The source data (faces) used in this module can be found here:* [Kaggle Faces Data New](https://www.kaggle.com/gasgallo/faces-data-new)* [Kaggle Lag Dataset: Dataset of faces, from more than 1k different subjects](https://www.kaggle.com/gasgallo/lag-dataset)
# Generation resolution - Must be square # Training data is also scaled to this. # Note GENERATE_RES 4 or higher # will blow Google CoLab's memory and have not # been tested extensively. GENERATE_RES = 3 # Generation resolution factor # (1=32, 2=64, 3=96, 4=128, etc.) GENERATE_SQUARE = 32 * GENERATE_RES # rows/cols (should be square) IMAGE_CHANNELS = 3 # Preview image PREVIEW_ROWS = 4 PREVIEW_COLS = 7 PREVIEW_MARGIN = 16 # Size vector to generate images from SEED_SIZE = 100 # Configuration DATA_PATH = '/content/drive/My Drive/projects/faces' EPOCHS = 50 BATCH_SIZE = 32 BUFFER_SIZE = 60000 print(f"Will generate {GENERATE_SQUARE}px square images.")
Will generate 96px square images.
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
Next we will load and preprocess the images. This can take a while; Google CoLab took around an hour to process the full set. Because of this, we store the processed data as a binary file so that we can simply reload the preprocessed training data and use it right away. It is most efficient to perform this operation only once. The image dimensions are encoded into the filename of the binary file because the file must be regenerated if they change.
# Image set has 11,682 images. Can take over an hour # for initial preprocessing. # Because of this time needed, save a Numpy preprocessed file. # Note that the file is large enough to cause problems for # some versions of Pickle, # so Numpy binary files are used. training_binary_path = os.path.join(DATA_PATH, f'training_data_{GENERATE_SQUARE}_{GENERATE_SQUARE}.npy') print(f"Looking for file: {training_binary_path}") if not os.path.isfile(training_binary_path): start = time.time() print("Loading training images...") training_data = [] faces_path = os.path.join(DATA_PATH,'face_images') for filename in tqdm(os.listdir(faces_path)): path = os.path.join(faces_path,filename) image = Image.open(path).resize((GENERATE_SQUARE, GENERATE_SQUARE),Image.ANTIALIAS) training_data.append(np.asarray(image)) training_data = np.reshape(training_data,(-1,GENERATE_SQUARE, GENERATE_SQUARE,IMAGE_CHANNELS)) training_data = training_data.astype(np.float32) training_data = training_data / 127.5 - 1. print("Saving training image binary...") np.save(training_binary_path,training_data) elapsed = time.time()-start print (f'Image preprocess time: {hms_string(elapsed)}') else: print("Loading previous training pickle...") training_data = np.load(training_binary_path)
Looking for file: /content/drive/My Drive/projects/faces/training_data_96_96.npy Loading previous training pickle...
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
We will use a TensorFlow **Dataset** object to hold the images. This allows the data to be quickly shuffled and divided into the appropriate batch sizes for training.
# Batch and shuffle the data train_dataset = tf.data.Dataset.from_tensor_slices(training_data) \ .shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
The code below defines functions that create the generator and the discriminator. Afterwards, we actually build both networks; each will be trained with the Adam optimizer.
def build_generator(seed_size, channels): model = Sequential() model.add(Dense(4*4*256,activation="relu",input_dim=seed_size)) model.add(Reshape((4,4,256))) model.add(UpSampling2D()) model.add(Conv2D(256,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) model.add(UpSampling2D()) model.add(Conv2D(256,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) # Output resolution, additional upsampling model.add(UpSampling2D()) model.add(Conv2D(128,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) if GENERATE_RES>1: model.add(UpSampling2D(size=(GENERATE_RES,GENERATE_RES))) model.add(Conv2D(128,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) # Final CNN layer model.add(Conv2D(channels,kernel_size=3,padding="same")) model.add(Activation("tanh")) return model def build_discriminator(image_shape): model = Sequential() model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=image_shape, padding="same")) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(64, kernel_size=3, strides=2, padding="same")) model.add(ZeroPadding2D(padding=((0,1),(0,1)))) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(128, kernel_size=3, strides=2, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(256, kernel_size=3, strides=1, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(512, kernel_size=3, strides=1, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) return model
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
As training progresses, images will be produced to show the progress. These images contain a grid of rendered faces that show how good the generator has become. The faces are assembled into a single preview image and saved to the output directory by the following function.
def save_images(cnt,noise): image_array = np.full(( PREVIEW_MARGIN + (PREVIEW_ROWS * (GENERATE_SQUARE+PREVIEW_MARGIN)), PREVIEW_MARGIN + (PREVIEW_COLS * (GENERATE_SQUARE+PREVIEW_MARGIN)), 3), 255, dtype=np.uint8) generated_images = generator.predict(noise) generated_images = 0.5 * generated_images + 0.5 image_count = 0 for row in range(PREVIEW_ROWS): for col in range(PREVIEW_COLS): r = row * (GENERATE_SQUARE+16) + PREVIEW_MARGIN c = col * (GENERATE_SQUARE+16) + PREVIEW_MARGIN image_array[r:r+GENERATE_SQUARE,c:c+GENERATE_SQUARE] \ = generated_images[image_count] * 255 image_count += 1 output_path = os.path.join(DATA_PATH,'output') if not os.path.exists(output_path): os.makedirs(output_path) filename = os.path.join(output_path,f"train-{cnt}.png") im = Image.fromarray(image_array) im.save(filename)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
generator = build_generator(SEED_SIZE, IMAGE_CHANNELS) noise = tf.random.normal([1, SEED_SIZE]) generated_image = generator(noise, training=False) plt.imshow(generated_image[0, :, :, 0])
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
image_shape = (GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS) discriminator = build_discriminator(image_shape) decision = discriminator(generated_image) print (decision)
tf.Tensor([[0.50032634]], shape=(1, 1), dtype=float32)
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
Loss functions must be developed that allow the generator and discriminator to be trained in an adversarial way. Because these two neural networks are trained independently, they must be trained in two separate passes. This requires two separate loss functions and also two separate updates to the gradients. When the discriminator's gradients are applied to decrease the discriminator's loss, it is important that only the discriminator's weights are updated. It is not fair, nor will it produce good results, to adversarially damage the weights of the generator to help the discriminator. A simple backpropagation would do exactly that: it would simultaneously adjust the weights of both the generator and the discriminator to lower whatever loss it was assigned to lower.Figure 7.TDIS shows how the discriminator is trained.**Figure 7.TDIS: Training the Discriminator**![Training the Discriminator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_2.png "Training the Discriminator")Here a training set is generated with an equal number of real and fake images. The real images are randomly sampled (chosen) from the training data, and an equal number of fake images are generated from random seeds. For the discriminator training set, $x$ contains the input images and $y$ contains a value of 1 for real images and 0 for generated ones.Likewise, Figure 7.TGEN shows how the generator is trained.**Figure 7.TGEN: Training the Generator**![Training the Generator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_3.png "Training the Generator")For the generator training set, $x$ contains the random seeds used to generate images and $y$ always contains the value of 1, because the optimum for the generator is to have produced images so convincing that the discriminator was fooled into assigning them a probability near 1.
# This method returns a helper function to compute cross entropy loss cross_entropy = tf.keras.losses.BinaryCrossentropy() def discriminator_loss(real_output, fake_output): real_loss = cross_entropy(tf.ones_like(real_output), real_output) fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output) total_loss = real_loss + fake_loss return total_loss def generator_loss(fake_output): return cross_entropy(tf.ones_like(fake_output), fake_output)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
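As a quick sanity check of the loss functions just defined (a toy example with hand-picked discriminator scores, not part of the tutorial): when the discriminator scores real images near 1 and fakes near 0, its own loss is small, while the generator's loss is large because its fakes are being caught.

```python
import tensorflow as tf

real_output = tf.constant([[0.95], [0.90]])  # discriminator scores on real images
fake_output = tf.constant([[0.05], [0.10]])  # discriminator scores on generated images

print(float(discriminator_loss(real_output, fake_output)))  # small: discriminator is winning
print(float(generator_loss(fake_output)))                   # large: generator is being caught
```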
Both the generator and discriminator use Adam with the same learning rate and momentum. This does not need to be the case. If you use a **GENERATE_RES** greater than 3, you may need to tune these learning rates as well as the other training hyperparameters.
generator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5) discriminator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
The following function is where most of the training takes place for both the discriminator and the generator. This function was based on the GAN provided by the [TensorFlow Keras examples](https://www.tensorflow.org/tutorials/generative/dcgan) documentation. The first thing you should notice about this function is that it is annotated with **tf.function**. This causes the function to be precompiled into a graph, which improves performance. This function trains differently from the code we previously saw: it makes use of **GradientTape** to allow the discriminator and generator to be trained together, yet separately.
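Before the full training step, here is a toy illustration (not part of the notebook) of the two mechanisms it relies on: **tf.function** compiles a Python function into a TensorFlow graph, and **GradientTape** records operations so gradients can be taken with respect to chosen variables only.

```python
import tensorflow as tf

v = tf.Variable(3.0)

@tf.function  # compiled into a graph, like train_step below
def grad_of_square(x):
    with tf.GradientTape() as tape:
        y = x * x
    return tape.gradient(y, x)  # d(x^2)/dx = 2x

print(grad_of_square(v))  # tf.Tensor(6.0, ...)
```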
# Notice the use of `tf.function` # This annotation causes the function to be "compiled". @tf.function def train_step(images): seed = tf.random.normal([BATCH_SIZE, SEED_SIZE]) with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: generated_images = generator(seed, training=True) real_output = discriminator(images, training=True) fake_output = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_output) disc_loss = discriminator_loss(real_output, fake_output) gradients_of_generator = gen_tape.gradient(\ gen_loss, generator.trainable_variables) gradients_of_discriminator = disc_tape.gradient(\ disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip( gradients_of_generator, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip( gradients_of_discriminator, discriminator.trainable_variables)) return gen_loss,disc_loss def train(dataset, epochs): fixed_seed = np.random.normal(0, 1, (PREVIEW_ROWS * PREVIEW_COLS, SEED_SIZE)) start = time.time() for epoch in range(epochs): epoch_start = time.time() gen_loss_list = [] disc_loss_list = [] for image_batch in dataset: t = train_step(image_batch) gen_loss_list.append(t[0]) disc_loss_list.append(t[1]) g_loss = sum(gen_loss_list) / len(gen_loss_list) d_loss = sum(disc_loss_list) / len(disc_loss_list) epoch_elapsed = time.time()-epoch_start print (f'Epoch {epoch+1}, gen loss={g_loss},disc loss={d_loss},'\ ' {hms_string(epoch_elapsed)}') save_images(epoch,fixed_seed) elapsed = time.time()-start print (f'Training time: {hms_string(elapsed)}') train(train_dataset, EPOCHS)
Epoch 1, gen loss=0.6737478375434875,disc loss=1.2876468896865845, 0:01:33.86 Epoch 2, gen loss=0.6754337549209595,disc loss=1.2804691791534424, 0:01:26.96 Epoch 3, gen loss=0.6792119741439819,disc loss=1.2002451419830322, 0:01:27.23 Epoch 4, gen loss=0.6704637408256531,disc loss=1.1011184453964233, 0:01:26.74 Epoch 5, gen loss=0.6726831197738647,disc loss=1.0789912939071655, 0:01:26.87 Epoch 6, gen loss=0.6760282516479492,disc loss=1.086405873298645, 0:01:26.64 Epoch 7, gen loss=0.6668657660484314,disc loss=1.0990564823150635, 0:01:26.75 Epoch 8, gen loss=0.6794514060020447,disc loss=1.0572694540023804, 0:01:26.72 Epoch 9, gen loss=0.667030930519104,disc loss=1.099564552307129, 0:01:26.83 Epoch 10, gen loss=0.6715224385261536,disc loss=1.0793499946594238, 0:01:26.66 Epoch 11, gen loss=0.6730715036392212,disc loss=1.078121304512024, 0:01:26.76 Epoch 12, gen loss=0.6647288203239441,disc loss=1.1045489311218262, 0:01:26.72 Epoch 13, gen loss=0.6678280234336853,disc loss=1.0965650081634521, 0:01:26.85 Epoch 14, gen loss=0.6659991145133972,disc loss=1.0995577573776245, 0:01:26.63 Epoch 15, gen loss=0.6716247797012329,disc loss=1.0771788358688354, 0:01:26.71 Epoch 16, gen loss=0.6729282140731812,disc loss=1.070194125175476, 0:01:26.70 Epoch 17, gen loss=0.5819647312164307,disc loss=1.2202180624008179, 0:01:26.77 Epoch 18, gen loss=0.6757639646530151,disc loss=1.065081238746643, 0:01:26.61 Epoch 19, gen loss=0.67451012134552,disc loss=1.0673273801803589, 0:01:26.69 Epoch 20, gen loss=0.6766645908355713,disc loss=1.0598846673965454, 0:01:26.77 Epoch 21, gen loss=0.6780093908309937,disc loss=1.0609248876571655, 0:01:26.69 Epoch 22, gen loss=0.6759778261184692,disc loss=1.0606046915054321, 0:01:26.82 Epoch 23, gen loss=0.6769225001335144,disc loss=1.054925799369812, 0:01:26.65 Epoch 24, gen loss=0.6781579852104187,disc loss=1.0544047355651855, 0:01:26.73 Epoch 25, gen loss=0.676042377948761,disc loss=1.0546783208847046, 0:01:26.75 Epoch 26, gen loss=0.677901029586792,disc loss=1.052791953086853, 0:01:26.80 Epoch 27, gen loss=0.681058406829834,disc loss=1.0429731607437134, 0:01:26.64 Epoch 28, gen loss=0.6791036128997803,disc loss=1.0511810779571533, 0:01:26.73 Epoch 29, gen loss=0.6798704266548157,disc loss=1.0473483800888062, 0:01:26.71 Epoch 30, gen loss=0.6804633736610413,disc loss=1.0467922687530518, 0:01:26.79 Epoch 31, gen loss=0.6794509291648865,disc loss=1.0476592779159546, 0:01:26.77 Epoch 32, gen loss=0.6806340217590332,disc loss=1.0451370477676392, 0:01:26.61 Epoch 33, gen loss=0.6815114617347717,disc loss=1.0450003147125244, 0:01:26.71 Epoch 34, gen loss=0.6811343431472778,disc loss=1.0472784042358398, 0:01:26.67 Epoch 35, gen loss=0.6814053058624268,disc loss=1.044533371925354, 0:01:26.79 Epoch 36, gen loss=0.6829107403755188,disc loss=1.0392040014266968, 0:01:26.59 Epoch 37, gen loss=0.68181312084198,disc loss=1.0412304401397705, 0:01:26.70 Epoch 38, gen loss=0.6876026391983032,disc loss=1.023917555809021, 0:01:26.56 Epoch 39, gen loss=0.6833422780036926,disc loss=1.0362210273742676, 0:01:26.75 Epoch 40, gen loss=0.6848627328872681,disc loss=1.0325196981430054, 0:01:26.57 Epoch 41, gen loss=0.6868091821670532,disc loss=1.0275564193725586, 0:01:26.60 Epoch 42, gen loss=0.6845845580101013,disc loss=1.037258505821228, 0:01:26.65 Epoch 43, gen loss=0.6817746758460999,disc loss=1.0453448295593262, 0:01:26.80 Epoch 44, gen loss=0.6872225403785706,disc loss=1.0249252319335938, 0:01:26.50 Epoch 45, gen loss=0.6867306232452393,disc loss=1.0273733139038086, 0:01:26.66 Epoch 46, gen 
loss=0.6831635236740112,disc loss=1.041325330734253, 0:01:26.63 Epoch 47, gen loss=0.6846590042114258,disc loss=1.033777117729187, 0:01:26.76 Epoch 48, gen loss=0.683456301689148,disc loss=1.0370010137557983, 0:01:26.63 Epoch 49, gen loss=0.6844978332519531,disc loss=1.034816861152649, 0:01:26.71 Epoch 50, gen loss=0.6809002757072449,disc loss=1.0432080030441284, 0:01:26.67 Training time: 1:12:52.22
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
generator.save(os.path.join(DATA_PATH,"face_generator.h5"))
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
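Once training has finished and the generator is saved, it can be reloaded later to synthesize new faces without retraining. A minimal sketch, assuming the same `DATA_PATH` and `SEED_SIZE` constants defined earlier (this step is not part of the original notebook):

```python
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from PIL import Image

generator = load_model(os.path.join(DATA_PATH, "face_generator.h5"))

# Draw fresh latent seeds; the generator outputs images in [-1, 1] (tanh)
seeds = tf.random.normal([16, SEED_SIZE])
generated = generator.predict(seeds)

# Rescale the first face to [0, 255] and save it
face = ((generated[0] + 1.0) * 127.5).astype(np.uint8)
Image.fromarray(face).save("sampled_face.png")
```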
Preprocess data
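The cells below use `mnist`, `np_utils`, `K`, `Sequential`, and several layer classes that must have been imported in an earlier cell not included in this excerpt. Imports along these lines (Keras 1.x/2.x era, matching the `image_dim_ordering` and `np_utils` calls) are assumed:

```python
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
```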
nb_classes = 10 # input image dimensions img_rows, img_cols = 28, 28 # number of convolutional filters to use nb_filters = 32 # size of pooling area for max pooling pool_size = (2, 2) # convolution kernel size kernel_size = (3, 3) # the data, shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() if K.image_dim_ordering() == 'th': X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes)
X_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Build a Keras model using the `Sequential API`
batch_size = 50 nb_epoch = 10 model = Sequential() model.add(Convolution2D(nb_filters, kernel_size, padding='valid', input_shape=input_shape, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(nb_filters, kernel_size,activation='relu')) model.add(Dropout(rate=0.25)) model.add(Flatten()) model.add(Dense(64,activation='relu')) model.add(Dropout(rate=0.5)) # dropout rate must lie in [0, 1); 0.5 assumed in place of the original rate=5 model.add(Dense(nb_classes,activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 11, 11, 32) 9248 _________________________________________________________________ dropout_1 (Dropout) (None, 11, 11, 32) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 3872) 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 247872 _________________________________________________________________ dropout_2 (Dropout) (None, 64) 0 _________________________________________________________________ dense_2 (Dense) (None, 10) 650 ================================================================= Total params: 258,090 Trainable params: 258,090 Non-trainable params: 0 _________________________________________________________________
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Train and evaluate the model
model.fit(X_train[0:10000, ...], Y_train[0:10000, ...], batch_size=batch_size, epochs=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1])
Train on 10000 samples, validate on 10000 samples Epoch 1/10 10000/10000 [==============================] - 5s - loss: 0.4915 - acc: 0.8532 - val_loss: 0.2062 - val_acc: 0.9433 Epoch 2/10 10000/10000 [==============================] - 5s - loss: 0.1562 - acc: 0.9527 - val_loss: 0.1288 - val_acc: 0.9605 Epoch 3/10 10000/10000 [==============================] - 5s - loss: 0.0950 - acc: 0.9704 - val_loss: 0.0890 - val_acc: 0.9741 Epoch 4/10 10000/10000 [==============================] - 5s - loss: 0.0678 - acc: 0.9788 - val_loss: 0.0972 - val_acc: 0.97000.9 Epoch 5/10 10000/10000 [==============================] - 5s - loss: 0.0532 - acc: 0.9826 - val_loss: 0.0822 - val_acc: 0.9739 Epoch 6/10 10000/10000 [==============================] - 5s - loss: 0.0382 - acc: 0.9876 - val_loss: 0.0898 - val_acc: 0.9751 Epoch 7/10 10000/10000 [==============================] - 5s - loss: 0.0293 - acc: 0.9903 - val_loss: 0.0652 - val_acc: 0.9822 Epoch 8/10 10000/10000 [==============================] - 5s - loss: 0.0277 - acc: 0.9917 - val_loss: 0.0840 - val_acc: 0.9758 Epoch 9/10 10000/10000 [==============================] - 5s - loss: 0.0217 - acc: 0.9918 - val_loss: 0.0736 - val_acc: 0.9787 Epoch 10/10 10000/10000 [==============================] - 5s - loss: 0.0188 - acc: 0.9936 - val_loss: 0.0735 - val_acc: 0.9812 Test score: 0.0734814465971 Test accuracy: 0.9812
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Save the model
model.save('example_keras_mnist_model.h5')
_____no_output_____
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
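As a quick check of the saved file, the model can be reloaded and used for a prediction — a short sketch assuming the filename used above and that `X_test` and `y_test` are still in memory:

```python
import numpy as np
from keras.models import load_model

restored = load_model('example_keras_mnist_model.h5')

# Predict the class of the first test digit and compare with the true label
probs = restored.predict(X_test[:1])
print('Predicted digit:', np.argmax(probs), ' true digit:', y_test[0])
```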
Compute forcing for 1%CO2 data
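For orientation before the long cell below: the net top-of-atmosphere imbalance is formed from the radiative fluxes, and both the temperature and imbalance anomalies are taken relative to linear fits to the corresponding piControl segment starting at the branch year. In the notation of the code,

$$
N(t) = \mathrm{rsdt}(t) - \mathrm{rsut}(t) - \mathrm{rlut}(t), \qquad
\Delta T(t) = T(t) - T^{\mathrm{fit}}_{\mathrm{ctrl}}(t), \qquad
\Delta N(t) = N(t) - N^{\mathrm{fit}}_{\mathrm{ctrl}}(t),
$$

where the superscript "fit" denotes the linear trend fitted to the control run over the window aligned with the branch year and the length of the 1pctCO2 series.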
import os import numpy as np import pandas as pd import matplotlib.pyplot as plt filedir1 = '/Users/hege-beatefredriksen/OneDrive - UiT Office 365/Data/CMIP5_globalaverages/Forcingpaperdata' storedata = False # store anomalies in file? storeforcingdata = False createnewfile = False # if it is the first time this is run, new files should be created, that can later be loaded exp = '1pctCO2' filenameT = 'annualTanom_1pctCO2.txt' filenameN = 'annualNanom_1pctCO2.txt' filenameFF = 'forcing_F13method_1pctCO2.txt' filenameNF = 'forcing_estimates_1pctCO2.txt' # create file first time code is run: if createnewfile == True: cols = ['year','ACCESS1-0','ACCESS1-3','CanESM2','CCSM4','CNRM-CM5','CSIRO-Mk3-6-0','GFDL-CM3','GFDL-ESM2G','GFDL-ESM2M','GISS-E2-H','GISS-E2-R','HadGEM2-ES','inmcm4','IPSL-CM5A-LR','IPSL-CM5B-LR','MIROC-ESM','MIROC5','MPI-ESM-LR','MPI-ESM-MR','MRI-CGCM3','NorESM1-M'] dfT = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfT['year'] = np.arange(1,140+1) dfN = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfN['year'] = np.arange(1,140+1) dfT.to_csv(filenameT, sep='\t'); dfN.to_csv(filenameN, sep='\t'); dfFF = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfFF['year'] = np.arange(1,140+1) dfNF = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfNF['year'] = np.arange(1,140+1) dfFF.to_csv(filenameFF, sep='\t'); dfNF.to_csv(filenameNF, sep='\t'); #model = 'ACCESS1-0' #model = 'ACCESS1-3' #model = 'CanESM2' #model = 'CCSM4' #model = 'CNRM-CM5' #model = 'CSIRO-Mk3-6-0' #model = 'GFDL-CM3' #model = 'GFDL-ESM2G' # 1pctco2 only for 70 years #model = 'GFDL-ESM2M' # 1pctco2 only for 70 years #model = 'GISS-E2-H' #model = 'GISS-E2-R' #model = 'HadGEM2-ES' #model = 'inmcm4' #model = 'IPSL-CM5A-LR' #model = 'IPSL-CM5B-LR' #model = 'MIROC-ESM' #model = 'MIROC5' #model = 'MPI-ESM-LR' #model = 'MPI-ESM-MR' #model = 'MRI-CGCM3' model = 'NorESM1-M' realm = 'Amon' ensemble = 'r1i1p1' ## define time periods of data: if model == 'ACCESS1-0': controltimeperiod = '030001-079912' exptimeperiod = '030001-043912' control_branch_yr = 300 elif model == 'ACCESS1-3': controltimeperiod = '025001-074912' exptimeperiod = '025001-038912' control_branch_yr = 250 elif model == 'CanESM2': controltimeperiod = '201501-301012' exptimeperiod = '185001-198912' control_branch_yr = 2321 elif model == 'CCSM4': controltimeperiod = '025001-130012' exptimeperiod = '185001-200512' control_branch_yr = 251 elif model == 'CNRM-CM5': controltimeperiod = '185001-269912' exptimeperiod = '185001-198912' control_branch_yr = 1850 elif model == 'CSIRO-Mk3-6-0': controltimeperiod = '000101-050012' exptimeperiod = '000101-014012' control_branch_yr = 104 elif model == 'GFDL-CM3': controltimeperiod = '000101-050012' exptimeperiod = '000101-014012' control_branch_yr = 1 elif model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M': controltimeperiod = '000101-050012' exptimeperiod = '000101-020012' # 1pctco2 only for 70 years. 
control_branch_yr = 1 elif model == 'GISS-E2-H': print(model + 'has control run for two different periods') #controltimeperiod = '118001-141912' controltimeperiod = '241001-294912' exptimeperiod = '185001-200012' control_branch_yr = 2410 elif model == 'GISS-E2-R': #controltimeperiod = '333101-363012' #controltimeperiod1 = '398101-453012' controltimeperiod2 = '398101-920512' exptimeperiod = '185001-200012' control_branch_yr = 3981 # Note: The two blocks of years that are present (3331-3630 and 3981-4530) represent different control runs elif model == 'HadGEM2-ES': controltimeperiod = '186001-243511' exptimeperiod = '186001-199912' control_branch_yr = 1860 # or actually december 1859, but I ignore this first month is the annual average elif model == 'inmcm4': controltimeperiod = '185001-234912' exptimeperiod = '209001-222912' control_branch_yr = 2090 elif model == 'IPSL-CM5A-LR': controltimeperiod = '180001-279912' exptimeperiod = '185001-198912' control_branch_yr = 1850 elif model == 'IPSL-CM5B-LR': controltimeperiod = '183001-212912' exptimeperiod = '185001-200012' control_branch_yr = 1850 elif model == 'MIROC-ESM': controltimeperiod = '180001-242912' exptimeperiod = '000101-014012' control_branch_yr = 1880 elif model == 'MIROC5': controltimeperiod = '200001-286912' exptimeperiod = '220001-233912' control_branch_yr = 2200 elif model == 'MPI-ESM-LR': controltimeperiod = '185001-284912' exptimeperiod = '185001-199912' control_branch_yr = 1880 elif model == 'MPI-ESM-MR': controltimeperiod = '185001-284912' exptimeperiod = '185001-199912' control_branch_yr = 1850 elif model == 'MRI-CGCM3': controltimeperiod = '185101-235012' exptimeperiod = '185101-199012' control_branch_yr = 1891 elif model == 'NorESM1-M': controltimeperiod = '070001-120012' exptimeperiod = '000101-014012' control_branch_yr = 700 #### load 1pctCO2 data #### var = 'tas' # temperatures strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") temp=datatable.iloc[0:len(datatable),0] var = 'rlut' # rlut strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rlut=datatable.iloc[0:len(datatable),0] var = 'rsut' # rsut strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rsut=datatable.iloc[0:len(datatable),0] var = 'rsdt' # rsdt strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rsdt=datatable.iloc[0:len(datatable),0] # drop all data after 140 years temp = temp[:140]; rlut = rlut[:140]; rsut = rsut[:140]; rsdt = rsdt[:140] ###### load control run data ###### exp = 'piControl' var = 'tas' # temperatures if model == 'GISS-E2-R': controltimeperiod = controltimeperiod2 strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controltemp=datatable.iloc[:,0] var = 'rlut' # rlut strings = [var, realm, model, exp, ensemble, 
controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrlut=datatable.iloc[0:len(controltemp),0] var = 'rsut' # rsut strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrsut=datatable.iloc[0:len(controltemp),0] var = 'rsdt' # rsdt strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrsdt=datatable.iloc[0:len(controltemp),0] years = np.arange(1,len(temp)+1) # create figure fig, ax = plt.subplots(nrows=2,ncols=2,figsize = [16,10]) # plot temperature var = temp[:]; label = 'tas' ax[0,0].plot(years,var,linewidth=2,color = "black") #ax[0,0].set_xlabel('t',fontsize = 18) ax[0,0].set_ylabel(label + '(t)',fontsize = 18) ax[0,0].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,0].grid() ax[0,0].set_xlim(min(years),max(years)) ax[0,0].tick_params(axis='both',labelsize=18) # plot rlut var = rlut[:]; label = 'rlut' ax[0,1].plot(years,var,linewidth=2,color = "black") #ax[0,1].set_xlabel('t',fontsize = 18) ax[0,1].set_ylabel(label + '(t)',fontsize = 18) ax[0,1].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,1].grid() ax[0,1].set_xlim(min(years),max(years)) ax[0,1].tick_params(axis='both',labelsize=18) # plot rsdt var = rsdt[:]; label = 'rsdt' ax[1,0].plot(years,var,linewidth=2,color = "black") ax[1,0].set_xlabel('t',fontsize = 18) ax[1,0].set_ylabel(label + '(t)',fontsize = 18) ax[1,0].set_title('1pctCO2 ' + label,fontsize = 18) ax[1,0].grid() ax[1,0].set_xlim(min(years),max(years)) ax[1,0].tick_params(axis='both',labelsize=18) # plot rsut var = rsut[:]; label = 'rsut' ax[1,1].plot(years,var,linewidth=2,color = "black") ax[1,1].set_xlabel('t',fontsize = 18) ax[1,1].set_ylabel(label + '(t)',fontsize = 18) ax[1,1].set_title('1pctCO2 ' + label,fontsize = 18) ax[1,1].grid() ax[1,1].set_xlim(min(years),max(years)) ax[1,1].tick_params(axis='both',labelsize=18) # plot control run data and linear trends controlyears = np.arange(0,len(controltemp)) branchindex = control_branch_yr - int(controltimeperiod[0:4]) print(branchindex) # create figure fig, ax = plt.subplots(nrows=2,ncols=2,figsize = [16,10]) # plot temperature var = controltemp[:]; label = 'tas' ax[0,0].plot(controlyears,var,linewidth=2,color = "black") # find linear fits to control T and nettoarad in the same period as exp: p1 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controltemp[branchindex:(branchindex + len(temp))], deg = 1) lintrendT = np.polyval(p1,controlyears[branchindex:(branchindex + len(temp))]) ax[0,0].plot(controlyears[branchindex:(branchindex + len(temp))], lintrendT, linewidth = 4) ax[0,0].set_ylabel(label + '(t)',fontsize = 18) ax[0,0].set_title('Control ' + label,fontsize = 18) ax[0,0].grid() ax[0,0].set_xlim(min(controlyears),max(controlyears)) ax[0,0].tick_params(axis='both',labelsize=18) # plot rlut var = controlrlut[:]; label = 'rlut' ax[0,1].plot(controlyears,var,linewidth=2,color = "black") p2 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrlut[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rlut = np.polyval(p2,controlyears[branchindex:(branchindex + len(temp))]) 
ax[0,1].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rlut, linewidth = 4) ax[0,1].set_ylabel(label + '(t)',fontsize = 18) ax[0,1].set_title('Control ' + label,fontsize = 18) ax[0,1].grid() ax[0,1].set_xlim(min(controlyears),max(controlyears)) ax[0,1].tick_params(axis='both',labelsize=18) # plot rsdt var = controlrsdt[:]; label = 'rsdt' ax[1,0].plot(controlyears,var,linewidth=2,color = "black") p3 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrsdt[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rsdt = np.polyval(p3,controlyears[branchindex:(branchindex + len(temp))]) ax[1,0].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rsdt, linewidth = 4) ax[1,0].set_xlabel('t',fontsize = 18) ax[1,0].set_ylabel(label + '(t)',fontsize = 18) ax[1,0].set_title('Control ' + label,fontsize = 18) ax[1,0].grid() ax[1,0].set_xlim(min(controlyears),max(controlyears)) ax[1,0].set_ylim(var[0]-2,var[0]+2) ax[1,0].tick_params(axis='both',labelsize=18) # plot rsut var = controlrsut[:]; label = 'rsut' ax[1,1].plot(controlyears,var,linewidth=2,color = "black") p4 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrsut[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rsut = np.polyval(p4,controlyears[branchindex:(branchindex + len(temp))]) ax[1,1].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rsut, linewidth = 4) ax[1,1].set_xlabel('t',fontsize = 18) ax[1,1].set_ylabel(label + '(t)',fontsize = 18) ax[1,1].set_title('Control ' + label,fontsize = 18) ax[1,1].grid() ax[1,1].set_xlim(min(controlyears),max(controlyears)) ax[1,1].tick_params(axis='both',labelsize=18) nettoarad = rsdt - rsut - rlut controlnettoarad = controlrsdt - controlrsut - controlrlut lintrendN = lintrend_rsdt - lintrend_rsut - lintrend_rlut deltaN = nettoarad - lintrendN deltaT = temp - lintrendT # create figure fig, ax = plt.subplots(nrows=1,ncols=2,figsize = [16,5]) # plot 1pctCO2 net TOA rad var = nettoarad[:]; label = 'net TOA rad' ax[0,].plot(years,var,linewidth=2,color = "black") ax[0,].set_xlabel('t',fontsize = 18) ax[0,].set_ylabel(label + '(t)',fontsize = 18) ax[0,].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,].grid() ax[0,].set_xlim(min(years),max(years)) ax[0,].tick_params(axis='both',labelsize=18) # plot control net TOA rad var = controlnettoarad[:]; label = 'net TOA rad' ax[1,].plot(controlyears,var,linewidth=2,color = "black") ax[1,].plot(controlyears[branchindex:(branchindex + len(temp))],lintrendN,linewidth=4) ax[1,].set_xlabel('t',fontsize = 18) ax[1,].set_ylabel(label + '(t)',fontsize = 18) ax[1,].set_title('Control ' + label,fontsize = 18) ax[1,].grid() ax[1,].set_xlim(min(controlyears),max(controlyears)) ax[1,].tick_params(axis='both',labelsize=18) ########### plot also anomalies: ########### # create figure fig, ax = plt.subplots(nrows=1,ncols=1,figsize = [8,5]) var = deltaN; label = 'net TOA rad' ax.plot(years,var,linewidth=2,color = "black") ax.set_xlabel('t',fontsize = 18) ax.set_ylabel(label + '(t)',fontsize = 18) ax.set_title('1pctCO2 ' + label + ' anomaly',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=18) # write time series to a dataframe? if storedata == True: dfT = pd.read_table(filenameT, index_col=0); dfN = pd.read_table(filenameN, index_col=0); # load files dfT[model] = deltaT; dfN[model] = deltaN dfT.to_csv(filenameT, sep='\t'); dfN.to_csv(filenameN, sep='\t') # save files again
_____no_output_____
MIT
Compute_forcing_1pctCO2.ipynb
Hegebf/CMIP5-forcing
Load my estimated parameters
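The first estimate in the next cell is the fixed-feedback diagnosis (labelled the F13 method in the file names): with a feedback parameter taken from the Gregory regression, the effective forcing follows from the global energy balance,

$$
\lambda = \frac{F_{2\times}}{T_{2\times}}, \qquad F(t) = \Delta N(t) + \lambda\,\Delta T(t).
$$

The iteration further down replaces the single $\lambda\,\Delta T$ term with a sum over the linear-response modes, $F_{i+1}(t) = \Delta N(t) + \sum_n (b_n/a_n)\,T_{n,i}(t)$, where the $T_{n,i}$ are the responses to the current forcing estimate $F_i$, so that the forcing and the response model become self-consistent.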
filename = 'best_estimated_parameters.txt' parameter_table = pd.read_table(filename,index_col=0) GregoryT2x = parameter_table.loc[model,'GregoryT2x'] GregoryF2x = parameter_table.loc[model,'GregoryF2x'] fbpar = GregoryF2x/GregoryT2x #feedback parameter from Gregory plot print(fbpar) F = deltaN + fbpar*deltaT fig, ax = plt.subplots(figsize = [9,5]) plt.plot(years,F,linewidth=2,color = "black") ax.set_xlabel('t (years)',fontsize = 18) ax.set_ylabel('F(t) [$W/m^2$]',fontsize = 18) ax.set_title('1pctCO2 forcing',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=22) if storeforcingdata == True: dfFF = pd.read_table(filenameFF, index_col=0); # load files dfFF[model] = F; dfFF.to_csv(filenameFF, sep='\t'); # save file again # load remaining parameters: taulist = np.array(parameter_table.loc[model,'tau1':'tau3']) a_n = np.array(parameter_table.loc[model,'a_1':'a_4']) b_n = np.array(parameter_table.loc[model,'b_1':'b_4']) F2x = parameter_table.loc[model,'F2x'] T2x = parameter_table.loc[model,'T2x'] # compute other needed parameters from these: dim = len(taulist) if any(a_n == 0): dim = np.count_nonzero(a_n[:dim]) zeroindex = np.where(a_n == 0)[0] a_n = np.delete(a_n,zeroindex) b_n = np.delete(b_n,zeroindex) taulist = np.delete(taulist,zeroindex) fbparlist = (b_n/a_n)[:dim] print(fbparlist) amplitudes = a_n[:dim]/(2*F2x*taulist) print(np.sum(a_n)/2) print(T2x) # compute components T_n(t) = exp(-t/tau_n)*F(t) (Here * is a convolution) dim = len(taulist) lf = len(F) predictors = np.full((lf,dim),np.nan) # compute exact predictors by integrating greens function for k in range(0,dim): intgreensti = np.full((lf,lf),0.) # remember dot after 0 to create floating point number array instead of integer for t in range(0,lf): # compute one new contribution to the matrix: intgreensti[t,0] = taulist[k]*(np.exp(-t/taulist[k]) - np.exp(-(t+1)/taulist[k])) # take the rest from row above: if t > 0: intgreensti[t,1:(t+1)] = intgreensti[t-1,0:t] # compute discretized convolution integral by this matrix product: predictors[:,k] = [email protected](F) Tn = amplitudes*predictors fig, ax = plt.subplots(figsize = [9,5]) plt.plot(years,Tn[:,0],linewidth=2,color = "black",label = 'Mode with time scale ' + str(np.round(taulist[0])) + ' years') plt.plot(years,Tn[:,1],linewidth=2,color = "blue",label = 'Mode with time scale ' + str(np.round(taulist[1])) + ' years') if dim>2: plt.plot(years,Tn[:,2],linewidth=2,color = "red",label = 'Mode with time scale ' + str(np.round(taulist[2],1)) + ' years') ax.set_xlabel('t',fontsize = 18) ax.set_ylabel('T(t)',fontsize = 18) ax.set_title('Temperature responses to forcing',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=22) ax.legend(loc=2, prop={'size': 18}); fig, ax = plt.subplots(figsize = [9,5]) plt.plot(years,np.sum(Tn, axis=1),linewidth=2,color = "black",label = 'Mode with time scale ' + str(np.round(taulist[0])) + ' years') ax.set_xlabel('t (years)',fontsize = 18) ax.set_ylabel('T(t) [°C]',fontsize = 18) ax.set_title('Linear response to forcing',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=22) # Compute new estimate of adjusted forcing it = 20 # number of iterations Fiarray = np.full((lf,it),np.nan) Fi = F for i in range(0,it): # iterate predictors = np.full((lf,dim),np.nan) # compute exact predictors by integrating greens function for k in range(0,dim): intgreensti = np.full((lf,lf),0.) 
# remember dot after 0 to create floating point number array instead of integer for t in range(0,lf): # compute one new contribution to the matrix: intgreensti[t,0] = taulist[k]*(np.exp(-t/taulist[k]) - np.exp(-(t+1)/taulist[k])) # take the rest from row above: if t > 0: intgreensti[t,1:(t+1)] = intgreensti[t-1,0:t] # compute discretized convolution integral by this matrix product: predictors[:,k] = [email protected](Fi) Tni = amplitudes*predictors Fi = deltaN + Tni@fbparlist Fiarray[:,i] = Fi fig, ax = plt.subplots(nrows=1,ncols=2,figsize = [16,5]) ax[0,].plot(years,F,linewidth=2,color = "black",label = "Old forcing") for i in range(0,it-1): ax[0,].plot(years,Fiarray[:,i],linewidth=1,color = "gray") ax[0,].plot(years,Fiarray[:,it-1],linewidth=1,color = "blue",label = "New forcing") ax[0,].set_xlabel('t (years)',fontsize = 18) ax[0,].set_ylabel('F(t) [$W/m^2$]',fontsize = 18) ax[0,].grid() ax[0,].set_xlim(min(years),max(years)) ax[0,].tick_params(axis='both',labelsize=18) if model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M': # linear fit for only 70 years # linear fit to forster forcing: linfitpar1 = np.polyfit(years[:70],F[:70],deg = 1) linfit_forcing1 = np.polyval(linfitpar1,years[:70]) ax[0,].plot(years[:70],linfit_forcing1,'--',linewidth=1,color = "black") # linear fit to new forcing: linfitpar2 = np.polyfit(years[:70],Fiarray[:70,it-1],deg = 1) linfit_forcing2 = np.polyval(linfitpar2,years[:70]) ax[0,].plot(years[:70],linfit_forcing2,'--',linewidth=1,color = "blue") else: # linear fit for 140 years # linear fit to forster forcing: linfitpar1 = np.polyfit(years,F,deg = 1) linfit_forcing1 = np.polyval(linfitpar1,years) ax[0,].plot(years,linfit_forcing1,'--',linewidth=1,color = "black") # linear fit to new forcing: linfitpar2 = np.polyfit(years,Fiarray[:,it-1],deg = 1) linfit_forcing2 = np.polyval(linfitpar2,years) ax[0,].plot(years,linfit_forcing2,'--',linewidth=1,color = "blue") # Estimate and print out 4xCO2 forcing from end values of linear fits: print(linfit_forcing1[-1]) print(linfit_forcing2[-1]) # compare responses label = 'temperature' # plot temperature ax[1,].plot(years,deltaT,linewidth=3,color = "black",label = model + " modelled response") # plot response ax[1,].plot(years,np.sum(Tn,axis=1),'--',linewidth=2,color = "black",label = "Linear response to old forcing") ax[1,].plot(years,np.sum(Tni,axis=1),'--',linewidth=2,color = "blue",label = "Linear response to new forcing") ax[1,].set_xlabel('t (years)',fontsize = 18) ax[1,].set_ylabel('T(t) [°C]',fontsize = 18) ax[1,].set_title('1% CO$_2$ ' + label,fontsize = 18) ax[0,].set_title('1% CO$_2$ effective forcing',fontsize = 18) ax[1,].grid() ax[1,].set_xlim(min(years),max(years)) ax[1,].tick_params(axis='both',labelsize=18) ax[0,].text(0,1.03,'a)',transform=ax[0,].transAxes, fontsize=20) ax[1,].text(0,1.03,'b)',transform=ax[1,].transAxes, fontsize=20) #plt.savefig('/Users/hege-beatefredriksen/OneDrive - UiT Office 365/Papers/Forcingpaper/Figures/' + model + '_1pctCO2_forcing_and_response.pdf', format='pdf', dpi=600, bbox_inches="tight") if storeforcingdata == True: dfNF = pd.read_table(filenameNF, index_col=0); dfNF = pd.read_table(filenameNF, index_col=0); # load file dfNF[model] = Fiarray[:,it-1]; dfNF.to_csv(filenameNF, sep='\t'); # save file again # put results in pandas dataframe: columnnames = ['4xCO2forcingest_1pctCO2', '4xCO2forcingest_1pctCO2_F13method']; # if file is not already created, create a new file to store the results in: filename = 'estimated_4xCO2forcing_from1pctCO2.txt' #dataframe = 
pd.DataFrame([np.concatenate((linfit_forcing2[-1], linfit_forcing1[-1]), axis=None)], index = [model], columns=columnnames) #dataframe.to_csv(filename, sep='\t') #dataframe # load existing dataframe, and append present result: loaded_dataframe = pd.read_table(filename,index_col=0) pd.set_option('display.expand_frame_repr', False) # fill numbers into table: if model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M': loaded_dataframe.loc[model,columnnames] = [np.concatenate((2*linfit_forcing2[-1], 2*linfit_forcing1[-1]), axis=None)] else: loaded_dataframe.loc[model,columnnames] = [np.concatenate((linfit_forcing2[-1], linfit_forcing1[-1]), axis=None)] # write them to a file: loaded_dataframe.to_csv(filename, sep='\t') loaded_dataframe timedep_fbpar1 = Tni@fbparlist/np.sum(Tni,axis=1) # two alternative definitions timedep_fbpar2 = Tni@fbparlist/deltaT fig, ax = plt.subplots(figsize = [9,5]) label = 'Instantaneous feedback parameter' # plot response ax.plot(years,timedep_fbpar1,linewidth=3,color = "black") ax.plot(years,timedep_fbpar2,linewidth=1,color = "gray") ax.plot(years,np.full((len(years),1),fbpar),linewidth=2,color = "green") ax.set_xlabel('t',fontsize = 18) ax.set_ylabel('$\lambda$ (t)',fontsize = 18) ax.set_title(label,fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.set_ylim(0,3) ax.tick_params(axis='both',labelsize=18) fig, ax = plt.subplots(figsize = [9,5]) label = 'Instantaneous climate sensitivity parameter' # plot response ax.plot(years,1/timedep_fbpar1,linewidth=3,color = "black") ax.plot(years,1/timedep_fbpar2,linewidth=1,color = "gray") ax.plot(years,np.full((len(years),1),1/fbpar),linewidth=2,color = "green") ax.set_xlabel('t',fontsize = 18) ax.set_ylabel('S(t)',fontsize = 18) ax.set_title(label,fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.set_ylim(0,2) ax.tick_params(axis='both',labelsize=18) fig, ax = plt.subplots(figsize = [9,5]) label = 'Instantaneous climate sensitivity' # plot response ax.plot(years,F2x/timedep_fbpar1,linewidth=3,color = "black") ax.plot(years,F2x/timedep_fbpar2,linewidth=1,color = "gray") ax.plot(years,np.full((len(years),1),F2x/fbpar),linewidth=2,color = "green") ax.set_xlabel('t',fontsize = 18) ax.set_ylabel('ECS(t)',fontsize = 18) ax.set_title(label,fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.set_ylim(0,6) ax.tick_params(axis='both',labelsize=18)
_____no_output_____
MIT
Compute_forcing_1pctCO2.ipynb
Hegebf/CMIP5-forcing
How to do video classification In this tutorial, we will show how to train a video classification model in Classy Vision. Given an input video, the video classification task is to predict the most probable class label. This is very similar to image classification, which was covered in other tutorials, but there are a few differences that make video special. Because videos can be long, we sample short clips of a small number of frames, use the classifier to make clip-level predictions, and finally average the clip-level predictions to get the video-level prediction. In this tutorial we will: (1) load a video dataset; (2) configure a video model; (3) configure video meters; (4) build a task; (5) start training. Please note that these steps are done separately in the tutorial for ease of exposition in the notebook format. As described in our [Getting started](https://classyvision.ai/tutorials/getting_started) tutorial, you can combine all configs used in this tutorial into a single config for ClassificationTask and train it using `classy_train.py`. 1. Prepare the dataset All right! Let's start with the dataset. [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) is a canonical action recognition dataset. It has 101 action classes and 3 folds with different training/testing splits. We use fold 1 in this tutorial. Classy Vision has implemented the dataset `ucf101`, which can be used to load the training and testing splits.
from classy_vision.dataset import build_dataset # set it to the folder where video files are saved video_dir = "[PUT YOUR VIDEO FOLDER HERE]" # set it to the folder where dataset splitting files are saved splits_dir = "[PUT THE FOLDER WHICH CONTAINS SPLITTING FILES HERE]" # set it to the file path for saving the metadata metadata_file = "[PUT THE FILE PATH OF DATASET META DATA HERE]" datasets = {} datasets["train"] = build_dataset({ "name": "ucf101", "split": "train", "batchsize_per_replica": 8, # For training, we use 8 clips in a minibatch in each model replica "use_shuffle": True, # We shuffle the clips in the training split "num_samples": 64, # We train on 16 clips in one training epoch "clips_per_video": 1, # For training, we randomly sample 1 clip from each video "frames_per_clip": 8, # The video clip contains 8 frames "video_dir": video_dir, "splits_dir": splits_dir, "metadata_file": metadata_file, "fold": 1, "transforms": { "video": [ { "name": "video_default_augment", "crop_size": 112, "size_range": [128, 160] } ] } }) datasets["test"] = build_dataset({ "name": "ucf101", "split": "test", "batchsize_per_replica": 10, # For testing, we will take 1 video once a time, and sample 10 clips per video "use_shuffle": False, # We do not shuffle clips in the testing split "num_samples": 80, # We test on 80 clips in one testing epoch "clips_per_video": 10, # We sample 10 clips per video "frames_per_clip": 8, "video_dir": video_dir, "splits_dir": splits_dir, "metadata_file": metadata_file, "fold": 1, "transforms": { "video": [ { "name": "video_default_no_augment", "size": 128 } ] } })
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
Note that we specify different transforms for the training and testing splits. For the training split, we first randomly select a size from `size_range` [128, 160] and resize the video clip so that its short edge equals the random size. After that, we take a random crop of spatial size 112 x 112. We find that such data augmentation helps the model generalize better, and we use it as the default transform with data augmentation. For the testing split, we resize the video clip to have a short edge of size 128 and skip the random cropping so that the entire clip is used. This is the default transform without data augmentation. 2. Define a model trunk and a head Next, let's create the video model, which consists of a trunk and a head. The trunk can be viewed as a feature extractor that computes discriminative features from raw video pixels, while the head is a classifier that produces the final predictions. Let's first create a ResNet3D-18 trunk by using the built-in `resnext3d` model in Classy Vision.
from classy_vision.models import build_model model = build_model({ "name": "resnext3d", "frames_per_clip": 8, # The number of frames we have in each video clip "input_planes": 3, # We use RGB video frames. So the input planes is 3 "clip_crop_size": 112, # We take croppings of size 112 x 112 from the video frames "skip_transformation_type": "postactivated_shortcut", # The type of skip connection in residual unit "residual_transformation_type": "basic_transformation", # The type of residual connection in residual unit "num_blocks": [2, 2, 2, 2], # The number of residual blocks in each of the 4 stages "input_key": "video", # The key used to index into the model input of dict type "stage_planes": 64, "num_classes": 101 # the number of classes })
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
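Before moving on to the head, here is a small, self-contained sketch (plain Python, not Classy Vision's actual transform code) of the spatial geometry implied by the two transforms described above: the training transform picks a random short-edge size in `[128, 160]`, resizes while keeping the aspect ratio, and then takes a random 112 x 112 crop, while the testing transform only resizes the short edge to 128 and keeps the whole clip. The `(240, 320)` frame size used below is just an illustrative assumption; the real transforms also handle temporal sampling, tensor conversion and normalization.

```python
import random

def train_clip_geometry(h, w, size_range=(128, 160), crop=112):
    # Pick a random short-edge target, resize keeping the aspect ratio, then choose a random crop box
    short_edge = random.randint(size_range[0], size_range[1])
    scale = short_edge / min(h, w)
    rh, rw = round(h * scale), round(w * scale)
    top, left = random.randint(0, rh - crop), random.randint(0, rw - crop)
    return (rh, rw), (top, left, top + crop, left + crop)

def test_clip_geometry(h, w, size=128):
    # Deterministic short-edge resize, no cropping: the entire (resized) clip is used
    scale = size / min(h, w)
    return round(h * scale), round(w * scale)

print(train_clip_geometry(240, 320))  # e.g. ((150, 200), (12, 40, 124, 152)); varies per call
print(test_clip_geometry(240, 320))   # (128, 171)
```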
We also need to create a model head, which consists of an average pooling layer and a linear layer, by using the `fully_convolutional_linear` head. At test time, the shape (channels, frames, height, width) of the input tensor is typically `(3 x 8 x 128 x 173)`. The shape of the input tensor to the average pooling layer is then `(512, 1, 8, 11)`, where 512 matches the `in_plane` of the ResNet3D-18 trunk configured above. Since we do not use a global average pooling but an average pooling layer of kernel size `(1, 7, 7)`, the pooled feature map has shape `(512, 1, 2, 5)`. The shape of the prediction tensor from the linear layer is `(1, 2, 5, 101)`, which indicates the model computes a 101-D prediction vector densely over a `2 x 5` grid. That's why we name the head `FullyConvolutionalLinearHead`: we use the linear layer as a `1x1` convolution layer to produce spatially dense predictions. Finally, predictions over the `2 x 5` grid are averaged. (A minimal sketch of this shape flow follows the next cell.)
from classy_vision.heads import build_head from collections import defaultdict unique_id = "default_head" head = build_head({ "name": "fully_convolutional_linear", "unique_id": unique_id, "pool_size": [1, 7, 7], "num_classes": 101, "in_plane": 512 }) # In Classy Vision, the head can be attached to any residual block in the trunk. # Here we attach the head to the last block as in the standard ResNet model fork_block = "pathway0-stage4-block1" heads = defaultdict(dict) heads[fork_block][unique_id] = head model.set_heads(heads)
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
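To make the shape bookkeeping of the head concrete, here is a minimal PyTorch sketch of the same idea: pool with a non-global kernel, apply the linear classifier as a 1x1 convolution, then average the dense predictions. This is only an illustration of the shape flow, not Classy Vision's actual `FullyConvolutionalLinearHead` implementation; the 512 channels match the `in_plane` used above, and the `(1, 8, 11)` pooled extent is an assumed test-time example.

```python
import torch
import torch.nn as nn

# Assumed trunk output at test time: (batch, channels, frames, height, width)
features = torch.randn(1, 512, 1, 8, 11)

pool = nn.AvgPool3d(kernel_size=(1, 7, 7), stride=1)  # an average pool, but not a global one
linear_as_conv = nn.Conv3d(512, 101, kernel_size=1)   # the linear layer applied as a 1x1 convolution

pooled = pool(features)                          # (1, 512, 1, 2, 5): a small grid of pooled features
dense = linear_as_conv(pooled)                   # (1, 101, 1, 2, 5): a 101-D prediction per grid location
scores = dense.view(1, 101, -1).mean(dim=-1)     # average the dense predictions -> (1, 101)
print(pooled.shape, dense.shape, scores.shape)
```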
3. Choose the metersThis is the biggest difference between video and image classification. For images we used `AccuracyMeter` to measure top-1 and top-5 accuracy. For videos you can use both `AccuracyMeter` and `VideoAccuracyMeter`, but they behave differently: * `AccuracyMeter` takes one clip-level prediction and compares it with the ground-truth video label. It reports the clip-level accuracy. * `VideoAccuracyMeter` takes multiple clip-level predictions from the same video, averages them and compares the result with the ground-truth video label. It reports the video-level accuracy, which is usually higher than the clip-level accuracy. Both meters report top-1 and top-5 accuracy. (A small toy example after the next cell illustrates the difference numerically.)
from classy_vision.meters import build_meters, AccuracyMeter, VideoAccuracyMeter meters = build_meters({ "accuracy": { "topk": [1, 5] }, "video_accuracy": { "topk": [1, 5], "clips_per_video_train": 1, "clips_per_video_test": 10 } })
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
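As a toy illustration of the difference between the two meters (random scores, NumPy only; this is not the meters' actual implementation), the snippet below scores each clip on its own for the clip-level accuracy, and averages the clip scores of each video before taking the argmax for the video-level accuracy.

```python
import numpy as np

rng = np.random.RandomState(0)
num_videos, clips_per_video, num_classes = 4, 10, 101
video_labels = rng.randint(0, num_classes, size=num_videos)
clip_scores = rng.rand(num_videos, clips_per_video, num_classes)  # hypothetical model outputs

# Clip-level top-1 (AccuracyMeter-style): every clip is judged independently
clip_top1 = (clip_scores.argmax(-1) == video_labels[:, None]).mean()

# Video-level top-1 (VideoAccuracyMeter-style): average the clip scores of a video first
video_top1 = (clip_scores.mean(axis=1).argmax(-1) == video_labels).mean()

print(clip_top1, video_top1)
```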
4. Build a taskGreat! We have defined the dataset, the model and the meters. The cell below adds the remaining components needed for video classification, a loss function and an optimizer, and then defines a video classification task populated with all of these components.
from classy_vision.tasks import ClassificationTask from classy_vision.optim import build_optimizer from classy_vision.losses import build_loss loss = build_loss({"name": "CrossEntropyLoss"}) optimizer = build_optimizer({ "name": "sgd", "lr": { "name": "multistep", "values": [0.005, 0.0005], "milestones": [1] }, "num_epochs": 2, "weight_decay": 0.0001, "momentum": 0.9 }) num_epochs = 2 task = ( ClassificationTask() .set_num_epochs(num_epochs) .set_loss(loss) .set_model(model) .set_optimizer(optimizer) .set_meters(meters) ) for phase in ["train", "test"]: task.set_dataset(datasets[phase], phase)
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
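The `multistep` learning-rate setting above is worth unpacking. Assuming the milestones are epoch indices (an assumption; consult the Classy Vision optimizer documentation for the exact semantics), the schedule keeps `values[0]` until the first milestone and then switches to `values[1]`. A plain Python sketch of that behavior:

```python
def multistep_lr(epoch, values=(0.005, 0.0005), milestones=(1,)):
    # Return the value whose milestone interval contains this epoch
    lr = values[0]
    for i, m in enumerate(milestones):
        if epoch >= m:
            lr = values[i + 1]
    return lr

print([multistep_lr(e) for e in range(2)])  # [0.005, 0.0005] over the 2 training epochs
```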
5. Start trainingAfter creating a task, you can simply pass that to a Trainer to start training. Here we will train on a single node and configure logging and checkpoints for training:
import time import os from classy_vision.trainer import LocalTrainer from classy_vision.hooks import CheckpointHook from classy_vision.hooks import LossLrMeterLoggingHook hooks = [LossLrMeterLoggingHook(log_freq=4)] checkpoint_dir = f"/tmp/classy_checkpoint_{time.time()}" os.mkdir(checkpoint_dir) hooks.append(CheckpointHook(checkpoint_dir, input_args={})) task = task.set_hooks(hooks) trainer = LocalTrainer() trainer.train(task)
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
Bayesian Parametric Regression Notebook version: 1.5 (Sep 24, 2019) Author: Jerónimo Arenas García ([email protected]) Jesús Cid-Sueiro ([email protected]) Changes: v.1.0 - First version v.1.1 - ML Model selection included v.1.2 - Some typos corrected v.1.3 - Rewriting text, reorganizing content, some exercises. v.1.4 - Revised introduction v.1.5 - Revised notation. Solved exercise 5 Pending changes: * Include regression on the stock data
# Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline from IPython import display import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy.io # To read matlab files import pylab import time
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
A quick note on the mathematical notationIn this notebook we will make extensive use of probability distributions. In general, we will use capital letters${\bf X}$, $S$, $E$ ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$ ..., to denote the values they can take. In general, we will use letter $p$ for probability density functions (pdf). When necessary, we will use, capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively. However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$. 1. Model-based parametric regression 1.1. The regression problem.Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing *good* predictions about some unknown variable $s$. To do so, we assume that a set of *labelled* training examples, $\{{\bf x}_k, s_k\}_{k=0}^{K-1}$ is available. The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the *test set*) of labelled samples. 1.2. Model-based parametric regressionModel-based regression methods assume that all data in the training and test dataset have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown. In particular, in this notebook we will assume the target variables in all pairs $({\bf x}_k, s_k)$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, were ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$. 1.3. Model assumptionsIn order to estimate ${\bf w}$ from the training data in a mathematicaly rigorous and compact form let us group the target variables into a vector$${\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top$$and the input vectors into a matrix$${\bf X} = \left({\bf x}_0, \dots, {\bf x}_{K-1}\right)^\top$$We will make the following assumptions: * A1. All samples in ${\cal D}$ have been generated by the same distribution, $p({\bf x}, s \mid {\bf w})$ * A2. Input variables ${\bf x}$ do not depend on ${\bf w}$. This implies that$$p({\bf X} \mid {\bf w}) = p({\bf X})$$ * A3. Targets $s_0, \dots, s_{K-1}$ are statistically independent, given ${\bf w}$ and the inputs ${\bf x}_0,\ldots, {\bf x}_{K-1}$, that is:$$p({\bf s} \mid {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w})$$ 2. Bayesian inference. 2.1. The Bayesian approachThe main idea of Bayesian inference is the following: assume we want to estimate some unknown variable $U$ given an observed variable $O$. If $U$ and $O$ are random variables, we can describe the relation between $U$ and $O$ through the following functions: * **Prior distribution**: $p_U(u)$. It describes our uncertainty on the true value of $U$ before observing $O$. * **Likelihood function**: $p_{O \mid U}(o \mid u)$. It describes how the value of the observation is generated for a given value of $U$. * **Posterior distribution**: $p_{U|O}(u \mid o)$. It describes our uncertainty on the true value of $U$ once the true value of $O$ is observed. 
The major component of the Bayesian inference is the posterior distribution. All Bayesian estimates are computed as some of its central statistics (e.g. the mean, the median or the mode), for instance * **Maximum A Posteriori (MAP) estimate**: $\qquad{\widehat{u}}_{\text{MAP}} = \arg\max_u p_{U \mid O}(u \mid o)$ * **Minimum Mean Square Error (MSE) estimate**: $\qquad\widehat{u}_{\text{MSE}} = \mathbb{E}\{U \mid O=o\}$The choice between the MAP or the MSE estimate may depend on practical or computational considerations. From a theoretical point of view, $\widehat{u}_{\text{MSE}}$ has some nice properties: it minimizes $\mathbb{E}\{(U-\widehat{u})^2\}$ among all possible estimates, $\widehat{u}$, so it is a natural choice. However, it involves the computation of an integral, which may not have a closed-form solution. In such cases, the MAP estimate can be a better choice.The prior and the likelihood function are auxiliary distributions: if the posterior distribution is unknown, it can be computed from them using the Bayes rule:\begin{equation}p_{U|O}(u \mid o) = \frac{p_{O|U}(o \mid u) \cdot p_{U}(u)}{p_{O}(o)}\end{equation}In the next two sections we show that the Bayesian approach can be applied to both the prediction and the estimation problems. 2.2. Bayesian prediction under a known modelAssuming that the model parameters ${\bf w}$ are known, we can apply the Bayesian approach to predict ${\bf s}$ for an input ${\bf x}$. In that case, we can take * Unknown variable: ${\bf s}$, and * Observations: ${\bf x}$the MAP and MSE predictions become* Maximum A Posterior (MAP): $\qquad\widehat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$* Minimum Mean Square Error (MSE): $\qquad\widehat{s}_{\text{MSE}} = \mathbb{E}\{S |{\bf x}, {\bf w}\}$ Exercise 1:Assuming $$p(s\mid x, w) = \frac{s}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right), \qquad s \geq 0,$$compute the MAP and MSE predictions of $s$ given $x$ and $w$. Solution:\begin{align}\widehat{s}_\text{MAP} &= \arg\max_s \left\{\frac{s}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \right\} \\ &= \arg\max_s \left\{\log(s) - \log(w x^2) -\frac{s^2}{2 w x^2} \right\} \\ &= \sqrt{w}x\end{align}where the last step results from maximizing by differentiation.\begin{align}\widehat{s}_\text{MSE} &= \mathbb{E}\{s | x, w\} \\ &= \int_0^\infty \frac{s^2}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \\ &= \frac{1}{2} \int_{-\infty}^\infty \frac{s^2}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \\ &= \frac{\sqrt{2\pi}}{2\sqrt{w x^2}} \int_{-\infty}^\infty \frac{s^2}{\sqrt{2\pi w x^2}} \exp\left({-\frac{s^2}{2 w x^2}}\right)\end{align}Noting that the last integral corresponds to the variance of a zero-mean Gaussian distribution, we get\begin{align}\widehat{s}_\text{MSE} &= \frac{\sqrt{2\pi}}{2\sqrt{w x^2}} w x^2 \\ &= \sqrt{\frac{\pi w}{2}}x\end{align} 2.2.1. The Gaussian caseA particularly interesting case arises when the data model is Gaussian:$$p(s|{\bf x}, {\bf w}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)$$where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. 
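(As a quick numerical sanity check of the Exercise 1 solution above, note that $p(s\mid x,w)$ is a Rayleigh density with scale $\sqrt{w}\,x$. The short NumPy sketch below, with arbitrary example values of $w$ and $x$, recovers $\widehat{s}_\text{MAP}=\sqrt{w}\,x$ by maximizing the density on a grid and $\widehat{s}_\text{MSE}=\sqrt{\pi w/2}\,x$ by numerical integration.)

```python
import numpy as np

w, x = 2.0, 1.5                       # arbitrary example values
s = np.linspace(1e-6, 20, 200001)
pdf = s / (w * x**2) * np.exp(-s**2 / (2 * w * x**2))

s_map_numeric = s[np.argmax(pdf)]     # numerical mode
s_mse_numeric = np.trapz(s * pdf, s)  # numerical mean

print(s_map_numeric, np.sqrt(w) * x)              # both close to 2.1213
print(s_mse_numeric, np.sqrt(np.pi * w / 2) * x)  # both close to 2.6587
```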
For a Gaussian distribution (and for any unimodal symmetric distribution) the mean and the mode are the same and, thus,$$\widehat{s}_\text{MAP} = \widehat{s}_\text{MSE} = {\bf w}^\top{\bf z}$$Such an expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a "linear in the parameters" model. 2.3. Bayesian Inference for Parameter EstimationIn a similar way, we can apply Bayesian inference to estimate the model parameters ${\bf w}$ from a given dataset, $\cal{D}$. In that case * the unknown variable is ${\bf w}$, and * the observation is $\cal{D} \equiv \{{\bf X}, {\bf s}\}$so that* Maximum A Posteriori (MAP): $\qquad\widehat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}| {\cal D})$* Minimum Mean Square Error (MSE): $\qquad\widehat{\bf w}_{\text{MSE}} = \mathbb{E}\{{\bf W} | {\cal D}\}$ 3. Bayesian parameter estimationNOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on the data matrix, ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. For instance, we will write $p({\bf s}|{\bf w})$ instead of $p({\bf s}|{\bf w}, {\bf X})$, etc. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitly.Summarizing, the steps to design a Bayesian parametric regression algorithm are the following:1. Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$.2. Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$.3. Applying the Bayes rule, compute the posterior distribution $p({\bf w}|{\bf s})$.4. Compute the MAP or the MSE estimate of ${\bf w}$ given ${\bf x}$.5. Compute predictions using the selected estimate. 3.1. Bayesian Inference and Maximum Likelihood.Applying the Bayes rule, the MAP estimate can be alternatively expressed as\begin{align}\qquad\widehat{\bf w}_{\text{MAP}} &= \arg\max_{\bf w} \frac{p({\cal D}| {\bf w}) \cdot p({\bf w})}{p({\cal D})} \\ &= \arg\max_{\bf w} p({\cal D}| {\bf w}) \cdot p({\bf w})\end{align}By comparison, the ML (Maximum Likelihood) estimate has the form:$$\widehat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p(\mathcal{D}|{\bf w})$$This shows that the MAP estimate takes into account the prior distribution on the unknown parameter.Another advantage of the Bayesian approach is that it provides not only a point estimate of the unknown parameter, but a whole function, the posterior distribution, which encompasses our belief on the unknown parameter given the data. For instance, we can take second order statistics like the variance of the posterior distribution to measure the uncertainty on the true value of the parameter around the mean. 3.2. The prior distributionSince each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we state also a prior distribution over the space of regression functions. For instance, assume that the data likelihood follows the Gaussian model in sec. 2.2.1, with $T(x) = (1, x, x^2, x^3)$, i.e. the regression functions have the form$$w_0 + w_1 x + w_2 x^2 + w_3 x^3$$Each value of ${\bf w}$ determines a specific polynomial of degree 3.
Thus, the prior distribution over ${\bf w}$ describes which polynomials are more likely before observing the data. For instance, assume a Gaussian prior with zero mean and covariance matrix ${\bf V}_p$, i.e.,$$p({\bf w}) = \frac{1}{(2\pi)^{D/2} |{\bf V}_p|^{1/2}} \exp \left(-\frac{1}{2} {\bf w}^\intercal {\bf V}_{p}^{-1}{\bf w} \right)$$where $D$ is the dimension of ${\bf w}$. To abbreviate, we will also express this as$${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$The following code samples ${\bf w}$ according to this distribution for ${\bf V}_p = 0.2 \, {\bf I}$ (the value of `v_p` set in the code), and plots the resulting polynomials.You can check the effect of modifying the variance of the prior distribution.
n_grid = 200 degree = 3 nplots = 20 # Prior distribution parameters mean_w = np.zeros((degree+1,)) v_p = 0.2 ### Try increasing this value var_w = v_p * np.eye(degree+1) xmin = -1 xmax = 1 X_grid = np.linspace(xmin, xmax, n_grid) fig = plt.figure() ax = fig.add_subplot(111) for k in range(nplots): # Draw weights from the prior distribution w_iter = np.random.multivariate_normal(mean_w, var_w) S_grid_iter = np.polyval(w_iter, X_grid) ax.plot(X_grid, S_grid_iter,'g-') ax.set_xlim(xmin, xmax) ax.set_ylim(-1, 1) ax.set_xlabel('$x$') ax.set_ylabel('$s$') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The data observation will modify our belief about the true data model according to the posterior distribution. In the following we will analyze this in a Gaussian case. 4. Bayesian regression for a Gaussian model.We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model. 4.1. Step 1: The Gaussian model.Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2.$$s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 \right)$$that is $$p(s|{\bf x}, {\bf w}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)$$Assume, also, that the prior is Gaussian$${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$ 4.2. Step 2: Complete data likelihoodUsing the assumptions A1, A2 and A3, it can be shown that$${\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)$$that is$$p({\bf s}| {\bf w}) = \frac{1}{\left(\sqrt{2\pi}\sigma_\varepsilon\right)^K} \exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)$$ 4.3. Step 3: Posterior weight distributionThe posterior distribution of the weights can be computed using the Bayes rule$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\bf V}_{\bf w}\right)$$After some algebra, it can be shown that mean and the covariance matrix of the distribution are:$${\bf V}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z} + {\bf V}_p^{-1}\right]^{-1}$$$${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$ Exercise 2: Consider the dataset with one-dimensional inputs given by
# True data parameters w_true = 3 std_n = 0.4 # Generate the whole dataset n_max = 64 X_tr = 3 * np.random.random((n_max,1)) - 0.5 S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1) # Plot data plt.figure() plt.plot(X_tr, S_tr, 'b.') plt.xlabel('$x$') plt.ylabel('$s$') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Fit a Bayesian linear regression model assuming $z= x$ and
# Model parameters sigma_eps = 0.4 mean_w = np.zeros((1,)) sigma_p = 1e6 Var_p = sigma_p**2* np.eye(1)
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot.
# No. of points to analyze n_points = [1, 2, 4, 8, 16, 32, 64] # Prepare plots w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis plt.figure() # Compute the prior distribution over the grid points in w_grid # p = <FILL IN> p = 1.0/(sigma_p*np.sqrt(2*np.pi)) * np.exp(-(w_grid**2)/(2*sigma_p**2)) plt.plot(w_grid, p,'g-') for k in n_points: # Select the first k samples Zk = X_tr[0:k, :] Sk = S_tr[0:k] # Parameters of the posterior distribution # 1. Compute the posterior variance. # (Make sure that the resulting variable, Var_w, is a 1x1 numpy array.) # Var_w = <FILL IN> Var_w = np.linalg.inv(np.dot(Zk.T, Zk)/(sigma_eps**2) + np.linalg.inv(Var_p)) # 2. Compute the posterior mean. # (Make sure that the resulting variable, w_MSE, is a scalar) # w_MSE = <FILL IN> w_MSE = (Var_w.dot(Zk.T).dot(Sk)/(sigma_eps**2)).flatten() # Compute the posterior distribution over the grid points in w_grid sigma_w = np.sqrt(Var_w.flatten()) # First we take a scalar standard deviation # p = <FILL IN> p = 1.0/(sigma_w*np.sqrt(2*np.pi)) * np.exp(-((w_grid-w_MSE)**2)/(2*sigma_w**2)) plt.plot(w_grid, p,'g-') plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=1, antialiased=True) plt.title('Posterior distribution after {} samples'.format(k)) plt.xlim(w_grid[0], w_grid[-1]) plt.ylim(0, np.max(p)) plt.xlabel('$w$') plt.ylabel('$p(w|s)$') display.clear_output(wait=True) display.display(plt.gcf()) time.sleep(2.0) # Remove the temporary plots and fix the last one display.clear_output(wait=True) plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Exercise 3: Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation `std_n`, which is exactly equal to the value assumed by the model, stored in the variable `sigma_eps`. Check what happens if we take `sigma_eps=4*std_n` or `sigma_eps=std_n/4`. * Does the algorithm fail in those cases?* What differences can you observe with respect to the ideal case `sigma_eps=std_n`? (A short numerical sketch exploring this mismatch is included after the Exercise 4 solution below.) 4.4. Step 4: Weight estimation.Since the posterior weight distribution is Gaussian, both the MAP and the MSE estimates are equal to the posterior mean, which has already been computed in step 3:$$\widehat{\bf w}_\text{MAP} = \widehat{\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$ 4.5. Step 5: PredictionUsing the MSE estimate, the final predictions are given by$$\widehat{s}_\text{MSE} = \widehat{\bf w}_\text{MSE}^\top{\bf z}$$ Exercise 4:Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3].
# <SOL> x = np.array([-1.0, 3.0]) s_pred = w_MSE * x plt.figure() plt.plot(X_tr, S_tr,'b.') plt.plot(x, s_pred) plt.show() # </SOL>
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
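As a starting point for Exercise 3 above, here is a minimal sketch (reusing `X_tr`, `S_tr`, `Var_p` and `std_n` from the previous cells, so it assumes those cells have been run) that recomputes the posterior mean and standard deviation of $w$ when the assumed noise level is four times too small, exactly right, or four times too large.

```python
# A minimal sketch for Exercise 3: refit with a mis-specified noise level.
# Assumes the Exercise 2 cells above have been run (X_tr, S_tr, Var_p, std_n defined).
for sigma_eps_test in [std_n / 4, std_n, 4 * std_n]:
    Var_w_test = np.linalg.inv(X_tr.T.dot(X_tr) / sigma_eps_test**2 + np.linalg.inv(Var_p))
    w_MSE_test = (Var_w_test.dot(X_tr.T).dot(S_tr) / sigma_eps_test**2).flatten()
    print(sigma_eps_test, w_MSE_test, np.sqrt(Var_w_test.flatten()))
```

With the broad prior used above, you should find that the posterior mean barely moves, but the reported posterior standard deviation scales with the assumed `sigma_eps`, so a mis-specified noise level mostly over- or under-states the uncertainty rather than breaking the estimate.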
5. Maximum likelihood vs Bayesian Inference. 5.1. The Maximum Likelihood Estimate.For comparative purposes, it is interesting to see here that the likelihood function is enough to compute the Maximum Likelihood (ML) estimate \begin{align}{\bf w}_\text{ML} &= \arg \max_{\bf w} p(\mathcal{D}|{\bf w}) \\ &= \arg \min_{\bf w} \|{\bf s}-{\bf Z}{\bf w}\|^2\end{align}which leads to the Least Squares (LS) solution$${\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}$$ML estimation is prone to overfiting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross validation procedure is required to keep the complexity of the predictor function under control depending on the size of the training set.By defining a prior distribution over the unknown parameters, and using the Bayesian inference methods, the overfitting problems can be alleviated 5.2 Making predictions- Following an **ML approach**, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:$$p({s^*}|{\bf w}_{ML},{\bf x}^*) $$ For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:$$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$ * The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model). * If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction. - Using Bayesian inference, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.\begin{align}p({s^*}|{\bf x}^*,{\bf s}) & = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}\end{align}where: * $p({s^*}|{\bf w},{\bf x}^*) = \dfrac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$ * $p({\bf w} \mid {\bf s})$ is the posterior distribution of the weights, that can be computed using Bayes' Theorem. In general the integral expression of the posterior distribution $p({s^*}|{\bf x}^*,{\bf s})$ cannot be computed analytically. Fortunately, for the Gaussian model, the computation of the posterior is simple, as we will show in the following section. 6. 
Posterior distribution of the target variableIn the same way that we have computed a distribution on ${\bf w}$, we can compute a distribution on the target variable for a given input ${\bf x}$ and given the whole dataset.Since ${\bf w}$ is a random variable, the noise-free component of the target variable for an arbitrary input ${\bf x}$, that is, $f = f({\bf x}) = {\bf w}^\top{\bf z}$ is also a random variable, and we can compute its distribution from the posterior distribution of ${\bf w}$Since ${\bf w}$ is Gaussian and $f$ is a linear transformation of ${\bf w}$, $f$ is also a Gaussian random variable, whose posterior mean and variance can be calculated as follows:\begin{align}\mathbb{E}\{f \mid {\bf s}, {\bf z}\} &= \mathbb{E}\{{\bf w}^\top {\bf z}~|~{\bf s}, {\bf z}\} = \mathbb{E}\{{\bf w} ~|~{\bf s}, {\bf z}\}^\top {\bf z} \\ &= \widehat{\bf w}_\text{MSE} ^\top {\bf z} \\% &= {\sigma_\varepsilon^{-2}} {{\bf z}}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}\end{align}\begin{align}\text{Cov}\left[{{\bf z}}^\top {\bf w}~|~{\bf s}, {\bf z}\right] &= {\bf z}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {\bf z} \\ &= {\bf z}^\top {\bf V}_{\bf w} {{\bf z}}\end{align} Therefore, $$f^*~|~{\bf s}, {\bf x} \sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~ {\bf z}^\top {\bf V}_{\bf w} {\bf z} \right)$$ Finally, for $s = f + \varepsilon$, the posterior distribution is $$s ~|~{\bf s}, {\bf z}^* \sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~ {\bf z}^\top {\bf V}_{\bf w} {\bf z} + \sigma_\varepsilon^2\right)$$ Example:The next figure shows a one-dimensional dataset with 15 points, which are noisy samples from a cosine signal (shown in the dotted curve)
n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 # Data generation X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) # Signal xmin = np.min(X_tr) - 0.1 xmax = np.max(X_tr) + 0.1 X_grid = np.linspace(xmin, xmax, n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model # Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z = np.asmatrix(Z) # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Set axes ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Let us assume that the cosine form of the noise-free signal is unknown, and we assume a polynomial model with a high degree. The following code plots the LS estimate
degree = 12 # We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_LS = np.polyval(w_LS,X_grid) # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The following fragment of code computes the posterior weight distribution, draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
nplots = 6 # Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = .5 Var_p = sigma_p**2 * np.eye(degree+1) # Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z = np.asmatrix(Z) #Compute posterior distribution parameters Var_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(Var_p)) posterior_mean = Var_w.dot(Z.T).dot(S_tr)/(sigma_eps**2) posterior_mean = np.array(posterior_mean).flatten() # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') for k in range(nplots): # Draw weights from the posterior distribution w_iter = np.random.multivariate_normal(posterior_mean, Var_w) # Note that polyval assumes the first element of weight vector is the coefficient of # the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(w_iter[::-1], X_grid) ax.plot(X_grid,S_grid_iter,'g-') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
# Compute standard deviation std_x = [] for el in X_grid: x_ast = np.array([el**k for k in range(degree+1)]) std_x.append(np.sqrt(x_ast.dot(Var_w).dot(x_ast)[0,0])) std_x = np.array(std_x) # Plot data fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot the posterior mean # Note that polyval assumes the first element of weight vector is the coefficient of # the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(posterior_mean[::-1],X_grid) ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI') #Plot confidence intervals for the Bayesian Inference plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x, alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2, antialiased=True) #We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_iter = np.polyval(w_LS,X_grid) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0]-2,S_tr[-1]+2) ax.set_title('Predicting the target variable') ax.set_xlabel('Input variable') ax.set_ylabel('Target variable') ax.legend(loc='best') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Exercise 5:Assume the dataset ${\cal{D}} = \left\{ x_k, s_k \right\}_{k=0}^{K-1}$ containing $K$ i.i.d. samples from a distribution$$p(s|x,w) = w x \exp(-w x s), \qquad s>0,\quad x> 0,\quad w> 0$$We also model our uncertainty about the value of $w$ by assuming a prior distribution for $w$ following a Gamma distribution with parameters $\alpha>0$ and $\beta>0$.$$w \sim \text{Gamma}\left(\alpha, \beta \right) = \frac{\beta^\alpha}{\Gamma(\alpha)} w^{\alpha-1} \exp\left(-\beta w\right), \qquad w>0$$Note that the mean and the mode of a Gamma distribution can be calculated in closed-form as$$\mathbb{E}\left\{w\right\}=\frac{\alpha}{\beta}; \qquad$$$$\text{mode}\{w\} = \arg\max_w p(w) = \frac{\alpha-1}{\beta}$$**1.** Determine an expression for the likelihood function. Solution:[comment]: ()\begin{align}p({\bf s}| w) &= \prod_{k=0}^{K-1} p(s_k|w, x_k) = \prod_{k=0}^{K-1} \left(w x_k \exp(-w x_k s_k)\right) \nonumber\\ &= w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right)\end{align}[comment]: () **2.** Determine the maximum likelihood coefficient, $\widehat{w}_{\text{ML}}$. Solution:[comment]: ()\begin{align}\widehat{w}_{\text{ML}} &= \arg\max_w w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right) \\ &= \arg\max_w \left(w^K \cdot \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right)\right) \\ &= \arg\max_w \left(K \log(w) - w \sum_{k=0}^{K-1} x_k s_k \right) \\ &= \frac{K}{\sum_{k=0}^{K-1} x_k s_k} \end{align}[comment]: () **3.** Obtain the posterior distribution $p(w|{\bf s})$. Note that you do not need to calculate $p({\bf s})$ since the posterior distribution can be readily identified as another Gamma distribution. Solution:[comment]: ()\begin{align}p(w|{\bf s}) &= \frac{p({\bf s}|w) p(w)}{p({\bf s})} \\ &= \frac{1}{p({\bf s})} \left(w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right) \right) \left(\frac{\beta^\alpha}{\Gamma(\alpha)} w^{\alpha-1} \exp\left(-\beta w\right)\right) \\ &= \frac{1}{p({\bf s})} \frac{\beta^\alpha}{\Gamma(\alpha)} \left(\prod_{k=0}^{K-1} x_k \right) \left(w^{K + \alpha - 1} \cdot \exp\left( -w \left(\beta + \sum_{k=0}^{K-1} x_k s_k\right) \right) \right)\end{align}that is$$w \mid {\bf s} \sim \text{Gamma}\left(K+\alpha, \beta + \sum_{k=0}^{K-1} x_k s_k \right)$$[comment]: () **4.** Determine the MSE and MAP a posteriori estimators of $w$: $w_\text{MSE}=\mathbb{E}\left\{w|{\bf s}\right\}$ and $w_\text{MAP} = \arg\max_w p(w|{\bf s})$. Solution:[comment]: ()$$w_{\text{MSE}} = \mathbb{E}\left\{w \mid {\bf s} \right\} = \frac{K + \alpha}{\beta + \sum_{k=0}^{K-1} x_k s_k}$$$$w_{\text{MAP}} = \text{mode}\{w \mid {\bf s}\} = \arg\max_w p(w|{\bf s}) = \frac{K + \alpha-1}{\beta + \sum_{k=0}^{K-1} x_k s_k}$$[comment]: () **5.** Compute the following estimators of $S$:$\qquad\widehat{s}_1 = \mathbb{E}\{s|w_\text{ML},x\}$$\qquad\widehat{s}_2 = \mathbb{E}\{s|w_\text{MSE},x\}$$\qquad\widehat{s}_3 = \mathbb{E}\{s|w_\text{MAP},x\}$ Solution:[comment]: ()Note that, for fixed $w$ and $x$, $p(s|x,w) = wx\exp(-wxs)$ is an exponential density with rate parameter $wx$, so $\mathbb{E}\{s|w,x\} = \frac{1}{wx}$. Therefore$$\widehat{s}_1 = \mathbb{E}\{s|w_\text{ML},x\} = \frac{1}{w_\text{ML}\, x} = \frac{\sum_{k=0}^{K-1} x_k s_k}{K\, x}$$$$\widehat{s}_2 = \mathbb{E}\{s|w_\text{MSE},x\} = \frac{1}{w_\text{MSE}\, x} = \frac{\beta + \sum_{k=0}^{K-1} x_k s_k}{(K+\alpha)\, x}$$$$\widehat{s}_3 = \mathbb{E}\{s|w_\text{MAP},x\} = \frac{1}{w_\text{MAP}\, x} = \frac{\beta + \sum_{k=0}^{K-1} x_k s_k}{(K+\alpha-1)\, x}$$ [comment]: ()
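(A small numerical check of the Exercise 5 expressions, using synthetic data: for fixed $x_k$ and $w$, each $s_k$ is exponential with rate $w x_k$, so it can be drawn with `np.random.exponential`. The values of $w$, $\alpha$, $\beta$ and $K$ below are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.RandomState(0)
w_true, alpha, beta, K = 2.0, 3.0, 1.0, 500
x = rng.uniform(0.5, 2.0, size=K)
s = rng.exponential(scale=1.0 / (w_true * x))   # p(s|x,w) = w x exp(-w x s)

sum_xs = np.sum(x * s)
w_ML = K / sum_xs
w_MSE = (K + alpha) / (beta + sum_xs)
w_MAP = (K + alpha - 1) / (beta + sum_xs)
print(w_ML, w_MSE, w_MAP)                        # all close to w_true = 2.0 for large K

# E{s | w, x} = 1/(w x): compare the empirical mean of s with the model-based prediction
print(np.mean(s), np.mean(1.0 / (w_ML * x)))
```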
However, we still needed some assumptions: a parametric model (i.e., polynomial function and a priori degree selection) and several parameters needed to be adjusted.Though we can recur to cross-validation, Bayesian inference opens the door to other strategies. - We could argue that rather than keeping single selections of these parameters, we could use simultaneously several sets of parameters (and/or several parametric forms), and average them in a probabilistic way ... (like we did with the models) - We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion  7.1 Model evidenceThe evidence of a model is defined as$$L = p({\bf s}~|~{\cal M})$$where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomia, the variance of the additive noise, and the a priori covariance matrix of the weightsApplying the Theorem of Total probability, we can compute the evidence of the model as$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$It is important to notice that these probability density functions are exactly the ones we computed on the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore: - $p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$ - $p({\bf w}~|~{\cal M})$ is the a priori distribution of the weights 7.2 Model selection via evidence maximization - As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived) - Alternatively, maximizing the evidence is normally good enough $${\cal M}_\text{ML} = \arg\max_{\cal M} p(s~|~{\cal M})$$ Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model 7.3 Example: Selection of the degree of the polynomiaFor the previous example we had (we consider a spherical Gaussian for the weights): - ${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},~\sigma_\varepsilon^2 {\bf I} \right)$ - ${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},~\sigma_p^2 {\bf I} \right)$ In this case, $p({\bf s}~|~{\cal M})$ follows also a Gaussian distribution, and it can be shown that - $L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$ If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$where $M$ denotes the length of vector ${\bf z}$ (the degree of the polynomia minus 1). The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomia
from math import pi n_points = 15 frec = 3 std_n = 0.2 max_degree = 12 #Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = 0.5 X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) #Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z=np.asmatrix(Z) #Evaluate the posterior evidence (the constant term uses n_points, the dimension of s) logE = [] for deg in range(max_degree): Z_iter = Z[:,:deg+1] logE_iter = -(n_points*np.log(2*pi)/2) \ -np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \ -S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2 logE.append(logE_iter[0,0]) plt.plot(np.array(range(max_degree))+1,logE) plt.xlabel('Model size M (degree + 1)') plt.ylabel('log evidence') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The above curve may change the position of its maximum from run to run.We conclude the notebook by plotting the result of the Bayesian inference for $M=6$
n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 degree = 5 #M-1 nplots = 6 #Prior distribution parameters sigma_eps = 0.1 mean_w = np.zeros((degree+1,)) sigma_p = .5 * np.eye(degree+1) X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) X_grid = np.linspace(-1,3,n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) #Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z=np.asmatrix(Z) #Compute posterior distribution parameters Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p)) posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2) posterior_mean = np.array(posterior_mean).flatten() #Plot the posterior mean #Note that polyval assumes the first element of weight vector is the coefficient of #the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(posterior_mean[::-1],X_grid) ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI') #Plot confidence intervals for the Bayesian Inference std_x = [] for el in X_grid: x_ast = np.array([el**k for k in range(degree+1)]) std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0])) std_x = np.array(std_x) plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x, alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=4, linestyle='dashdot', antialiased=True) #We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_iter = np.polyval(w_LS,X_grid) ax.plot(X_grid,S_grid_iter,'m-',label='LS regression') ax.set_xlim(-1,3) ax.set_ylim(S_tr[0]-2,S_tr[-1]+2) ax.legend(loc='best') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Goals 1. Learn to implement the Resnet V2 Block (Type - 1) using Monk - Monk's Keras - Monk's Pytorch - Monk's Mxnet 2. Use Monk's network debugger to create complex blocks 3. Understand how syntactically different it is to implement the same using - Traditional Keras - Traditional Pytorch - Traditional Mxnet Resnet V2 Block - Type 1 - Note: The block structure can have variations too; this is just an example
from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png')
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Table of contents[1. Install Monk](1)[2. Block basic Information](2) - [2.1) Visual structure](2-1) - [2.2) Layers in Branches](2-2)[3) Creating Block using monk visual debugger](3) - [3.1) Create the first branch](3-1) - [3.2) Create the second branch](3-2) - [3.3) Merge the branches](3-3) - [3.4) Debug the merged network](3-4) - [3.5) Compile the network](3-5) - [3.6) Visualize the network](3-6) - [3.7) Run data through the network](3-7) [4) Creating Block Using MONK one line API call](4) - [Mxnet Backend](4-1) - [Pytorch Backend](4-2) - [Keras Backend](4-3) [5) Appendix](5) - [Study Material](5-1) - [Creating block using traditional Mxnet](5-2) - [Creating block using traditional Pytorch](5-3) - [Creating block using traditional Keras](5-4) Install Monk - git clone https://github.com/Tessellate-Imaging/monk_v1.git - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt - (Select the requirements file as per OS and CUDA version)
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Imports
# Common import numpy as np import math import netron from collections import OrderedDict from functools import partial # Monk import os import sys sys.path.append("monk_v1/monk/");
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Block Information Visual structure
from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png')
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Layers in Branches - Number of branches: 2 - Common element - batchnorm -> relu - Branch 1 - conv_1x1 - Branch 2 - conv_3x3 -> batchnorm -> relu -> conv_3x3 - Branches merged using - Elementwise addition (See Appendix to read blogs on resnets) Creating Block using monk debugger
# Imports and setup a project # To use pytorch backend - replace gluon_prototype with pytorch_prototype # To use keras backend - replace gluon_prototype with keras_prototype from gluon_prototype import prototype # Create a sample project gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1");
Mxnet Version: 1.5.1 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Create the first branch
def first_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=stride)); return network; # Debug the branch branch_1 = first_branch(output_channels=128, stride=1) network = []; network.append(branch_1); gtf.debug_custom_model_design(network);
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Create the second branch
def second_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=3, stride=stride)); network.append(gtf.batch_normalization()); network.append(gtf.relu()); network.append(gtf.convolution(output_channels=output_channels, kernel_size=3, stride=1)); return network; # Debug the branch branch_2 = second_branch(output_channels=128, stride=1) network = []; network.append(branch_2); gtf.debug_custom_model_design(network);
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Merge the branches
def final_block(output_channels=128, stride=1): network = []; # Common elements network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Create subnetwork and add branches subnetwork = []; branch_1 = first_branch(output_channels=output_channels, stride=stride) branch_2 = second_branch(output_channels=output_channels, stride=stride) subnetwork.append(branch_1); subnetwork.append(branch_2); # Add merging element subnetwork.append(gtf.add()); # Add the subnetwork network.append(subnetwork); return network;
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Debug the merged network
final = final_block(output_channels=128, stride=1) network = []; network.append(final); gtf.debug_custom_model_design(network);
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Compile the network
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Model Details Loading pretrained model Model Loaded on device Model name: Custom Model Num of potentially trainable layers: 5 Num of actual trainable layers: 5
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Run data through the network
import mxnet as mx x = np.zeros((1, 3, 224, 224)); x = mx.nd.array(x); y = gtf.system_dict["local"]["model"].forward(x); print(x.shape, y.shape)
(1, 3, 224, 224) (1, 128, 224, 224)
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Visualize network using netron
gtf.Visualize_With_Netron(data_shape=(3, 224, 224))
Using Netron To Visualize Not compatible on kaggle Compatible only for Jupyter Notebooks Stopping http://localhost:8080 Serving 'model-symbol.json' at http://localhost:8080
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Creating Using MONK LOW code API Mxnet backend
from gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Mxnet Version: 1.5.1 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Details Loading pretrained model Model Loaded on device Model name: Custom Model Num of potentially trainable layers: 5 Num of actual trainable layers: 5
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Pytorch backend - Only the import changes
#Change gluon_prototype to pytorch_prototype from pytorch_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Pytorch Version: 1.2.0 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Details Loading pretrained model Model Loaded on device Model name: Custom Model Num layers in model: 5 Num trainable layers: 5
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Keras backend - Only the import changes
#Change gluon_prototype to keras_prototype from keras_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Keras Version: 2.2.5 Tensorflow Version: 1.12.0 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Details Loading pretrained model Model Loaded on device Model name: Custom Model Num layers in model: 10 Num trainable layers: 9
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Appendix Study links - https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec - https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691 - https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c - https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac Creating block using traditional Mxnet - Code credits - https://mxnet.incubator.apache.org/
# Traditional-Mxnet-gluon import mxnet as mx from mxnet.gluon import nn from mxnet.gluon.nn import HybridBlock, BatchNorm from mxnet.gluon.contrib.nn import HybridConcurrent, Identity from mxnet import gluon, init, nd def _conv3x3(channels, stride, in_channels): return nn.Conv2D(channels, kernel_size=3, strides=stride, padding=1, use_bias=False, in_channels=in_channels) class ResnetBlockV2(HybridBlock): def __init__(self, channels, stride, in_channels=0, last_gamma=False, norm_layer=BatchNorm, norm_kwargs=None, **kwargs): super(ResnetBlockV2, self).__init__(**kwargs) #Branch - 1 self.downsample = nn.Conv2D(channels, 1, stride, use_bias=False, in_channels=in_channels) # Branch - 2 self.bn1 = norm_layer(**({} if norm_kwargs is None else norm_kwargs)) self.conv1 = _conv3x3(channels, stride, in_channels) if not last_gamma: self.bn2 = norm_layer(**({} if norm_kwargs is None else norm_kwargs)) else: self.bn2 = norm_layer(gamma_initializer='zeros', **({} if norm_kwargs is None else norm_kwargs)) self.conv2 = _conv3x3(channels, 1, channels) def hybrid_forward(self, F, x): residual = x x = self.bn1(x) x = F.Activation(x, act_type='relu') residual = self.downsample(x) x = self.conv1(x) x = self.bn2(x) x = F.Activation(x, act_type='relu') x = self.conv2(x) return x + residual # Invoke the block block = ResnetBlockV2(64, 1) # Initialize network and load block on machine ctx = [mx.cpu()]; block.initialize(init.Xavier(), ctx = ctx); block.collect_params().reset_ctx(ctx) block.hybridize() # Run data through network x = np.zeros((1, 3, 224, 224)); x = mx.nd.array(x); y = block.forward(x); print(x.shape, y.shape) # Export Model to Load on Netron block.export("final", epoch=0); netron.start("final-symbol.json", port=8082)
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Creating block using traditional Pytorch - Code credits - https://pytorch.org/
# Traiditional-Pytorch import torch from torch import nn from torch.jit.annotations import List import torch.nn.functional as F def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) def conv1x1(in_planes, out_planes, stride=1): """1x1 convolution""" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) class ResnetBlock(nn.Module): expansion = 1 __constants__ = ['downsample'] def __init__(self, inplanes, planes, stride=1, groups=1, base_width=64, dilation=1, norm_layer=None): super(ResnetBlock, self).__init__() norm_layer = nn.BatchNorm2d # Common Element self.bn0 = norm_layer(inplanes) self.relu0 = nn.ReLU(inplace=True) # Branch - 1 self.downsample = conv1x1(inplanes, planes, stride) # Branch - 2 self.conv1 = conv3x3(inplanes, planes, stride) self.bn1 = norm_layer(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.stride = stride def forward(self, x): x = self.bn0(x); x = self.relu0(x); identity = self.downsample(x) out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out += identity out = self.relu(out) return out # Invoke the block block = ResnetBlock(3, 64, stride=1); # Initialize network and load block on machine layers = [] layers.append(block); net = nn.Sequential(*layers); # Run data through network x = torch.randn(1, 3, 224, 224) y = net(x) print(x.shape, y.shape); # Export Model to Load on Netron torch.onnx.export(net, # model being run x, # model input (or a tuple for multiple inputs) "model.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes 'output' : {0 : 'batch_size'}}) netron.start('model.onnx', port=9998);
torch.Size([1, 3, 224, 224]) torch.Size([1, 64, 224, 224]) Serving 'model.onnx' at http://localhost:9998
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
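The same check works for the PyTorch version (again only a sketch, assuming the snippet above has been run): with `stride=2`, both the `conv1x1` shortcut and `conv1` downsample, so the output has half the spatial size.

```python
# Sanity-check sketch (assumes ResnetBlock and torch from the snippet above are in scope).
block_s2 = ResnetBlock(3, 64, stride=2)
with torch.no_grad():
    y2 = block_s2(torch.randn(1, 3, 224, 224))
print(y2.shape)  # expected: torch.Size([1, 64, 112, 112])
```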
Creating block using traditional Keras - Code credits: https://keras.io/
# Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
import netron
from keras.models import Model
from keras import layers

backend = 'channels_last'


def resnet_conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
    filters1, filters2, filters3 = filters
    bn_axis = 3
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Common Element
    start = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '0a')(input_tensor)
    start = layers.Activation('relu')(start)

    # Branch - 1
    shortcut = layers.Conv2D(filters3, (1, 1), strides=strides,
                             kernel_initializer='he_normal',
                             name=conv_name_base + '1')(start)

    # Branch - 2
    x = layers.Conv2D(filters1, (1, 1), strides=strides,
                      kernel_initializer='he_normal',
                      name=conv_name_base + '2a')(start)
    x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
    x = layers.Activation('relu')(x)

    x = layers.Conv2D(filters2, kernel_size, padding='same',
                      kernel_initializer='he_normal',
                      name=conv_name_base + '2b')(x)

    x = layers.add([x, shortcut])
    x = layers.Activation('relu')(x)
    return x


def create_model(input_shape, kernel_size, filters, stage, block):
    img_input = layers.Input(shape=input_shape)
    x = resnet_conv_block(img_input, kernel_size, filters, stage, block)
    return Model(img_input, x)


# Invoke the block
kernel_size = 3
filters = [64, 64, 64]
input_shape = (224, 224, 3)
model = create_model(input_shape, kernel_size, filters, 0, "0")

# Run data through network
# NOTE: tf.placeholder is TensorFlow 1.x (graph-mode) API
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
y = model(x)
print(x.shape, y.shape)

# Export Model to Load on Netron
model.save("final.h5")
netron.start("final.h5", port=8082)
(1, 224, 224, 3) (1, 112, 112, 64) Stopping http://localhost:8082 Serving 'final.h5' at http://localhost:8082
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Equations of General Relativistic Hydrodynamics (GRHD)

Authors: Zach Etienne & Patrick Nelson

This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`.

**Notebook Status:** Self-Validated

**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**

Introduction

We write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf)):

\begin{eqnarray}
\partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\
\partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\
\partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i},
\end{eqnarray}

where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid:

$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$

the $s$ source term is given in terms of ADM quantities via

$$s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij} - \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right],$$

and

\begin{align}
v^j &= \frac{u^j}{u^0} \\
\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\
h &= 1 + \epsilon + \frac{P}{\rho_0}.
\end{align}

Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations (written out for reference just after Step 1 below).

Thus the full set of input variables includes:
* Spacetime quantities:
    * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$
* Hydrodynamical quantities:
    * Rest-mass density $\rho_0$
    * Pressure $P$
    * Internal energy $\epsilon$
    * 4-velocity $u^\mu$

For completeness, the rest of the conservative variables are given by

\begin{align}
\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\
\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i
\end{align}

A Note on Notation

As is standard in NRPy+,
* Greek indices refer to four-dimensional quantities where the zeroth component indicates the temporal (time) component.
* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.

For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices:

```python
T4EMUU = ixp.zerorank2(DIM=4)
for mu in range(4):
    for nu in range(4):
        # Term 1: b^2 u^{\mu} u^{\nu}
        T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu]
```

When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices:

```python
betaD = ixp.zerorank1(DIM=3)
for i in range(3):
    for j in range(3):
        betaD[i] += gammaDD[i][j] * betaU[j]
```

As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: a Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook).
This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$:

```python
# \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2
for i in range(3):
    for mu in range(4):
        for nu in range(4):
            S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2
```

Table of Contents
$$\label{toc}$$

Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows:

1. [Step 1](importmodules): Import needed NRPy+ & Python modules
1. [Step 2](stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$:
    * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**
1. [Step 3](primtoconserv): Writing the conservative variables in terms of the primitive variables:
    * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()**
1. [Step 4](grhdfluxes): Define the fluxes for the GRHD equations
    1. [Step 4.a](rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations:
        * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**
    1. [Step 4.b](taustildesourceterms): Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations:
        * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()**
1. [Step 5](grhdsourceterms): Define source terms on RHSs of GRHD equations
    1. [Step 5.a](ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation:
        * **compute_s_source_term()**
    1. [Step 5.b](stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation
        1. [Step 5.b.i](fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives:
            * **compute_g4DD_zerotimederiv_dD()**
        1. [Step 5.b.ii](stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$:
            * **compute_S_tilde_source_termD()**
1. [Step 6](convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson):
    * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()**
1. [Step 7](declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations
1. [Step 8](code_validation): Code Validation against `GRHD.equations` NRPy+ module
1. [Step 9](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file

Step 1: Import needed NRPy+ & Python modules \[Back to [top](toc)\]
$$\label{importmodules}$$
# Step 1: Import needed core NRPy+ modules from outputC import nrpyAbs # NRPy+: Core C code output module import NRPy_param_funcs as par # NRPy+: parameter interface import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
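For reference, the "standard equations" mentioned in the introduction (the relations implemented by `BSSN.ADMBSSN_tofrom_4metric` and used in the next step) express the 4-metric in terms of the ADM lapse, shift, and 3-metric as

$$
g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta_k \beta^k & \beta_i \\ \beta_j & \gamma_{ij} \end{pmatrix},
\qquad
g^{\mu\nu} = \begin{pmatrix} -\frac{1}{\alpha^2} & \frac{\beta^i}{\alpha^2} \\ \frac{\beta^j}{\alpha^2} & \gamma^{ij} - \frac{\beta^i \beta^j}{\alpha^2} \end{pmatrix},
$$

where $\beta_i = \gamma_{ij}\beta^j$ and $\gamma^{ij}$ is the inverse of the 3-metric $\gamma_{ij}$.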
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]
$$\label{stressenergy}$$

Recall from above that

$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$

where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also

$$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}.$$
# Step 2.a: First define h, the enthalpy: def compute_enthalpy(rho_b,P,epsilon): global h h = 1 + epsilon + P/rho_b # Step 2.b: Define T^{mu nu} (a 4-dimensional tensor) def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U): global T4UU compute_enthalpy(rho_b,P,epsilon) # Then define g^{mu nu} in terms of the ADM quantities: import BSSN.ADMBSSN_tofrom_4metric as AB4m AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) # Finally compute T^{mu nu} T4UU = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu] # Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor) def compute_T4UD(gammaDD,betaU,alpha, T4UU): global T4UD # Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux. # First we'll need g_{alpha nu} in terms of ADM quantities: import BSSN.ADMBSSN_tofrom_4metric as AB4m AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) T4UD = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): for delta in range(4): T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu]
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
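As a minimal usage sketch (assuming the function definitions above and the Step 1 imports are in the same session, with NRPy+ on the path), one can declare symbolic inputs by hand and build $T^{\mu\nu}$ and $T^\mu{}_\nu$ directly:

```python
# Minimal usage sketch: declare symbolic ADM and hydrodynamic inputs, then build
# T^{mu nu} and T^mu_nu with the functions defined above.
gammaDD = ixp.declarerank2("gammaDD", "sym01")
betaU   = ixp.declarerank1("betaU")
alpha, rho_b, P, epsilon = sp.symbols("alpha rho_b P epsilon", real=True)
u4U     = ixp.declarerank1("u4U", DIM=4)

compute_T4UU(gammaDD, betaU, alpha, rho_b, P, epsilon, u4U)
compute_T4UD(gammaDD, betaU, alpha, T4UU)
print(sp.simplify(T4UD[0][0]))  # T^0_0 in terms of the declared symbols
```

Step 7 of the full notebook does essentially this, declaring the input variables and chaining all of the compute_* functions together.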
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]
$$\label{primtoconserv}$$

Recall from above that the conservative variables may be written as

\begin{align}
\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\
\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\
\tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i
\end{align}

$T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined, all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$ `sqrtgammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric.
# Step 3: Writing the conservative variables in terms of the primitive variables def compute_sqrtgammaDET(gammaDD): global sqrtgammaDET gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD) sqrtgammaDET = sp.sqrt(gammaDET) def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U): global rho_star # Compute rho_star: rho_star = alpha*sqrtgammaDET*rho_b*u4U[0] def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star): global tau_tilde tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star def compute_S_tildeD(alpha, sqrtgammaDET, T4UD): global S_tildeD S_tildeD = ixp.zerorank1(DIM=3) for i in range(3): S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1]
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
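A quick flat-space sanity check (again just a sketch, assuming the definitions above are in scope): with $\alpha=1$, $\beta^i=0$ and $\gamma_{ij}=\delta_{ij}$ we have $\sqrt{\gamma}=1$, so $\rho_*$ should reduce to $\rho_0 u^0$.

```python
# Flat-space sanity-check sketch: alpha=1, beta^i=0, gamma_ij=delta_ij.
flat_gammaDD = ixp.zerorank2()
for i in range(3):
    flat_gammaDD[i][i] = sp.sympify(1)

compute_sqrtgammaDET(flat_gammaDD)
print(sqrtgammaDET)  # expect 1

rho_b = sp.symbols("rho_b", positive=True)
u4U   = ixp.declarerank1("u4U", DIM=4)
compute_rho_star(sp.sympify(1), sqrtgammaDET, rho_b, u4U)
print(sp.simplify(rho_star - rho_b*u4U[0]))  # expect 0
```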
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]
$$\label{grhdfluxes}$$

Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]
$$\label{rhostarfluxterm}$$

Recall from above that

$$\partial_t \rho_* + \partial_j \left(\rho_* v^j\right) = 0.$$

Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$:
# Step 4: Define the fluxes for the GRHD equations # Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U def compute_vU_from_u4U__no_speed_limit(u4U): global vU # Now compute v^i = u^i/u^0: vU = ixp.zerorank1(DIM=3) for j in range(3): vU[j] = u4U[j+1]/u4U[0] # Step 4.b: rho_star flux def compute_rho_star_fluxU(vU, rho_star): global rho_star_fluxU rho_star_fluxU = ixp.zerorank1(DIM=3) for j in range(3): rho_star_fluxU[j] = rho_star*vU[j]
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
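Continuing the sketch from the previous steps (assuming `u4U` and `rho_star` are still in scope), the flux functions chain together in the obvious way:

```python
# Usage sketch: build v^i from u^mu, then the rho_star flux.
compute_vU_from_u4U__no_speed_limit(u4U)
compute_rho_star_fluxU(vU, rho_star)
print(sp.simplify(rho_star_fluxU[0] - rho_star*u4U[1]/u4U[0]))  # expect 0
```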