Dataset columns (string lengths):

| column | min | max |
|---|---|---|
| markdown | 0 | 1.02M |
| code | 0 | 832k |
| output | 0 | 1.02M |
| license | 3 | 36 |
| path | 6 | 265 |
| repo_name | 6 | 127 |
Predictions are made on a dataframe with a column `ds` containing the dates for which predictions are to be made. The `make_future_dataframe` function takes the model object and a number of periods to forecast and produces a suitable dataframe. By default it will also include the historical dates so we can evaluate in-sample fit.
%%R
future <- make_future_dataframe(m, periods = 365)
tail(future)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
As with most modeling procedures in R, we use the generic `predict` function to get our forecast. The `forecast` object is a dataframe with a column `yhat` containing the forecast. It has additional columns for uncertainty intervals and seasonal components.
%%R
forecast <- predict(m, future)
tail(forecast[c('ds', 'yhat', 'yhat_lower', 'yhat_upper')])
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
You can use the generic `plot` function to plot the forecast, by passing in the model and the forecast dataframe.
%%R -w 10 -h 6 -u in
plot(m, forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
You can use the `prophet_plot_components` function to see the forecast broken down into trend, weekly seasonality, and yearly seasonality.
%%R -w 9 -h 9 -u in
prophet_plot_components(m, forecast)
_____no_output_____
MIT
notebooks/quick_start.ipynb
timgates42/prophet
BONUS: Elbow plot - finding the elbow to decide on the number of clusters
#find error for 1-10 clusters
sse = []
k_rng = range(1,10)
for k in k_rng:
    km = KMeans(n_clusters=k)
    km.fit(df[['Age','Income($)']])
    sse.append(km.inertia_)

#plot errors and find "elbow"
plt.xlabel('K')
plt.ylabel('Sum of squared error')
plt.plot(k_rng,sse)
_____no_output_____
MIT
k-mean.ipynb
pawel-krawczyk/machine_learning_basic
Reading Data
df = pd.read_csv('data.csv') df.head() df.tail() df.info() df['fever'].value_counts() df['diffBreath'].value_counts() df.describe()
_____no_output_____
MIT
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
Train Test Splitting
import numpy as np def data_split(data, ratio): np.random.seed(42) shuffled = np.random.permutation(len(data)) test_set_size = int(len(data) * ratio) test_indices = shuffled[:test_set_size] train_indices = shuffled[test_set_size:] return data.iloc[train_indices], data.iloc[test_indices] np.random.permutation(7) train, test = data_split(df, 0.2) train test X_train = train[['fever', 'bodyPain', 'age', 'runnyNose', 'diffBreath']].to_numpy() X_test = test[['fever', 'bodyPain', 'age', 'runnyNose', 'diffBreath']].to_numpy() Y_train = train[['infectionProb']].to_numpy().reshape(2400,) Y_test = test[['infectionProb']].to_numpy().reshape(599,) Y_train from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf.fit(X_train, Y_train) inputFeatures = [101, 1, 22, -1, 1] infProb =clf.predict_proba([inputFeatures])[0][1] infProb
_____no_output_____
MIT
.ipynb_checkpoints/Corona-checkpoint.ipynb
ayushman17/COVID-19-Detector
TensorFlow 2 Complete Project Workflow in Amazon SageMaker

Data Preprocessing -> Code Prototyping -> Automatic Model Tuning -> Deployment

1. [Introduction](#Introduction)
2. [SageMaker Processing for dataset transformation](#SageMakerProcessing)
3. [Local Mode training](#LocalModeTraining)
4. [Local Mode endpoint](#LocalModeEndpoint)
5. [SageMaker hosted training](#SageMakerHostedTraining)
6. [Automatic Model Tuning](#AutomaticModelTuning)
7. [SageMaker hosted endpoint](#SageMakerHostedEndpoint)
8. [Workflow Automation with the Step Functions Data Science SDK](#WorkflowAutomation)
   1. [Add an IAM policy to your SageMaker role](#IAMPolicy)
   2. [Create an execution role for Step Functions](#CreateExecutionRole)
   3. [Set up a TrainingPipeline](#TrainingPipeline)
   4. [Visualizing the workflow](#VisualizingWorkflow)
   5. [Creating and executing the pipeline](#CreatingExecutingPipeline)
   6. [Cleanup](#Cleanup)
9. [Extensions](#Extensions)

***Prerequisite: To run the Local Mode sections of this example, use a SageMaker Notebook Instance; otherwise skip those sections (for example, if you're using SageMaker Studio instead).***

Introduction

If you are using TensorFlow 2, you can use the Amazon SageMaker prebuilt TensorFlow 2 container with training scripts similar to those you would use outside SageMaker. This feature is named Script Mode. Using Script Mode and other SageMaker features, you can build a complete workflow for a TensorFlow 2 project. This notebook presents such a workflow, including all key steps such as preprocessing data with SageMaker Processing, code prototyping with SageMaker Local Mode training and inference, and production-ready model training and deployment with SageMaker hosted training and inference. Automatic Model Tuning in SageMaker is used to tune the model's hyperparameters. Additionally, the [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/readmelink.html) is used to automate the main training and deployment steps for use in a production workflow outside notebooks.

To enable you to run this notebook within a reasonable time (typically less than an hour), this notebook's use case is a straightforward regression task: predicting house prices based on the well-known Boston Housing dataset. This public dataset contains 13 features regarding the housing stock of towns in the Boston area. Features include average number of rooms, accessibility to radial highways, adjacency to the Charles River, etc.

To begin, we'll import some necessary packages and set up directories for local training and test data. We'll also set up a SageMaker Session to perform various operations, and specify an Amazon S3 bucket to hold input data and output. The default bucket used here is created by SageMaker if it doesn't already exist, and is named in accordance with the AWS account ID and AWS Region.
import os import sagemaker import tensorflow as tf sess = sagemaker.Session() bucket = sess.default_bucket() data_dir = os.path.join(os.getcwd(), 'data') os.makedirs(data_dir, exist_ok=True) train_dir = os.path.join(os.getcwd(), 'data/train') os.makedirs(train_dir, exist_ok=True) test_dir = os.path.join(os.getcwd(), 'data/test') os.makedirs(test_dir, exist_ok=True) raw_dir = os.path.join(os.getcwd(), 'data/raw') os.makedirs(raw_dir, exist_ok=True)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker Processing for dataset transformation Next, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. First we'll load the Boston Housing dataset, save the raw feature data and upload it to Amazon S3 for transformation by SageMaker Processing. We'll also save the labels for training and testing.
import numpy as np from tensorflow.python.keras.datasets import boston_housing from sklearn.preprocessing import StandardScaler (x_train, y_train), (x_test, y_test) = boston_housing.load_data() np.save(os.path.join(raw_dir, 'x_train.npy'), x_train) np.save(os.path.join(raw_dir, 'x_test.npy'), x_test) np.save(os.path.join(train_dir, 'y_train.npy'), y_train) np.save(os.path.join(test_dir, 'y_test.npy'), y_test) s3_prefix = 'tf-2-workflow' rawdata_s3_prefix = '{}/data/raw'.format(s3_prefix) raw_s3 = sess.upload_data(path='./data/raw/', key_prefix=rawdata_s3_prefix) print(raw_s3)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To use SageMaker Processing, simply supply a Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal contract: input and output data must be placed in specified directories. If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete.
%%writefile preprocessing.py import glob import numpy as np import os from sklearn.preprocessing import StandardScaler if __name__=='__main__': input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input')) print('\nINPUT FILE LIST: \n{}\n'.format(input_files)) scaler = StandardScaler() for file in input_files: raw = np.load(file) transformed = scaler.fit_transform(raw) if 'train' in file: output_path = os.path.join('/opt/ml/processing/train', 'x_train.npy') np.save(output_path, transformed) print('SAVED TRANSFORMED TRAINING DATA FILE\n') else: output_path = os.path.join('/opt/ml/processing/test', 'x_test.npy') np.save(output_path, transformed) print('SAVED TRANSFORMED TEST DATA FILE\n')
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Although the Boston Housing dataset is quite small, we'll use two instances to showcase how easy it is to spin up a cluster for SageMaker Processing.
from sagemaker import get_execution_role from sagemaker.sklearn.processing import SKLearnProcessor sklearn_processor = SKLearnProcessor(framework_version='0.20.0', role=get_execution_role(), instance_type='ml.m5.xlarge', instance_count=2)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We're now ready to run the Processing job. To enable distributing the data files equally among the instances, we specify the `ShardedByS3Key` distribution type in the `ProcessingInput` object. This ensures that if we have `n` instances, each instance will receive `1/n` files from the specified S3 bucket. It may take around 3 minutes for the following code cell to run, mainly to set up the cluster. At the end of the job, the cluster automatically will be torn down by SageMaker.
from sagemaker.processing import ProcessingInput, ProcessingOutput from time import gmtime, strftime processing_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime())) output_destination = 's3://{}/{}/data'.format(bucket, s3_prefix) sklearn_processor.run(code='preprocessing.py', job_name=processing_job_name, inputs=[ProcessingInput( source=raw_s3, destination='/opt/ml/processing/input', s3_data_distribution_type='ShardedByS3Key')], outputs=[ProcessingOutput(output_name='train', destination='{}/train'.format(output_destination), source='/opt/ml/processing/train'), ProcessingOutput(output_name='test', destination='{}/test'.format(output_destination), source='/opt/ml/processing/test')]) preprocessing_job_description = sklearn_processor.jobs[-1].describe()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
In the log output of the SageMaker Processing job above, you should be able to see logs in two different colors for the two different instances, and that each instance received different files. Without the `ShardedByS3Key` distribution type, each instance would have received a copy of **all** files. By spreading the data equally among `n` instances, you should receive a speedup by approximately a factor of `n` for most stateless data transformations. After saving the job results locally, we'll move on to prototyping training and inference code with Local Mode.
train_in_s3 = '{}/train/x_train.npy'.format(output_destination) test_in_s3 = '{}/test/x_test.npy'.format(output_destination) !aws s3 cp {train_in_s3} ./data/train/x_train.npy !aws s3 cp {test_in_s3} ./data/test/x_test.npy
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Local Mode training Local Mode in Amazon SageMaker is a convenient way to make sure your code is working locally as expected before moving on to full scale, hosted training in a separate, more powerful SageMaker-managed cluster. To train in Local Mode, it is necessary to have docker-compose or nvidia-docker-compose (for GPU instances) installed. Running the following commands will install docker-compose or nvidia-docker-compose, and configure the notebook environment for you.
!wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/local_mode_setup.sh !wget -q https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-script-mode/master/daemon.json !/bin/bash ./local_mode_setup.sh
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Next, we'll set up a TensorFlow Estimator for Local Mode training. Key parameters for the Estimator include:

- `train_instance_type`: the kind of hardware on which training will run. In the case of Local Mode, we simply set this parameter to `local` to invoke Local Mode training on the CPU, or to `local_gpu` if the instance has a GPU.
- `git_config`: to make sure training scripts are source controlled for coordinated, shared use by a team, the Estimator can pull in the code from a Git repository rather than local directories.
- Other parameters of note: the algorithm's hyperparameters, which are passed in as a dictionary, and a Boolean parameter indicating that we are using Script Mode.

Recall that we are using Local Mode here mainly to make sure our code is working. Accordingly, instead of performing a full cycle of training with many epochs (passes over the full dataset), we'll train only for a small number of epochs just to confirm the code is working properly and avoid wasting full-scale training time unnecessarily.
from sagemaker.tensorflow import TensorFlow git_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode', 'branch': 'master'} model_dir = '/opt/ml/model' train_instance_type = 'local' hyperparameters = {'epochs': 5, 'batch_size': 128, 'learning_rate': 0.01} local_estimator = TensorFlow(git_config=git_config, source_dir='tf-2-workflow/train_model', entry_point='train.py', model_dir=model_dir, instance_type=train_instance_type, instance_count=1, hyperparameters=hyperparameters, role=sagemaker.get_execution_role(), base_job_name='tf-2-workflow', framework_version='2.2', py_version='py37', script_mode=True)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The `fit` method call below starts the Local Mode training job. Metrics for training will be logged below the code, inside the notebook cell. You should observe the validation loss decrease substantially over the five epochs, with no training errors, which is a good indication that our training code is working as expected.
inputs = {'train': f'file://{train_dir}', 'test': f'file://{test_dir}'} local_estimator.fit(inputs)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Local Mode endpoint

While Amazon SageMaker's Local Mode training is very useful for making sure your training code is working before moving on to full-scale hosted training, it is also useful to have a convenient way to test your model locally before incurring the time and expense of deploying it to production. One possibility is to fetch the TensorFlow SavedModel artifact or a model checkpoint saved in Amazon S3 and load it in your notebook for testing. However, an even easier way is to use the SageMaker Python SDK to do this work for you by setting up a Local Mode endpoint.

More specifically, the Estimator object from the Local Mode training job can be used to deploy a model locally. With one exception, this code is the same as the code you would use to deploy to production. In particular, all you need to do is invoke the local Estimator's deploy method, and, similarly to Local Mode training, specify the instance type as either `local_gpu` or `local` depending on whether your notebook is on a GPU instance or a CPU instance.

Just in case there are other inference containers running in Local Mode, we'll stop them to avoid conflicts before deploying our new model locally.
!docker container stop $(docker container ls -aq) >/dev/null
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The following single line of code deploys the model locally in the SageMaker TensorFlow Serving container:
local_predictor = local_estimator.deploy(initial_instance_count=1, instance_type='local')
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To get predictions from the Local Mode endpoint, simply invoke the Predictor's predict method.
local_results = local_predictor.predict(x_test[:10])['predictions']
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
As a sanity check, the predictions can be compared against the actual target values.
local_preds_flat_list = [float('%.1f'%(item)) for sublist in local_results for item in sublist] print('predictions: \t{}'.format(np.array(local_preds_flat_list))) print('target values: \t{}'.format(y_test[:10].round(decimals=1)))
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We only trained the model for a few epochs and there is much room for improvement, but the predictions so far should at least appear reasonably within the ballpark. To avoid having the SageMaker TensorFlow Serving container indefinitely running locally, simply gracefully shut it down by calling the `delete_endpoint` method of the Predictor object.
local_predictor.delete_endpoint()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker hosted training Now that we've confirmed our code is working locally, we can move on to use SageMaker's hosted training functionality. Hosted training is preferred for doing actual training, especially large-scale, distributed training. Unlike Local Mode training, for hosted training the actual training itself occurs not on the notebook instance, but on a separate cluster of machines managed by SageMaker. Before starting hosted training, the data must be in S3, or an EFS or FSx for Lustre file system. We'll upload to S3 now, and confirm the upload was successful.
s3_prefix = 'tf-2-workflow' traindata_s3_prefix = '{}/data/train'.format(s3_prefix) testdata_s3_prefix = '{}/data/test'.format(s3_prefix) train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix) test_s3 = sess.upload_data(path='./data/test/', key_prefix=testdata_s3_prefix) inputs = {'train':train_s3, 'test': test_s3} print(inputs)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We're now ready to set up an Estimator object for hosted training. It is similar to the Local Mode Estimator, except the `train_instance_type` has been set to a SageMaker ML instance type instead of `local` for Local Mode. Also, since we know our code is working now, we'll train for a larger number of epochs with the expectation that model training will converge to an improved, lower validation loss.

With these two changes, we simply call `fit` to start the actual hosted training.
train_instance_type = 'ml.c5.xlarge' hyperparameters = {'epochs': 30, 'batch_size': 128, 'learning_rate': 0.01} git_config = {'repo': 'https://github.com/aws-samples/amazon-sagemaker-script-mode', 'branch': 'master'} estimator = TensorFlow(git_config=git_config, source_dir='tf-2-workflow/train_model', entry_point='train.py', model_dir=model_dir, instance_type=train_instance_type, instance_count=1, hyperparameters=hyperparameters, role=sagemaker.get_execution_role(), base_job_name='tf-2-workflow', framework_version='2.2', py_version='py37', script_mode=True)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
After starting the hosted training job with the `fit` method call below, you should observe the training converge over the longer number of epochs to a validation loss that is considerably lower than that which was achieved in the shorter Local Mode training job. Can we do better? We'll look into a way to do so in the **Automatic Model Tuning** section below.
estimator.fit(inputs)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
As with Local Mode training, hosted training produces a model saved in S3 that we can retrieve. This is an example of the modularity of SageMaker: having trained the model in SageMaker, you can now take the model out of SageMaker and run it anywhere else. Alternatively, you can deploy the model into a production-ready environment using SageMaker's hosted endpoints functionality, as shown in the **SageMaker hosted endpoint** section below.

Retrieving the model from S3 is very easy: the hosted training estimator you created above stores a reference to the model's location in S3. You simply copy the model from S3 using the estimator's `model_data` property and unzip it to inspect the contents.
!aws s3 cp {estimator.model_data} ./model/model.tar.gz
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The unzipped archive should include the assets required by TensorFlow Serving to load the model and serve it, including a .pb file:
!tar -xvzf ./model/model.tar.gz -C ./model
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Automatic Model Tuning

So far we have simply run one Local Mode training job and one hosted training job without any real attempt to tune hyperparameters to produce a better model, other than increasing the number of epochs. Selecting the right hyperparameter values to train your model can be difficult, and typically is very time consuming if done manually. The right combination of hyperparameters is dependent on your data and algorithm; some algorithms have many different hyperparameters that can be tweaked; some are very sensitive to the hyperparameter values selected; and most have a non-linear relationship between model fit and hyperparameter values. SageMaker Automatic Model Tuning helps automate the hyperparameter tuning process: it runs multiple training jobs with different hyperparameter combinations to find the set with the best model performance.

We begin by specifying the hyperparameters we wish to tune, and the range of values over which to tune each one. We also must specify an objective metric to be optimized: in this use case, we'd like to minimize the validation loss.
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner hyperparameter_ranges = { 'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type="Logarithmic"), 'epochs': IntegerParameter(10, 50), 'batch_size': IntegerParameter(64, 256), } metric_definitions = [{'Name': 'loss', 'Regex': ' loss: ([0-9\\.]+)'}, {'Name': 'val_loss', 'Regex': ' val_loss: ([0-9\\.]+)'}] objective_metric_name = 'val_loss' objective_type = 'Minimize'
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Next we specify a HyperparameterTuner object that takes the above definitions as parameters. Each tuning job must be given a budget: a maximum number of training jobs. A tuning job will complete after that many training jobs have been executed. We also can specify how much parallelism to employ, in this case five jobs, meaning that the tuning job will complete after three series of five jobs in parallel have completed. For the default Bayesian Optimization tuning strategy used here, the tuning search is informed by the results of previous groups of training jobs, so we don't run all of the jobs in parallel, but rather divide the jobs into groups of parallel jobs. There is a trade-off: using more parallel jobs will finish tuning sooner, but likely will sacrifice tuning search accuracy.

Now we can launch a hyperparameter tuning job by calling the `fit` method of the HyperparameterTuner object. The tuning job may take around 10 minutes to finish. While you're waiting, the status of the tuning job, including metadata and results for individual training jobs within the tuning job, can be checked in the SageMaker console in the **Hyperparameter tuning jobs** panel.
tuner = HyperparameterTuner(estimator, objective_metric_name, hyperparameter_ranges, metric_definitions, max_jobs=15, max_parallel_jobs=5, objective_type=objective_type) tuning_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime())) tuner.fit(inputs, job_name=tuning_job_name) tuner.wait()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
After the tuning job is finished, we can use the `HyperparameterTuningJobAnalytics` object from the SageMaker Python SDK to list the top 5 tuning jobs with the best performance. Although the results vary from tuning job to tuning job, the best validation loss from the tuning job (under the FinalObjectiveValue column) likely will be substantially lower than the validation loss from the hosted training job above, where we did not perform any tuning other than manually increasing the number of epochs once.
tuner_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name) tuner_metrics.dataframe().sort_values(['FinalObjectiveValue'], ascending=True).head(5)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
The total training time and training job statuses can be checked with the following lines of code. Because automatic early stopping is off by default, all the training jobs should complete normally. For an example of a more in-depth analysis of a tuning job, see the SageMaker official sample [HPO_Analyze_TuningJob_Results.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) notebook.
total_time = tuner_metrics.dataframe()['TrainingElapsedTimeSeconds'].sum() / 3600 print("The total training time is {:.2f} hours".format(total_time)) tuner_metrics.dataframe()['TrainingJobStatus'].value_counts()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
SageMaker hosted endpoint

Assuming the best model from the tuning job is better than the model produced by the individual hosted training job above, we could now easily deploy that model to production. A convenient option is to use a SageMaker hosted endpoint, which serves real-time predictions from the trained model (Batch Transform jobs are also available for asynchronous, offline predictions on large datasets). The endpoint will retrieve the TensorFlow SavedModel created during training and deploy it within a SageMaker TensorFlow Serving container. This can all be accomplished with one line of code.

More specifically, by calling the `deploy` method of the HyperparameterTuner object we instantiated above, we can directly deploy the best model from the tuning job to a SageMaker hosted endpoint. It will take several minutes longer to deploy the model to the hosted endpoint compared to the Local Mode endpoint, which is more useful for fast prototyping of inference code.
tuning_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
We can compare the predictions generated by this endpoint with those generated locally by the Local Mode endpoint:
results = tuning_predictor.predict(x_test[:10])['predictions'] flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist] print('predictions: \t{}'.format(np.array(flat_list))) print('target values: \t{}'.format(y_test[:10].round(decimals=1)))
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
To avoid billing charges from stray resources, you can delete the prediction endpoint to release its associated instance(s).
sess.delete_endpoint(tuning_predictor.endpoint_name)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Workflow Automation with the AWS Step Functions Data Science SDK

In the previous parts of this notebook, we prototyped various steps of a TensorFlow project within the notebook itself. Notebooks are great for prototyping, but generally are not used in production-ready machine learning pipelines. For example, a simple pipeline in SageMaker includes the following steps:

1. Training the model.
2. Creating a SageMaker Model object that wraps the model artifact for serving.
3. Creating a SageMaker Endpoint Configuration specifying how the model should be served (e.g. hardware type and amount).
4. Deploying the trained model to the configured SageMaker Endpoint.

The AWS Step Functions Data Science SDK automates the process of creating and running these kinds of workflows using AWS Step Functions and SageMaker. It does this by allowing you to create workflows using short, simple Python scripts that define workflow steps and chain them together. Under the hood, all the workflow steps are coordinated by AWS Step Functions without any need for you to manage the underlying infrastructure.

To begin, install the Step Functions Data Science SDK:
import sys !{sys.executable} -m pip install --quiet --upgrade stepfunctions
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Add an IAM policy to your SageMaker role

**If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following.

1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/).
2. Select **Notebook instances** and choose the name of your notebook instance.
3. Under **Permissions and encryption**, select the role ARN to view the role on the IAM console.
4. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.
5. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**.

If you are running this notebook in a local environment, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

Create an execution role for Step Functions

You also need to create an execution role for Step Functions to enable that service to access SageMaker and other service functionality.

1. Go to the [IAM console](https://console.aws.amazon.com/iam/).
2. Select **Roles** and then **Create role**.
3. Under **Choose the service that will use this role**, select **Step Functions**.
4. Choose **Next** until you can enter a **Role name**.
5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**.

Select your newly created role and attach a policy to it. The following steps attach a policy that provides full access to Step Functions; however, as a good practice you should only provide access to the resources you need.

1. Under the **Permissions** tab, click **Add inline policy**.
2. Enter the following in the **JSON** tab:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreateTransformJob", "sagemaker:DescribeTransformJob", "sagemaker:StopTransformJob",
        "sagemaker:CreateTrainingJob", "sagemaker:DescribeTrainingJob", "sagemaker:StopTrainingJob",
        "sagemaker:CreateHyperParameterTuningJob", "sagemaker:DescribeHyperParameterTuningJob", "sagemaker:StopHyperParameterTuningJob",
        "sagemaker:CreateModel", "sagemaker:CreateEndpointConfig", "sagemaker:CreateEndpoint",
        "sagemaker:DeleteEndpointConfig", "sagemaker:DeleteEndpoint", "sagemaker:UpdateEndpoint", "sagemaker:ListTags",
        "lambda:InvokeFunction", "sqs:SendMessage", "sns:Publish",
        "ecs:RunTask", "ecs:StopTask", "ecs:DescribeTasks",
        "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem",
        "batch:SubmitJob", "batch:DescribeJobs", "batch:TerminateJob",
        "glue:StartJobRun", "glue:GetJobRun", "glue:GetJobRuns", "glue:BatchStopJobRun"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "sagemaker.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["events:PutTargets", "events:PutRule", "events:DescribeRule"],
      "Resource": [
        "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
        "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
        "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
        "arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
        "arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
      ]
    }
  ]
}
```

3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`.
4. Choose **Create policy**.
You will be redirected to the details page for the role.
5. Copy the **Role ARN** at the top of the **Summary**.

Set up a TrainingPipeline

Although the AWS Step Functions Data Science SDK provides various primitives to build up pipelines from scratch, it also provides prebuilt templates for common workflows, including a [TrainingPipeline](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/pipelines.html#stepfunctions.template.pipeline.train.TrainingPipeline) object to simplify the creation of a basic pipeline that includes model training and deployment. The following code cell configures a `pipeline` object with the necessary parameters to define such a simple pipeline:
import stepfunctions
from stepfunctions.template.pipeline import TrainingPipeline

# paste the StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = "<execution-role-arn>"

pipeline = TrainingPipeline(
    estimator=estimator,
    role=workflow_execution_role,
    inputs=inputs,
    s3_bucket=bucket
)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Visualizing the workflow You can now view the workflow definition, and visualize it as a graph. This workflow and graph represent your training pipeline from starting a training job to deploying the model.
print(pipeline.workflow.definition.to_json(pretty=True)) pipeline.render_graph()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Creating and executing the pipeline Before the workflow can be run for the first time, the pipeline must be created using the `create` method:
pipeline.create()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Now the workflow can be started by invoking the pipeline's `execute` method:
execution = pipeline.execute()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Use the `list_executions` method to list all executions for the workflow you created, including the one we just started. After a pipeline is created, it can be executed as many times as needed, for example on a schedule for retraining on new data. (For purposes of this notebook just execute the workflow one time to save resources.) The output will include a list you can click through to access a view of the execution in the AWS Step Functions console.
pipeline.workflow.list_executions(html=True)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
While the workflow is running, you can check workflow progress inside this notebook with the `render_progress` method. This generates a snapshot of the current state of your workflow as it executes. This is a static image. Run the cell again to check progress while the workflow is running.
execution.render_progress()
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
BEFORE proceeding with the rest of the notebook:

Wait until the workflow completes with status **Succeeded**, which will take a few minutes. You can check the status with `render_progress` above, or open the **Inspect in AWS Step Functions** link in the cell output in a new browser tab.

To view the details of the completed workflow execution, from model training through deployment, use the `list_events` method, which lists all events in the workflow execution.
execution.list_events(reverse_order=True, html=False)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
From this list of events, we can extract the name of the endpoint that was set up by the workflow.
import re endpoint_name_suffix = re.search('endpoint\Wtraining\Wpipeline\W([a-zA-Z0-9\W]+?)"', str(execution.list_events())).group(1) print(endpoint_name_suffix)
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Once we have the endpoint name, we can use it to instantiate a TensorFlowPredictor object that wraps the endpoint. This TensorFlowPredictor can be used to make predictions, as shown in the following code cell.

BEFORE running the following code cell:

Go to the [SageMaker console](https://console.aws.amazon.com/sagemaker/), click **Endpoints** in the left panel, and make sure that the endpoint status is **InService**. If the status is **Creating**, wait until it changes, which may take several minutes.
from sagemaker.tensorflow import TensorFlowPredictor workflow_predictor = TensorFlowPredictor('training-pipeline-' + endpoint_name_suffix) results = workflow_predictor.predict(x_test[:10])['predictions'] flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist] print('predictions: \t{}'.format(np.array(flat_list))) print('target values: \t{}'.format(y_test[:10].round(decimals=1)))
_____no_output_____
Apache-2.0
tf-2-workflow/tf-2-workflow.ipynb
scott2b/amazon-sagemaker-script-mode
Example: compound interest

$A = P (1 + \frac{r}{n})^{nt}$

+ A - amount
+ P - principal
+ r - interest rate
+ n - number of times interest is compounded per unit of time t
+ t - time
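Before the interactive version below, here is a minimal sketch that evaluates the formula directly; the values (P = 1000, r = 0.05, n = 12, t = 10) are hypothetical, chosen only to illustrate the calculation.

```python
# Quick check of the compound interest formula with assumed example values
P = 1000.0   # principal
r = 0.05     # annual interest rate
n = 12       # compounding periods per year
t = 10       # years

A = P * (1 + r / n) ** (n * t)
print(f"Amount after {t} years: {A:.2f}")   # approximately 1647.01
```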
import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots import ipywidgets as widgets from ipywidgets import interactive def compound_interest_with_saving_rate(start_value, saving_per_month, interest_rate, duration_years): months = np.array(np.linspace(0, (12*duration_years), (12*duration_years)+1)) balance = np.array([(start_value+i*saving_per_month)*(1+interest_rate/12)**(i) for i in months]) principal = np.array([start_value + saving_per_month *i for i in months]) return months, balance, principal def visualize(start_value, saving_per_month, interest_rate, duration_years): months, balance, principle = compound_interest_with_saving_rate(start_value, saving_per_month, interest_rate, duration_years) print(months[-1], balance[-1], principle[-1]) fig = go.Figure() fig.add_trace(go.Scatter(x=months/12, y=balance, name="balance")) fig.add_trace(go.Scatter(x=months/12,y=principle,name="principle")) fig.update_xaxes(title_text="<b>years</b>") fig.update_yaxes(title_text="<b>balance</b>") fig.show() fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_trace(go.Scatter(x=months/12,y=balance-principle,name="interest"),secondary_y=False, ) fig.add_trace(go.Scatter(x=months/12,y=principle/balance,name="ratio"),secondary_y=True, ) fig.update_xaxes(title_text="<b>years</b>") fig.update_yaxes(title_text="<b>interest</b>", secondary_y=False) fig.update_yaxes(title_text="<b>ratio</b>", secondary_y=True) fig.show() interactive_plot = interactive(visualize, start_value=widgets.IntSlider(min=0, max=10000,step=100, value=1000), saving_per_month=widgets.IntSlider(min=0, max=1000,step=10, value=500), interest_rate=widgets.FloatSlider(min=-0.5,max=0.5, step=0.01,value=0.05), duration_years=widgets.IntSlider(min=0, max=50,step=1, value=10)) interactive_plot
_____no_output_____
MIT
plotly_widgets_compound_interest.ipynb
summiee/jupyter_demos
Parallelizing neural network training with TensorFlow

In this section we leave the mathematical fundamentals behind and focus on using TensorFlow, one of the most popular deep learning libraries, which implements neural networks far more efficiently than any pure NumPy implementation.

TensorFlow is a scalable, cross-platform programming interface for implementing and running machine learning algorithms more efficiently, since it can use both the CPU and the GPU; a GPU usually has many more processing cores than a CPU, which together deliver much higher throughput. The most fully developed API for this tool is the Python one, which is why many developers are drawn to this language.

First steps with TensorFlow

https://jakevdp.github.io/PythonDataScienceHandbook/02.01-understanding-data-types.html
# Creating tensors
# =============================================
import tensorflow as tf
import numpy as np

np.set_printoptions(precision=3)
a = np.array([1, 2, 3], dtype=np.int32)
b = [4, 5, 6]
t_a = tf.convert_to_tensor(a)
t_b = tf.convert_to_tensor(b)
print(t_a)
print(t_b)

# Getting the dimensions of a tensor
# ===============================================
t_ones = tf.ones((2, 3))
print(t_ones)
t_ones.shape

# Getting the tensor values as an array
# ===============================================
t_ones.numpy()

# Creating a tensor of constant values
# ================================================
const_tensor = tf.constant([1.2, 5, np.pi], dtype=tf.float32)
print(const_tensor)

matriz = np.array([[2, 3, 4, 5], [6, 7, 8, 8]], dtype = np.int32)
matriz
matriz_tf = tf.convert_to_tensor(matriz)
print(matriz_tf, end = '\n'*2)
print(matriz_tf.numpy(), end = '\n'*2)
print(matriz_tf.shape)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Manipulating the data types and shape of a tensor
# Changing the tensor's data type
# ==============================================
print(matriz_tf.dtype)
matriz_tf_n = tf.cast(matriz_tf, tf.int64)
print(matriz_tf_n.dtype)

# Transposing a tensor
# =================================================
t = tf.random.uniform(shape=(3, 5))
print(t, end = '\n'*2)
t_tr = tf.transpose(t)
print(t_tr, end = '\n'*2)

# Reshaping a vector
# =====================================
t = tf.zeros((30,))
print(t, end = '\n'*2)
print(t.shape, end = '\n'*3)
t_reshape = tf.reshape(t, shape=(5, 6))
print(t_reshape, end = '\n'*2)
print(t_reshape.shape)

# Removing unnecessary dimensions
# =====================================================
t = tf.zeros((1, 2, 1, 4, 1))
print(t, end = '\n'*2)
print(t.shape, end = '\n'*3)
t_sqz = tf.squeeze(t, axis=(2, 4))
print(t_sqz, end = '\n'*2)
print(t_sqz.shape, end = '\n'*3)
print(t.shape, ' --> ', t_sqz.shape)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Mathematical operations on tensors
# Initializing two tensors with random numbers
# =============================================================
tf.random.set_seed(1)
t1 = tf.random.uniform(shape=(5, 2), minval=-1.0, maxval=1.0)
t2 = tf.random.normal(shape=(5, 2), mean=0.0, stddev=1.0)
print(t1, '\n'*2, t2)

# Element-wise product: element by element
# =================================================
t3 = tf.multiply(t1, t2).numpy()
print(t3)

# Mean along an axis
# ================================================
t4 = tf.math.reduce_mean(t1, axis=None)
print(t4, end = '\n'*3)
t4 = tf.math.reduce_mean(t1, axis=0)
print(t4, end = '\n'*3)
t4 = tf.math.reduce_mean(t1, axis=1)
print(t4, end = '\n'*3)

# Sum along an axis
# =================================================
t4 = tf.math.reduce_sum(t1, axis=None)
print('Sum of all elements:', t4, end = '\n'*3)
t4 = tf.math.reduce_sum(t1, axis=0)
print('Sum of the elements by column:', t4, end = '\n'*3)
t4 = tf.math.reduce_sum(t1, axis=1)
print('Sum of the elements by row:', t4, end = '\n'*3)

# Standard deviation along an axis
# =================================================
t4 = tf.math.reduce_std(t1, axis=None)
print('Std of all elements:', t4, end = '\n'*3)
t4 = tf.math.reduce_std(t1, axis=0)
print('Std of the elements by column:', t4, end = '\n'*3)
t4 = tf.math.reduce_std(t1, axis=1)
print('Std of the elements by row:', t4, end = '\n'*3)

# Matrix product (transposing the second operand)
# ===========================================
t5 = tf.linalg.matmul(t1, t2, transpose_b=True)
print(t5.numpy(), end = '\n'*2)

# Matrix product (transposing the first operand)
# ===========================================
t6 = tf.linalg.matmul(t1, t2, transpose_a=True)
print(t6.numpy())

# Computing the norm of a vector
# ==========================================
norm_t1 = tf.norm(t1, ord=2, axis=None).numpy()
print(norm_t1, end='\n'*2)
norm_t1 = tf.norm(t1, ord=2, axis=0).numpy()
print(norm_t1, end='\n'*2)
norm_t1 = tf.norm(t1, ord=2, axis=1).numpy()
print(norm_t1, end='\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Splitting, stacking, and concatenating tensors
# Data to work with
# =======================================
tf.random.set_seed(1)
t = tf.random.uniform((6,))
print(t.numpy())

# Splitting the tensor into a given number of pieces
# ======================================================
t_splits = tf.split(t, num_or_size_splits = 3)
[item.numpy() for item in t_splits]

# Splitting the tensor according to the specified sizes
# ======================================================
tf.random.set_seed(1)
t = tf.random.uniform((6,))
print(t.numpy())
t_splits = tf.split(t, num_or_size_splits=[3, 3])
[item.numpy() for item in t_splits]

print(matriz_tf.numpy())
# m_splits = tf.split(t, num_or_size_splits = 0, axis = 1)
matriz_n = tf.reshape(matriz_tf, shape = (8,))
print(matriz_n.numpy())
m_splits = tf.split(matriz_n, num_or_size_splits = 2)
[item.numpy() for item in m_splits]

# Concatenating tensors
# =========================================
A = tf.ones((3,))
print(A, end ='\n'*2)
B = tf.zeros((2,))
print(B, end ='\n'*2)
C = tf.concat([A, B], axis=0)
print(C.numpy())

# Stacking tensors
# =========================================
A = tf.ones((3,))
print(A, end ='\n'*2)
B = tf.zeros((3,))
print(B, end ='\n'*2)
S = tf.stack([A, B], axis=1)
print(S.numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
More functions and tools at: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf

EXERCISES

1. Create two tensors of shape (4, 6) with random numbers drawn from a standard normal distribution with mean 0.0 and standard deviation 1.0. Print them.
2. Multiply the two tensors in the two ways seen above, element-wise and as a matrix product, performing the two transpositions shown.
3. Compute the mean, standard deviation, and sum of the elements of both tensors.
4. Reshape the tensors so that they are now of rank 1.
5. Compute the cosine of the elements of the tensors (check the documentation).
6. Create a rank-1 tensor with 1001 elements, starting at 0 and going up to 30.
7. Loop over the elements of the tensor with a for loop and print them.
8. Compute the factorials of the numbers from 1 to 30 using the tensor from exercise 6. Print the result as a DataFrame.

Building input *pipelines* with tf.data: the TensorFlow Dataset API

When we train a deep NN model, we usually train it incrementally using an iterative optimization algorithm such as stochastic gradient descent, as we have seen in previous lessons.

The Keras API is a TensorFlow wrapper for building NN models. The Keras API provides a method, `.fit()`, for training the models. In cases where the training dataset is fairly small and can be loaded as a tensor into memory, TensorFlow models (compiled with the Keras API) can use this tensor directly via their .fit() method for training. In typical use cases, however, when the dataset is too large to fit into the computer's memory, we will need to load the data from the main storage device (for example, the hard drive or solid-state drive) in chunks, that is, batch by batch. In addition, we may need to build a data-processing pipeline to apply certain transformations and preprocessing steps to our data, such as mean centering, scaling, or adding noise to augment the training procedure and avoid overfitting.

Applying the preprocessing functions manually every time can be quite cumbersome. Fortunately, TensorFlow provides a special class for building efficient and convenient preprocessing pipelines. In this part, we will look at an overview of the different methods for building a TensorFlow Dataset, including dataset transformations and common preprocessing steps.

Creating a TensorFlow Dataset from existing tensors

If the data already exists in the form of a tensor object, a Python list, or a NumPy array, we can easily create a dataset using the `tf.data.Dataset.from_tensor_slices()` function. This function returns an object of the Dataset class, which we can use to iterate through the individual elements in the input dataset:
import tensorflow as tf

# Example with lists
# ======================================================
a = [1.2, 3.4, 7.5, 4.1, 5.0, 1.0]
ds = tf.data.Dataset.from_tensor_slices(a)
print(ds)

for item in ds:
    print(item)

for i in ds:
    print(i.numpy())
tf.Tensor(1.2, shape=(), dtype=float32) tf.Tensor(3.4, shape=(), dtype=float32) tf.Tensor(7.5, shape=(), dtype=float32) tf.Tensor(4.1, shape=(), dtype=float32) tf.Tensor(5.0, shape=(), dtype=float32) tf.Tensor(1.0, shape=(), dtype=float32) 1.2 3.4 7.5 4.1 5.0 1.0
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
If we want to create batches from this dataset, with a desired batch size of 3, we can do so as follows:
# Creating batches of 3 elements each
# ===================================================
ds_batch = ds.batch(3)
for i, elem in enumerate(ds_batch, 1):
    print(f'batch {i}:', elem)
batch 1: tf.Tensor([1.2 3.4 7.5], shape=(3,), dtype=float32) batch 2: tf.Tensor([4.1 5. 1. ], shape=(3,), dtype=float32)
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
This will create two batches from this dataset, where the first three elements go into batch no. 1 and the remaining elements into batch no. 2. The `.batch()` method has an optional argument, `drop_remainder`, which is useful for cases where the number of elements in the tensor is not divisible by the desired batch size. The default value of `drop_remainder` is `False`.

Combining two tensors into a Dataset

Often, we may have the data in two (or possibly more) tensors. For example, we might have one tensor for features and one tensor for labels. In such cases, we need to build a dataset that combines these tensors, which will allow us to retrieve the elements of these tensors as tuples.

Suppose we have two tensors, t_x and t_y. The tensor t_x holds our feature values, each of size 3, and t_y stores the class labels. For this example, we first create these two tensors as follows:
# Example data
# ============================================
tf.random.set_seed(1)
t_x = tf.random.uniform([4, 3], dtype=tf.float32)
t_y = tf.range(4)
print(t_x)
print(t_y)

# Joining the two tensors into a Dataset
# ============================================
ds_x = tf.data.Dataset.from_tensor_slices(t_x)
ds_y = tf.data.Dataset.from_tensor_slices(t_y)
ds_joint = tf.data.Dataset.zip((ds_x, ds_y))
for example in ds_joint:
    print('x:', example[0].numpy(),' y:', example[1].numpy())

ds_joint = tf.data.Dataset.from_tensor_slices((t_x, t_y))
for example in ds_joint:
    #print(example)
    print('x:', example[0].numpy(), ' y:', example[1].numpy())

ds_joint

# Operating on the generated dataset
# ====================================================
ds_trans = ds_joint.map(lambda x, y: (x*2-1.0, y))
for example in ds_trans:
    print(' x:', example[0].numpy(), ' y:', example[1].numpy())
x: [-0.6697383 0.80296254 0.26194835] y: 0 x: [-0.13090777 -0.41612196 0.28500414] y: 1 x: [ 0.951571 -0.12980103 0.32020378] y: 2 x: [0.20979166 0.27326298 0.22889757] y: 3
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Shuffle, batch, and repeat

To train an NN model using stochastic gradient descent optimization, it is important to feed the training data as randomly shuffled batches. We have already seen above how to create batches by calling the `.batch()` method of a dataset object. Now, in addition to creating batches, we are going to shuffle and re-iterate over the datasets:
# Shuffling the elements of a tensor
# ===================================================
tf.random.set_seed(1)
ds = ds_joint.shuffle(buffer_size = len(t_x))
for example in ds:
    print(' x:', example[0].numpy(), ' y:', example[1].numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Here the rows are shuffled without losing the one-to-one correspondence between the entries in x and y. The `.shuffle()` method requires an argument called `buffer_size`, which determines how many elements of the dataset are grouped together before shuffling. The elements in the buffer are retrieved at random, and their place in the buffer is given to the next elements of the original (unshuffled) dataset. Therefore, if we choose a small buffer size, we may not shuffle the dataset perfectly.

If the dataset is small, choosing a relatively small buffer size may negatively affect the predictive performance of the NN, since the dataset may not be fully randomized. In practice, however, this usually has no noticeable effect when working with relatively large datasets, which is common in deep learning. Alternatively, to ensure complete randomization during each epoch, we can simply choose a buffer size equal to the number of training examples, as in the preceding code (`buffer_size = len(t_x)`).

Now, let's create batches from the ds_joint dataset:
ds = ds_joint.batch(batch_size = 3, drop_remainder = False) print(ds) batch_x, batch_y = next(iter(ds)) print('Batch-x:\n', batch_x.numpy()) print('Batch-y: ', batch_y.numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
In addition, when training a model for multiple epochs, we need to shuffle and iterate over the dataset for the desired number of epochs. So, let's repeat the batched dataset twice:
ds = ds_joint.batch(3).repeat(count = 2) for i,(batch_x, batch_y) in enumerate(ds): print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
This results in two copies of each batch. If we change the order of these two operations, that is, first batch and then repeat, the results will be different:
ds = ds_joint.repeat(count=2).batch(3) for i,(batch_x, batch_y) in enumerate(ds): print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Finally, to better understand how these three operations (batch, shuffle, and repeat) behave, let's experiment with them in different orders. First, we will combine the operations in the following order: (1) shuffle, (2) batch, and (3) repeat:
# Order 1: shuffle -> batch -> repeat
tf.random.set_seed(1)
ds = ds_joint.shuffle(4).batch(2).repeat(3)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x, batch_y.numpy(), end='\n'*2)

# Order 2: batch -> shuffle -> repeat
tf.random.set_seed(1)
ds = ds_joint.batch(2).shuffle(4).repeat(3)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x, batch_y.numpy(), end='\n'*2)

# Order 3: batch -> repeat -> shuffle
tf.random.set_seed(1)
ds = ds_joint.batch(2).repeat(3).shuffle(4)

for i, (batch_x, batch_y) in enumerate(ds):
    print(i, batch_x, batch_y.numpy(), end='\n'*2)
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Fetching available datasets from the tensorflow_datasets library

The tensorflow_datasets library provides a nice collection of freely available datasets for training or evaluating deep learning models. The datasets are well formatted and come with informative descriptions, including the format of features and labels and their type and dimensionality, as well as the citation of the original paper that introduced the dataset, in BibTeX format. Another advantage is that all these datasets are prepared and ready to use as tf.data.Dataset objects, so all the functions we covered can be used directly:
# pip install tensorflow-datasets
import tensorflow_datasets as tfds

print(len(tfds.list_builders()))
print(tfds.list_builders()[:5])

# Working with the MNIST dataset
# ===============================================
mnist, mnist_info = tfds.load('mnist', with_info=True, shuffle_files=False)

print(mnist_info)
print(mnist.keys())

ds_train = mnist['train']
ds_train = ds_train.map(lambda item: (item['image'], item['label']))
ds_train = ds_train.batch(10)
batch = next(iter(ds_train))
print(batch[0].shape, batch[1])

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15, 6))
for i, (image, label) in enumerate(zip(batch[0], batch[1])):
    ax = fig.add_subplot(2, 5, i+1)
    ax.set_xticks([]); ax.set_yticks([])
    ax.imshow(image[:, :, 0], cmap='gray_r')
    ax.set_title('{}'.format(label), size=15)

plt.show()
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Building an NN model in TensorFlow

The TensorFlow Keras API (tf.keras)

Keras is a high-level NN API and was originally developed to run on top of other libraries such as TensorFlow and Theano. Keras provides a user-friendly, modular programming interface that allows prototyping and building complex models in just a few lines of code. Keras can be installed independently from PyPI and then configured to use TensorFlow as its backend engine. Keras is tightly integrated into TensorFlow, and its modules are accessible through tf.keras.

In TensorFlow 2.0, tf.keras has become the primary and recommended approach for implementing models. This has the advantage that it supports TensorFlow-specific functionality, such as dataset pipelines that use tf.data. The Keras API (tf.keras) makes building an NN model extremely easy. The most commonly used approach for creating an NN in TensorFlow is through `tf.keras.Sequential()`, which allows stacking layers to form a network. A stack of layers can be passed as a Python list to a model defined as tf.keras.Sequential(). Alternatively, layers can be added one by one using the .add() method.

In addition, tf.keras allows us to define a model by subclassing tf.keras.Model. This gives us more control over the forward pass: we define the call() method of our model class to specify the forward pass explicitly. Finally, models built with the tf.keras API can be compiled and trained via the .compile() and .fit() methods; a short sketch of this workflow is shown after the next heading.

Building a linear regression model
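A minimal illustrative sketch of the Sequential / .add() / compile / fit workflow described above (not taken from the original notebook; the layer sizes, optimizer, and loss are placeholder choices, and the regression model that follows is instead built from scratch by subclassing):

import tensorflow as tf

# Layers can be stacked by passing a list to Sequential ...
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)
])

# ... or added one at a time with .add()
model2 = tf.keras.Sequential()
model2.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)))
model2.add(tf.keras.layers.Dense(1))

# Models are then compiled and trained with .compile() and .fit()
model.compile(optimizer='sgd', loss='mse')
# model.fit(X, y, epochs=10)   # given training arrays X and y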
X_train = np.arange(10).reshape((10, 1)) y_train = np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0]) X_train, y_train import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(X_train, y_train, 'o', markersize=10) ax.set_xlabel('x') ax.set_ylabel('y') import tensorflow as tf X_train_norm = (X_train - np.mean(X_train))/np.std(X_train) ds_train_orig = tf.data.Dataset.from_tensor_slices((tf.cast(X_train_norm, tf.float32),tf.cast(y_train, tf.float32))) for i in ds_train_orig: print(i[0].numpy(), i[1].numpy())
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
Now we can define our linear regression model as z = wx + b. Here, we are going to use the Keras API. `tf.keras` provides predefined layers for building complex NN models, but to start, we will build a model from scratch:
class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.w = tf.Variable(0.0, name='weight') self.b = tf.Variable(0.0, name='bias') def call(self, x): return self.w * x + self.b model = MyModel() model.build(input_shape=(None, 1)) model.summary()
_____no_output_____
MIT
Semana-18/Tensor Flow.ipynb
bel4212a/Curso-ciencia-de-datos
--- now we have an app and can start doing stuff --- creating a mutable Object
myMutable = myApp.mData()
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
define Entries and drop them onto Safe
import datetime now = datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S') myName = 'Welcome to the SAFE Network' text = 'free speech and free knowledge to the world!' timeUser = f'{now} {myName}' entries={timeUser:text}
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
entries={'firstkey':'this is awesome', 'secondKey':'and soon it should be', 'thirdKey':'even easier to use safe with python', 'i love safe':'and this is just the start', 'thisWasUploaded at':datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S UTC'), 'additionalEntry':input('enter your custom value here: ')}
infoData = myMutable.new_random_public(777,signKey,entries) print(safenet.safe_utils.getXorAddresOfMutable(infoData,myMutable.ffi_app)) additionalEntries={'this wasnt here':'before'} additionalEntries={'baduff':'another entry'} myMutable.insertEntries(infoData,additionalEntries) with open('testfile','wb') as f: f.write(myMutable.ffi_app.buffer(infoData)[:]) with open('testfile','rb') as f: infoData= safenet.safe_utils.getffiMutable(f.read(),myMutable.ffi_app) myMutable.ffi_app.buffer(infoData)[:] mutableBytes = b'H\x8f\x08x}\xc5D]U\xeeW\x08\xe0\xb4\xaau\x94\xd4\x8a\x0bz\x06h\xe3{}\xd1\x06\x843\x01P[t\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x007\xdbNV\x00\x00' infoData= safenet.safe_utils.getffiMutable(mutableBytes,myMutable.ffi_app) infoData def getNewEntries(lastState,newState): newEntries = {} for additional in [item for item in newState if item not in lastState]: newEntries[additional]=newState[additional] return newEntries, newState
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
lastState={} additionalEntries, lastState = getNewEntries(lastState,myMutable.getCurrentState(infoData)) additionalEntries
import queue import time from threading import Thread import datetime import sys from PyQt5.QtWidgets import (QWidget, QPushButton, QTextBrowser,QLineEdit, QHBoxLayout, QVBoxLayout, QApplication) class Example(QWidget): def __init__(self): super().__init__() self.lineedit1 = QLineEdit("anon") self.browser = QTextBrowser() self.lineedit = QLineEdit("Type a message and press Enter") self.lineedit.selectAll() self.setWindowTitle("crappychat_reloaded") vbox = QVBoxLayout() vbox.addWidget(self.lineedit1) vbox.addWidget(self.browser) vbox.addWidget(self.lineedit) self.setLayout(vbox) self.setGeometry(300, 300, 900, 600) self.show() self.lineedit.setFocus() self.lineedit.returnPressed.connect(self.updateUi) self.messageQueue = queue.Queue() t = Thread(name='updateThread', target=self.updateBrowser) t.start() def updateUi(self): try: now = datetime.datetime.utcnow().strftime('%Y-%m-%d - %H:%M:%S') myName = self.lineedit1.text() text = self.lineedit.text() timeUser = f'{now} {myName}' additionalEntries={timeUser:text} self.messageQueue.put(additionalEntries) #self.browser.append(f"<b>{timeUser}</b>: {text}") self.lineedit.clear() except: self.browser.append("<font color=red>{0} is invalid!</font>" .format(text)) def updateBrowser(self): lastState={} while True: try: if not self.messageQueue.empty(): newEntries = self.messageQueue.get() myMutable.insertEntries(infoData,newEntries) additionalEntries, lastState = getNewEntries(lastState,myMutable.getCurrentState(infoData)) for entry in additionalEntries: entry_string = entry.decode() value_string = additionalEntries[entry].decode() self.browser.append(f"<b>{entry_string}</b>: {value_string}") self.browser.ensureCursorVisible() except: pass time.sleep(2) if __name__ == '__main__': app = QApplication(sys.argv) ex = Example() sys.exit(app.exec_())
_____no_output_____
MIT
crappyChat_setup.ipynb
rid-dim/pySafe
Tutorial 13: Skyrmion in a disk

> Interactive online tutorial:
> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)

In this tutorial, we compute and relax a skyrmion in an interfacial-DMI material in a confined, disk-like geometry.
import oommfc as oc import discretisedfield as df import micromagneticmodel as mm
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
We define the mesh on a cuboid through corner points `p1` and `p2`, and a discretisation cell size `cell`.
region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9)) mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The mesh we defined is:
%matplotlib inline mesh.k3d()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Now, we can define the system object by first setting up the Hamiltonian:
system = mm.System(name='skyrmion') system.energy = (mm.Exchange(A=1.6e-11) + mm.DMI(D=4e-3, crystalclass='Cnv') + mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1)) + mm.Demag() + mm.Zeeman(H=(0, 0, 2e5)))
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The disk geometry is set up by defining the saturation magnetisation (the norm of the magnetisation field). For that, we define a function:
Ms = 1.1e6 def Ms_fun(pos): """Function to set magnitude of magnetisation: zero outside cylindric shape, Ms inside cylinder. Cylinder radius is 50nm. """ x, y, z = pos if (x**2 + y**2)**0.5 < 50e-9: return Ms else: return 0
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The second function we need defines the initial magnetisation, which is going to relax into a skyrmion.
def m_init(pos): """Function to set initial magnetisation direction: -z inside cylinder (r=10nm), +z outside cylinder. y-component to break symmetry. """ x, y, z = pos if (x**2 + y**2)**0.5 < 10e-9: return (0, 0, -1) else: return (0, 0, 1) # create system with above geometry and initial magnetisation system.m = df.Field(mesh, dim=3, value=m_init, norm=Ms_fun)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
The geometry is now:
system.m.norm.k3d_nonzero()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
and the initial magnetisation is:
system.m.plane('z').mpl()
/Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715: RuntimeWarning: divide by zero encountered in double_scalars length = a * (widthu_per_lenu / (self.scale * self.width)) /Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:715: RuntimeWarning: invalid value encountered in multiply length = a * (widthu_per_lenu / (self.scale * self.width)) /Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:767: RuntimeWarning: invalid value encountered in less short = np.repeat(length < minsh, 8, axis=1) /Users/marijanbeg/miniconda3/envs/ubermag-dev/lib/python3.8/site-packages/matplotlib/quiver.py:780: RuntimeWarning: invalid value encountered in less tooshort = length < self.minlength
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Finally we can minimise the energy and plot the magnetisation.
# minimize the energy md = oc.MinDriver() md.drive(system) # Plot relaxed configuration: vectors in z-plane system.m.plane('z').mpl() # Plot z-component only: system.m.z.plane('z').mpl() # 3d-plot of z-component system.m.z.k3d_scalar(filter_field=system.m.norm)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Finally we can sample and plot the magnetisation along the line:
system.m.z.line(p1=(-49e-9, 0, 0), p2=(49e-9, 0, 0), n=20).mpl()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
spinachslayer420/MSE598-SAF-Project
Reader - Deployment

This component uses a QA model pre-trained in Portuguese on the SQuAD v1.1 dataset; it is a publicly available model hosted on [Hugging Face](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese). Its goal is to find the answer to one or more questions given a list of distinct contexts.

The input data table must have a context column, where each row represents a different context, and a question column, where each row represents a question to be asked. Note that every question is answered against all of the provided contexts during inference, so there may be many more contexts than questions.

Note: this component uses internet resources, so you must be connected to the network for it to work correctly.

**If in doubt, see the [PlatIAgro tutorials](https://platiagro.github.io/tutorials/).**

Class declaration for real-time predictions

The deployment task creates a REST service for real-time predictions. To do this, you must create a `Model` class that implements the `predict` method.
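Before the class definition in the next cell, here is a hypothetical usage sketch (not part of the original component). The column names `context` and `question` are placeholders: the deployed service reads the real input column names from the `inference_parameters` stored in `/tmp/data/reader.joblib`.

import pandas as pd
from Model import Model  # the Model class is written to Model.py in the next cell

# Placeholder input table; the column names are assumed, not taken from the artifact
df = pd.DataFrame({
    'context': ['PlatIAgro é uma plataforma de inteligência artificial para o agronegócio.',
                'O componente Reader usa um modelo BERT de QA treinado em português.'],
    'question': ['O que é a PlatIAgro?',
                 'O que é a PlatIAgro?'],
})

model = Model()
predictions = model.predict(df.values, feature_names=list(df.columns))
print(predictions)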
%%writefile Model.py import joblib import numpy as np import pandas as pd from reader import Reader class Model: def __init__(self): self.loaded = False def load(self): # Load artifacts artifacts = joblib.load("/tmp/data/reader.joblib") self.model_parameters = artifacts["model_parameters"] self.inference_parameters = artifacts["inference_parameters"] # Initialize reader self.reader = Reader(**self.model_parameters) # Set model loaded self.loaded = True print("Loaded model") def class_names(self): column_names = list(self.inference_parameters['output_columns']) return column_names def predict(self, X, feature_names, meta=None): if not self.loaded: self.load() # Convert to dataframe if feature_names != []: df = pd.DataFrame(X, columns = feature_names) df = df[self.inference_parameters['input_columns']] else: df = pd.DataFrame(X, columns = self.inference_parameters['input_columns']) # Predict answers # # Iterate over dataset for idx, row in df.iterrows(): # Get question question = row[self.inference_parameters['question_column_name']] # Get context context = row[self.inference_parameters['context_column_name']] # Make prediction answer, probability, _ = self.reader([question], [context]) # Save to df df.at[idx, self.inference_parameters['answer_column_name']] = answer[0] df.at[idx, self.inference_parameters['proba_column_name']] = probability[0] # Retrieve Only Best Answer # # Initializate best df best_df = pd.DataFrame(columns=df.columns) # Get unique questions unique_questions = df[self.inference_parameters['question_column_name']].unique() # Iterate over each unique question for question in unique_questions: # Filter df question_df = df[df[self.inference_parameters['question_column_name']] == question] # Sort by score (descending) question_df = question_df.sort_values(by=self.inference_parameters['proba_column_name'], ascending=False).reset_index(drop=True) # Append best ansewer to output df best_df = pd.concat((best_df,pd.DataFrame(question_df.loc[0]).T)).reset_index(drop=True) if self.inference_parameters['keep_best'] == 'sim': return best_df.values else: return df.values
Overwriting Model.py
MIT
tasks/reader/Deployment.ipynb
platiagro/tasks
Estimator validation

This notebook contains code to generate Figure 2 of the paper. This notebook also serves to compare the estimates of the re-implemented scmemo with the sceb package from Vasilis.
import pandas as pd import matplotlib.pyplot as plt import scanpy as sc import scipy as sp import itertools import numpy as np import scipy.stats as stats from scipy.integrate import dblquad import seaborn as sns from statsmodels.stats.multitest import fdrcorrection import imp pd.options.display.max_rows = 999 pd.set_option('display.max_colwidth', -1) import pickle as pkl import time import matplotlib as mpl mpl.rcParams['pdf.fonttype'] = 42 mpl.rcParams['ps.fonttype'] = 42 import matplotlib.pylab as pylab params = {'legend.fontsize': 'x-small', 'axes.labelsize': 'medium', 'axes.titlesize':'medium', 'figure.titlesize':'medium', 'xtick.labelsize':'xx-small', 'ytick.labelsize':'xx-small'} pylab.rcParams.update(params) import sys sys.path.append('/data/home/Github/scrna-parameter-estimation/dist/schypo-0.0.0-py3.7.egg') import schypo import schypo.simulate as simulate import sys sys.path.append('/data/home/Github/single_cell_eb/') sys.path.append('/data/home/Github/single_cell_eb/sceb/') import scdd data_path = '/data/parameter_estimation/' fig_path = '/data/home/Github/scrna-parameter-estimation/figures/fig3/'
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Check 1D estimates of `sceb` with `scmemo`

Using the Poisson model. The outputs should be identical; this is for checking the implementation.
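After running the next cell, the two variance estimates can also be compared programmatically; a quick sketch (not in the original notebook):

import numpy as np

# var_scdd (sceb) and var_scmemo (re-implemented scmemo) are computed in the cell below
np.testing.assert_allclose(var_scdd, var_scmemo)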
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(100, 20)) adata = sc.AnnData(data) size_factors = scdd.dd_size_factor(adata) Nr = data.sum(axis=1).mean() _, M_dd = scdd.dd_1d_moment(adata, size_factor=scdd.dd_size_factor(adata), Nr=Nr) var_scdd = scdd.M_to_var(M_dd) print(var_scdd) imp.reload(estimator) mean_scmemo, var_scmemo = estimator._poisson_1d(data, data.shape[0], estimator._estimate_size_factor(data)) print(var_scmemo) df = pd.DataFrame() df['size_factor'] = size_factors df['inv_size_factor'] = 1/size_factors df['inv_size_factor_sq'] = 1/size_factors**2 df['expr'] = data[:, 0].todense().A1 precomputed_size_factors = df.groupby('expr')['inv_size_factor'].mean(), df.groupby('expr')['inv_size_factor_sq'].mean() imp.reload(estimator) expr, count = np.unique(data[:, 0].todense().A1, return_counts=True) print(estimator._poisson_1d((expr, count), data.shape[0], precomputed_size_factors))
[0.5217290008068085, 0.9860336223993191]
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Check 2D estimates of `sceb` and `scmemo`

Using the Poisson model. The outputs should be identical; this is for checking the implementation.
data = sp.sparse.csr_matrix(simulate.simulate_transcriptomes(1000, 4)) adata = sc.AnnData(data) size_factors = scdd.dd_size_factor(adata) mean_scdd, cov_scdd, corr_scdd = scdd.dd_covariance(adata, size_factors) print(cov_scdd) imp.reload(estimator) cov_scmemo = estimator._poisson_cov(data, data.shape[0], size_factors, idx1=[0, 1, 2], idx2=[1, 2, 3]) print(cov_scmemo) expr, count = np.unique(data[:, :2].toarray(), return_counts=True, axis=0) df = pd.DataFrame() df['size_factor'] = size_factors df['inv_size_factor'] = 1/size_factors df['inv_size_factor_sq'] = 1/size_factors**2 df['expr1'] = data[:, 0].todense().A1 df['expr2'] = data[:, 1].todense().A1 precomputed_size_factors = df.groupby(['expr1', 'expr2'])['inv_size_factor'].mean(), df.groupby(['expr1', 'expr2'])['inv_size_factor_sq'].mean() cov_scmemo = estimator._poisson_cov((expr[:, 0], expr[:, 1], count), data.shape[0], size_factor=precomputed_size_factors) print(cov_scmemo)
-1.4590297282462616
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Extract parameters from interferon dataset
adata = sc.read(data_path + 'interferon_filtered.h5ad') adata = adata[adata.obs.cell_type == 'CD4 T cells - ctrl'] data = adata.X.copy() relative_data = data.toarray()/data.sum(axis=1) q = 0.07 x_param, z_param, Nc, good_idx = schypo.simulate.extract_parameters(adata.X, q=q, min_mean=q) imp.reload(simulate) transcriptome = simulate.simulate_transcriptomes( n_cells=10000, means=z_param[0], variances=z_param[1], corr=x_param[2], Nc=Nc) relative_transcriptome = transcriptome/transcriptome.sum(axis=1).reshape(-1, 1) qs, captured_data = simulate.capture_sampling(transcriptome, q=q, q_sq=q**2+1e-10) def qqplot(x, y, s=1): plt.scatter( np.quantile(x, np.linspace(0, 1, 1000)), np.quantile(y, np.linspace(0, 1, 1000)), s=s) plt.plot(x, x, lw=1, color='m') plt.figure(figsize=(8, 2)); plt.subplots_adjust(wspace=0.2); plt.subplot(1, 3, 1); sns.distplot(np.log(captured_data.mean(axis=0)), hist=False, label='Simulated') sns.distplot(np.log(data[:, good_idx].toarray().var(axis=0)), hist=False, label='Real') plt.xlabel('Log(mean)') plt.subplot(1, 3, 2); sns.distplot(np.log(captured_data.var(axis=0)), hist=False) sns.distplot(np.log(data[:, good_idx].toarray().var(axis=0)), hist=False) plt.xlabel('Log(variance)') plt.subplot(1, 3, 3); sns.distplot(np.log(captured_data.sum(axis=1)), hist=False) sns.distplot(np.log(data.toarray().sum(axis=1)), hist=False) plt.xlabel('Log(total UMI count)') plt.savefig(figpath + 'simulation_stats.png', bbox_inches='tight')
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Compare datasets generated by Poisson and hypergeometric processes
_, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson') _, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper') q_list = [0.05, 0.1, 0.2, 0.3, 0.5] plt.figure(figsize=(8, 2)) plt.subplots_adjust(wspace=0.3) for idx, q in enumerate(q_list): _, poi_captured = simulate.capture_sampling(transcriptome, q=q, process='poisson') _, hyper_captured = simulate.capture_sampling(transcriptome, q=q, process='hyper') relative_poi_captured = poi_captured/poi_captured.sum(axis=1).reshape(-1, 1) relative_hyper_captured = hyper_captured/hyper_captured.sum(axis=1).reshape(-1, 1) poi_corr = np.corrcoef(relative_poi_captured, rowvar=False) hyper_corr = np.corrcoef(relative_hyper_captured, rowvar=False) sample_idx = np.random.choice(poi_corr.ravel().shape[0], 100000) plt.subplot(1, len(q_list), idx+1) plt.scatter(poi_corr.ravel()[sample_idx], hyper_corr.ravel()[sample_idx], s=1, alpha=1) plt.plot([-1, 1], [-1, 1], 'm', lw=1) # plt.xlim([-0.3, 0.4]) # plt.ylim([-0.3, 0.4]) if idx != 0: plt.yticks([]) plt.title('q={}'.format(q)) plt.savefig(figpath + 'poi_vs_hyp_sim_corr.png', bbox_inches='tight')
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
Compare Poisson vs HG estimators
def compare_esimators(q, plot=False, true_data=None, var_q=1e-10): q_sq = var_q + q**2 true_data = schypo.simulate.simulate_transcriptomes(1000, 1000, correlated=True) if true_data is None else true_data true_relative_data = true_data / true_data.sum(axis=1).reshape(-1, 1) qs, captured_data = schypo.simulate.capture_sampling(true_data, q, q_sq) Nr = captured_data.sum(axis=1).mean() captured_relative_data = captured_data/captured_data.sum(axis=1).reshape(-1, 1) adata = sc.AnnData(sp.sparse.csr_matrix(captured_data)) sf = schypo.estimator._estimate_size_factor(adata.X, 'hyper_relative', total=True) good_idx = (captured_data.mean(axis=0) > q) # True moments m_true, v_true, corr_true = true_relative_data.mean(axis=0), true_relative_data.var(axis=0), np.corrcoef(true_relative_data, rowvar=False) rv_true = v_true/m_true**2#schypo.estimator._residual_variance(m_true, v_true, schypo.estimator._fit_mv_regressor(m_true, v_true)) # Compute 1D moments m_obs, v_obs = captured_relative_data.mean(axis=0), captured_relative_data.var(axis=0) rv_obs = v_obs/m_obs**2#schypo.estimator._residual_variance(m_obs, v_obs, schypo.estimator._fit_mv_regressor(m_obs, v_obs)) m_poi, v_poi = schypo.estimator._poisson_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0]) rv_poi = v_poi/m_poi**2#schypo.estimator._residual_variance(m_poi, v_poi, schypo.estimator._fit_mv_regressor(m_poi, v_poi)) m_hyp, v_hyp = schypo.estimator._hyper_1d_relative(adata.X, size_factor=sf, n_obs=true_data.shape[0], q=q) rv_hyp = v_hyp/m_hyp**2#schypo.estimator._residual_variance(m_hyp, v_hyp, schypo.estimator._fit_mv_regressor(m_hyp, v_hyp)) # Compute 2D moments corr_obs = np.corrcoef(captured_relative_data, rowvar=False) # corr_obs = corr_obs[np.triu_indices(corr_obs.shape[0])] idx1 = np.array([i for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]]) idx2 = np.array([j for i,j in itertools.combinations(range(adata.shape[1]), 2) if good_idx[i] and good_idx[j]]) sample_idx = np.random.choice(idx1.shape[0], 10000) idx1 = idx1[sample_idx] idx2 = idx2[sample_idx] corr_true = corr_true[(idx1, idx2)] corr_obs = corr_obs[(idx1, idx2)] cov_poi = schypo.estimator._poisson_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2) cov_hyp = schypo.estimator._hyper_cov_relative(adata.X, n_obs=adata.shape[0], size_factor=sf, idx1=idx1, idx2=idx2, q=q) corr_poi = schypo.estimator._corr_from_cov(cov_poi, v_poi[idx1], v_poi[idx2]) corr_hyp = schypo.estimator._corr_from_cov(cov_hyp, v_hyp[idx1], v_hyp[idx2]) corr_poi[np.abs(corr_poi) > 1] = np.nan corr_hyp[np.abs(corr_hyp) > 1] = np.nan mean_list = [m_obs, m_poi, m_hyp] var_list = [rv_obs, rv_poi, rv_hyp] corr_list = [corr_obs, corr_poi, corr_hyp] estimated_list = [mean_list, var_list, corr_list] true_list = [m_true, rv_true, corr_true] if plot: count = 0 for j in range(3): for i in range(3): plt.subplot(3, 3, count+1) if i != 2: plt.scatter( np.log(true_list[i][good_idx]), np.log(estimated_list[i][j][good_idx]), s=0.1) plt.plot(np.log(true_list[i][good_idx]), np.log(true_list[i][good_idx]), linestyle='--', color='m') plt.xlim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max()) plt.ylim(np.log(true_list[i][good_idx]).min(), np.log(true_list[i][good_idx]).max()) else: x = true_list[i] y = estimated_list[i][j] print(x.shape, y.shape) plt.scatter( x, y, s=0.1) plt.plot([-1, 1], [-1, 1],linestyle='--', color='m') plt.xlim(-1, 1); plt.ylim(-1, 1); # if not (i == j): # plt.yticks([]); # plt.xticks([]); if i == 1 or i == 0: 
print((np.log(true_list[i][good_idx]) > np.log(estimated_list[i][j][good_idx])).mean()) count += 1 else: return qs, good_idx, estimated_list, true_list import matplotlib.pylab as pylab params = {'legend.fontsize': 'small', 'axes.labelsize': 'medium', 'axes.titlesize':'medium', 'figure.titlesize':'medium', 'xtick.labelsize':'xx-small', 'ytick.labelsize':'xx-small'} pylab.rcParams.update(params)
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
imp.reload(simulate)
q = 0.4
plt.figure(figsize=(4, 4))
plt.subplots_adjust(wspace=0.5, hspace=0.5)
true_data = simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], corr=x_param[2], Nc=Nc)
compare_esimators(q, plot=True, true_data=true_data)
plt.savefig(fig_path + 'poi_vs_hyper_scatter_2.png', bbox_inches='tight')
true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], Nc=Nc) q = 0.025 plt.figure(figsize=(4, 4)) plt.subplots_adjust(wspace=0.5, hspace=0.5) compare_esimators(q, plot=True, true_data=true_data) plt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_2.5.png', bbox_inches='tight', dpi=1200) q = 0.4 plt.figure(figsize=(4, 4)) plt.subplots_adjust(wspace=0.5, hspace=0.5) compare_esimators(q, plot=True, true_data=true_data) plt.savefig(fig_path + 'poi_vs_hyper_scatter_rv_40.png', bbox_inches='tight', dpi=1200) def compute_mse(x, y, log=True): if log: return np.nanmean(np.abs(np.log(x)-np.log(y))) else: return np.nanmean(np.abs(x-y)) def concordance(x, y, log=True): if log: a = np.log(x) b = np.log(y) else: a = x b = y cond = np.isfinite(a) & np.isfinite(b) a = a[cond] b = b[cond] cmat = np.cov(a, b) return 2*cmat[0,1]/(cmat[0,0] + cmat[1,1] + (a.mean()-b.mean())**2) m_mse_list, v_mse_list, c_mse_list = [], [], [] # true_data = schypo.simulate.simulate_transcriptomes(n_cells=10000, means=z_param[0], variances=z_param[1], # Nc=Nc) q_list = [0.01, 0.025, 0.1, 0.15, 0.3, 0.5, 0.7, 0.99] qs_list = [] for q in q_list: qs, good_idx, est, true = compare_esimators(q, plot=False, true_data=true_data) qs_list.append(qs) m_mse_list.append([concordance(x[good_idx], true[0][good_idx]) for x in est[0]]) v_mse_list.append([concordance(x[good_idx], true[1][good_idx]) for x in est[1]]) c_mse_list.append([concordance(x, true[2], log=False) for x in est[2]]) m_mse_list, v_mse_list, c_mse_list = np.array(m_mse_list), np.array(v_mse_list), np.array(c_mse_list) import matplotlib.pylab as pylab params = {'legend.fontsize': 'small', 'axes.labelsize': 'medium', 'axes.titlesize':'medium', 'figure.titlesize':'medium', 'xtick.labelsize':'small', 'ytick.labelsize':'small'} pylab.rcParams.update(params) plt.figure(figsize=(8, 3)) plt.subplots_adjust(wspace=0.5) plt.subplot(1, 3, 1) plt.plot(q_list[1:], m_mse_list[:, 0][1:], '-o') # plt.legend(['Naive,\nPoisson,\nHG']) plt.ylabel('CCC log(mean)') plt.xlabel('overall UMI efficiency (q)') plt.subplot(1, 3, 2) plt.plot(q_list[2:], v_mse_list[:, 0][2:], '-o') plt.plot(q_list[2:], v_mse_list[:, 1][2:], '-o') plt.plot(q_list[2:], v_mse_list[:, 2][2:], '-o') plt.legend(['Naive', 'Poisson', 'HG'], ncol=3, loc='upper center', bbox_to_anchor=(0.4,1.15)) plt.ylabel('CCC log(variance)') plt.xlabel('overall UMI efficiency (q)') plt.subplot(1, 3, 3) plt.plot(q_list[2:], c_mse_list[:, 0][2:], '-o') plt.plot(q_list[2:], c_mse_list[:, 1][2:], '-o') plt.plot(q_list[2:], c_mse_list[:, 2][2:], '-o') # plt.legend(['Naive', 'Poisson', 'HG']) plt.ylabel('CCC correlation') plt.xlabel('overall UMI efficiency (q)') plt.savefig(fig_path + 'poi_vs_hyper_rv_ccc.pdf', bbox_inches='tight') plt.figure(figsize=(1, 1.3)) plt.plot(q_list, v_mse_list[:, 0], '-o', ms=4) plt.plot(q_list, v_mse_list[:, 1], '-o', ms=4) plt.plot(q_list, v_mse_list[:, 2], '-o', ms=4) plt.savefig(fig_path + 'poi_vs_hyper_ccc_var_rv_inset.pdf', bbox_inches='tight') plt.figure(figsize=(1, 1.3)) plt.plot(q_list, c_mse_list[:, 0], '-o', ms=4) plt.plot(q_list, c_mse_list[:, 1], '-o', ms=4) plt.plot(q_list, c_mse_list[:, 2], '-o', ms=4) plt.savefig(fig_path + 'poi_vs_hyper_ccc_corr_inset.pdf', bbox_inches='tight')
_____no_output_____
MIT
analysis/simulation/estimator_validation.ipynb
yelabucsf/scrna-parameter-estimation
TRTR and TSTR Results Comparison
#import libraries import warnings warnings.filterwarnings("ignore") import numpy as np import pandas as pd from matplotlib import pyplot as plt pd.set_option('precision', 4)
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
1. Create empty dataset to save metrics differences
DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP'] SYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP'] ml_models = ['RF','KNN','DT','SVM','MLP']
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
2. Read the results obtained with TRTR and TSTR
FILEPATHS = {'Real' : 'RESULTS/models_results_real.csv', 'GM' : 'RESULTS/models_results_gm.csv', 'SDV' : 'RESULTS/models_results_sdv.csv', 'CTGAN' : 'RESULTS/models_results_ctgan.csv', 'WGANGP' : 'RESULTS/models_results_wgangp.csv'} #iterate over all datasets filepaths and read each dataset results_all = dict() for name, path in FILEPATHS.items() : results_all[name] = pd.read_csv(path, index_col='model') results_all
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
3. Calculate the differences between models
metrics_diffs_all = dict() real_metrics = results_all['Real'] columns = ['data','accuracy_diff','precision_diff','recall_diff','f1_diff'] metrics = ['accuracy','precision','recall','f1'] for name in SYNTHESIZERS : syn_metrics = results_all[name] metrics_diffs_all[name] = pd.DataFrame(columns = columns) for model in ml_models : real_metrics_model = real_metrics.loc[model] syn_metrics_model = syn_metrics.loc[model] data = [model] for m in metrics : data.append(abs(real_metrics_model[m] - syn_metrics_model[m])) metrics_diffs_all[name] = metrics_diffs_all[name].append(pd.DataFrame([data], columns = columns)) metrics_diffs_all
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
4. Compare absolute differences

4.1. Barplots for each metric
metrics = ['accuracy', 'precision', 'recall', 'f1'] metrics_diff = ['accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff'] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple'] barwidth = 0.15 fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(15, 2.5)) axs_idxs = range(4) idx = dict(zip(metrics + metrics_diff,axs_idxs)) for i in range(0,len(metrics)) : data = dict() y_pos = dict() y_pos[0] = np.arange(len(ml_models)) ax = axs[idx[metrics[i]]] for k in range(0,len(DATA_TYPES)) : generator_data = results_all[DATA_TYPES[k]] data[k] = [0, 0, 0, 0, 0] for p in range(0,len(ml_models)) : data[k][p] = generator_data[metrics[i]].iloc[p] ax.bar(y_pos[k], data[k], color=colors[k], width=barwidth, edgecolor='white', label=DATA_TYPES[k]) y_pos[k+1] = [x + barwidth for x in y_pos[k]] ax.set_xticks([r + barwidth*2 for r in range(len(ml_models))]) ax.set_xticklabels([]) ax.set_xticklabels(ml_models, fontsize=10) ax.set_title(metrics[i], fontsize=12) ax.legend(DATA_TYPES, ncol=5, bbox_to_anchor=(-0.3, -0.2)) fig.tight_layout() #fig.suptitle('Models performance comparisson Boxplots (TRTR and TSTR) \n Dataset F - Indian Liver Patient', fontsize=18) fig.savefig('RESULTS/MODELS_METRICS_BARPLOTS.svg', bbox_inches='tight') metrics = ['accuracy_diff', 'precision_diff', 'recall_diff', 'f1_diff'] colors = ['tab:orange', 'tab:green', 'tab:red', 'tab:purple'] fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(15,2.5)) axs_idxs = range(4) idx = dict(zip(metrics,axs_idxs)) for i in range(0,len(metrics)) : data = dict() ax = axs[idx[metrics[i]]] for k in range(0,len(SYNTHESIZERS)) : generator_data = metrics_diffs_all[SYNTHESIZERS[k]] data[k] = [0, 0, 0, 0, 0] for p in range(0,len(ml_models)) : data[k][p] = generator_data[metrics[i]].iloc[p] ax.plot(data[k], 'o-', color=colors[k], label=SYNTHESIZERS[k]) ax.set_xticks(np.arange(len(ml_models))) ax.set_xticklabels(ml_models, fontsize=10) ax.set_title(metrics[i], fontsize=12) ax.set_ylim(bottom=-0.01, top=0.28) ax.grid() ax.legend(SYNTHESIZERS, ncol=5, bbox_to_anchor=(-0.4, -0.2)) fig.tight_layout() #fig.suptitle('Models performance comparisson Boxplots (TRTR and TSTR) \n Dataset F - Indian Liver Patient', fontsize=18) fig.savefig('RESULTS/MODELS_METRICS_DIFFERENCES.svg', bbox_inches='tight')
_____no_output_____
MIT
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR and TSTR Results Comparison.ipynb
Vicomtech/STDG-evaluation-metrics
Generating Simpson's Paradox

We have been manually setting it up, but now we should also be able to generate it more programmatically. This notebook will describe how we develop some functions that will be included in the `sp_data_util` package.
# %load code/env # standard imports we use throughout the project import numpy as np import pandas as pd import seaborn as sns import scipy.stats as stats import matplotlib.pyplot as plt import wiggum as wg import sp_data_util as spdata from sp_data_util import sp_plot
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We have been thinking of SP through Gaussian mixture data, so we'll first work with that. To cause SP, we need the clusters to have an opposite trend from the per-cluster covariance.
# setup r_clusters = -.6 # correlation coefficient of clusters cluster_spread = .8 # pearson correlation of means p_sp_clusters = .5 # portion of clusters with SP k = 5 # number of clusters cluster_size = [2,3] domain_range = [0, 20, 0, 20] N = 200 # number of points p_clusters = [1.0/k]*k # keep all means in the middle 80% mu_trim = .2 # sample means center = [np.mean(domain_range[:2]),np.mean(domain_range[2:])] mu_transform = np.repeat(np.diff(domain_range)[[0,2]]*(mu_trim),2) mu_transform[[1,3]] = mu_transform[[1,3]]*-1 # sign flip every other mu_domain = [d + m_t for d, m_t in zip(domain_range,mu_transform)] corr = [[1, cluster_spread],[cluster_spread,1]] d = np.sqrt(np.diag(np.diff(mu_domain)[[0,2]])) cov = np.dot(d,corr).dot(d) # sample a lot of means, just for vizualization # mu = np.asarray([np.random.uniform(*mu_domain[:2],size=k*5), # uniform in x # np.random.uniform(*mu_domain[2:],size=k*5)]).T # uniform in y mu = np.random.multivariate_normal(center, cov,k*50) sns.regplot(mu[:,0], mu[:,1]) plt.axis(domain_range); # mu
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
However, independent sampling isn't really very uniform, and we'd like to ensure the clusters are more spread out, so we can use some post-processing to thin out close ones.
mu_thin = [mu[0]] # keep the first one p_dist = [1] # we'll use a gaussian kernel around each to filter and only the closest point matters dist = lambda mu_c,x: stats.norm.pdf(min(np.sum(np.square(mu_c -x),axis=1))) for m in mu: p_keep = 1- dist(mu_thin,m) if p_keep > .99: mu_thin.append(m) p_dist.append(p_keep) mu_thin = np.asarray(mu_thin) sns.regplot(mu_thin[:,0], mu_thin[:,1]) plt.axis(domain_range)
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Now we can sample points on top of that; also, we'll only use the first k.
sns.regplot(mu_thin[:k,0], mu_thin[:k,1]) plt.axis(domain_range)
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Keeping only a few, we can end up with points in the center, but if we sort them by their distance to the ones previously selected, we get them spread out a little more.
# sort by distance mu_sort, p_sort = zip(*sorted(zip(mu_thin,p_dist), key = lambda x: x[1], reverse =True)) mu_sort = np.asarray(mu_sort) sns.regplot(mu_sort[:k,0], mu_sort[:k,1]) plt.axis(domain_range) # cluster covariance cluster_corr = np.asarray([[1,r_clusters],[r_clusters,1]]) cluster_std = np.diag(np.sqrt(cluster_size)) cluster_cov = np.dot(cluster_std,cluster_corr).dot(cluster_std) # sample from a GMM z = np.random.choice(k,N,p_clusters) x = np.asarray([np.random.multivariate_normal(mu_sort[z_i],cluster_cov) for z_i in z]) # make a dataframe latent_df = pd.DataFrame(data=x, columns = ['x1', 'x2']) # code cluster as color and add it a column to the dataframe latent_df['color'] = z sp_plot(latent_df,'x1','x2','color')
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We might not want all of the clusters to have the reversal, though, so we can also sample the covariances.
# cluster covariance cluster_std = np.diag(np.sqrt(cluster_size)) cluster_corr_sp = np.asarray([[1,r_clusters],[r_clusters,1]]) # correlation with sp cluster_cov_sp = np.dot(cluster_std,cluster_corr_sp).dot(cluster_std) #cov with sp cluster_corr = np.asarray([[1,-r_clusters],[-r_clusters,1]]) #correlation without sp cluster_cov = np.dot(cluster_std,cluster_corr).dot(cluster_std) #cov wihtout sp cluster_covs = [cluster_corr_sp, cluster_corr] # sample the[0,1] k times c_sp = np.random.choice(2,k,p=[p_sp_clusters,1-p_sp_clusters]) # sample from a GMM z = np.random.choice(k,N,p_clusters) x = np.asarray([np.random.multivariate_normal(mu_sort[z_i],cluster_covs[c_sp[z_i]]) for z_i in z]) # make a dataframe latent_df = pd.DataFrame(data=x, columns = ['x1', 'x2']) # code cluster as color and add it a column to the dataframe latent_df['color'] = z sp_plot(latent_df,'x1','x2','color') [p_sp_clusters,1-p_sp_clusters] c_sp
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We'll call this construction of SP `geometric_2d_gmm_sp`; it's included in the `sp_data_utils` module now, so it can be called as follows. We'll change the portion of clusters with SP to .9, so that nearly all of the clusters show the reversal.
type(r_clusters) type(cluster_size) type(cluster_spread) type(p_sp_clusters) type(domain_range) type(p_clusters) p_sp_clusters = .9 sp_df2 = spdata.geometric_2d_gmm_sp(r_clusters,cluster_size,cluster_spread, p_sp_clusters, domain_range,k,N,p_clusters) sp_plot(sp_df2,'x1','x2','color')
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
With this, we can start to see a little of how the parameters control the structure.
# setup r_clusters = -.4 # correlation coefficient of clusters cluster_spread = .8 # pearson correlation of means p_sp_clusters = .6 # portion of clusters with SP k = 5 # number of clusters cluster_size = [4,4] domain_range = [0, 20, 0, 20] N = 200 # number of points p_clusters = [.5, .2, .1, .1, .1] sp_df3 = spdata.geometric_2d_gmm_sp(r_clusters,cluster_size,cluster_spread, p_sp_clusters, domain_range,k,N,p_clusters) sp_plot(sp_df3,'x1','x2','color')
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We might want to add multiple views, so we added a function that takes the same parameters (or lists of them) to allow each view to have different parameters. We'll look first at just two views with the same parameters, matching one another and the settings above.
many_sp_df = spdata.geometric_indep_views_gmm_sp(2,r_clusters,cluster_size,cluster_spread,p_sp_clusters, domain_range,k,N,p_clusters) sp_plot(many_sp_df,'x1','x2','A') sp_plot(many_sp_df,'x3','x4','B') many_sp_df.head()
200 4
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We can also look at the pairs of variables that we did not design SP into and see that they have very different structure.
# f, ax_grid = plt.subplots(2,2) # , fig_size=(10,10) sp_plot(many_sp_df,'x1','x4','A') sp_plot(many_sp_df,'x2','x4','B') sp_plot(many_sp_df,'x2','x3','B') sp_plot(many_sp_df,'x1','x3','B')
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
And we can set up the views to be different from one another by design
# setup r_clusters = [.8, -.2] # correlation coefficient of clusters cluster_spread = [.8, .2] # pearson correlation of means p_sp_clusters = [.6, 1] # portion of clusters with SP k = [5,3] # number of clusters cluster_size = [4,4] domain_range = [0, 20, 0, 20] N = 200 # number of points p_clusters = [[.5, .2, .1, .1, .1],[1.0/3]*3] many_sp_df_diff = spdata.geometric_indep_views_gmm_sp(2,r_clusters,cluster_size,cluster_spread,p_sp_clusters, domain_range,k,N,p_clusters) sp_plot(many_sp_df_diff,'x1','x2','A') sp_plot(many_sp_df_diff,'x3','x4','B') many_sp_df.head()
200 4
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
And we can run our detection algorithm on this as well.
many_sp_df_diff_result = wg.detect_simpsons_paradox(many_sp_df_diff) many_sp_df_diff_result
_____no_output_____
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
We designed SP to occur between attributes `x1` and `x2` with respect to `A`, and between `x3` and `x4` grouped by `B`, for portions of the subgroups. We detect other occurrences as well. It can be interesting to examine trends between the designed and spontaneous occurrences of SP, so we mark which detected instances were designed:
designed_SP = [('x1','x2','A'),('x3','x4','B')] des = [] for i,r in enumerate(many_sp_df_diff_result[['attr1','attr2','groupbyAttr']].values): if tuple(r) in designed_SP: des.append(i) many_sp_df_diff_result['designed'] = 'no' many_sp_df_diff_result.loc[des,'designed'] = 'yes' many_sp_df_diff_result.head() r_clusters = -.9 # correlation coefficient of clusters cluster_spread = .6 # pearson correlation of means p_sp_clusters = .5 # portion of clusters with SP k = 5 # number of clusters cluster_size = [5,5] domain_range = [0, 20, 0, 20] N = 200 # number of points p_clusters = [1.0/k]*k many_sp_df_diff = spdata.geometric_indep_views_gmm_sp(3,r_clusters,cluster_size,cluster_spread,p_sp_clusters, domain_range,k,N,p_clusters) sp_plot(many_sp_df_diff,'x1','x2','A') sp_plot(many_sp_df_diff,'x3','x4','B') sp_plot(many_sp_df_diff,'x3','x4','A') many_sp_df_diff.head()
200 6
MIT
research_notebooks/generate_regression_sp.ipynb
carolinesadlerr/wiggum
Here I'm going to test the details of the function that we are going to write for real video testing.
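Beyond comparing shapes, a quick sanity check is possible once the next cell has run; this is a sketch and assumes `raster_scan.raster_scan` performs the same per-frame flattening as the manual reference implementation:

import numpy as np

# yt (manual raster scan) and yp (vectorized raster scan) are produced in the next cell
assert np.allclose(yt, yp)

# each flattened frame should reshape back into the original frame
assert np.array_equal(yt[0].reshape(y3.shape[1:]), y3[0])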
import numpy as np
# read_video and raster_scan are project modules (ornet) and are assumed to be
# importable/available in this environment.

y = read_video.read_video('/Users/mojtaba/Desktop/OrNet Project/DAT VIDEOS/LLO/DsRed2-HeLa_2_21_LLO_Cell0.mov')
y2 = np.array(y[1:])
y2.shape

y3 = y2[0,:,:,:]
y3.shape

def manual_scan(video):
    """
    Manual, loop-based implementation of raster scanning.
    (reference implementation)
    """
    frames, height, width = video.shape
    raster = np.zeros(shape=(frames, height * width))
    for index, frame in enumerate(video):
        raster[index] = frame.flatten()
    return raster

yt = manual_scan(y3)
yt.shape

yp = raster_scan.raster_scan(y3)
yp.shape
_____no_output_____
MIT
notebooks/real_video_test.ipynb
quinngroup/ornet-reu-2018