hexsha stringlengths 40-40 | size int64 6-14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6-260 | max_stars_repo_name stringlengths 6-119 | max_stars_repo_head_hexsha stringlengths 40-41 | max_stars_repo_licenses sequence | max_stars_count int64 1-191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24-24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24-24 ⌀ | max_issues_repo_path stringlengths 6-260 | max_issues_repo_name stringlengths 6-119 | max_issues_repo_head_hexsha stringlengths 40-41 | max_issues_repo_licenses sequence | max_issues_count int64 1-67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24-24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24-24 ⌀ | max_forks_repo_path stringlengths 6-260 | max_forks_repo_name stringlengths 6-119 | max_forks_repo_head_hexsha stringlengths 40-41 | max_forks_repo_licenses sequence | max_forks_count int64 1-105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24-24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24-24 ⌀ | avg_line_length float64 2-1.04M | max_line_length int64 2-11.2M | alphanum_fraction float64 0-1 | cells sequence | cell_types sequence | cell_type_groups sequence |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d095a1e37c250e996e0d1516267a965e60bbbdbc | 4,605 | ipynb | Jupyter Notebook | content/lessons/10/Now-You-Code/NYC2-Character-Frequency.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/10/Now-You-Code/NYC2-Character-Frequency.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/10/Now-You-Code/NYC2-Character-Frequency.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | 63.082192 | 1,015 | 0.629967 | [
[
[
"# Now You Code 2: Character Frequency\n\nWrite a program to input some text (a word or a sentence). The program should create a histogram of each character in the text and it's frequency. For example the text `apple` has a frequency `a:1, p:2, l:1, e:1`\n\nSome advice:\n\n- build a dictionary of each character where the character is the key and the value is the number of occurences of that character.\n- omit spaces, in the input text, and they cannot be represented as dictionary keys\n- convert the input text to lower case, so that `A` and `a` are counted as the same character.\n\nAfter you count the characters:\n- sort the dictionary keys alphabetically, \n- print out the character distribution\n\nExample Run:\n\n```\nEnter some text: Michael is a Man from Mississppi.\n. : 1\na : 3\nc : 1\ne : 1\nf : 1\nh : 1\ni : 5\nl : 1\nm : 4\nn : 1\no : 1\np : 2\nr : 1\ns : 5\n```\n",
"_____no_output_____"
],
[
"## Step 1: Problem Analysis\n\nInputs: string\n\nOutputs: frequency of each character (besides space)\n\nAlgorithm (Steps in Program):\n- create empty dictionary\n- input the string\n- make string lowercase and remove spaces\n- for each character in the text, set its dictionary value to 0\n- for each character in the text, increase its dictionary value by one\n- sort the dictionary keys alphabetically\n- for each key, print the key and its corresponding value\n",
"_____no_output_____"
]
],
[
[
"## Step 2: Write code here\nfr = {}\ntext = input(\"Enter text: \")\ntext = text.lower().replace(' ','')\nfor char in text:\n fr[char] = 0\nfor char in text:\n fr[char] = fr[char] + 1\nfor key in sorted(fr.keys()):\n print(key, ':', fr[key])",
"Enter text: Perfectly balanced, as all things should be.\n, : 1\n. : 1\na : 4\nb : 2\nc : 2\nd : 2\ne : 4\nf : 1\ng : 1\nh : 2\ni : 1\nl : 5\nn : 2\no : 1\np : 1\nr : 1\ns : 3\nt : 2\nu : 1\ny : 1\n"
]
],
[
[
"## Step 3: Questions\n\n1. Explain how you handled the situation where the dictionary key does not exist? (For instance the first time you encounter a character?)\n\nI made a first loop that goes through all the characters and sets their values to 0 to make sure all the dictionary keys exist when I increase them by 1.\n\n2. What happens when you just press `ENTER` as opposed to entering some actual text? What can be done about this to provide better feedback.\n\nThe program finishes without displaying anything. I could improve on this by printing that the string is empty if the dictionary is empty by the end of the program.\n\n3. This program is similar to the popular word cloud generators [http://www.wordclouds.com/] you can find on the Web. Describe how this program could be modified to count words instead of characters.\n\nI could split the string up into a list of words and then have the program go through the list of words instead of going through each character in the string.",
"_____no_output_____"
],
[
"## Reminder of Evaluation Criteria\n\n1. What the problem attempted (analysis, code, and answered questions) ?\n2. What the problem analysis thought out? (does the program match the plan?)\n3. Does the code execute without syntax error?\n4. Does the code solve the intended problem?\n5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d095a32514d53985d86d1ed88407cf7d5539230c | 467,608 | ipynb | Jupyter Notebook | face-recognition.ipynb | 008karan/face-recognition-1 | aec91047b92c80c22dbc1a430f510dc8c825c731 | [
"Apache-2.0"
] | null | null | null | face-recognition.ipynb | 008karan/face-recognition-1 | aec91047b92c80c22dbc1a430f510dc8c825c731 | [
"Apache-2.0"
] | null | null | null | face-recognition.ipynb | 008karan/face-recognition-1 | aec91047b92c80c22dbc1a430f510dc8c825c731 | [
"Apache-2.0"
] | 1 | 2018-07-26T19:51:05.000Z | 2018-07-26T19:51:05.000Z | 667.058488 | 122,612 | 0.94672 | [
[
[
"## Deep face recognition with Keras, Dlib and OpenCV\n\nFace recognition identifies persons on face images or video frames. In a nutshell, a face recognition system extracts features from an input face image and compares them to the features of labeled faces in a database. Comparison is based on a feature similarity metric and the label of the most similar database entry is used to label the input image. If the similarity value is below a certain threshold the input image is labeled as *unknown*. Comparing two face images to determine if they show the same person is known as face verification.\n\nThis notebook uses a deep convolutional neural network (CNN) to extract features from input images. It follows the approach described in [[1]](https://arxiv.org/abs/1503.03832) with modifications inspired by the [OpenFace](http://cmusatyalab.github.io/openface/) project. [Keras](https://keras.io/) is used for implementing the CNN, [Dlib](http://dlib.net/) and [OpenCV](https://opencv.org/) for aligning faces on input images. Face recognition performance is evaluated on a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset which you can replace with your own custom dataset e.g. with images of your family and friends if you want to further experiment with this notebook. After an overview of the CNN architecure and how the model can be trained, it is demonstrated how to:\n\n- Detect, transform, and crop faces on input images. This ensures that faces are aligned before feeding them into the CNN. This preprocessing step is very important for the performance of the neural network.\n- Use the CNN to extract 128-dimensional representations, or *embeddings*, of faces from the aligned input images. In embedding space, Euclidean distance directly corresponds to a measure of face similarity. \n- Compare input embedding vectors to labeled embedding vectors in a database. Here, a support vector machine (SVM) and a KNN classifier, trained on labeled embedding vectors, play the role of a database. Face recognition in this context means using these classifiers to predict the labels i.e. identities of new inputs.\n\n### Environment setup\n\nFor running this notebook, create and activate a new [virtual environment](https://docs.python.org/3/tutorial/venv.html) and install the packages listed in [requirements.txt](requirements.txt) with `pip install -r requirements.txt`. Furthermore, you'll need a local copy of Dlib's face landmarks data file for running face alignment:",
"_____no_output_____"
]
],
[
[
"import bz2\nimport os\n\nfrom urllib.request import urlopen\n\ndef download_landmarks(dst_file):\n url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'\n decompressor = bz2.BZ2Decompressor()\n \n with urlopen(url) as src, open(dst_file, 'wb') as dst:\n data = src.read(1024)\n while len(data) > 0:\n dst.write(decompressor.decompress(data))\n data = src.read(1024)\n\ndst_dir = 'models'\ndst_file = os.path.join(dst_dir, 'landmarks.dat')\n\nif not os.path.exists(dst_file):\n os.makedirs(dst_dir)\n download_landmarks(dst_file)",
"_____no_output_____"
]
],
[
[
"### CNN architecture and training\n\nThe CNN architecture used here is a variant of the inception architecture [[2]](https://arxiv.org/abs/1409.4842). More precisely, it is a variant of the NN4 architecture described in [[1]](https://arxiv.org/abs/1503.03832) and identified as [nn4.small2](https://cmusatyalab.github.io/openface/models-and-accuracies/#model-definitions) model in the OpenFace project. This notebook uses a Keras implementation of that model whose definition was taken from the [Keras-OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace) project. The architecture details aren't too important here, it's only useful to know that there is a fully connected layer with 128 hidden units followed by an L2 normalization layer on top of the convolutional base. These two top layers are referred to as the *embedding layer* from which the 128-dimensional embedding vectors can be obtained. The complete model is defined in [model.py](model.py) and a graphical overview is given in [model.png](model.png). A Keras version of the nn4.small2 model can be created with `create_model()`.",
"_____no_output_____"
]
],
[
[
"from model import create_model\n\nnn4_small2 = create_model()",
"_____no_output_____"
]
],
[
[
"Model training aims to learn an embedding $f(x)$ of image $x$ such that the squared L2 distance between all faces of the same identity is small and the distance between a pair of faces from different identities is large. This can be achieved with a *triplet loss* $L$ that is minimized when the distance between an anchor image $x^a_i$ and a positive image $x^p_i$ (same identity) in embedding space is smaller than the distance between that anchor image and a negative image $x^n_i$ (different identity) by at least a margin $\\alpha$.\n\n$$L = \\sum^{m}_{i=1} \\large[ \\small {\\mid \\mid f(x_{i}^{a}) - f(x_{i}^{p})) \\mid \\mid_2^2} - {\\mid \\mid f(x_{i}^{a}) - f(x_{i}^{n})) \\mid \\mid_2^2} + \\alpha \\large ] \\small_+$$\n\n$[z]_+$ means $max(z,0)$ and $m$ is the number of triplets in the training set. The triplet loss in Keras is best implemented with a custom layer as the loss function doesn't follow the usual `loss(input, target)` pattern. This layer calls `self.add_loss` to install the triplet loss:",
"_____no_output_____"
]
],
[
[
"from keras import backend as K\nfrom keras.models import Model\nfrom keras.layers import Input, Layer\n\n# Input for anchor, positive and negative images\nin_a = Input(shape=(96, 96, 3))\nin_p = Input(shape=(96, 96, 3))\nin_n = Input(shape=(96, 96, 3))\n\n# Output for anchor, positive and negative embedding vectors\n# The nn4_small model instance is shared (Siamese network)\nemb_a = nn4_small2(in_a)\nemb_p = nn4_small2(in_p)\nemb_n = nn4_small2(in_n)\n\nclass TripletLossLayer(Layer):\n def __init__(self, alpha, **kwargs):\n self.alpha = alpha\n super(TripletLossLayer, self).__init__(**kwargs)\n \n def triplet_loss(self, inputs):\n a, p, n = inputs\n p_dist = K.sum(K.square(a-p), axis=-1)\n n_dist = K.sum(K.square(a-n), axis=-1)\n return K.sum(K.maximum(p_dist - n_dist + self.alpha, 0), axis=0)\n \n def call(self, inputs):\n loss = self.triplet_loss(inputs)\n self.add_loss(loss)\n return loss\n\n# Layer that computes the triplet loss from anchor, positive and negative embedding vectors\ntriplet_loss_layer = TripletLossLayer(alpha=0.2, name='triplet_loss_layer')([emb_a, emb_p, emb_n])\n\n# Model that can be trained with anchor, positive negative images\nnn4_small2_train = Model([in_a, in_p, in_n], triplet_loss_layer)",
"_____no_output_____"
]
],
[
[
"During training, it is important to select triplets whose positive pairs $(x^a_i, x^p_i)$ and negative pairs $(x^a_i, x^n_i)$ are hard to discriminate i.e. their distance difference in embedding space should be less than margin $\\alpha$, otherwise, the network is unable to learn a useful embedding. Therefore, each training iteration should select a new batch of triplets based on the embeddings learned in the previous iteration. Assuming that a generator returned from a `triplet_generator()` call can generate triplets under these constraints, the network can be trained with:",
"_____no_output_____"
]
],
[
[
"from data import triplet_generator\n\n# triplet_generator() creates a generator that continuously returns \n# ([a_batch, p_batch, n_batch], None) tuples where a_batch, p_batch \n# and n_batch are batches of anchor, positive and negative RGB images \n# each having a shape of (batch_size, 96, 96, 3).\ngenerator = triplet_generator() \n\nnn4_small2_train.compile(loss=None, optimizer='adam')\nnn4_small2_train.fit_generator(generator, epochs=10, steps_per_epoch=100)\n\n# Please note that the current implementation of the generator only generates \n# random image data. The main goal of this code snippet is to demonstrate \n# the general setup for model training. In the following, we will anyway \n# use a pre-trained model so we don't need a generator here that operates \n# on real training data. I'll maybe provide a fully functional generator\n# later.",
"_____no_output_____"
]
],
[
[
"The above code snippet should merely demonstrate how to setup model training. But instead of actually training a model from scratch we will now use a pre-trained model as training from scratch is very expensive and requires huge datasets to achieve good generalization performance. For example, [[1]](https://arxiv.org/abs/1503.03832) uses a dataset of 200M images consisting of about 8M identities. \n\nThe OpenFace project provides [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/#pre-trained-models) that were trained with the public face recognition datasets [FaceScrub](http://vintage.winklerbros.net/facescrub.html) and [CASIA-WebFace](http://arxiv.org/abs/1411.7923). The Keras-OpenFace project converted the weights of the pre-trained nn4.small2.v1 model to [CSV files](https://github.com/iwantooxxoox/Keras-OpenFace/tree/master/weights) which were then [converted here](face-recognition-convert.ipynb) to a binary format that can be loaded by Keras with `load_weights`:",
"_____no_output_____"
]
],
[
[
"nn4_small2_pretrained = create_model()\nnn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')",
"_____no_output_____"
]
],
[
[
"### Custom dataset",
"_____no_output_____"
],
[
"To demonstrate face recognition on a custom dataset, a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset is used. It consists of 100 face images of [10 identities](images). The metadata for each image (file and identity name) are loaded into memory for later processing.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport os.path\n\nclass IdentityMetadata():\n def __init__(self, base, name, file):\n # dataset base directory\n self.base = base\n # identity name\n self.name = name\n # image file name\n self.file = file\n\n def __repr__(self):\n return self.image_path()\n\n def image_path(self):\n return os.path.join(self.base, self.name, self.file) \n \ndef load_metadata(path):\n metadata = []\n\n for i in os.listdir(path):\n for f in os.listdir(os.path.join(path, i)):\n #checking file extention. Allowing only '.jpg' and '.jpeg' files.\n ext = os.path.splitext(f)[1]\n if ext == '.jpg' or ext == '.jpeg':\n metadata.append(IdentityMetadata(path, i, f))\n return np.array(metadata)\n\nmetadata = load_metadata('images')",
"_____no_output_____"
]
],
[
[
"### Face alignment",
"_____no_output_____"
],
[
"The nn4.small2.v1 model was trained with aligned face images, therefore, the face images from the custom dataset must be aligned too. Here, we use [Dlib](http://dlib.net/) for face detection and [OpenCV](https://opencv.org/) for image transformation and cropping to produce aligned 96x96 RGB face images. By using the [AlignDlib](align.py) utility from the OpenFace project this is straightforward:",
"_____no_output_____"
]
],
[
[
"import cv2\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\nfrom align import AlignDlib\n\n%matplotlib inline\n\ndef load_image(path):\n img = cv2.imread(path, 1)\n # OpenCV loads images with color channels\n # in BGR order. So we need to reverse them\n return img[...,::-1]\n\n# Initialize the OpenFace face alignment utility\nalignment = AlignDlib('models/landmarks.dat')\n\n# Load an image of Jacques Chirac\njc_orig = load_image(metadata[2].image_path())\n\n# Detect face and return bounding box\nbb = alignment.getLargestFaceBoundingBox(jc_orig)\n\n# Transform image using specified face landmark indices and crop image to 96x96\njc_aligned = alignment.align(96, jc_orig, bb, landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)\n\n# Show original image\nplt.subplot(131)\nplt.imshow(jc_orig)\n\n# Show original image with bounding box\nplt.subplot(132)\nplt.imshow(jc_orig)\nplt.gca().add_patch(patches.Rectangle((bb.left(), bb.top()), bb.width(), bb.height(), fill=False, color='red'))\n\n# Show aligned image\nplt.subplot(133)\nplt.imshow(jc_aligned);",
"_____no_output_____"
]
],
[
[
"As described in the OpenFace [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/#pre-trained-models) section, landmark indices `OUTER_EYES_AND_NOSE` are required for model nn4.small2.v1. Let's implement face detection, transformation and cropping as `align_image` function for later reuse.",
"_____no_output_____"
]
],
[
[
"def align_image(img):\n return alignment.align(96, img, alignment.getLargestFaceBoundingBox(img), \n landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)",
"_____no_output_____"
]
],
[
[
"### Embedding vectors",
"_____no_output_____"
],
[
"Embedding vectors can now be calculated by feeding the aligned and scaled images into the pre-trained network.",
"_____no_output_____"
]
],
[
[
"embedded = np.zeros((metadata.shape[0], 128))\n\nfor i, m in enumerate(metadata):\n img = load_image(m.image_path())\n img = align_image(img)\n # scale RGB values to interval [0,1]\n img = (img / 255.).astype(np.float32)\n # obtain embedding vector for image\n embedded[i] = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]",
"_____no_output_____"
]
],
[
[
"Let's verify on a single triplet example that the squared L2 distance between its anchor-positive pair is smaller than the distance between its anchor-negative pair.",
"_____no_output_____"
]
],
[
[
"def distance(emb1, emb2):\n return np.sum(np.square(emb1 - emb2))\n\ndef show_pair(idx1, idx2):\n plt.figure(figsize=(8,3))\n plt.suptitle(f'Distance = {distance(embedded[idx1], embedded[idx2]):.2f}')\n plt.subplot(121)\n plt.imshow(load_image(metadata[idx1].image_path()))\n plt.subplot(122)\n plt.imshow(load_image(metadata[idx2].image_path())); \n\nshow_pair(2, 3)\nshow_pair(2, 12)",
"_____no_output_____"
]
],
[
[
"As expected, the distance between the two images of Jacques Chirac is smaller than the distance between an image of Jacques Chirac and an image of Gerhard Schröder (0.30 < 1.12). But we still do not know what distance threshold $\\tau$ is the best boundary for making a decision between *same identity* and *different identity*.",
"_____no_output_____"
],
[
"### Distance threshold",
"_____no_output_____"
],
[
"To find the optimal value for $\\tau$, the face verification performance must be evaluated on a range of distance threshold values. At a given threshold, all possible embedding vector pairs are classified as either *same identity* or *different identity* and compared to the ground truth. Since we're dealing with skewed classes (much more negative pairs than positive pairs), we use the [F1 score](https://en.wikipedia.org/wiki/F1_score) as evaluation metric instead of [accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html).",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import f1_score, accuracy_score\n\ndistances = [] # squared L2 distance between pairs\nidentical = [] # 1 if same identity, 0 otherwise\n\nnum = len(metadata)\n\nfor i in range(num - 1):\n for j in range(1, num):\n distances.append(distance(embedded[i], embedded[j]))\n identical.append(1 if metadata[i].name == metadata[j].name else 0)\n \ndistances = np.array(distances)\nidentical = np.array(identical)\n\nthresholds = np.arange(0.3, 1.0, 0.01)\n\nf1_scores = [f1_score(identical, distances < t) for t in thresholds]\nacc_scores = [accuracy_score(identical, distances < t) for t in thresholds]\n\nopt_idx = np.argmax(f1_scores)\n# Threshold at maximal F1 score\nopt_tau = thresholds[opt_idx]\n# Accuracy at maximal F1 score\nopt_acc = accuracy_score(identical, distances < opt_tau)\n\n# Plot F1 score and accuracy as function of distance threshold\nplt.plot(thresholds, f1_scores, label='F1 score');\nplt.plot(thresholds, acc_scores, label='Accuracy');\nplt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')\nplt.title(f'Accuracy at threshold {opt_tau:.2f} = {opt_acc:.3f}');\nplt.xlabel('Distance threshold')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"The face verification accuracy at $\\tau$ = 0.56 is 95.7%. This is not bad given a baseline of 89% for a classifier that always predicts *different identity* (there are 980 pos. pairs and 8821 neg. pairs) but since nn4.small2.v1 is a relatively small model it is still less than what can be achieved by state-of-the-art models (> 99%). \n\nThe following two histograms show the distance distributions of positive and negative pairs and the location of the decision boundary. There is a clear separation of these distributions which explains the discriminative performance of the network. One can also spot some strong outliers in the positive pairs class but these are not further analyzed here.",
"_____no_output_____"
]
],
[
[
"dist_pos = distances[identical == 1]\ndist_neg = distances[identical == 0]\n\nplt.figure(figsize=(12,4))\n\nplt.subplot(121)\nplt.hist(dist_pos)\nplt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')\nplt.title('Distances (pos. pairs)')\nplt.legend();\n\nplt.subplot(122)\nplt.hist(dist_neg)\nplt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')\nplt.title('Distances (neg. pairs)')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"### Face recognition",
"_____no_output_____"
],
[
"Given an estimate of the distance threshold $\\tau$, face recognition is now as simple as calculating the distances between an input embedding vector and all embedding vectors in a database. The input is assigned the label (i.e. identity) of the database entry with the smallest distance if it is less than $\\tau$ or label *unknown* otherwise. This procedure can also scale to large databases as it can be easily parallelized. It also supports one-shot learning, as adding only a single entry of a new identity might be sufficient to recognize new examples of that identity.\n\nA more robust approach is to label the input using the top $k$ scoring entries in the database which is essentially [KNN classification](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) with a Euclidean distance metric. Alternatively, a linear [support vector machine](https://en.wikipedia.org/wiki/Support_vector_machine) (SVM) can be trained with the database entries and used to classify i.e. identify new inputs. For training these classifiers we use 50% of the dataset, for evaluation the other 50%.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import LabelEncoder\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import LinearSVC\n\ntargets = np.array([m.name for m in metadata])\n\nencoder = LabelEncoder()\nencoder.fit(targets)\n\n# Numerical encoding of identities\ny = encoder.transform(targets)\n\ntrain_idx = np.arange(metadata.shape[0]) % 2 != 0\ntest_idx = np.arange(metadata.shape[0]) % 2 == 0\n\n# 50 train examples of 10 identities (5 examples each)\nX_train = embedded[train_idx]\n# 50 test examples of 10 identities (5 examples each)\nX_test = embedded[test_idx]\n\ny_train = y[train_idx]\ny_test = y[test_idx]\n\nknn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')\nsvc = LinearSVC()\n\nknn.fit(X_train, y_train)\nsvc.fit(X_train, y_train)\n\nacc_knn = accuracy_score(y_test, knn.predict(X_test))\nacc_svc = accuracy_score(y_test, svc.predict(X_test))\n\nprint(f'KNN accuracy = {acc_knn}, SVM accuracy = {acc_svc}')",
"KNN accuracy = 0.96, SVM accuracy = 0.98\n"
]
],
[
[
"The KNN classifier achieves an accuracy of 96% on the test set, the SVM classifier 98%. Let's use the SVM classifier to illustrate face recognition on a single example.",
"_____no_output_____"
]
],
[
[
"import warnings\n# Suppress LabelEncoder warning\nwarnings.filterwarnings('ignore')\n\nexample_idx = 29\n\nexample_image = load_image(metadata[test_idx][example_idx].image_path())\nexample_prediction = svc.predict([embedded[test_idx][example_idx]])\nexample_identity = encoder.inverse_transform(example_prediction)[0]\n\nplt.imshow(example_image)\nplt.title(f'Recognized as {example_identity}');",
"_____no_output_____"
]
],
[
[
"Seems reasonable :-) Classification results should actually be checked whether (a subset of) the database entries of the predicted identity have a distance less than $\\tau$, otherwise one should assign an *unknown* label. This step is skipped here but can be easily added.\n\n",
"_____no_output_____"
],
[
"### Dataset visualization",
"_____no_output_____"
],
[
"To embed the dataset into 2D space for displaying identity clusters, [t-distributed Stochastic Neighbor Embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) (t-SNE) is applied to the 128-dimensional embedding vectors. Except from a few outliers, identity clusters are well separated.",
"_____no_output_____"
]
],
[
[
"from sklearn.manifold import TSNE\n\nX_embedded = TSNE(n_components=2).fit_transform(embedded)\n\nfor i, t in enumerate(set(targets)):\n idx = targets == t\n plt.scatter(X_embedded[idx, 0], X_embedded[idx, 1], label=t) \n\nplt.legend(bbox_to_anchor=(1, 1));",
"_____no_output_____"
]
],
[
[
"### References\n\n- [1] [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832)\n- [2] [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d095a3c96c1e50f90a7e35196966a19137bcf554 | 10,779 | ipynb | Jupyter Notebook | 11 microsoft software prediction/fun-with-ms-dataset.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | 2 | 2020-01-25T08:31:14.000Z | 2022-03-23T18:24:03.000Z | 11 microsoft software prediction/fun-with-ms-dataset.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 11 microsoft software prediction/fun-with-ms-dataset.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 10,779 | 10,779 | 0.49717 | [
[
[
"dtypes = {\n 'MachineIdentifier': 'category',\n 'ProductName': 'category',\n 'EngineVersion': 'category',\n 'AppVersion': 'category',\n 'AvSigVersion': 'category',\n 'IsBeta': 'int8',\n 'RtpStateBitfield': 'float16',\n 'IsSxsPassiveMode': 'int8',\n 'DefaultBrowsersIdentifier': 'float16',\n 'AVProductStatesIdentifier': 'float32',\n 'AVProductsInstalled': 'float16',\n 'AVProductsEnabled': 'float16',\n 'HasTpm': 'int8',\n 'CountryIdentifier': 'int16',\n 'CityIdentifier': 'float32',\n 'OrganizationIdentifier': 'float16',\n 'GeoNameIdentifier': 'float16',\n 'LocaleEnglishNameIdentifier': 'int8',\n 'Platform': 'category',\n 'Processor': 'category',\n 'OsVer': 'category',\n 'OsBuild': 'int16',\n 'OsSuite': 'int16',\n 'OsPlatformSubRelease': 'category',\n 'OsBuildLab': 'category',\n 'SkuEdition': 'category',\n 'IsProtected': 'float16',\n 'AutoSampleOptIn': 'int8',\n 'PuaMode': 'category',\n 'SMode': 'float16',\n 'IeVerIdentifier': 'float16',\n 'SmartScreen': 'category',\n 'Firewall': 'float16',\n 'UacLuaenable': 'float32',\n 'Census_MDC2FormFactor': 'category',\n 'Census_DeviceFamily': 'category',\n 'Census_OEMNameIdentifier': 'float16',\n 'Census_OEMModelIdentifier': 'float32',\n 'Census_ProcessorCoreCount': 'float16',\n 'Census_ProcessorManufacturerIdentifier': 'float16',\n 'Census_ProcessorModelIdentifier': 'float16',\n 'Census_ProcessorClass': 'category',\n 'Census_PrimaryDiskTotalCapacity': 'float32',\n 'Census_PrimaryDiskTypeName': 'category',\n 'Census_SystemVolumeTotalCapacity': 'float32',\n 'Census_HasOpticalDiskDrive': 'int8',\n 'Census_TotalPhysicalRAM': 'float32',\n 'Census_ChassisTypeName': 'category',\n 'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',\n 'Census_InternalPrimaryDisplayResolutionHorizontal': 'float16',\n 'Census_InternalPrimaryDisplayResolutionVertical': 'float16',\n 'Census_PowerPlatformRoleName': 'category',\n 'Census_InternalBatteryType': 'category',\n 'Census_InternalBatteryNumberOfCharges': 'float32',\n 'Census_OSVersion': 'category',\n 'Census_OSArchitecture': 'category',\n 'Census_OSBranch': 'category',\n 'Census_OSBuildNumber': 'int16',\n 'Census_OSBuildRevision': 'int32',\n 'Census_OSEdition': 'category',\n 'Census_OSSkuName': 'category',\n 'Census_OSInstallTypeName': 'category',\n 'Census_OSInstallLanguageIdentifier': 'float16',\n 'Census_OSUILocaleIdentifier': 'int16',\n 'Census_OSWUAutoUpdateOptionsName': 'category',\n 'Census_IsPortableOperatingSystem': 'int8',\n 'Census_GenuineStateName': 'category',\n 'Census_ActivationChannel': 'category',\n 'Census_IsFlightingInternal': 'float16',\n 'Census_IsFlightsDisabled': 'float16',\n 'Census_FlightRing': 'category',\n 'Census_ThresholdOptIn': 'float16',\n 'Census_FirmwareManufacturerIdentifier': 'float16',\n 'Census_FirmwareVersionIdentifier': 'float32',\n 'Census_IsSecureBootEnabled': 'int8',\n 'Census_IsWIMBootEnabled': 'float16',\n 'Census_IsVirtualDevice': 'float16',\n 'Census_IsTouchEnabled': 'int8',\n 'Census_IsPenCapable': 'int8',\n 'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',\n 'Wdft_IsGamer': 'float16',\n 'Wdft_RegionIdentifier': 'float16',\n 'HasDetections': 'int8'\n }",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport lightgbm as lgb\nimport xgboost as xgb\nimport time, datetime\nfrom sklearn import *\n\ntrain = pd.read_csv('../input/train.csv', iterator=True, chunksize=1_500_000, dtype=dtypes)\ntest = pd.read_csv('../input/test.csv', iterator=True, chunksize=1_000_000, dtype=dtypes)",
"_____no_output_____"
],
[
"gf_defaults = {'col': [], 'ocol':[], 'dcol' : ['EngineVersion', 'AppVersion', 'AvSigVersion', 'OsBuildLab', 'Census_OSVersion']}\none_hot = {}\n\ndef get_features(df, gf_train=False):\n global one_hot\n global gf_defaults\n \n for c in gf_defaults['dcol']:\n for i in range(5):\n df[c + str(i)] = df[c].map(lambda x: str(x).split('.')[i] if len(str(x).split('.'))>i else -1)\n\n col = [c for c in df.columns if c not in ['MachineIdentifier', 'HasDetections']]\n if gf_train:\n for c in col:\n if df[c].dtype == 'O' or df[c].dtype.name == 'category':\n gf_defaults['ocol'].append(c)\n else:\n gf_defaults['col'].append(c)\n one_hot = {c: list(df[c].value_counts().index) for c in gf_defaults['ocol']}\n\n #train and test\n for c in one_hot:\n if len(one_hot[c])>1 and len(one_hot[c]) < 20:\n for val in one_hot[c]:\n df[c+'_oh_' + str(val)] = (df[c].values == val).astype(np.int)\n if gf_train:\n gf_defaults['col'].append(c+'_oh_' + str(val))\n return df[gf_defaults['col']+['MachineIdentifier', 'HasDetections']]",
"_____no_output_____"
],
[
"col = gf_defaults['col']\nmodel = []\nparams = {'objective':'binary', \"boosting\": \"gbdt\", 'learning_rate': 0.02, 'max_depth': -1, \n \"feature_fraction\": 0.8, \"bagging_freq\": 1, \"bagging_fraction\": 0.8 , \"bagging_seed\": 11,\n \"metric\": 'auc', \"lambda_l1\": 0.1, 'num_leaves': 60, 'min_data_in_leaf': 60, \"verbosity\": -1, \"random_state\": 3}\nonline_start = True\nfor df in train:\n if online_start:\n df = get_features(df, True)\n x1, x2, y1, y2 = model_selection.train_test_split(df[col], df['HasDetections'], test_size=0.2, random_state=25)\n model = lgb.train(params, lgb.Dataset(x1, y1), 2500, lgb.Dataset(x2, y2), verbose_eval=100, early_stopping_rounds=200)\n model.save_model('lgb.model')\n else:\n df = get_features(df)\n x1, x2, y1, y2 = model_selection.train_test_split(df[col], df['HasDetections'], test_size=0.2, random_state=25)\n model = lgb.train(params, lgb.Dataset(x1, y1), 2500, lgb.Dataset(x2, y2), verbose_eval=100, early_stopping_rounds=200, init_model='lgb.model')\n model.save_model('lgb.model')\n online_start = False\n print('training...')",
"_____no_output_____"
],
[
"predictions = []\nfor df in test:\n df['HasDetections'] = 0.0\n df = get_features(df)\n df['HasDetections'] = model.predict(df[col], num_iteration=model.best_iteration + 50)\n predictions.append(df[['MachineIdentifier', 'HasDetections']].values)\n print('testing...')",
"_____no_output_____"
],
[
"sub = np.concatenate(predictions)\nsub = pd.DataFrame(sub, columns = ['MachineIdentifier', 'HasDetections'])\nsub.to_csv('submission.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d095c2ca5a5f93258aa666a391414cf075d6549d | 52,922 | ipynb | Jupyter Notebook | examples/MQCNNtrainer.ipynb | GoldbergData/pytorch-forecasting | e2ef3794da5d996c9740d932a4f55269bb4003f2 | [
"MIT"
] | null | null | null | examples/MQCNNtrainer.ipynb | GoldbergData/pytorch-forecasting | e2ef3794da5d996c9740d932a4f55269bb4003f2 | [
"MIT"
] | null | null | null | examples/MQCNNtrainer.ipynb | GoldbergData/pytorch-forecasting | e2ef3794da5d996c9740d932a4f55269bb4003f2 | [
"MIT"
] | 1 | 2020-11-21T21:21:08.000Z | 2020-11-21T21:21:08.000Z | 41.216511 | 399 | 0.473489 | [
[
[
"import pytorch_lightning as pl\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch import optim\nfrom torch.utils.data import DataLoader, random_split\nfrom torch.utils.data.distributed import DistributedSampler\nimport numpy as np\nimport pandas as pd\nimport torch as torch\n\nfrom pathlib import Path\nimport pickle\nimport warnings\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.common import SettingWithCopyWarning\nimport pytorch_lightning as pl\nfrom pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor\nfrom pytorch_lightning.loggers import TensorBoardLogger\nimport torch\n\nfrom pytorch_forecasting import GroupNormalizer, TemporalFusionTransformer, TimeSeriesDataSet\nfrom pytorch_forecasting.data.examples import get_stallion_data\nfrom pytorch_forecasting.metrics import MAE, RMSE, SMAPE, PoissonLoss, QuantileLoss\nfrom pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters\nfrom pytorch_forecasting.utils import profile\nfrom torch.utils.data import Dataset, DataLoader, IterableDataset\n\nwarnings.simplefilter(\"error\", category=SettingWithCopyWarning)",
"_____no_output_____"
],
[
"class MQCNNEncoder(nn.Module):\n def __init__(self, time_step, static_features, timevarying_features, num_static_features, num_timevarying_features):\n super().__init__()\n self.time_step = time_step\n self.static_features = static_features\n self.timevarying_features = timevarying_features\n self.num_static_features = num_static_features\n self.num_timevarying_features = num_timevarying_features\n self.static = StaticLayer(in_channels = self.num_static_features,\n time_step = self.time_step,\n static_features = self.static_features)\n\n self.conv = ConvLayer(in_channels = self.num_timevarying_features,\n timevarying_features = self.timevarying_features,\n time_step = self.time_step)\n\n def forward(self, x):\n x_s = self.static(x)\n x_t = self.conv(x)\n return torch.cat((x_s, x_t), axis = 2)\n\n\nclass MQCNNDecoder(nn.Module):\n \"\"\"Decoder implementation for MQCNN\n\n Parameters\n ----------\n config\n Configurations\n ltsp : list of tuple of int\n List of lead-time / span tuples to make predictions for\n expander : HybridBlock\n Overrides default future data expander if not None\n hf1 : HybridBlock\n Overrides default global future layer if not None\n hf2 : HybridBlock\n Overrides default local future layer if not None\n ht1 : HybridBlock\n Overrides horizon-specific layer if not None\n ht2 : HybridBlock\n Overrides horizon-agnostic layer if not None\n h : HybridBlock\n Overrides local MLP if not None\n span_1 : HybridBlock\n Overrides span 1 layer if not None\n span_N : HybridBlock\n Overrides span N layer if not None\n\n Inputs:\n - **xf** : Future data of shape\n (batch_size, Trnn + lead_future - 1, num_future_ts_features)\n - **encoded** : Encoded input tensor of shape\n (batch_size, Trnn, n) for some n\n Outputs:\n - **pred_1** : Span 1 predictions of shape\n (batch_size, Trnn, Tpred * num_quantiles)\n - **pred_N** : Span N predictions of shape\n (batch_size, Trnn, span_N_count * num_quantiles)\n\n In both outputs, the last dimensions has the predictions grouped\n together by quantile. For example, the quantiles are P10 and P90\n then the span 1 predictions will be:\n Tpred_0_p50, Tpred_1_p50, ..., Tpred_N_p50, Tpred_0_p90,\n Tpred_1_p90, ... 
Tpred_N_90\n \n \n \"\"\"\n\n def __init__(self, time_step, lead_future, ltsp, future_information, num_future_features,\n global_hidden_units, horizon_specific_hidden_units, horizon_agnostic_hidden_units,\n local_mlp_hidden_units, local_mlp_output_units,\n num_quantiles=2, expander=None, hf1=None, hf2=None,\n ht1=None, ht2=None, h=None, span_1=None, span_N=None,\n **kwargs):\n super(MQCNNDecoder, self).__init__(**kwargs)\n self.future_features_count = num_future_features\n self.future_information = future_information\n self.time_step = time_step\n self.lead_future = lead_future\n self.ltsp = ltsp\n self.num_quantiles = num_quantiles\n self.global_hidden_units = global_hidden_units\n self.horizon_specific_hidden_units = horizon_specific_hidden_units\n self.horizon_agnostic_hidden_units = horizon_agnostic_hidden_units\n self.local_mlp_hidden_units = local_mlp_hidden_units\n self.local_mlp_output_units = local_mlp_output_units\n\n # We assume that Tpred == span1_count.\n # Tpred = forecast_end_index\n# self.Tpred = max(map(lambda x: x[0] + x[1], self.ltsp))\n self.Tpred = 6\n# span1_count = len(list(filter(lambda x: x[1] == 1, self.ltsp)))\n span1_count = 1\n #print(self.Tpred, span1_count)\n #assert span1_count == self.Tpred, f\"Number of span 1 horizons: {span1_count}\\\n #does not match Tpred: {self.Tpred}\" \n\n# self.spanN_count = len(list(filter(lambda x: x[1] != 1, self.ltsp)))\n self.spanN_count = 1\n # Setting default components:\n if expander is None:\n expander = ExpandLayer(self.time_step, self.lead_future, self.future_information)\n if hf1 is None:\n hf1 = GlobalFutureLayer(self.time_step, self.lead_future, self.future_features_count, out_channels=self.global_hidden_units)\n if ht1 is None:\n ht1 = HorizonSpecific(self.Tpred, self.time_step, num = self.horizon_specific_hidden_units)\n if ht2 is None:\n ht2 = HorizonAgnostic(self.horizon_agnostic_hidden_units, self.lead_future)\n if h is None:\n h = LocalMlp(self.local_mlp_hidden_units, self.local_mlp_output_units)\n if span_1 is None:\n span_1 = Span1(self.time_step, self.lead_future, self.num_quantiles)\n if span_N is None:\n span_N = SpanN(self.time_step, self.lead_future, self.num_quantiles, self.spanN_count)\n\n self.expander = expander\n self.hf1 = hf1\n self.hf2 = hf2\n self.ht1 = ht1\n self.ht2 = ht2\n self.h = h\n self.span_1 = span_1\n self.span_N = span_N\n\n def forward(self, x, encoded):\n xf = x['future_information']\n expanded = self.expander(xf)\n hf1 = self.hf1(expanded)\n hf2 = F.relu(expanded)\n \n ht = torch.cat((encoded, hf1), dim=-1)\n ht1 = self.ht1(ht)\n ht2 = self.ht2(ht)\n h = torch.cat((ht1, ht2, hf2), dim=-1)\n h = self.h(h)\n return self.span_1(h)#, self.span_N(h)\n\n# submodule\n\nclass StaticLayer(nn.Module):\n def __init__(self, in_channels, time_step, static_features, out_channels = 30, dropout = 0.4):\n super().__init__()\n self.time_step = time_step\n #self.static_features = static_features\n self.dropout = nn.Dropout(dropout)\n self.in_channels = in_channels\n self.out_channels = out_channels\n self.static = nn.Linear(self.in_channels, self.out_channels)\n\n def forward(self, x):\n x = x['static_features'][:,:1,:]\n x = self.dropout(x)\n x = self.static(x)\n return x.repeat(1, self.time_step, 1)\n\nclass ConvLayer(nn.Module):\n def __init__(self, time_step, timevarying_features, in_channels, out_channels = 30, kernel_size = 2):\n super().__init__()\n self.in_channels = in_channels\n self.out_channels = out_channels\n self.kernel_size = kernel_size\n self.timevarying_features = 
timevarying_features\n self.time_step = time_step\n\n self.c1 = nn.Conv1d(self.in_channels, self.out_channels, self.kernel_size, dilation = 1)\n self.c2 = nn.Conv1d(self.out_channels, self.out_channels, self.kernel_size, dilation = 2)\n self.c3 = nn.Conv1d(self.out_channels, self.out_channels, self.kernel_size, dilation = 4)\n self.c4 = nn.Conv1d(self.out_channels, self.out_channels, self.kernel_size, dilation = 8)\n self.c5 = nn.Conv1d(self.out_channels, self.out_channels, self.kernel_size, dilation = 16)\n #self.c6 = nn.Conv1d(self.out_channels, self.out_channels, self.kernel_size, dilation = 32)\n\n def forward(self, x):\n x_t = x['timevarying_features'][:, :self.time_step, :]\n x_t = x_t.permute(0, 2, 1)\n x_t = F.pad(x_t, (1,0), \"constant\", 0)\n x_t = self.c1(x_t)\n x_t = F.pad(x_t, (2,0), \"constant\", 0)\n x_t = self.c2(x_t)\n x_t = F.pad(x_t, (4,0), \"constant\", 0)\n x_t = self.c3(x_t)\n x_t = F.pad(x_t, (8,0), \"constant\", 0)\n x_t = self.c4(x_t)\n x_t = F.pad(x_t, (16,0), \"constant\", 0)\n x_t = self.c5(x_t)\n \n return x_t.permute(0, 2, 1)\n\nclass ExpandLayer(nn.Module):\n \"\"\"Expands the dimension referred to as `expand_axis` into two\n dimensions by applying a sliding window. For example, a tensor of\n shape (1, 4, 2) as follows:\n\n [[[0. 1.]\n [2. 3.]\n [4. 5.]\n [6. 7.]]]\n\n where `expand_axis` = 1 and `time_step` = 3 (number of windows) and\n `lead_future` = 2 (window length) will become:\n\n [[[[0. 1.]\n [2. 3.]]\n\n [[2. 3.]\n [4. 5.]]\n\n [[4. 5.]\n [6. 7.]]]]\n\n Used for expanding future information tensors\n\n Parameters\n ----------\n time_step : int\n Length of the time sequence (number of windows)\n lead_future : int\n Number of future time points (window length)\n expand_axis : int\n Axis to expand\"\"\"\n\n def __init__(self, time_step, lead_future, future_information, **kwargs):\n super(ExpandLayer, self).__init__(**kwargs)\n \n self.time_step = time_step\n self.future_information = future_information\n self.lead_future = lead_future\n\n def forward(self, x):\n\n # First create a matrix of indices, which we will use to slice\n # `input` along `expand_axis`. For example, for time_step=3 and\n # lead_future=2,\n # idx = [[0. 1.]\n # [1. 2.]\n # [2. 3.]]\n # We achieve this by doing a broadcast add of\n # [[0.] [1.] [2.]] and [[0. 
1.]]\n idx = torch.add(torch.arange(self.time_step).unsqueeze(axis = 1),\n torch.arange(self.lead_future).unsqueeze(axis = 0))\n # Now we slice `input`, taking elements from `input` that correspond to\n # the indices in `idx` along the `expand_axis` dimension\n return x[:, idx, :]\n\n \nclass GlobalFutureLayer(nn.Module):\n def __init__(self, time_step, lead_future, future_features_count, out_channels = 30):\n super().__init__()\n self.time_step = time_step\n self.lead_future = lead_future\n self.future_features_count = future_features_count\n self.out_channels = out_channels\n\n self.l1 = nn.Linear(self.lead_future * self.future_features_count, out_channels)\n \n def forward(self, x):\n x = x.contiguous().view(-1, self.time_step, self.lead_future * self.future_features_count)\n \n return self.l1(x)\n \nclass HorizonSpecific(nn.Module):\n def __init__(self, Tpred, time_step, num = 20):\n super().__init__()\n self.Tpred = Tpred\n self.time_step = time_step\n self.num = num\n \n def forward(self, x):\n x = nn.Linear(x.size(-1), self.Tpred * self.num)(x)\n x = F.relu(x)\n\n return x.view(-1, self.time_step, self.Tpred, 20)\n\nclass HorizonAgnostic(nn.Module):\n def __init__(self, out_channels, lead_future):\n super().__init__()\n self.out_channels = out_channels\n self.lead_future = lead_future\n \n def forward(self, x):\n x = nn.Linear(x.size(-1), self.out_channels)(x)\n x = F.relu(x)\n x = x.unsqueeze(axis = 2)\n x = x.repeat(1,1, self.lead_future, 1)\n\n return x\n \nclass LocalMlp(nn.Module):\n def __init__(self, hidden, output):\n super().__init__()\n self.hidden = hidden\n self.output = output\n \n def forward(self, x):\n x = nn.Linear(x.size(-1), self.hidden)(x)\n x = F.relu(x)\n x = nn.Linear(self.hidden, self.output)(x)\n x = F.relu(x)\n\n return x\n\n\nclass Span1(nn.Module):\n def __init__(self, time_step, lead_future, num_quantiles):\n super().__init__()\n self.time_step = time_step\n self.lead_future = lead_future\n self.num_quantiles = num_quantiles\n \n def forward(self, x):\n x = nn.Linear(x.size(-1), self.num_quantiles)(x)\n x = F.relu(x.contiguous().view(-1, x.size(-2), x.size(-1)))\n x = x.view(-1, self.time_step, self.lead_future, self.num_quantiles)\n x = x.view(-1, self.time_step, self.lead_future*self.num_quantiles)\n\n return x\n\n\nclass SpanN(nn.Module):\n def __init__(self, time_step, lead_future, num_quantiles, spanN_count):\n super().__init__()\n self.time_step = time_step\n self.lead_future = lead_future\n self.num_quantiles = num_quantiles\n self.spanN_count = spanN_count\n \n def forward(self, x):\n x = x.permute(0, 1, 3, 2)\n x = x.contiguous().view(-1, self.time_step, x.size(-2) * x.size(-1))\n\n x = nn.Linear(x.size(-1), self.spanN_count * self.num_quantiles)(x)\n\n return x",
"_____no_output_____"
],
[
"class MQCNNModel(pl.LightningModule):\n def __init__(self, static_features, timevarying_features, future_information, time_step, ltsp, lead_future,\n global_hidden_units, horizon_specific_hidden_units,\n horizon_agnostic_hidden_units, local_mlp_hidden_units, local_mlp_output_units):\n super(MQCNNModel, self).__init__()\n #self.input_tensor = input_tensor\n self.time_step = time_step\n self.static_features = static_features\n self.num_static_features = len(static_features)\n self.timevarying_features = timevarying_features\n self.num_timevarying_features = len(timevarying_features)\n self.future_information = future_information\n self.num_future_features = len(future_information)\n self.ltsp = ltsp\n self.lead_future = lead_future\n self.global_hidden_units = global_hidden_units\n self.horizon_specific_hidden_units = horizon_specific_hidden_units\n self.horizon_agnostic_hidden_units = horizon_agnostic_hidden_units\n self.local_mlp_hidden_units = local_mlp_hidden_units\n self.local_mlp_output_units = local_mlp_output_units\n\n self.encoder = MQCNNEncoder(self.time_step, self.static_features, self.timevarying_features,\n self.num_static_features, self.num_timevarying_features)\n \n self.decoder = MQCNNDecoder(self.time_step, self.lead_future, self.ltsp, self.future_information,\n self.num_future_features, self.global_hidden_units, self.horizon_specific_hidden_units,\n self.horizon_agnostic_hidden_units, self.local_mlp_hidden_units,\n self.local_mlp_output_units)\n \n\n def forward(self, x):\n encoding = self.encoder(x)\n output = self.decoder(x, encoding)\n\n return output\n\n def configure_optimizers(self):\n optimizer = optim.SGD(self.parameters(), lr = 1e-2)\n\n return optimizer\n\n def training_step(self, batch, batch_idx):\n x, y = batch, batch['targets']\n \n quantiles = torch.tensor([0.5, 0.9]).view(2, 1)\n\n outputs = self(x)\n\n loss = self.loss(outputs, y, quantiles)\n \n print(f'loss: {loss}')\n \n pbar = {'train_loss': loss[0] + loss[1]}\n \n train_loss = loss[0] + loss[1]\n\n return {\"loss\": train_loss, \"progress_bar\": pbar}\n\n \n def loss(self, outputs, targets, quantiles):\n l = outputs - targets.repeat_interleave(2, dim=2)\n \n p50 = torch.mul(torch.where(l > torch.zeros(l.shape), l, torch.zeros_like(l)), 1 - quantiles[0]) + \\\n torch.mul(torch.where(l < torch.zeros(l.shape), -l, torch.zeros_like(l)), quantiles[0])\n \n p90 = torch.mul(torch.where(l > torch.zeros(l.shape), l, torch.zeros_like(l)), 1 - quantiles[1]) + \\\n torch.mul(torch.where(l < torch.zeros(l.shape), -l, torch.zeros_like(l)), quantiles[1])\n \n p50 = p50.mean()\n p90 = p90.mean()\n \n print(f' p50: {p50}, p90: {p90}')\n \n return p50, p90",
"_____no_output_____"
],
[
"class InstockMask(nn.Module):\n def __init__(self, time_step, ltsp, min_instock_ratio = 0.5, eps_instock_dph = 1e-3,\n eps_total_dph = 1e-3, **kwargs):\n\n super(InstockMask, self).__init__(**kwargs)\n\n if not eps_total_dph > 0:\n raise ValueError(f\"epsilon_total_dph of {eps_total_dph} is invalid! \\\n This parameter must be > 0 to avoid division by 0.\")\n\n self.min_instock_ratio = min_instock_ratio\n self.eps_instock_dph = eps_instock_dph\n self.eps_total_dph = eps_total_dph\n\n def forward(self, demand, total_dph, instock_dph):\n\n if total_dph is not None and instock_dph is not None:\n\n total_dph = total_dph + self.eps_total_dph\n instock_dph = instock_dph + self.eps_instock_dph\n instock_rate = torch.round(instock_dph/total_dph)\n\n demand = torch.where(instock_rate >= self.min_instock_ratio, demand,\n -torch.ones_like(demand))\n\n return demand\n\n\nclass _BaseInstockMask(nn.Module): \n def __init__(self, time_step, ltsp, min_instock_ratio = 0.5, eps_total_dph = 1e-3,\n eps_instock_dph = 1e-3, **kwargs):\n\n super(_BaseInstockMask, self).__init__(**kwargs)\n\n if not eps_total_dph > 0:\n raise ValueError(f\"epsilon_total_dph of {eps_total_dph} is invalid! \\\n This parameter must be > 0 to avoid division by 0.\")\n\n self.instock_mask = InstockMask(time_step, ltsp, min_instock_ratio=min_instock_ratio,\n eps_instock_dph = eps_instock_dph, \n eps_total_dph = eps_total_dph)\n\n def forward(self):\n raise NotImplementedError\n\nclass HorizonMask(_BaseInstockMask):\n def __init__(self, time_step, ltsp, min_instock_ratio = 0.5, eps_instock_dph=1e-3,\n eps_total_dph=1e-3, **kwargs):\n\n super(HorizonMask, self).__init__(time_step, ltsp, \n min_instock_ratio = min_instock_ratio,\n eps_instock_dph=eps_instock_dph,\n eps_total_dph=eps_total_dph, **kwargs)\n \n self.mask_idx = _compute_horizon_mask(time_step, ltsp)\n\n def forward(self, demand, total_dph, instock_dph):\n demand_instock = self.instock_mask(demand, total_dph, instock_dph).float()\n \n mask = mask_idx.repeat(demand_instock.shape[0], 1, 1)\n \n print(f'demand shape: {demand_instock.shape}, mask shape: {mask_idx.shape}')\n masked_demand = torch.where(mask, demand_instock, \n -torch.ones_like(demand_instock))\n\n return masked_demand\n \n\ndef _compute_horizon_mask(time_step, ltsp):\n\n horizon = np.array(list(map(lambda _ltsp: _ltsp[0] + _ltsp[1], ltsp))).\\\n reshape((1, len(ltsp)))\n\n forecast_date_range = np.arange(time_step).reshape((time_step, 1))\n relative_distance = forecast_date_range + horizon\n mask = relative_distance < time_step\n return torch.tensor(mask)",
"_____no_output_____"
],
[
"class DemandExpander(nn.Module):\n\n def __init__(self, time_step, ltsp, normalize = True,\n mask_func = HorizonMask, min_instock_ratio=0.5,\n eps_instock_dph = 1e-3, eps_total_dph = 1e-3, **kwargs):\n\n super(DemandExpander, self).__init__(**kwargs)\n if not eps_total_dph > 0:\n raise ValueError(\"eps_total_dph can't be 0\")\n\n Tpred = max(map(lambda x: x[0] + x[1], ltsp))\n pos_sp1 = [i for i, x in enumerate(ltsp) if x[1] == 1]\n pos_spN = [i for i, x in enumerate(ltsp) if x[1] != 1]\n\n self.pos_sp1 = pos_sp1\n self.pos_spN = pos_spN\n\n self.ltsp_kernel = _ltsp_kernel(Tpred, ltsp, normalize)\n self.ltsp_idx = _ltsp_idx(time_step, Tpred)\n self.demand_mask = mask_func(time_step, ltsp, min_instock_ratio=min_instock_ratio,\n eps_instock_dph=eps_instock_dph,\n eps_total_dph = eps_total_dph)\n\n def forward(self, demand):\n ltsp_demand = _apply_ltsp_kernel(demand, self.ltsp_idx, self.ltsp_kernel)\n\n #ltsp_idph = _apply_ltsp_kernel(instock_dph, self.ltsp_idx, self.ltsp_kernel)\n #ltsp_dph = _apply_ltsp_kernel(total_dph, self.ltsp_idx, self.ltsp_kernel)\n\n #masked_demand = self.demand_mask(ltsp_demand, ltsp_dph, ltsp_idph)\n masked_demand_sp1 = ltsp_demand[:, :, self.pos_sp1]\n masked_demand_spN = ltsp_demand[:, :, self.pos_spN]\n\n return masked_demand_sp1#, masked_demand_spN\n\ndef _ltsp_idx(time_step, Tpred):\n idx = np.arange(time_step).reshape(-1, 1) + np.arange(Tpred)\n return torch.tensor(idx)\n\ndef _ltsp_kernel(Tpred, ltsp, normalize = True):\n \n ltsp_count = len(ltsp)\n kernel = np.zeros((Tpred, ltsp_count), dtype = 'float32')\n for i in range(len(ltsp)):\n lead_time = ltsp[i][0]\n span = ltsp[i][1]\n if normalize:\n kernel[lead_time:lead_time + span, i] = 1.0/span\n else:\n kernel[lead_time:lead_time + span, i] = 1.0\n\n return torch.tensor(kernel)\n\ndef _apply_ltsp_kernel(s, ltsp_idx, ltsp_kernel):\n s_ltsp = s[:, ltsp_idx].float()\n \n return s_ltsp @ ltsp_kernel \n\n\nclass Dataset(Dataset):\n \n def __init__(self, data, static_features, timevarying_features, future_information, \n target, train_time_step, predict_time_step, num_quantiles, ltsp, mask_func):\n \n self.data = data\n self.train_time_step = train_time_step\n self.predict_time_step = predict_time_step\n self.num_quantiles = num_quantiles\n self.ltsp = ltsp\n self.mask_func = mask_func\n \n self.static_features = torch.tensor(self.data.\\\n loc[self.data['time_idx'] < self.train_time_step][static_features].\\\n to_numpy(np.float64).reshape(-1, self.train_time_step, len(static_features))).float()\n \n self.timevarying_features = torch.tensor(self.data.\\\n loc[self.data['time_idx'] < self.train_time_step][timevarying_features].\\\n to_numpy(np.float64).reshape(-1, self.train_time_step, len(timevarying_features))).float()\n \n self.future_information = torch.tensor(self.data[future_information].\\\n to_numpy(np.float64).reshape(-1, (self.train_time_step + self.predict_time_step), len(future_information))).float()\n \n self.targets = torch.tensor(self.data[target].\\\n to_numpy(np.float64).reshape(-1, (self.train_time_step + self.predict_time_step))).float()\n \n self.expander = DemandExpander(self.train_time_step,\n self.ltsp,\n mask_func = self.mask_func)\n \n self.targets = self.expander(self.targets)\n \n def __len__(self):\n \n return self.timevarying_features.shape[1]\n \n def __getitem__(self, idx):\n \n static_features = self.static_features[idx, :, :]\n timevarying_features = self.timevarying_features[idx, :, :]\n future_information = self.future_information[idx, :, :]\n targets = self.targets[idx, :, :]\n 
\n return dict(static_features = static_features, timevarying_features = timevarying_features,\n future_information = future_information, targets = targets)\n ",
"_____no_output_____"
],
[
"data = get_stallion_data()",
"_____no_output_____"
],
[
"# add time index\ndata[\"time_idx\"] = data[\"date\"].dt.year * 12 + data[\"date\"].dt.month\n\ndata[\"time_idx\"] -= data[\"time_idx\"].min()\n# add additional features\n\n# show sample data\ndata.sample(10, random_state=521)",
"_____no_output_____"
],
[
"data['month'] = data['date'].dt.month\n\ndata_sorted = data.sort_values(['agency', 'sku', 'date'])\n\ndata_sorted = pd.get_dummies(data_sorted, columns=['month'])\n\nstatic_cols=['avg_population_2017']\ntimevarying_cols=['volume', 'industry_volume', 'soda_volume', 'price_regular']\nfuture_cols=['month_1', 'month_2','month_3', 'month_4', 'month_5', 'month_6', 'month_7', 'month_8',\n 'month_9', 'month_10', 'month_11', 'month_12', 'price_regular']",
"_____no_output_____"
],
[
"ltsp = [(i, 1) for i in range(6)]\nlen(ltsp)",
"_____no_output_____"
],
[
"training = Dataset(data_sorted,\n static_features = static_cols,\n timevarying_features = timevarying_cols,\n future_information = future_cols,\n target=['volume'], \n train_time_step=54, \n predict_time_step=6,\n num_quantiles = 2,\n ltsp = ltsp,\n mask_func = HorizonMask)",
"_____no_output_____"
],
[
"MQCNN = MQCNNModel(static_cols, timevarying_cols, future_cols, \n 54, ltsp, 6, 50, 20, 100, 50, 10)",
"_____no_output_____"
],
[
"trainer = pl.Trainer(max_epochs = 10)",
"GPU available: False, used: False\nTPU available: False, using: 0 TPU cores\n"
],
[
"train_loader = DataLoader(training, 32)",
"_____no_output_____"
],
[
"trainer.fit(MQCNN, train_loader)",
"\n | Name | Type | Params\n-----------------------------------------\n0 | encoder | MQCNNEncoder | 7 K \n1 | decoder | MQCNNDecoder | 3 K \n/Users/abkatoch/opt/anaconda3/envs/forecastingenv/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n warnings.warn(*args, **kwargs)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d095c62125a507db545f99f5f3158ad993ce0f71 | 257,780 | ipynb | Jupyter Notebook | fMRI/Nipype/study_notebook/.ipynb_checkpoints/nipype_basicConcept_workflow_notCompleted-checkpoint.ipynb | JiyangJiang/FUTURE | 50c4f5db4984398117bcf280d7048f9273577b38 | [
"MIT"
] | 2 | 2021-06-12T15:53:29.000Z | 2022-01-02T01:41:09.000Z | fMRI/Nipype/study_notebook/.ipynb_checkpoints/nipype_basicConcept_workflow_notCompleted-checkpoint.ipynb | JiyangJiang/FUTURE | 50c4f5db4984398117bcf280d7048f9273577b38 | [
"MIT"
] | null | null | null | fMRI/Nipype/study_notebook/.ipynb_checkpoints/nipype_basicConcept_workflow_notCompleted-checkpoint.ipynb | JiyangJiang/FUTURE | 50c4f5db4984398117bcf280d7048f9273577b38 | [
"MIT"
] | null | null | null | 486.377358 | 55,016 | 0.93811 | [
[
[
"import numpy as np\nimport nibabel as nb\nimport matplotlib.pyplot as plt\n\n# helper function to plot 3D NIfTI\ndef plot_slice (fname):\n \n # Load image\n img = nb.load (fname)\n data = img.get_data ()\n \n # cut in the middle of brain\n cut = int (data.shape[-1]/2) + 10\n \n # plot data\n plt.imshow (np.rot90 (data[...,cut]), cmap = 'gray')\n plt.gca().set_axis_off()",
"_____no_output_____"
],
[
"# skull strip\n# smooth original img\n# mask smoothed img\n\n# Example 1 : shell command line execution\n# ----------------------------------------",
"_____no_output_____"
],
[
"%%bash\n\nANAT_NAME=sub-2019A_T1w\nANAT=\"/home/jiyang/Work/sub-2019A/anat/${ANAT_NAME}\"\n\nbet ${ANAT} /home/jiyang/Work/sub-2019A/derivatives/${ANAT_NAME}_brain -m -f 0.5\n\nfslmaths ${ANAT} -s 2 /home/jiyang/Work/sub-2019A/derivatives/${ANAT_NAME}_smooth\n\nfslmaths /home/jiyang/Work/sub-2019A/derivatives/${ANAT_NAME}_smooth \\\n -mas /home/jiyang/Work/sub-2019A/derivatives/${ANAT_NAME}_brain_mask \\\n /home/jiyang/Work/sub-2019A/derivatives/${ANAT_NAME}_smooth_mask",
"_____no_output_____"
],
[
"# plot\n\nf = plt.figure (figsize=(12,4))\n\nfor i, img in enumerate(['T1w', 'T1w_smooth', 'T1w_brain_mask', 'T1w_smooth_mask']):\n f.add_subplot (1, 4, i + 1)\n if i == 0:\n plot_slice ('/home/jiyang/Work/sub-2019A/anat/sub-2019A_%s.nii.gz' % img)\n else:\n plot_slice ('/home/jiyang/Work/sub-2019A/derivatives/sub-2019A_%s.nii.gz' % img)\n plt.title(img)",
"_____no_output_____"
],
[
"# Example 2 : interface execution\nimport matplotlib.pyplot as plt\nfrom nipype.interfaces import fsl\n\nskullstrip = fsl.BET (in_file = \"/Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz\",\n out_file = \"/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain.nii.gz\",\n mask = True)\nskullstrip.run()\n\nsmooth = fsl.IsotropicSmooth (in_file = \"/Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz\",\n out_file = \"/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth.nii.gz\",\n fwhm = 4)\nsmooth.run()\n\nmask = fsl.ApplyMask (in_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth.nii.gz',\n out_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth_brain.nii.gz',\n mask_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain_mask.nii.gz')\nmask.run()\n\n# visualise\nf = plt.figure (figsize = (12,4))\nfor i, img in enumerate (['T1w', 'T1w_smooth',\n 'T1w_brain_mask', 'T1w_smooth_brain']):\n f.add_subplot (1, 4, i + 1)\n if i == 0:\n plot_slice ('/Users/jiyang/Desktop/test/anat/sub-3625A_%s.nii.gz' % img)\n else:\n plot_slice ('/Users/jiyang/Desktop/test/derivatives/sub-3625A_%s.nii.gz' % img)\n plt.title (img)",
"_____no_output_____"
],
[
"# Example 2 can be simplified\n\nskullstrip = fsl.BET (in_file = \"/Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz\",\n out_file = \"/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain.nii.gz\",\n mask = True)\nbet_result = skullstrip.run()\n\nsmooth = fsl.IsotropicSmooth (in_file = skullstrip.inputs.in_file,\n out_file = \"/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth.nii.gz\",\n fwhm = 4)\nsmooth_result = smooth.run()\n\n# # There is a bug here bet_result.outputs.mask_file point to cwd\n# mask = fsl.ApplyMask (in_file = smooth_result.outputs.out_file,\n# mask_file = bet_result.outputs.mask_file,\n# out_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth_brain.nii.gz')\n# mask_result = mask.run()\n\n\nmask = fsl.ApplyMask (in_file = smooth_result.outputs.out_file,\n mask_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain_mask.nii.gz',\n out_file = '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth_brain.nii.gz')\nmask_result = mask.run()\n\n\n# visualise\nf = plt.figure (figsize = (12, 4))\nfor i, img in enumerate ([skullstrip.inputs.in_file, smooth_result.outputs.out_file,\n '/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain_mask.nii.gz',\n mask_result.outputs.out_file]):\n f.add_subplot (1, 4, i + 1)\n plot_slice (img)\n plt.title (img.split('/')[-1].split('.')[0].split('A_')[-1])",
"_____no_output_____"
],
[
"skullstrip.inputs.in_file\nsmooth_result.outputs.out_file\nbet_result.outputs.mask_file\nbet_result.outputs # bug with bet_result.outputs.mask_file",
"_____no_output_____"
],
[
"# Example 3 : Workflow execution\nfrom nipype import Node, Workflow\nfrom nipype.interfaces import fsl\nfrom os.path import abspath # passing absolute path is clearer\n\nin_file = abspath ('/Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz')\n# workflow will take care of out_file\n# only need to specify the very original in_file\n# bet_out_file = abspath ('/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_brain.nii.gz')\n# smooth_out_file = abspath ('/Users/jiyang/Desktop/test/derivatives/sub-3625A_T1w_smooth.nii.gz')\n\nskullstrip = Node (fsl.BET (in_file = in_file,\n mask = True),\n name = 'skullstrip')\n\nsmooth = Node (fsl.IsotropicSmooth (in_file = in_file,\n fwhm = 4),\n name = 'smooth')\n\nmask = Node (fsl.ApplyMask (), name = 'mask')\n\n# Initiate a workflow\nwf = Workflow (name = 'smoothflow', base_dir = '/Users/jiyang/Desktop/test/derivatives')",
"_____no_output_____"
],
[
"# Two ways to connect nodes\n#\n# Way 1\n# connect (source_node, \"source_node_output\", dest_node, \"dest_node_input\")\n#\n# Way 2\n# connect ([(source_node, dest_node, [(\"source_node_output1\", \"dest_node_input1\"),\n# (\"source_node_output2\", \"dest_node_input2\")\n# ]\n# )])\n#\n#\n# Way 1 can establish one connection at a time. Way 2 can establish multiple connections btw two nodes at once.\n#\n# In either case, four pieces of info are needed :\n# - source node object\n# - output field from source node\n# - dest node object\n# - input field from dest node\n\n# Way 1\nwf.connect (skullstrip, \"mask_file\", mask, \"mask_file\")\n\n# Way 2\nwf.connect ([(smooth, mask, [(\"out_file\", \"in_file\")])])",
"_____no_output_____"
],
[
"# display workflow\nwf.write_graph ('workflow_graph.dot')\nfrom IPython.display import Image\nImage (filename = '/Users/jiyang/Desktop/test/derivatives/smoothflow/workflow_graph.png')",
"190129-11:49:28,703 nipype.workflow INFO:\n\t Generated workflow graph: /Users/jiyang/Desktop/test/derivatives/smoothflow/workflow_graph.png (graph2use=hierarchical, simple_form=True).\n"
],
[
"wf.write_graph (graph2use = 'flat')\nfrom IPython.display import Image\nImage (filename = \"/Users/jiyang/Desktop/test/derivatives/smoothflow/graph_detailed.png\")",
"190129-11:55:09,924 nipype.workflow INFO:\n\t Generated workflow graph: /Users/jiyang/Desktop/test/derivatives/smoothflow/graph.png (graph2use=flat, simple_form=True).\n"
],
[
"# execute\nwf.base_dir = '/Users/jiyang/Desktop/test/derivatives'\nwf.run()\n\n# Note that specifying base_dir is very important (and is why we needed to use absolute paths above),\n# because otherwise all outputs would be saved somewhere in temporary files.\n# Unlike interfaces which by default split out results to local direcotries, Workflow engine execute\n# things off in its own directory hierarchy.",
"190129-12:02:00,225 nipype.workflow INFO:\n\t Workflow smoothflow settings: ['check', 'execution', 'logging', 'monitoring']\n190129-12:02:00,230 nipype.workflow INFO:\n\t Running serially.\n190129-12:02:00,232 nipype.workflow INFO:\n\t [Node] Setting-up \"smoothflow.smooth\" in \"/Users/jiyang/Desktop/test/derivatives/smoothflow/smooth\".\n190129-12:02:00,241 nipype.workflow INFO:\n\t [Node] Running \"smooth\" (\"nipype.interfaces.fsl.maths.IsotropicSmooth\"), a CommandLine Interface with command:\nfslmaths /Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz -s 1.69864 /Users/jiyang/Desktop/test/derivatives/smoothflow/smooth/sub-3625A_T1w_smooth.nii.gz\n190129-12:02:04,758 nipype.workflow INFO:\n\t [Node] Finished \"smoothflow.smooth\".\n190129-12:02:04,760 nipype.workflow INFO:\n\t [Node] Setting-up \"smoothflow.skullstrip\" in \"/Users/jiyang/Desktop/test/derivatives/smoothflow/skullstrip\".\n190129-12:02:04,766 nipype.workflow INFO:\n\t [Node] Running \"skullstrip\" (\"nipype.interfaces.fsl.preprocess.BET\"), a CommandLine Interface with command:\nbet /Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz /Users/jiyang/Desktop/test/derivatives/smoothflow/skullstrip/sub-3625A_T1w_brain.nii.gz -m\n190129-12:02:08,372 nipype.workflow INFO:\n\t [Node] Finished \"smoothflow.skullstrip\".\n190129-12:02:08,375 nipype.workflow INFO:\n\t [Node] Setting-up \"smoothflow.mask\" in \"/Users/jiyang/Desktop/test/derivatives/smoothflow/mask\".\n190129-12:02:08,384 nipype.workflow INFO:\n\t [Node] Running \"mask\" (\"nipype.interfaces.fsl.maths.ApplyMask\"), a CommandLine Interface with command:\nfslmaths /Users/jiyang/Desktop/test/derivatives/smoothflow/smooth/sub-3625A_T1w_smooth.nii.gz -mas /Users/jiyang/Desktop/test/derivatives/smoothflow/skullstrip/sub-3625A_T1w_brain_mask.nii.gz /Users/jiyang/Desktop/test/derivatives/smoothflow/mask/sub-3625A_T1w_smooth_masked.nii.gz\n190129-12:02:09,537 nipype.workflow INFO:\n\t [Node] Finished \"smoothflow.mask\".\n"
],
[
"f = plt.figure (figsize = (12, 4))\n\nfor i, img in enumerate (['/Users/jiyang/Desktop/test/anat/sub-3625A_T1w.nii.gz',\n '/Users/jiyang/Desktop/test/derivatives/smoothflow/smooth/sub-3625A_T1w_smooth.nii.gz',\n '/Users/jiyang/Desktop/test/derivatives/smoothflow/skullstrip/sub-3625A_T1w_brain_mask.nii.gz',\n '/Users/jiyang/Desktop/test/derivatives/smoothflow/mask/sub-3625A_T1w_smooth_masked.nii.gz']):\n f.add_subplot (1, 4, i + 1)\n plot_slice (img)",
"_____no_output_____"
],
[
"!tree /Users/jiyang/Desktop/test/derivatives/smoothflow -I '*js|*json|*html|*pklz|_report'",
"/Users/jiyang/Desktop/test/derivatives/smoothflow\r\n├── graph.dot\r\n├── graph.png\r\n├── graph_detailed.dot\r\n├── graph_detailed.png\r\n├── mask\r\n│ ├── command.txt\r\n│ └── sub-3625A_T1w_smooth_masked.nii.gz\r\n├── skullstrip\r\n│ ├── command.txt\r\n│ └── sub-3625A_T1w_brain_mask.nii.gz\r\n├── smooth\r\n│ ├── command.txt\r\n│ └── sub-3625A_T1w_smooth.nii.gz\r\n├── workflow_graph.dot\r\n└── workflow_graph.png\r\n\r\n3 directories, 12 files\r\n"
],
[
"# running workflow will return a graph object\n#\n# workflow does not have inputs/outputs, you can access them through Node\n#",
"_____no_output_____"
],
[
"# A workflow inside a workflow\n# ------------------------------------------------------------------------\n#\n\n# calling create_susan_smooth will return a workflow object\nfrom nipype.workflows.fmri.fsl import create_susan_smooth\nsusan = create_susan_smooth (separate_masks = False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d095e119736df0cb5940c74d34f5fc162a9e9adb | 141,722 | ipynb | Jupyter Notebook | Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb | FlexiGroBots-H2020/deafrica-sandbox-notebooks | e412745f130e42232bc439e646cf2c58d4e61136 | [
"Apache-2.0"
] | null | null | null | Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb | FlexiGroBots-H2020/deafrica-sandbox-notebooks | e412745f130e42232bc439e646cf2c58d4e61136 | [
"Apache-2.0"
] | null | null | null | Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb | FlexiGroBots-H2020/deafrica-sandbox-notebooks | e412745f130e42232bc439e646cf2c58d4e61136 | [
"Apache-2.0"
] | null | null | null | 247.333333 | 50,552 | 0.923992 | [
[
[
"# Mapping water extent and rainfall using WOfS and CHIRPS\n\n* **Products used:** \n[wofs_ls](https://explorer.digitalearth.africa/products/wofs_ls),\n[rainfall_chirps_monthly](https://explorer.digitalearth.africa/products/rainfall_chirps_monthly)",
"_____no_output_____"
]
],
[
[
"**Keywords**: :index:`data used; WOfS`, :index:`data used; CHIRPS`, :index:`water; extent`, :index:`analysis; time series`",
"_____no_output_____"
]
],
[
[
"## Background\n\nThe United Nations have prescribed 17 \"Sustainable Development Goals\" (SDGs). This notebook attempts to monitor SDG Indicator 6.6.1 - change in the extent of water-related ecosystems. Indicator 6.6.1 has 4 sub-indicators:\n\n i. The spatial extent of water-related ecosystems\n ii. The quantity of water contained within these ecosystems\n iii. The quality of water within these ecosystems\n iv. The health or state of these ecosystems\n\nThis notebook primarily focuses on the first sub-indicator - spatial extents.",
"_____no_output_____"
],
[
"## Description\n\nThe notebook loads WOfS feature layers to map the spatial extent of water bodies. It also loads and plots monthly total rainfall from CHIRPS. The last section will compare the water extent between two periods to allow visulazing where change is occuring.\n\n***",
"_____no_output_____"
],
[
"## Load packages\nImport Python packages that are used for the analysis.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport datacube\nimport matplotlib.pyplot as plt\n\nfrom deafrica_tools.dask import create_local_dask_cluster\nfrom deafrica_tools.datahandling import wofs_fuser\n\nfrom long_term_water_extent import (\n load_vector_file,\n get_resampled_labels,\n resample_water_observations,\n resample_rainfall_observations,\n calculate_change_in_extent,\n compare_extent_and_rainfall,\n)",
"_____no_output_____"
]
],
[
[
"## Set up a Dask cluster\n\nDask can be used to better manage memory use and conduct the analysis in parallel. ",
"_____no_output_____"
]
],
[
[
"create_local_dask_cluster()",
"_____no_output_____"
]
],
[
[
"## Connect to Data Cube",
"_____no_output_____"
]
],
[
[
"dc = datacube.Datacube(app=\"long_term_water_extent\")",
"_____no_output_____"
]
],
[
[
"## Analysis parameters\n\nThe following cell sets the parameters, which define the area of interest and the length of time to conduct the analysis over.\n\n* Upload a vector file for your water extent and your catchment to the `data` folder.\n* Set the time range you want to use.\n* Set the resampling strategy. Possible options include:\n * `\"1Y\"` - Annual resampling, use this option for longer term monitoring\n * `\"QS-DEC\"` - Quarterly resampling from December\n * `\"3M\"` - Three-monthly resampling\n * `\"1M\"` - Monthly resampling\n\nFor more details on resampling timeframes, see the [xarray](https://xarray.pydata.org/en/v0.8.2/generated/xarray.Dataset.resample.html#r29) and [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) documentation.",
"_____no_output_____"
]
],
[
[
"water_extent_vector_file = \"data/lake_baringo_extent.geojson\"\n\nwater_catchment_vector_file = \"data/lake_baringo_catchment.geojson\"\n\ntime_range = (\"2018-07\", \"2021\")\n\nresample_strategy = \"Q-DEC\"\n\ndask_chunks = dict(x=1000, y=1000)",
"_____no_output_____"
]
],
[
[
"## Get waterbody and catchment geometries\n\nThe next cell will extract the waterbody and catchment geometries from the supplied vector files, which will be used to load Water Observations from Space and the CHIRPS rainfall products.",
"_____no_output_____"
]
],
[
[
"extent, extent_geometry = load_vector_file(water_extent_vector_file)\ncatchment, catchment_geometry = load_vector_file(water_catchment_vector_file)",
"_____no_output_____"
]
],
[
[
"## Load Water Observation from Space for Waterbody\n\nThe first step is to load the Water Observations from Space product using the extent geometry.",
"_____no_output_____"
]
],
[
[
"extent_query = {\n \"time\": time_range,\n \"resolution\": (-30, 30),\n \"output_crs\": \"EPSG:6933\",\n \"geopolygon\": extent_geometry,\n \"group_by\": \"solar_day\",\n \"dask_chunks\":dask_chunks\n}\n\nwofs_ds = dc.load(product=\"wofs_ls\", fuse_func=wofs_fuser, **extent_query)",
"_____no_output_____"
]
],
[
[
"### Identify water in each resampling period\n\nThe second step is to resample the observations to get a consistent measure of the waterbody, and then calculate the classified as water for each period.",
"_____no_output_____"
]
],
[
[
"resampled_water_ds, resampled_water_area_ds = resample_water_observations(\n wofs_ds, resample_strategy\n)\ndate_range_labels = get_resampled_labels(wofs_ds, resample_strategy)",
"_____no_output_____"
]
],
[
[
"### Plot the change in water area over time",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(15, 5))\n\nax.plot(\n date_range_labels,\n resampled_water_area_ds.values,\n color=\"red\",\n marker=\"^\",\n markersize=4,\n linewidth=1,\n)\nplt.xticks(date_range_labels, rotation=65)\nplt.title(f\"Observed Area of Water from {time_range[0]} to {time_range[1]}\")\nplt.ylabel(\"Waterbody area (km$^2$)\")\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Load CHIRPS monthly rainfall\n\n",
"_____no_output_____"
]
],
[
[
"catchment_query = {\n \"time\": time_range,\n \"resolution\": (-5000, 5000),\n \"output_crs\": \"EPSG:6933\",\n \"geopolygon\": catchment_geometry,\n \"group_by\": \"solar_day\",\n \"dask_chunks\":dask_chunks\n}\n\nrainfall_ds = dc.load(product=\"rainfall_chirps_monthly\", **catchment_query)",
"_____no_output_____"
]
],
[
[
"### Resample to estimate rainfall for each time period\n\nThis is done by taking calculating the average rainfall over the extent of the catchment, then summing these averages over the resampling period to estimate the total rainfall for the catchment.",
"_____no_output_____"
]
],
[
[
"catchment_rainfall_resampled_ds = resample_rainfall_observations(\n rainfall_ds, resample_strategy, catchment\n)",
"_____no_output_____"
]
],
[
[
"## Compare waterbody area to catchment rainfall\n\nThis step plots the summed average rainfall for the catchment area over each period as a histogram, overlaid with the waterbody area calculated previously.",
"_____no_output_____"
]
],
[
[
"figure = compare_extent_and_rainfall(\n resampled_water_area_ds, catchment_rainfall_resampled_ds, \"mm\", date_range_labels\n)",
"_____no_output_____"
]
],
[
[
"### Save the figure",
"_____no_output_____"
]
],
[
[
"figure.savefig(\"waterarea_and_rainfall.png\", bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"## Compare water extent for two different periods\n\nFor the next step, enter a baseline date, and an analysis date to construct a plot showing where water appeared, as well as disappeared, by comparing the two dates.",
"_____no_output_____"
]
],
[
[
"baseline_time = \"2018-07-01\"\nanalysis_time = \"2021-10-01\"",
"_____no_output_____"
],
[
"figure = calculate_change_in_extent(baseline_time, analysis_time, resampled_water_ds)",
"_____no_output_____"
]
],
[
[
"### Save figure",
"_____no_output_____"
]
],
[
[
"figure.savefig(\"waterarea_change.png\", bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"---\n\n## Additional information\n\n**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). \nDigital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.\n\n**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).\nIf you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).\n\n**Compatible datacube version:**",
"_____no_output_____"
]
],
[
[
"print(datacube.__version__)",
"1.8.6\n"
]
],
[
[
"**Last Tested:**",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\ndatetime.today().strftime('%Y-%m-%d')",
"_____no_output_____"
]
]
] | [
"markdown",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d095e1897b6fe283c2ac4c3b5380cd054b65d3f3 | 609,304 | ipynb | Jupyter Notebook | code/student_project.ipynb | aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing | 9f4dda4372bd2c3e9b70ba7f2c0133eb3bc82860 | [
"MIT"
] | null | null | null | code/student_project.ipynb | aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing | 9f4dda4372bd2c3e9b70ba7f2c0133eb3bc82860 | [
"MIT"
] | null | null | null | code/student_project.ipynb | aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing | 9f4dda4372bd2c3e9b70ba7f2c0133eb3bc82860 | [
"MIT"
] | null | null | null | 125.396995 | 95,703 | 0.815304 | [
[
[
"# Overview",
"_____no_output_____"
],
[
"1. Project Instructions & Prerequisites\n2. Learning Objectives\n3. Data Preparation\n4. Create Categorical Features with TF Feature Columns\n5. Create Continuous/Numerical Features with TF Feature Columns\n6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers\n7. Evaluating Potential Model Biases with Aequitas Toolkit\n",
"_____no_output_____"
],
[
"# 1. Project Instructions & Prerequisites",
"_____no_output_____"
],
[
"## Project Instructions",
"_____no_output_____"
],
[
"**Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. \n\nIn order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study.\n",
"_____no_output_____"
],
[
"**Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.\n\nThis project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. \n\nPlease see the project rubric online for more details on the areas your project will be evaluated.",
"_____no_output_____"
],
[
"### Dataset",
"_____no_output_____"
],
[
"Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits).",
"_____no_output_____"
],
[
"**Data Schema**\nThe dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/\n. There are two CSVs that provide more details on the fields and some of the mapped values.",
"_____no_output_____"
],
[
"## Project Submission ",
"_____no_output_____"
],
[
"When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"student_project_submission.ipynb\" and save another copy as an HTML file by clicking \"File\" -> \"Download as..\"->\"html\". Include the \"utils.py\" and \"student_utils.py\" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission.",
"_____no_output_____"
],
[
"## Prerequisites ",
"_____no_output_____"
],
[
"- Intermediate level knowledge of Python\n- Basic knowledge of probability and statistics\n- Basic knowledge of machine learning concepts\n- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided)",
"_____no_output_____"
],
[
"## Environment Setup",
"_____no_output_____"
],
[
"For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md.",
"_____no_output_____"
],
[
"# 2. Learning Objectives",
"_____no_output_____"
],
[
"By the end of the project, you will be able to \n - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal)\n - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis.\n - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings \n - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features\n - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions\n - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework \n",
"_____no_output_____"
],
[
"# 3. Data Preparation",
"_____no_output_____"
]
],
[
[
"# from __future__ import absolute_import, division, print_function, unicode_literals\nimport os\nimport numpy as np\nimport seaborn as sns\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nimport tensorflow_probability as tfp\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport aequitas as ae\nfrom sklearn.metrics import roc_auc_score, accuracy_score, f1_score, classification_report, precision_score, recall_score\n# Put all of the helper functions in utils\nfrom utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable\nfrom functools import partial\npd.set_option('display.max_columns', 500)\n# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block\n%load_ext autoreload\n%autoreload",
"_____no_output_____"
],
[
"#OPEN ISSUE ON MAC OSX for TF model training\nimport os\nos.environ['KMP_DUPLICATE_LIB_OK']='True'",
"_____no_output_____"
]
],
[
[
"## Dataset Loading and Schema Review",
"_____no_output_____"
],
[
"Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/",
"_____no_output_____"
]
],
[
[
"dataset_path = \"./data/final_project_dataset.csv\"\ndf = pd.read_csv(dataset_path)",
"_____no_output_____"
],
[
"# Line Test\ntry:\n assert len(df) > df['encounter_id'].nunique() \n print(\"Dataset could be at the line level\")\nexcept:\n print(\"Dataset is not at the line level\")",
"Dataset could be at the line level\n"
]
],
[
[
"## Determine Level of Dataset (Line or Encounter)",
"_____no_output_____"
],
[
"**Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. ",
"_____no_output_____"
],
[
"**Student Response** : The dataset is at line level and needs to be converted to encounter level. The dataset should be aggregated on encounter_id, patient_nbr and principal_diagnosis_code.",
"_____no_output_____"
],
[
"## Analyze Dataset",
"_____no_output_____"
],
[
"**Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: \n - a. Field(s) with high amount of missing/zero values\n - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape?\n - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature)\n - d. Please describe the demographic distributions in the dataset for the age and gender fields.\n \n",
"_____no_output_____"
],
[
"**OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. \n- The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. \n- Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. ",
"_____no_output_____"
],
[
"**Student Response**: \n\n1. Fields with high amount of missing/null values are:\n *weight, payer_code, medical_speciality, number_outpatients, number_inpatients, number_emergency, num_procedures, ndc_codes.*\n1. Numerical values having Gaussian Distribution are: *num_lab_procedures, number_medication.*\n1. Fields having high cardinality are: *encounter_id, patient_nbr, other_diagnosis_codes.* It is because there there are 71,518 patients and more than 1 Lac encounters in the dataset and each encounter have various diagnoisis codes. This can also be reviewed by looking the Tensorflow Data Validation statistics.\n1. Demographic distributions is shown below.",
"_____no_output_____"
]
],
[
[
"def check_null_df(df):\n return pd.DataFrame({\n 'percent_null' : df.isna().sum() / len(df) * 100,\n 'percent_zero' : df.isin([0]).sum() / len(df) * 100,\n 'percent_missing' : df.isin(['?', '?|?', 'Unknown/Invalid']).sum() / len(df) * 100,\n })\ncheck_null_df(df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.countplot(x = 'age', data = df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.countplot(x = 'gender', data = df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.countplot(x = 'age', hue = 'gender', data = df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.distplot(df['num_lab_procedures'])",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.distplot(df['num_medications'])",
"_____no_output_____"
],
[
"######NOTE: The visualization will only display in Chrome browser. ########\n# First install below libraries and then restart the kernel to visualize.\n\n# !pip install tensorflow-data-validation\n# !pip install apache-beam[interactive]\nimport tensorflow_data_validation as tfdv\nfull_data_stats = tfdv.generate_statistics_from_dataframe(dataframe=df) ",
"/opt/conda/lib/python3.7/site-packages/tensorflow_data_validation/arrow/arrow_util.py:236: FutureWarning: Calling .data on ChunkedArray is provided for compatibility after Column was removed, simply drop this attribute\n types.FeaturePath([column_name]), column.data.chunk(0), weights):\n"
],
[
"tfdv.visualize_statistics(full_data_stats)",
"_____no_output_____"
],
[
"schema = tfdv.infer_schema(statistics=full_data_stats)\ntfdv.display_schema(schema=schema)",
"_____no_output_____"
],
[
"categorical_columns_list = ['A1Cresult', 'age', 'change', 'gender', 'max_glu_serum', 'medical_specialty', 'payer_code', 'race',\n 'readmitted', 'weight']\ndef count_unique_values(df):\n cat_df = df\n return pd.DataFrame({\n 'columns' : cat_df.columns,\n 'cardinality' : cat_df.nunique()\n }).reset_index(drop = True).sort_values(by = 'cardinality', ascending = False)\ncount_unique_values(df)",
"_____no_output_____"
]
],
[
[
"## Reduce Dimensionality of the NDC Code Feature",
"_____no_output_____"
],
[
"**Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called \"generic_drug_name\" in the output dataframe. ",
"_____no_output_____"
]
],
[
[
"#NDC code lookup file\nndc_code_path = \"./medication_lookup_tables/final_ndc_lookup_table\"\nndc_code_df = pd.read_csv(ndc_code_path)",
"_____no_output_____"
],
[
"from student_utils import reduce_dimension_ndc",
"_____no_output_____"
],
[
"def reduce_dimension_ndc(df, ndc_code_df):\n '''\n df: pandas dataframe, input dataset\n ndc_df: pandas dataframe, drug code dataset used for mapping in generic names\n return:\n df: pandas dataframe, output dataframe with joined generic drug name\n '''\n mapping = dict(ndc_code_df[['NDC_Code', 'Non-proprietary Name']].values)\n mapping['nan'] = np.nan\n df['generic_drug_name'] = df['ndc_code'].astype(str).apply(lambda x : mapping[x])\n \n return df\nreduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)",
"_____no_output_____"
],
[
"reduce_dim_df.head()",
"_____no_output_____"
],
[
"# Number of unique values should be less for the new output field\nassert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()\nprint('Number of ndc_code: ', df['ndc_code'].nunique())\nprint('Number of drug name: ', reduce_dim_df['generic_drug_name'].nunique())",
"Number of ndc_code: 251\nNumber of drug name: 22\n"
]
],
[
[
"## Select First Encounter for each Patient ",
"_____no_output_____"
],
[
"**Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.",
"_____no_output_____"
]
],
[
[
"def select_first_encounter(df):\n '''\n df: pandas dataframe, dataframe with all encounters\n return:\n \n - first_encounter_df: pandas dataframe, dataframe with only the first encounter for a given patient\n '''\n df.sort_values(by = 'encounter_id')\n first_encounters = df.groupby('patient_nbr')['encounter_id'].first().values\n first_encounter_df = df[df['encounter_id'].isin(first_encounters)]\n# first_encounter_df = first_encounter_df.groupby('encounter_id').first().reset_index()\n\n return first_encounter_df",
"_____no_output_____"
],
[
"first_encounter_df = select_first_encounter(reduce_dim_df)",
"_____no_output_____"
],
[
"first_encounter_df.head()",
"_____no_output_____"
],
[
"# unique patients in transformed dataset\nunique_patients = first_encounter_df['patient_nbr'].nunique()\nprint(\"Number of unique patients:{}\".format(unique_patients))\n\n# unique encounters in transformed dataset\nunique_encounters = first_encounter_df['encounter_id'].nunique()\nprint(\"Number of unique encounters:{}\".format(unique_encounters))\n\noriginal_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()\n# number of unique patients should be equal to the number of unique encounters and patients in the final dataset\nassert original_unique_patient_number == unique_patients\nassert original_unique_patient_number == unique_encounters\nprint(\"Tests passed!!\")",
"Number of unique patients:71518\nNumber of unique encounters:71518\nTests passed!!\n"
]
],
[
[
"## Aggregate Dataset to Right Level for Modeling ",
"_____no_output_____"
],
[
"In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset\" function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. \n\nTo make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those are input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.",
"_____no_output_____"
]
],
[
[
"exclusion_list = [ 'generic_drug_name', 'ndc_code']\ngrouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]\nagg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')",
"_____no_output_____"
],
[
"assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()",
"_____no_output_____"
],
[
"ndc_col_list",
"_____no_output_____"
]
],
[
[
"## Prepare Fields and Cast Dataset ",
"_____no_output_____"
],
[
"### Feature Selection",
"_____no_output_____"
],
[
"**Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. \n\nFor the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice.",
"_____no_output_____"
],
[
"**Student response**: We should exclude both payer_code and weight in our model because of large missing values.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8, 5))\nsns.countplot(x = 'payer_code', data = agg_drug_df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 5))\nsns.countplot(x = 'number_emergency', data = agg_drug_df)",
"_____no_output_____"
],
[
"count_unique_values(agg_drug_df[grouping_field_list])",
"_____no_output_____"
],
[
"'''\nPlease update the list to include the features you think are appropriate for the model \nand the field that we will be using to train the model. There are three required demographic features for the model \nand I have inserted a list with them already in the categorical list. \nThese will be required for later steps when analyzing data splits and model biases.\n'''\nrequired_demo_col_list = ['race', 'gender', 'age']\nstudent_categorical_col_list = [ 'change', 'primary_diagnosis_code'\n ] + required_demo_col_list + ndc_col_list\nstudent_numerical_col_list = [ 'number_inpatient', 'number_emergency', 'num_lab_procedures', 'number_diagnoses','num_medications','num_procedures']\nPREDICTOR_FIELD = 'time_in_hospital'",
"_____no_output_____"
],
[
"def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):\n selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list \n return agg_drug_df[selected_col_list]\n",
"_____no_output_____"
],
[
"selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,\n PREDICTOR_FIELD)",
"_____no_output_____"
]
],
[
[
"### Preprocess Dataset - Casting and Imputing ",
"_____no_output_____"
],
[
"We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. \n\nOPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?",
"_____no_output_____"
]
],
[
[
"processed_df = preprocess_df(selected_features_df, student_categorical_col_list, \n student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)",
"/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df[predictor] = df[predictor].astype(float)\n/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df[c] = cast_df(df, c, d_type=str)\n/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)\n"
]
],
[
[
"## Split Dataset into Train, Validation, and Test Partitions",
"_____no_output_____"
],
[
"**Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidently leak across partitions.\n\nPlease complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.\n- Approximately 60%/20%/20% train/validation/test split\n- Randomly sample different patients into each data partition\n- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.\n- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset\n- Total number of rows in original dataset = sum of rows across all three dataset partitions",
"_____no_output_____"
]
],
[
[
"def patient_dataset_splitter(df, patient_key='patient_nbr'):\n '''\n df: pandas dataframe, input dataset that will be split\n patient_key: string, column that is the patient id\n\n return:\n - train: pandas dataframe,\n - validation: pandas dataframe,\n - test: pandas dataframe,\n '''\n df[student_numerical_col_list] = df[student_numerical_col_list].astype(float)\n train_val_df = df.sample(frac = 0.8, random_state=3)\n train_df = train_val_df.sample(frac = 0.8, random_state=3)\n val_df = train_val_df.drop(train_df.index)\n test_df = df.drop(train_val_df.index)\n return train_df.reset_index(drop = True), val_df.reset_index(drop = True), test_df.reset_index(drop = True)",
"_____no_output_____"
],
[
"#from student_utils import patient_dataset_splitter\nd_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')",
"/root/.local/lib/python3.7/site-packages/pandas/core/frame.py:3509: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self[k1] = value[k2]\n"
],
[
"assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)\nprint(\"Test passed for number of total rows equal!\")",
"Test passed for number of total rows equal!\n"
],
[
"assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()\nprint(\"Test passed for number of unique patients being equal!\")",
"Test passed for number of unique patients being equal!\n"
]
],
[
[
"## Demographic Representation Analysis of Split",
"_____no_output_____"
],
[
"After the split, we should check to see the distribution of key features/groups and make sure that there is representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions.",
"_____no_output_____"
],
[
"### Label Distribution Across Partitions",
"_____no_output_____"
],
[
"Below you can see the distributution of the label across your splits. Are the histogram distribution shapes similar across partitions?",
"_____no_output_____"
]
],
[
[
"show_group_stats_viz(processed_df, PREDICTOR_FIELD)",
"time_in_hospital\n1.0 10717\n2.0 12397\n3.0 12701\n4.0 9567 \n5.0 6839 \n6.0 5171 \n7.0 3999 \n8.0 2919 \n9.0 1990 \n10.0 1558 \n11.0 1241 \n12.0 955 \n13.0 795 \n14.0 669 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
],
[
"show_group_stats_viz(d_train, PREDICTOR_FIELD)",
"time_in_hospital\n1.0 6904\n2.0 7978\n3.0 8116\n4.0 6141\n5.0 4341\n6.0 3294\n7.0 2542\n8.0 1846\n9.0 1259\n10.0 989 \n11.0 788 \n12.0 659 \n13.0 500 \n14.0 414 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
],
[
"show_group_stats_viz(d_test, PREDICTOR_FIELD)",
"time_in_hospital\n1.0 2159\n2.0 2486\n3.0 2576\n4.0 1842\n5.0 1364\n6.0 1041\n7.0 814 \n8.0 584 \n9.0 404 \n10.0 307 \n11.0 242 \n12.0 182 \n13.0 165 \n14.0 138 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
]
],
[
[
"## Demographic Group Analysis",
"_____no_output_____"
],
[
"We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.",
"_____no_output_____"
]
],
[
[
"# Full dataset before splitting\npatient_demo_features = ['race', 'gender', 'age', 'patient_nbr']\npatient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)\nshow_group_stats_viz(patient_group_analysis_df, 'gender')",
"gender\nFemale 38025\nMale 33490\nUnknown/Invalid 3 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
],
[
"# Training partition\nshow_group_stats_viz(d_train, 'gender')",
"gender\nFemale 24197\nMale 21572\nUnknown/Invalid 2 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
],
[
"# Test partition\nshow_group_stats_viz(d_test, 'gender')",
"gender\nFemale 7631\nMale 6672\nUnknown/Invalid 1 \ndtype: int64\nAxesSubplot(0.125,0.125;0.775x0.755)\n"
]
],
[
[
"\n\n\n## Convert Dataset Splits to TF Dataset",
"_____no_output_____"
],
[
"We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. \nPlease note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.",
"_____no_output_____"
]
],
[
[
"# Convert dataset from Pandas dataframes to TF dataset \nbatch_size = 128\ndiabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)\ndiabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)\ndiabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)",
"_____no_output_____"
],
[
"# We use this sample of the dataset to show transformations later\ndiabetes_batch = next(iter(diabetes_train_ds))[0]\ndef demo(feature_column, example_batch):\n feature_layer = tf.keras.layers.DenseFeatures(feature_column)\n print(feature_layer(example_batch))",
"_____no_output_____"
]
],
[
[
"# 4. Create Categorical Features with TF Feature Columns",
"_____no_output_____"
],
[
"## Build Vocabulary for Categorical Features",
"_____no_output_____"
],
[
"Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.",
"_____no_output_____"
]
],
[
[
"vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)",
"_____no_output_____"
],
[
"assert len(vocab_file_list) == len(student_categorical_col_list)",
"_____no_output_____"
]
],
[
[
"## Create Categorical Features with Tensorflow Feature Column API",
"_____no_output_____"
],
[
"**Question 7**: Using the vocab file list from above that was derived fromt the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.",
"_____no_output_____"
]
],
[
[
"def create_tf_categorical_feature_cols(categorical_col_list,\n vocab_dir='./diabetes_vocab/'):\n '''\n categorical_col_list: list, categorical field list that will be transformed with TF feature column\n vocab_dir: string, the path where the vocabulary text files are located\n return:\n output_tf_list: list of TF feature columns\n '''\n output_tf_list = []\n for c in categorical_col_list:\n vocab_file_path = os.path.join(vocab_dir, c + \"_vocab.txt\")\n '''\n Which TF function allows you to read from a text file and create a categorical feature\n You can use a pattern like this below...\n tf_categorical_feature_column = tf.feature_column.......\n\n '''\n diagnosis_vocab = tf.feature_column.categorical_column_with_vocabulary_file(c, vocab_file_path, num_oov_buckets = 1)\n tf_categorical_feature_column = tf.feature_column.indicator_column(diagnosis_vocab)\n output_tf_list.append(tf_categorical_feature_column)\n return output_tf_list\ntf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)",
"INFO:tensorflow:vocabulary_size = 3 in change is inferred from the number of elements in the vocabulary_file ./diabetes_vocab/change_vocab.txt.\n"
],
[
"test_cat_var1 = tf_cat_col_list[0]\nprint(\"Example categorical field:\\n{}\".format(test_cat_var1))\ndemo(test_cat_var1, diabetes_batch)",
"Example categorical field:\nIndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='change', vocabulary_file='./diabetes_vocab/change_vocab.txt', vocabulary_size=3, num_oov_buckets=1, dtype=tf.string, default_value=-1))\nWARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.\n\nIf you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.\n\nTo change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.\n\n"
]
],
[
[
"# 5. Create Numerical Features with TF Feature Columns",
"_____no_output_____"
],
[
"**Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.",
"_____no_output_____"
]
],
[
[
"from student_utils import create_tf_numeric_feature\ndef create_tf_numeric_feature(col, MEAN, STD, default_value=0):\n '''\n col: string, input numerical column name\n MEAN: the mean for the column in the training data\n STD: the standard deviation for the column in the training data\n default_value: the value that will be used for imputing the field\n\n return:\n tf_numeric_feature: tf feature column representation of the input field\n '''\n normalizer_fn = lambda col, m, s : (col - m) / s\n normalizer = partial(normalizer_fn, m = MEAN, s = STD)\n tf_numeric_feature = tf.feature_column.numeric_column(col, normalizer_fn = normalizer, dtype = tf.float64,\n default_value = default_value)\n return tf_numeric_feature",
"_____no_output_____"
]
],
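[
[
"# Hedged sanity check of the z-score normalizer defined above; the MEAN/STD values and the\n# 'number_inpatient' column name are illustrative assumptions only.\nimport tensorflow as tf\n\ntoy_num = create_tf_numeric_feature('number_inpatient', MEAN=0.176, STD=0.601)\ntoy_batch = {'number_inpatient': tf.constant([[0.0], [1.0]], dtype=tf.float64)}\nprint(tf.keras.layers.DenseFeatures([toy_num])(toy_batch).numpy())  # each value should equal (x - 0.176) / 0.601",
"_____no_output_____"
]
],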
[
[
"For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.\n",
"_____no_output_____"
]
],
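[
[
"# Optional hedged sketch, referenced in the note above: a min-max normalizer written in the same style as the\n# z-score helper. It is not used elsewhere in this notebook, and the function name is an assumption.\nimport tensorflow as tf\n\ndef create_tf_minmax_feature(col, MIN, MAX, default_value=0):\n    normalizer = lambda x: (x - MIN) / (MAX - MIN)  # rescales in-range values into [0, 1]\n    return tf.feature_column.numeric_column(col, normalizer_fn=normalizer, dtype=tf.float64,\n                                             default_value=default_value)",
"_____no_output_____"
]
],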
[
[
"def calculate_stats_from_train_data(df, col):\n mean = df[col].describe()['mean']\n std = df[col].describe()['std']\n return mean, std\n\ndef create_tf_numerical_feature_cols(numerical_col_list, train_df):\n tf_numeric_col_list = []\n for c in numerical_col_list:\n mean, std = calculate_stats_from_train_data(train_df, c)\n tf_numeric_feature = create_tf_numeric_feature(c, mean, std)\n tf_numeric_col_list.append(tf_numeric_feature)\n return tf_numeric_col_list",
"_____no_output_____"
],
[
"tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)",
"_____no_output_____"
],
[
"test_cont_var1 = tf_cont_col_list[0]\nprint(\"Example continuous field:\\n{}\\n\".format(test_cont_var1))\ndemo(test_cont_var1, diabetes_batch)",
"Example continuous field:\nNumericColumn(key='number_inpatient', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function create_tf_numeric_feature.<locals>.<lambda> at 0x7f7ffe9b6290>, m=0.17600664176006642, s=0.6009985590232482))\n\nWARNING:tensorflow:Layer dense_features_28 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.\n\nIf you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.\n\nTo change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.\n\n"
]
],
[
[
"# 6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers",
"_____no_output_____"
],
[
"## Use DenseFeatures to combine features for model",
"_____no_output_____"
],
[
"Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.",
"_____no_output_____"
]
],
[
[
"claim_feature_columns = tf_cat_col_list + tf_cont_col_list\nclaim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)",
"_____no_output_____"
]
],
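[
[
"# Hedged sanity check: apply the combined DenseFeatures layer to the same example batch used in the demo()\n# calls above, to confirm the width of the concatenated (categorical + numerical) feature vector.\nprint('Combined feature vector shape:', claim_feature_layer(diabetes_batch).shape)",
"_____no_output_____"
]
],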
[
[
"## Build Sequential API Model from DenseFeatures and TF Probability Layers",
"_____no_output_____"
],
[
"Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish.",
"_____no_output_____"
],
[
"**OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.",
"_____no_output_____"
]
],
[
[
"def build_sequential_model(feature_layer):\n model = tf.keras.Sequential([\n feature_layer,\n tf.keras.layers.Dense(512, activation='relu'),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(64, activation='relu'),\n tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),\n tfp.layers.DistributionLambda(\n lambda t:tfp.distributions.Normal(loc=t[..., :1],\n scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])\n )\n ),\n ])\n return model\n\ndef build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):\n model = build_sequential_model(feature_layer)\n model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])\n early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3) \n model_checkpoint = tf.keras.callbacks.ModelCheckpoint('saved_models/bestmodel.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='auto')\n history = model.fit(train_ds, validation_data=val_ds,\n callbacks=[early_stop],\n epochs=epochs)\n return model, history ",
"_____no_output_____"
],
[
"diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=20)",
"Train for 358 steps, validate for 90 steps\nEpoch 1/20\n358/358 [==============================] - 12s 33ms/step - loss: 25.3230 - mse: 25.1798 - val_loss: 22.8669 - val_mse: 22.5741\nEpoch 2/20\n358/358 [==============================] - 13s 36ms/step - loss: 15.7285 - mse: 15.1443 - val_loss: 13.9822 - val_mse: 13.2914\nEpoch 3/20\n358/358 [==============================] - 13s 37ms/step - loss: 12.7352 - mse: 11.9670 - val_loss: 11.7411 - val_mse: 11.0843\nEpoch 4/20\n358/358 [==============================] - 13s 35ms/step - loss: 11.2485 - mse: 10.4313 - val_loss: 10.4873 - val_mse: 9.5640\nEpoch 5/20\n358/358 [==============================] - 13s 35ms/step - loss: 10.3242 - mse: 9.5285 - val_loss: 9.8177 - val_mse: 9.1636\nEpoch 6/20\n358/358 [==============================] - 8s 24ms/step - loss: 10.2267 - mse: 9.4504 - val_loss: 10.4423 - val_mse: 9.6653\nEpoch 7/20\n358/358 [==============================] - 13s 35ms/step - loss: 9.2538 - mse: 8.3019 - val_loss: 9.8968 - val_mse: 9.1000\nEpoch 8/20\n358/358 [==============================] - 13s 37ms/step - loss: 8.9934 - mse: 8.1093 - val_loss: 8.8550 - val_mse: 8.1466\nEpoch 9/20\n358/358 [==============================] - 13s 36ms/step - loss: 8.7026 - mse: 7.9876 - val_loss: 9.2839 - val_mse: 8.8448\nEpoch 10/20\n358/358 [==============================] - 13s 37ms/step - loss: 8.5294 - mse: 7.6984 - val_loss: 8.0266 - val_mse: 7.2838\nEpoch 11/20\n358/358 [==============================] - 8s 23ms/step - loss: 8.5896 - mse: 7.7653 - val_loss: 7.9876 - val_mse: 7.3438\nEpoch 12/20\n358/358 [==============================] - 8s 23ms/step - loss: 8.1592 - mse: 7.3578 - val_loss: 8.5204 - val_mse: 7.5046\nEpoch 13/20\n358/358 [==============================] - 8s 23ms/step - loss: 8.1387 - mse: 7.3121 - val_loss: 8.0225 - val_mse: 7.3049\nEpoch 14/20\n358/358 [==============================] - 8s 23ms/step - loss: 7.8314 - mse: 7.0780 - val_loss: 7.9314 - val_mse: 7.0396\nEpoch 15/20\n358/358 [==============================] - 10s 27ms/step - loss: 7.5737 - mse: 6.7585 - val_loss: 7.8283 - val_mse: 7.0017\nEpoch 16/20\n358/358 [==============================] - 8s 21ms/step - loss: 7.5256 - mse: 6.8017 - val_loss: 7.5964 - val_mse: 6.9889\nEpoch 17/20\n358/358 [==============================] - 8s 21ms/step - loss: 7.5827 - mse: 6.7757 - val_loss: 7.8556 - val_mse: 7.1059\nEpoch 18/20\n358/358 [==============================] - 8s 21ms/step - loss: 7.4014 - mse: 6.5900 - val_loss: 7.4390 - val_mse: 6.5678\nEpoch 19/20\n358/358 [==============================] - 8s 22ms/step - loss: 7.3756 - mse: 6.5670 - val_loss: 7.4403 - val_mse: 6.6895\nEpoch 20/20\n358/358 [==============================] - 8s 22ms/step - loss: 7.1834 - mse: 6.3069 - val_loss: 7.8813 - val_mse: 7.2031\n"
]
],
[
[
"## Show Model Uncertainty Range with TF Probability",
"_____no_output_____"
],
[
"**Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.",
"_____no_output_____"
]
],
[
[
"feature_list = student_categorical_col_list + student_numerical_col_list\ndiabetes_x_tst = dict(d_test[feature_list])\ndiabetes_yhat = diabetes_model(diabetes_x_tst)\npreds = diabetes_model.predict(diabetes_test_ds)",
"_____no_output_____"
],
[
"def get_mean_std_from_preds(diabetes_yhat):\n '''\n diabetes_yhat: TF Probability prediction object\n '''\n m = diabetes_yhat.mean()\n s = diabetes_yhat.stddev()\n return m, s\nm, s = get_mean_std_from_preds(diabetes_yhat)",
"_____no_output_____"
]
],
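[
[
"# Hedged sketch: one common way to express the model's uncertainty is an approximate 95% interval of\n# mean +/- 2 standard deviations. The 2-sigma rule here is an assumption, not something prescribed above.\nimport numpy as np\n\nlower = m.numpy().flatten() - 2 * s.numpy().flatten()\nupper = m.numpy().flatten() + 2 * s.numpy().flatten()\nprint(np.stack([lower[:5], upper[:5]], axis=1))  # first five approximate prediction intervals (days)",
"_____no_output_____"
]
],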
[
[
"## Show Prediction Output ",
"_____no_output_____"
]
],
[
[
"prob_outputs = {\n \"pred\": preds.flatten(),\n \"actual_value\": d_test['time_in_hospital'].values,\n \"pred_mean\": m.numpy().flatten(),\n \"pred_std\": s.numpy().flatten()\n}\nprob_output_df = pd.DataFrame(prob_outputs)",
"_____no_output_____"
],
[
"prob_output_df.head()",
"_____no_output_____"
]
],
[
[
"## Convert Regression Output to Classification Output for Patient Selection",
"_____no_output_____"
],
[
"**Question 10**: Given the output predictions, convert it to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based off if the prediction meets or doesnt meet the criteria.",
"_____no_output_____"
]
],
[
[
"def get_student_binary_prediction(df, col):\n '''\n df: pandas dataframe prediction output dataframe\n col: str, probability mean prediction field\n return:\n student_binary_prediction: pandas dataframe converting input to flattened numpy array and binary labels\n '''\n student_binary_prediction = df[col].apply(lambda x : 1 if x >= 5 else 0)\n return student_binary_prediction\nstudent_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')",
"_____no_output_____"
]
],
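[
[
"# Hedged check: the 5-day cutoff above mirrors the label rule applied to 'time_in_hospital' later on.\n# This shows how the predicted positive rate would shift at nearby cutoffs; the alternative thresholds\n# are purely illustrative assumptions.\nfor cutoff in [4, 5, 6]:\n    rate = (prob_output_df['pred_mean'] >= cutoff).mean()\n    print('cutoff = {} days -> predicted positive rate: {:.2f}'.format(cutoff, rate))",
"_____no_output_____"
]
],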
[
[
"### Add Binary Prediction to Test Dataframe",
"_____no_output_____"
],
[
"Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').",
"_____no_output_____"
]
],
[
[
"def add_pred_to_test(test_df, pred_np, demo_col_list):\n for c in demo_col_list:\n test_df[c] = test_df[c].astype(str)\n test_df['score'] = pred_np\n test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)\n return test_df\n\npred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])",
"_____no_output_____"
],
[
"pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()",
"_____no_output_____"
]
],
[
[
"## Model Evaluation Metrics ",
"_____no_output_____"
],
[
"**Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. ",
"_____no_output_____"
],
[
"For the report please be sure to include the following three parts:\n- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.\n\n- What are some areas of improvement for future iterations?",
"_____no_output_____"
],
[
"### Precision-Recall Tradeoff\n\n* Tradeoff means increasing one parameter leads to decreasing of the other.\n* Precision is the fraction of correct positives among the total predicted positives.\n* Recall is the fraction of correct positives among the total positives in the dataset.\n* precision-recall tradeoff occur due to increasing one of the parameter(precision or recall) while keeping the model same.\n\n### Improvements\n\n* Recall seems to be quite low, so we can further try to improve the score.",
"_____no_output_____"
]
],
[
[
"# AUC, F1, precision and recall\n# Summary\nprint(classification_report(pred_test_df['label_value'], pred_test_df['score']))",
" precision recall f1-score support\n\n 0 0.85 0.71 0.77 9063\n 1 0.61 0.78 0.68 5241\n\n accuracy 0.74 14304\n macro avg 0.73 0.75 0.73 14304\nweighted avg 0.76 0.74 0.74 14304\n\n"
],
[
"f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted')",
"_____no_output_____"
],
[
"accuracy_score(pred_test_df['label_value'], pred_test_df['score'])",
"_____no_output_____"
],
[
"roc_auc_score(pred_test_df['label_value'], pred_test_df['score'])",
"_____no_output_____"
],
[
"precision_score(pred_test_df['label_value'], pred_test_df['score'])",
"_____no_output_____"
],
[
"recall_score(pred_test_df['label_value'], pred_test_df['score'])",
"_____no_output_____"
]
],
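[
[
"# Hedged addition: a confusion matrix complements the scores above by exposing the raw true/false\n# positive and negative counts behind the precision and recall values.\nfrom sklearn.metrics import confusion_matrix\n\nprint(confusion_matrix(pred_test_df['label_value'], pred_test_df['score']))  # rows = actual, cols = predicted",
"_____no_output_____"
]
],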
[
[
"# 7. Evaluating Potential Model Biases with Aequitas Toolkit",
"_____no_output_____"
],
[
"## Prepare Data For Aequitas Bias Toolkit ",
"_____no_output_____"
],
[
"Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.",
"_____no_output_____"
]
],
[
[
"# Aequitas\nfrom aequitas.preprocessing import preprocess_input_df\nfrom aequitas.group import Group\nfrom aequitas.plotting import Plot\nfrom aequitas.bias import Bias\nfrom aequitas.fairness import Fairness\n\nae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]\nae_df, _ = preprocess_input_df(ae_subset_df)\ng = Group()\nxtab, _ = g.get_crosstabs(ae_df)\nabsolute_metrics = g.list_absolute_metrics(xtab)\nclean_xtab = xtab.fillna(-1)\naqp = Plot()\nb = Bias()\n",
"/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n df['score'] = df['score'].astype(float)\n"
]
],
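[
[
"# Hedged sketch: a tabular view of the absolute group metrics computed above, which can be easier to scan\n# than the plots. The 'attribute_name'/'attribute_value' column names follow the usual Aequitas crosstab\n# schema, which is an assumption about the toolkit's output.\nprint(clean_xtab[['attribute_name', 'attribute_value'] + absolute_metrics].round(2))",
"_____no_output_____"
]
],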
[
[
"## Reference Group Selection",
"_____no_output_____"
],
[
"Below we have chosen the reference group for our analysis but feel free to select another one.",
"_____no_output_____"
]
],
[
[
"# test reference group with Caucasian Male\nbdf = b.get_disparity_predefined_groups(clean_xtab, \n original_df=ae_df, \n ref_groups_dict={'race':'Caucasian', 'gender':'Male'\n }, \n alpha=0.05, \n check_significance=False)\n\n\nf = Fairness()\nfdf = f.get_group_value_fairness(bdf)",
"get_disparity_predefined_group()\n"
]
],
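[
[
"# Hedged sketch: inspect the disparity ratios relative to the Caucasian/Male reference group as numbers\n# rather than plots. Selecting columns by the '_disparity' suffix is an assumption about the Aequitas schema.\ndisparity_cols = [c for c in bdf.columns if c.endswith('_disparity')]\nprint(bdf[['attribute_name', 'attribute_value'] + disparity_cols].round(2))",
"_____no_output_____"
]
],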
[
[
"## Race and Gender Bias Analysis for Patient Selection",
"_____no_output_____"
],
[
"**Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.",
"_____no_output_____"
]
],
[
[
"# Plot two metrics\n# Is there significant bias in your model for either race or gender?",
"_____no_output_____"
],
[
"aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_group_metric(clean_xtab, 'fnr', min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_group_metric(clean_xtab, 'tnr', min_group_size=0.05)",
"_____no_output_____"
]
],
[
[
"#### There isn't any significant bias in the model for either race or gender.",
"_____no_output_____"
],
[
"## Fairness Analysis Example - Relative to a Reference Group ",
"_____no_output_____"
],
[
"**Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.",
"_____no_output_____"
]
],
[
[
"# Reference group fairness plot\naqp.plot_fairness_disparity(bdf, group_metric='fnr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_fairness_disparity(fdf, group_metric='fnr', attribute_name='gender', significance_alpha=0.05, min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)",
"_____no_output_____"
]
],
[
[
"#### There isn't any disparity in the model for either race or gender.",
"_____no_output_____"
]
],
[
[
"aqp.plot_fairness_group(fdf, group_metric='fpr', title=True, min_group_size=0.05)",
"_____no_output_____"
],
[
"aqp.plot_fairness_group(fdf, group_metric='fnr', title=True)",
"_____no_output_____"
]
],
[
[
"#### Nearly all races and gender seem to have the same probability of falsely non-identifying them. The model is unbiased towards race or gender.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d095f3d77013afbec2c079b780b8de6b124066bb | 175,305 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Project-1-checkpoint.ipynb | Zainy1453/Project-ND-1 | 0f31daa87d1b14e0e539194aa759a6c2d48b25ce | [
"MIT"
] | 2 | 2021-12-26T13:28:47.000Z | 2021-12-26T13:29:44.000Z | Project-1.ipynb | Zainy1453/Project-ND-1 | 0f31daa87d1b14e0e539194aa759a6c2d48b25ce | [
"MIT"
] | null | null | null | Project-1.ipynb | Zainy1453/Project-ND-1 | 0f31daa87d1b14e0e539194aa759a6c2d48b25ce | [
"MIT"
] | null | null | null | 68.963415 | 58,564 | 0.70885 | [
[
[
"<font size ='3'>*First, let's read in the data and necessary libraries*<font/>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom mypy import print_side_by_side\nfrom mypy import display_side_by_side\n#https://stackoverflow.com/a/44923103/8067752\n%matplotlib inline\n\npd.options.mode.chained_assignment = None",
"_____no_output_____"
],
[
"b_cal = pd.read_csv('boston_calendar.csv')\ns_cal = pd.read_csv('seatle_calendar.csv')\nb_list = pd.read_csv('boston_listings.csv')\ns_list = pd.read_csv('seatle_listings.csv')\nb_rev = pd.read_csv('boston_reviews.csv')\ns_rev = pd.read_csv('seatle_reviews.csv')",
"_____no_output_____"
]
],
[
[
" _______________________________________________________________________________________________________________________",
"_____no_output_____"
],
[
"## Task 1: Business Understanding <font size=\"2\"> *(With some Data Preperation)*</font> \n<font size=\"3\"> *My work flow will be as follows, I will explore the data with some cleaning to get enough insights to formulate questions, then, within every question I will follow the rest of the steps of the CRISP-DM framework.*</font> ",
"_____no_output_____"
],
[
"### Step 1: Basic Exploration with some cleaning\n<font size ='3'>*To be familiarized with the Data and to gather insights to formulate questions*<font/>",
"_____no_output_____"
],
[
 **Boston & Seatle Calendar**">
"> **Boston & Seattle Calendar**",
"_____no_output_____"
]
],
[
[
"display_side_by_side(b_cal.head(), s_cal.head(), titles = ['b_cal', 's_cal'])",
"_____no_output_____"
]
],
[
[
"<font size ='3'>*Check the sizes of cols and rows & check Nulls*<font/>",
"_____no_output_____"
]
],
[
[
"print_side_by_side('Boston Cal:', 'Seatle Cal:', b=0)\nprint_side_by_side('Shape:',b_cal.shape,\"Shape:\", s_cal.shape)\nprint_side_by_side(\"Cols with nulls: \", b_cal.isnull().sum()[b_cal.isnull().sum()>0].index[0],\"Cols with nulls: \", s_cal.isnull().sum()[s_cal.isnull().sum()>0].index[0])\nprint_side_by_side(\"Null prop of price column: \", round(b_cal.price.isnull().sum()/b_cal.shape[0], 2),\"Null prop of price column: \", round(s_cal.price.isnull().sum()/s_cal.shape[0], 2))\nprint_side_by_side(\"Proportion of False(unit unavailable):\", round(b_cal.available[b_cal.available =='f' ].count()/b_cal.shape[0],2),\"Proportion of False(unit unavailable):\", round(s_cal.available[s_cal.available =='f' ].count()/s_cal.shape[0],2))\nprint_side_by_side(\"Nulls when units are available: \", b_cal[b_cal['available']== 't']['price'].isnull().sum(),\"Nulls when units are available: \", s_cal[s_cal['available']== 't']['price'].isnull().sum() )\nprint('\\n')",
"Boston Cal: Seatle Cal:\nShape: (1308890 4) Shape: (1393570 4)\nCols with nulls: price Cols with nulls: price\nNull prop of price column: 0.51 Null prop of price column: 0.33\nProportion of False(unit unavailable): 0.51 Proportion of False(unit unavailable): 0.33\nNulls when units are available: 0 Nulls when units are available: 0\n\n\n"
]
],
[
[
"<font size ='3'>*Let's do some cleaning, first, let's transfer `date` column to datetime to ease manipulation and analysis. I will also create a dataframe with seperate date items from the Date column, to check the time interval along which the data was collected. In addition to that, let's transform `price` and `available` into numerical values*<font/>",
"_____no_output_____"
]
],
[
[
"def create_dateparts(df, date_col): \n \"\"\"\n INPUT\n df -pandas dataframe\n date_col -list of columns to break down into columns of years,months and days.\n \n OUTPUT\n df - a dataframe with columns of choice transformed in to columns of date parts(years,months and days)\n \"\"\"\n df['date'] = pd.to_datetime(df.date)\n b_date_df = pd.DataFrame()\n b_date_df['year'] = df['date'].dt.year\n b_date_df['month'] = df['date'].dt.month\n b_date_df['day'] =df['date'].dt.strftime(\"%A\")\n #b_date_df['dow'] =df['date'].dt.day\n df = df.join(b_date_df)\n return df\n#######################\ndef get_period_df(df):\n \"\"\"\n INPUT\n df -pandas dataframe\n \n OUTPUT\n df - a dataframe grouped to show the span of all the entries\n \"\"\"\n period =pd.DataFrame(df.groupby(['year','month'], sort = True)['day'].value_counts())\n period = period.rename(columns={'day':'count'}, level=0)\n period = period.reset_index().sort_values(by=['year', 'month', 'day']).reset_index(drop = True)\n return period\n#############################\ndef to_float(df, float_cols):\n \"\"\"\n INPUT\n df -pandas dataframe\n float_cols -list of columns to transform to float\n \n OUTPUT\n df - a dataframe with columns of choice transformed to float \n \"\"\"\n for col in float_cols:\n df[col] = df[col].str.replace('$', \"\", regex = False)\n df[col] = df[col].str.replace('%', \"\", regex = False)\n df[col] = df[col].str.replace(',', \"\", regex = False)\n for col in float_cols:\n df[col] = df[col].astype(float)\n return df\n#############################\ndef bool_nums(df, bool_cols):\n \"\"\"\n INPUT\n df -pandas dataframe\n bool_cols -list of columns with true or false strings\n \n OUTPUT\n df - a dataframe with columns of choice transforemed into binary values\n \"\"\"\n for col in bool_cols:\n df[col] = df[col].apply(lambda x: 1 if x == 't' else 0 )\n df = df.reset_index(drop= True)\n return df",
"_____no_output_____"
]
],
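[
[
"# Hedged sketch: a tiny check of to_float on a made-up price column, showing the '$' and ',' stripping\n# before the float cast (the toy dataframe is an assumption, not project data).\ntoy = pd.DataFrame({'price': ['$1,200.00', '$85.50', None]})\nprint(to_float(toy, ['price'])['price'])",
"_____no_output_____"
]
],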
[
[
"<font size = '3'>*Let's take a look at the resulted DataFrames after executing the previous fuc=nctions. I flipped the Boston calendar to have it start in ascending order like Seatle.*<font/>",
"_____no_output_____"
]
],
[
[
"b_cal_1 = to_float(b_cal, ['price'])\ns_cal_1 = to_float(s_cal, ['price'])\nb_cal_1 = create_dateparts(b_cal_1, 'date')\ns_cal_1 = create_dateparts(s_cal_1, 'date')\nb_cal_1 = bool_nums(b_cal_1, ['available'])\ns_cal_1 = bool_nums(s_cal_1, ['available'])\nb_cal_1 = b_cal_1.iloc[::-1].reset_index(drop=True)\n\ndisplay_side_by_side(b_cal_1.head(3),s_cal_1.head(3), titles = ['b_cal_1', 's_cal_1'])",
"_____no_output_____"
]
],
[
[
"<font size = '3'>*Let's take a look at the resulted time intervals for Both Boston and Seatle calendar tables*<font/>",
"_____no_output_____"
]
],
[
[
"b_period =get_period_df(b_cal_1)\ns_period =get_period_df(s_cal_1)\ndisplay_side_by_side(b_period.head(1), b_period.tail(1), titles = ['Boston Period'])\ndisplay_side_by_side(s_period.head(1), s_period.tail(1), titles = ['Seatle Period'])\n\nprint(\"Number of unique Listing IDs in Boston Calendar: \", len(b_cal_1.listing_id.unique()))\nprint(\"Number of unique Listing IDs in Seatle Calendar: \", len(s_cal_1.listing_id.unique()))\nprint('\\n')\n#b_period.iloc[0], s_period.iloc[0], b =0)",
"_____no_output_____"
]
],
[
[
"<font size ='3'>*Seems like they both span a year, through which all the listings are tracked in terms of availability. When we group by year and month; the count is equivalent to the numbers of the unique ids because all the ids are spanning the same interval. Let's check any anomalies*<font/>",
"_____no_output_____"
]
],
[
[
"def check_anomalies(df, col):\n list_ids_not_year_long = []\n for i in sorted(list(df[col].unique())):\n if df[df[col]== i].shape[0] != 365:\n list_ids_not_year_long.append(i)\n print(\"Entry Ids that don't span 1 year: \" , list_ids_not_year_long)",
"_____no_output_____"
],
[
"#Boston\ncheck_anomalies(b_cal_1, 'listing_id')",
"Entry Ids that don't span 1 year: [12898806]\n"
],
[
"#Seatle\ncheck_anomalies(s_cal_1, 'listing_id')",
"Entry Ids that don't span 1 year: []\n"
],
[
"## check this entry in Boston Calendar\nprint(\"Span of the entries for this listing, should be 365: \", b_cal_1[b_cal_1['listing_id']== 12898806].shape[0])\n## 2 years, seems like a duplicate as 730 = 365 * 2\none_or_two = pd.DataFrame(b_cal_1[b_cal_1['listing_id']==12898806].groupby(['year', 'month', 'day'])['day'].count()).day.unique()[0]\nprint(\"Should be 1: \", one_or_two)\n## It indeed is :)\nb_cal_1 = b_cal_1.drop_duplicates()\nprint(\"Size of anomaly listing, Should be = 365: \", b_cal_1.drop_duplicates()[b_cal_1.drop_duplicates().listing_id==12898806]['listing_id'].size)\nprint(\"After removing duplicates, Span of the entries for this listing, should be 365: \", b_cal_1[b_cal_1['listing_id']== 12898806].shape[0])\nprint(\"After removing duplicates, shape is: \", b_cal_1.shape)",
"Span of the entries for this listing, should be 365: 730\nShould be 1: 8\nSize of anomaly listing, Should be = 365: 365\nAfter removing duplicates, Span of the entries for this listing, should be 365: 365\nAfter removing duplicates, shape is: (1308525, 7)\n"
],
[
"# b_cal_1.to_csv('b_cal_1.csv')\n# s_cal_1.to_csv('s_cal_1.csv')",
"_____no_output_____"
]
],
[
[
"_______________________________________________________________________________________________________________________\n### Comments: \n[Boston & Seatle Calendar]\n- The datasets have information about listing dates, availability and price tracked over a year for ever listing id\n- There are no data entry errors, all nulls are due to the structuring of the Data (the listings that weren't available has no price)\n- I added 4 cols that contain dateparts that will aid further analysis and modeling\n- The Boston calendar Dataset ranges through `365`days from `6th of September'16` to `5th of September'17`, No nulls with `1308525` rows and `8` cols\n- The Seatle calendar Dataset ranges through `365`days from `4th of January'16` to `2nd of January'17`, No nulls with `1393570` rows and `8` cols\n- Number of unique Listing IDs in Boston Calendar: `3585`\n- Number of unique Listing IDs in Seatle Calendar: `3818`\n- It seems that the table is not documenting any rentals it just shows if the unit is available at a certain time and the price then.",
"_____no_output_____"
],
[
" _______________________________________________________________________________________________________________________",
"_____no_output_____"
],
[
"## Step 1: Continue - ",
"_____no_output_____"
],
[
 **Boston & Seatle Listings**">
"> **Boston & Seattle Listings**",
"_____no_output_____"
]
],
[
[
"b_list.head(1)\n#s_list.head(10)",
"_____no_output_____"
]
],
[
[
" <font size ='3'>*Check the sizes of cols & rows & check Nulls*<font/>",
"_____no_output_____"
]
],
[
[
"print_side_by_side(\"Boston listings size :\", b_list.shape, \"Seatle listings size :\", s_list.shape)\nprint_side_by_side(\"Number of Non-null cols in Boston listings: \", np.sum(b_list.isnull().sum()==0) ,\"Number of Non-null cols in Seatle listings: \", np.sum(s_list.isnull().sum()==0))\nset_difference = set(b_list.columns) - set(s_list.columns)\nprint(\"Columns in Boston but not in Seatle: \", set_difference)\nprint('\\n')",
"Boston listings size : (3585 95) Seatle listings size : (3818 92)\nNumber of Non-null cols in Boston listings: 51 Number of Non-null cols in Seatle listings: 47\nColumns in Boston but not in Seatle: {'interaction', 'house_rules', 'access'}\n\n\n"
]
],
[
[
" <font size ='3'>*Let's go through the columns of this table as they are a lot, decide on which would be useful, which would be ignored and which would be transformed based on intuition.* <font/>",
"_____no_output_____"
],
[
"> **to_parts:**<br><font size = '2'>(Divide into ranges)<font/><br>\n>* *maximum_nights* \n><br> \n> \n> **to_count:** <br><font size = '2'>(Provide a count)<font/><br>\n> * *amenities* <br>\n> * *host_verifications* \n><br> \n> \n>**to_dummy:** <br><font size = '2'>(Convert into dummy variables)<font/><br>\n>* *amenities* \n><br> \n> \n>**to_len_text:** <br><font size = '2'>(provide length of text)<font/><br>\n>* *name* \n>* *host_about* \n>* *summary* \n>* *description* \n>* *neighborhood_overview* \n>* *transit* \n><br> \n>\n>**to_days:** <br><font size = '2'>(calculate the difference between both columns to have a meaningful value of host_since in days)<font/><br>\n>* *host_since*\n>* *last_review*\n><br> \n>\n>**to_float:**<br><font size = '2'>(Transform to float)<font/><br>\n>* *cleaning_fee* <br>\n>* *host_response_rate* <br>\n>* *host_acceptance_rate* <br>\n>* *host_response_rate* <br> \n>* *host_acceptance_rate* <br>\n>* *extra_people* <br>\n>* *price* <br>\n><br> \n>\n> **to_binary:** <br><font size = '2'>(Transform to binary)<font/><br>\n>* *host_has_profile_pic* \n>* *host_identity_verified* \n>* *host_is_superhost* \n>* *is_location_exact* \n>* *instant_bookable* \n>* *require_guest_profile_picture* \n>* *require_guest_phone_verification* \n><br> \n>\n>**to_drop:**<br><font size = '2'>(Columns to be dropped)<font/>\n<br><br>\n>**reason: little use:** <br> \n>* *listing_url, scrape_id, last_scraped, experiences_offered, thumbnail_url,xl_picture_url, medium_url,*\n>* *host_id, host_url, host_thumbnail_url, host_picture_url, host_total_listings_count, neighbourhood,* \n>* *neighbourhood_group_cleansed, state, country_code, country, latitude, longitude,*\n>* *has_availability, calendar_last_scraped, host_name, picture_url, space, first_review, *\n><br><br>\n> \n>**reason: Nulls, text, only in Boston:** <br>\n>* *access , interaction, house_rules*\n><br><br>\n>\n>**reason> Nulls, 0 variability or extreme variability:** <br>\n>* *square_feet* ------------- *90% Null boston 97% Null seatle* <br>\n>* *weekly_price*-------------*75% Null boston 47% Null seatle* <br>\n>* *monthly_price*------------*75% Null boston 60% Null seatle* <br>\n>* *security_deposit*---------*65% Null boston 51% Null seatle* <br>\n>* *notes*---------------------*55% Null boston 42% Null seatle* <br>\n>* *jurisdiction_names*---------*100% Null in both* <br>\n>* *license*--------------------*100% Null in both* \n>* *required_license*-----------*100% Null in both* <br>\n>* *street*---------------------*High variability* <br>",
"_____no_output_____"
],
[
"<font size = '3' >*Let's write anymore functions needed to carry on these suggested changes*<font/>",
"_____no_output_____"
]
],
[
[
"drop_cols = ['listing_url', 'scrape_id', 'last_scraped', 'experiences_offered', 'thumbnail_url','xl_picture_url', \n'medium_url', 'host_id', 'host_url', 'host_thumbnail_url', 'host_picture_url', 'host_total_listings_count', \n'neighbourhood', 'neighbourhood_group_cleansed','state', 'country_code', 'country', 'latitude', 'longitude', \n'has_availability', 'calendar_last_scraped', 'host_name','square_feet', \n'weekly_price', 'monthly_price', 'security_deposit', 'notes', 'jurisdiction_names', 'license', 'requires_license', \n'street', 'picture_url', 'space','first_review', 'house_rules', 'access', 'interaction']\nfloat_cols = ['cleaning_fee', 'host_response_rate','host_acceptance_rate','host_response_rate',\n 'host_acceptance_rate','extra_people','price']\nlen_text_cols = ['name', 'host_about', 'summary', 'description','neighborhood_overview', 'transit']\ncount_cols = ['amenities', 'host_verifications'] \nd_col = [ 'amenities']\npart_col = ['maximum_nights']\nbool_cols = ['host_has_profile_pic', 'host_identity_verified', 'host_is_superhost', 'is_location_exact',\n 'instant_bookable', 'require_guest_profile_picture' , 'require_guest_phone_verification' ] \nday_cols = [ 'host_since', 'last_review']\n###########################################################################################################################\ndef to_drop(df, drop_cols):\n \"\"\"\n INPUT\n df -pandas dataframe\n drop_cols -list of columns to drop\n \n OUTPUT\n df - a dataframe with columns of choice dropped \n \"\"\"\n for col in drop_cols:\n if col in list(df.columns):\n df = df.drop(col, axis = 1)\n else:\n continue\n return df\n#################################\ndef to_len_text(df, len_text_cols):\n \"\"\"\n INPUT\n df -pandas dataframe\n len_text_cols- list of columns to return the length of text of their values\n \n OUTPUT\n df - a dataframe with columns of choice transformed to len(values) instead of long text\n \"\"\"\n df_new = df.copy()\n len_text = []\n new_len_text_cols = [] \n\n for col in len_text_cols:\n new_len_text_cols.append(\"len_\"+col)\n\n for i in df_new[col]:\n #print(col,i)\n try:\n len_text.append(len(i))\n except:\n len_text.append(i)\n #print('\\n'*10) \n df_new = df_new.drop(col, axis = 1)\n len_text_col = pd.Series(len_text) \n len_text_col = len_text_col.reset_index(drop = True)\n #print(len_text_col)\n df_new['len_'+col]= len_text_col\n len_text = []\n df_new[new_len_text_cols] = df_new[new_len_text_cols].fillna(0)\n return df_new, new_len_text_cols\n#########################\ndef to_parts(df, part_col):\n \"\"\"\n INPUT\n df -pandas dataframe\n part_col -list of columns to divide into \"week or less\" and \"more than a week\" depending on values\n \n OUTPUT\n df - a dataframe with columns of choice transformed to ranges of \"week or less\" and \"more than a week\"\n \"\"\"\n def to_apply(val):\n if val <= 7:\n val = '1 Week or less'\n elif (val >7) and (val<=14):\n val = '1 week to 2 weeks'\n elif (val >14) and (val<=30):\n val = '2 weeks to 1 month'\n elif (val >30) and (val>=60):\n val = '1 month to 2 months'\n elif (val >60) and (val>=90):\n val = '2 month to 3 months'\n elif (val >90) and (val>=180):\n val = '3 month to 6 months'\n else:\n val = 'More than 6 months' \n return val\n for part in part_col:\n df[part]= df[part].apply(to_apply)\n return df\n############################\ndef to_count(df, count_cols): \n \"\"\"\n INPUT\n df -pandas dataframe\n count_cols -list of columns to count the string items within each value\n \n OUTPUT\n df - a dataframe with columns of choice 
transformed to a count of values \n \"\"\"\n def to_apply(val):\n if \"{\" in val:\n val = val.replace('}', \"\").replace('{', \"\").replace(\"'\",\"\" ).replace('\"',\"\" ).replace(\"''\", \"\").strip().split(',')\n elif \"[\" in val:\n val = val.replace('[',\"\" ).replace(']',\"\" ).replace(\"'\",\"\" ).strip().split(\",\")\n return len(val) \n for col in count_cols:\n df['count_'+col]= df[col].apply(to_apply)\n return df\n########################\ndef to_items(df, d_col): \n \"\"\"\n INPUT\n df -pandas dataframe\n d_col -list of columns to divide the values to clean list of items\n \n OUTPUT\n df - a dataframe with columns of choice cleaned and returns the values as lists\n \"\"\"\n def to_apply(val):\n if \"{\" in val:\n val = val.replace('}', \"\").replace('{', \"\").replace(\"'\",\"\" ).replace('\"',\"\" ).replace(\"''\", \"\").lower().split(',')\n elif \"[\" in val:\n val = val.replace('[',\"\" ).replace(']',\"\" ).replace(\"'\",\"\" ).lower().split(\",\")\n return val \n def to_apply1(val):\n new_val = []\n if val == 'None':\n new_val.append(val)\n for i in list(val):\n if (i != \"\") and ('translation' not in i.lower()):\n new_val.append(i.strip())\n return new_val\n \n def to_apply2(val): \n if 'None' in val:\n return ['none']\n elif len((val)) == 0:\n return ['none']\n else:\n return list(val)\n \n for col in d_col:\n df[col]= df[col].apply(to_apply)\n df[col]= df[col].apply(to_apply1)\n df[col]= df[col].apply(to_apply2)\n return df\ndef items_counter(df, d_col):\n \"\"\"\n INPUT\n df -pandas dataframe\n count_col -list of columns to with lists as values to count\n \n OUTPUT\n all_strings - a dictionary with the count of every value every list within every series\n \"\"\"\n all_strings= {}\n def to_apply(val):\n for i in val:\n if i in list(all_strings.keys()):\n all_strings[i]+=1\n else:\n all_strings[i]=1 \n\n df[d_col].apply(to_apply)\n return all_strings\n###################################\ndef to_days(df, day_cols, na_date):\n \"\"\"\n INPUT\n df -pandas dataframe\n day_cols -list of columns to divide the values to clean list of items\n \n OUTPUT\n df - a dataframe with columns of choice cleaned and returns the values as lists\n \"\"\"\n#Since Boston lisitngs span from September'16 to september'17, we can impute using the month of march'16\n#Since Seatle lisitngs span from January'16 to January'17, we can impute using the month of june'16\n df = df.copy()\n df[[day_cols[0], day_cols[1]]]=df[[day_cols[0], day_cols[1]]].apply(pd.to_datetime)\n df = df.dropna(subset= [day_cols[0]], how ='any', axis = 0)\n df[day_cols[1]] = df[day_cols[1]].fillna(pd.to_datetime(na_date))\n df[day_cols[0]]= (df[day_cols[1]] - df[day_cols[0]]).apply(lambda x: round(x.value/(864*1e11)),2)\n df= df.drop(day_cols[1], axis =1 )\n df = df.reset_index(drop= True)\n return df\n########################################################################################################################### \ndef applier(df1,df2,drop = True, float_=True, len_text= True, count= True, items = True,\n parts = True , count_items = True, bool_num = True, days = True):\n \"\"\"\n INPUT\n df1,df2 - 2 pandas dataframes\n drop,float_,len_text, count, parts, date_time - Boolean values that corresponds to previosuly defined functions\n OUTPUT\n df - a clean dataframe that has undergone previously defined functions according to the boolean prameters passed\n \"\"\"\n while drop:\n df1 = to_drop(df1, drop_cols)\n df2 =to_drop(df2, drop_cols)\n break\n while float_:\n df1 =to_float(df1, float_cols)\n df2 =to_float(df2, 
float_cols)\n break\n while len_text:\n df1, nltc = to_len_text(df1, len_text_cols)\n df2, nltc = to_len_text(df2, len_text_cols)\n break\n while parts:\n df1 = to_parts(df1, part_col)\n df2 = to_parts(df2, part_col)\n break\n while count:\n df1 = to_count(df1, count_cols)\n df2 = to_count(df2, count_cols)\n df1 = df1.drop('host_verifications', axis =1 )\n df2 = df2.drop('host_verifications', axis =1 ) \n break\n while items:\n df1 = to_items(df1, d_col)\n df2 = to_items(df2, d_col)\n break\n while count_items:\n b_amens_count = pd.Series(items_counter(df1,'amenities')).reset_index().rename(columns = {'index':'amenities', 0:'count'}).sort_values(by='count', ascending =False).reset_index(drop =True)\n s_amens_count = pd.Series(items_counter(df2, 'amenities')).reset_index().rename(columns = {'index':'amenities', 0:'count'}).sort_values(by='count', ascending =False).reset_index(drop =True)\n a_counts = [b_amens_count,s_amens_count]\n break\n while bool_num:\n df1 = bool_nums(df1, bool_cols)\n df2 = bool_nums(df2, bool_cols)\n break\n while days:\n df1 = to_days(df1, day_cols, '2016-04-1')\n df2 = to_days(df2, day_cols, '2016-06-1')\n break\n if count_items:\n return df1, df2 ,a_counts\n else:\n return df1,df2",
"_____no_output_____"
],
[
"b_list_1, s_list_1, a_counts = applier(b_list, s_list) ",
"_____no_output_____"
]
],
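[
[
"# Hedged peek at the amenity counts returned by applier above (a_counts = [Boston, Seattle]),\n# since amenities will be dummified in the next step.\ndisplay_side_by_side(a_counts[0].head(5), a_counts[1].head(5), titles=['Boston amenities', 'Seattle amenities'])",
"_____no_output_____"
]
],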
[
[
"<font size = '3' >*Amenities seems like a good indicator of price as a response variable so let's have it dummified*<font/>\n<br>\n<font size = '2.75' >**This function takes forever(6 mins),so, it's commented out and I use the resulted dataframes that were written to CSV files**<font/>",
"_____no_output_____"
]
],
[
[
"# %%time\n# def to_dummy(df1,df2, col1, cols_ref1,cols_ref2):\n \n# def construct(df,col, cols_ref):\n# count = 0\n# for val2 in df[col]:\n# lister = []\n# for val1 in cols_ref[col]:\n# if val1 in val2:\n# lister.append(1)\n# else:\n# lister.append(0)\n# cols_ref = cols_ref.join(pd.Series(lister, name = count))\n# count+=1\n# cols_ref = cols_ref.drop('count', axis = 1).transpose()\n# cols_ref.columns = list(cols_ref.iloc[0,:])\n# return cols_ref\n# b_amens_1 =construct(df1, col1,cols_ref1)\n# s_amens_1 =construct(df2, col1,cols_ref2)\n# b_amens_1 = b_amens_1.drop('none', axis = 1) #.drop(0,axis=0).reset_index(drop= True)\n# b_amens_1 = b_amens_1.iloc[1:,:]\n# b_amens_1.columns = [\"{}_{}\".format(col1,col) for col in b_amens_1.columns]\n# s_amens_1 = s_amens_1.iloc[1:,:]\n# s_amens_1 = s_amens_1.drop('none', axis = 1)\n# s_amens_1.columns = [\"{}_{}\".format(col1,col) for col in s_amens_1.columns]\n# b_dummies = b_amens_1.reset_index(drop =True)\n# s_dummies = s_amens_1.reset_index(drop =True)\n# df1 = df1.join(b_dummies)\n# df2 = df2.join(s_dummies)\n# df1 = df1.drop([col1], axis = 1)\n# df2 = df2.drop([col1], axis = 1)\n# return b_dummies, s_dummies, df1, df2\n \n# b_d, s_d,b_list_d, s_list_d = to_dummy(b_list_1, s_list_1, 'amenities',\n# b_a_counts, s_a_counts)",
"_____no_output_____"
],
[
"# b_list_d.to_csv('b_list_d.csv')\n# s_list_d.to_csv('s_list_d.csv')",
"_____no_output_____"
],
[
"b_list_d = pd.read_csv('b_list_d.csv', index_col = 0)\ns_list_d = pd.read_csv('s_list_d.csv', index_col = 0)",
"_____no_output_____"
]
],
[
[
"<font size = '3' >*Check the nulls again*<font/><br>",
"_____no_output_____"
]
],
[
[
"df1= (b_list_d.isnull().sum()[b_list_d.isnull().sum()>0]/b_list_d.shape[0]*100).reset_index().rename(columns ={'index':'col_name',0:'nulls_proportion'})\ndf2 = (s_list_d.isnull().sum()[s_list_d.isnull().sum()>0]/s_list_d.shape[0]*100).reset_index().rename(columns ={'index':'col_name',0:'nulls_proportion'})\ndisplay_side_by_side(df1,df2, titles =['b_list_d_Nulls','s_list_d_Nulls' ])",
"_____no_output_____"
]
],
[
[
"_______________________________________________________________________________________________________________________\n### Comments: \n[Boston & Seatle Listings]\n- Boston listings size : `3585`, `95`/ Seatle listings size : `3818`, `92`\n- Number of Non-null cols in Boston listings: `51`, around half\n- Number of Non-null cols in Seatle listings: `47`, around half<br>\n- I wrote a series of functions that commenced some basic cleaning to ease analysis, with the option to switch off any of them depending on the future requirements of the analyses, some of what was done:\n>- Columns with relatively high number nulls or that have little to no forseeable use were removed \n>- Took the charachter length of the values in some of the cols with long text entries and many unique values, possibly the length of some fields maybe correlated somewhat with price.\n>- Columns with dates are transformed into Datetime, numerical values that were in text to floats\n>- Columns `amenities`and `host_verifications`were taken as counts, `amenities` was then dummified, for its seeming importance. \n>- `maximum_nights`column seems to lack some integrity so I divided it into time periods \n> Columns with t and f strings were converted into binary data. \n>- Difference between `host_since`and `last_review` was computed in days to `host_since`<br>\n>- All columns with only 't' or 'f' values were transformed in to binary values.\n\n- **After the basic cleaning and the dummification of `amenities`:** <br>\n~Boston listings size : `3585`, `98`/ Seatle listings size : `3818`, `98`. <br>\n~There are still nulls to deal with in case of modeling, but that depends on the requirements of each question.",
"_____no_output_____"
],
[
"_______________________________________________________________________________________________________________________",
"_____no_output_____"
],
[
"### Step 1: Continue - ",
"_____no_output_____"
],
[
 **Boston & Seatle Reviews**">
"> **Boston & Seattle Reviews**",
"_____no_output_____"
]
],
[
[
"#b_rev.head(3)\ns_rev.head(3)",
"_____no_output_____"
]
],
[
[
"<font size = '3' >*Check the sizes of cols & rows & check Nulls*<font/>",
"_____no_output_____"
]
],
[
[
"print_side_by_side(\"Boston reviews size:\", b_rev.shape,\"Seatle reviews size:\", s_rev.shape)\nprint_side_by_side(\"No. of unique listing ids:\", b_rev.listing_id.unique().size,\"No. of unique listing ids:\", s_rev.listing_id.unique().size)\nprint_side_by_side(\"Number of Non-null cols in Boston Reviews:\", np.sum(b_rev.isnull().sum()==0), \n\"Number of Non-null cols in Seatle Reviews:\", np.sum(s_rev.isnull().sum()==0))\nprint_side_by_side(\"Null cols % in Boston:\", (b_rev.isnull().sum()[b_rev.isnull().sum()>0]/b_rev.shape[0]*100).to_string(),\n\"Null cols % in Seatle:\", (s_rev.isnull().sum()[s_rev.isnull().sum()>0]/s_rev.shape[0]*100).to_string())\nprint_side_by_side(\"Null cols no. in Boston:\",(b_rev.isnull().sum()[b_rev.isnull().sum()>0]).to_string(),\n\"Null cols no. in Seatle:\", (s_rev.isnull().sum()[s_rev.isnull().sum()>0]).to_string())\nprint('\\n')",
"Boston reviews size: (68275 6) Seatle reviews size: (84849 6)\nNo. of unique listing ids: 2829 No. of unique listing ids: 3191\nNumber of Non-null cols in Boston Reviews: 5 Number of Non-null cols in Seatle Reviews: 5\nNull cols % in Boston: comments 0.077627 Null cols % in Seatle: comments 0.021214\nNull cols no. in Boston: comments 53 Null cols no. in Seatle: comments 18\n\n\n"
]
],
[
[
"<font size = '3' >**To extract analytical insights from the reviews entries, they ought to be transformed from text to numerical scores, to do so I will follow some steps:**<font/>",
"_____no_output_____"
],
[
"<font size = '3' >*1) Find all the words -excluding any non alphanumeric charachters - in each Dataset*<font/><br>\n<font size = '2' >**As the function takes 4 mins to execute, I commented it out and passed the resulted word lists as dfs to CSV files that were added to the project instead of running it in the notebook again.**<font/>",
"_____no_output_____"
]
],
[
[
"#%%time\n# def get_words(df, col):\n# \"\"\"\n# INPUT\n# df -pandas dataframe\n# col -column of which the values are text \n# \n# OUTPUT\n# df - a dataframe with a single colum of all the words \n# \"\"\"\n# all_strings = []\n# for val in df[col]:\n# try:\n# val_strings = [''.join(filter(str.isalnum, i.lower())) for i in val.split() if len(i)>3]\n# except:\n# continue\n# for word in val_strings:\n# if word not in all_strings:\n# all_strings.append(word)\n# val_strings = []\n# return pd.Series(all_strings).to_frame().reset_index(drop = True).rename(columns = {0:'words'})\n# boston_words = get_words(b_rev, 'comments')\n# seatle_words = get_words(s_rev, 'comments')",
"_____no_output_____"
],
[
"# boston_words.to_csv('boston_words.csv')\n# seatle_words.to_csv('seatle_words.csv')",
"_____no_output_____"
],
[
"boston_words = pd.read_csv('drafts/boston_words.csv', index_col= 0)\nseatle_words = pd.read_csv('drafts/seatle_words.csv', index_col= 0)\nprint(\"Boston words no.: \", boston_words.shape[0])\nprint(\"Seatle words no.: \", seatle_words.shape[0])\ndisplay_side_by_side(boston_words.head(5), seatle_words.head(5), titles = [ 'Boston', 'Seatle'])",
"Boston words no.: 54261\nSeatle words no.: 50627\n"
]
],
[
[
"<font size = '3' >*2) Read in positive and negative english word lists that are used for sentiment analysis*<font/>",
"_____no_output_____"
],
[
"### Citation:\n* Using this resource https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html#lexicon I downloaded a list of words with positive and negative connotations used for sentiment analysis\n* *Based on the book*: \n> Sentiment Analysis and Opinion Mining (Introduction and Survey), Morgan & Claypool, May 2012.",
"_____no_output_____"
]
],
[
[
"positive_words = pd.read_csv('drafts/positive-words.txt', sep = '\\t',encoding=\"ISO-8859-1\")\nnegative_words = pd.read_csv('drafts/negative-words.txt', sep = '\\t',encoding=\"ISO-8859-1\")\npositive_words = positive_words.iloc[29:,:].reset_index(drop = True).rename(columns = {';;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;':'words'})\nnegative_words = negative_words.iloc[31:,:].reset_index(drop = True).rename(columns = {';;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;':'words'})\nb_pos = np.intersect1d(np.array(boston_words['words'].astype(str)), np.array(positive_words['words']),assume_unique=True)\nb_neg = np.intersect1d(np.array(boston_words['words'].astype(str)), np.array(negative_words['words']),assume_unique=True)\ns_pos = np.intersect1d(np.array(seatle_words['words'].astype(str)), np.array(positive_words['words']),assume_unique=True)\ns_neg = np.intersect1d(np.array(seatle_words['words'].astype(str)), np.array(negative_words['words']),assume_unique=True)\nprint_side_by_side('Positive words count: ', positive_words.shape[0]\n,'Negative words count: ', negative_words.shape[0])\nprint_side_by_side(\"No. of positive words in Boston Reviews: \", len(b_pos)\n,\"No. of negative words in Boston Reviews: \", len(b_neg))\nprint_side_by_side(\"No. of positive words in Seatle Reviews: \", len(s_pos)\n,\"No. of negative words in Seatle Reviews: \", len(s_neg))\nprint('\\n')",
"Positive words count: 2005 Negative words count: 4781\nNo. of positive words in Boston Reviews: 1147 No. of negative words in Boston Reviews: 1507\nNo. of positive words in Seatle Reviews: 1235 No. of negative words in Seatle Reviews: 1556\n\n\n"
]
],
[
[
"<font size = '3' >*3) Let's translate the reviews from other languages to English*<font/>\n<br>\n<font size='3'>*Let's start with dropping the nulls, check the language of the reviews using `langdetect`, prepare the non english `comments` to be translated*<font/>",
"_____no_output_____"
]
],
[
[
"##Dependency googletrans-4.0.0rc1\n##langdetect\n\n# b_rev = b_rev.dropna(subset=['comments'], how = 'any', axis = 0)\n# s_rev = s_rev.dropna(subset=['comments'], how = 'any', axis = 0)\n\n# %%time\n# b_rev_t = b_rev.copy()\n# s_rev_t = s_rev.copy()\n# from langdetect import detect\n# def lang_check(val):\n# try:\n# return detect(val)\n# except:\n# return val\n \n# b_rev_t['review_lang']=b_rev['comments'].apply(lang_check)\n# s_rev_t['review_lang']=s_rev['comments'].apply(lang_check)\n# b_rev_t.to_csv('b_rev_t.csv')\n# s_rev_t.to_csv('s_rev_t.csv')\n# b_rev_t = pd.read_csv('b_rev_t.csv', index_col = 0)\n#s_rev_t = pd.read_csv('s_rev_t.csv', index_col = 0)\n# print('Proportion of non English reviews in Boston: ' ,b_rev_t[b_rev_t['review_lang']!= 'en'].shape[0]/b_rev_t.shape[0])\n# print('Proportion of non English reviews in Seattle: ',s_rev_t[s_rev_t['review_lang']!= 'en'].shape[0]/s_rev_t.shape[0])\nprint(f\"\"\"Proportion of non English reviews in Boston: 0.05436662660138958\nProportion of non English reviews in Seattle: 0.012424703233487757\"\"\")\n# b_to_trans =b_rev_t[b_rev_t['review_lang']!= 'en']\n# s_to_trans =s_rev_t[s_rev_t['review_lang']!= 'en']\n# b_to_trans['comments'] = b_to_trans['comments'].map(lambda val : str([re.sub(r\"[^a-zA-Z0-9]+\", '. ', k) for k in val.split(\"\\n\")]).replace('[',\" \").replace(']',\"\").replace(\"'\",\"\"))\n# s_to_trans['comments'] = s_to_trans['comments'].map(lambda val : str([re.sub(r\"[^a-zA-Z0-9]+\", '. ', k) for k in val.split(\"\\n\")]).replace('[',\" \").replace(']',\"\").replace(\"'\",\"\"))",
"Proportion of non English reviews in Boston: 0.05436662660138958\nProportion of non English reviews in Seattle: 0.012424703233487757\n"
]
],
[
[
"<font size='3'>*Since googletrans library is extremely unstable, I break down the non-English reviews in Boston into 4 dataframes*<font/>",
"_____no_output_____"
]
],
[
[
"# def trans_slicer(df,df1 = 0,df2 = 0,df3 = 0, df4 = 0):\n# dfs=[]\n# for i in [df1,df2,df3,df4]:\n# i = df[0:1000]\n# df = df.drop(index = i.index.values,axis = 0).reset_index(drop= True)\n# dfs.append(i.reset_index(drop =True))\n# # df = df.drop(index = range(0,df.shape[0],1),axis = 0).reset_index(drop= True)\n# return dfs\n# df1, df2, df3, df4 = trans_slicer(b_to_trans)",
"_____no_output_____"
],
[
"# %%time\n# import re\n# import time\n# import googletrans\n# import httpx\n# from googletrans import Translator\n# timeout = httpx.Timeout(10) # 5 seconds timeout\n# translator = Translator(timeout=timeout)\n\n# def text_trans(val):\n# vals = translator.translate(val, dest='en').text\n# time.sleep(10)\n# return vals\n# ############################################################\n# df1['t_comments'] = df2['comments'].apply(text_trans)\n# df1.to_csv('df2.csv')\n# df2['t_comments'] = df2['comments'].apply(text_trans)\n# df2.to_csv('df2.csv')\n# df3['t_comments'] = df3['comments'].apply(text_trans)\n# df3.to_csv('df3.csv')\n# df4['t_comments'] = df4['comments'].apply(text_trans)\n# df4.to_csv('df4.csv')\n# #4###########################################################\n# s_to_trans['t_comments'] = s_to_trans['comments'].apply(text_trans)\n# s_to_trans.to_csv('s_translate.csv')",
"_____no_output_____"
],
[
"# dfs = df1.append(df2)\n# dfs = dfs.append(df3)\n# dfs = dfs.append(df4)\n# dfs.index = b_to_trans.index\n# b_to_trans = dfs\n# b_to_trans['comments'] = b_to_trans['t_comments']\n# b_to_trans = b_to_trans.drop(columns =['t_comments'],axis = 1)\n#b_rev_t = b_rev_t.drop(index =b_to_trans.index,axis = 0)\n#b_rev_t = b_rev_t.append(b_to_trans)\n#b_rev_t = b_rev_t.sort_index(axis = 0).reset_index(drop= True)\n# b_rev_t['comments'] = b_rev_t['comments'].apply(lambda x: x.replace('.',' '))\n# b_rev_t.to_csv('b_rev_translated.csv')\n# s_to_trans['comments'] = s_to_trans['t_comments']\n# s_to_trans = s_to_trans.drop(columns =['t_comments'],axis = 1)\n# s_rev_t = s_rev_t.drop(index =s_to_trans.index,axis = 0)\n# s_rev_t = s_rev_t.append(s_to_trans)\n# s_rev_t = s_rev_t.sort_index(axis = 0).reset_index(drop= True)\n# s_rev_t['comments'] = s_rev_t['comments'].apply(lambda x: x.replace('.',' '))\n# s_rev_t.to_csv('s_rev_translated.csv')",
"_____no_output_____"
]
],
[
[
"<font size='3'>*Since googletrans takes around 3 hours to translate 1000 entries, that took some time, here are the resulted DataFrames*<font/>",
"_____no_output_____"
]
],
[
[
"b_rev_trans = pd.read_csv('b_rev_translated.csv', index_col =0)\ns_rev_trans = pd.read_csv('s_rev_translated.csv', index_col =0)",
"_____no_output_____"
]
],
[
[
"<font size = '3' >*4) Add a scores column using the previous resource as a reference to evaulate the score of each review*<font/><br>",
"_____no_output_____"
]
],
[
[
"# %%time\n# def create_scores(df,col, df_pos_array, df_neg_array):\n# \"\"\"\n# INPUT\n# df -pandas dataframe\n# col -column with text reviews to be transformed in to positive and negative scores\n# pos_array- array with reference positive words for the passed df\n# neg_array- array with reference negative words for the passed df\n# OUTPUT\n# df - a dataframe with a score column containing positive and negative scores\"\n# \"\"\"\n# def get_score(val):\n# val_strings = [''.join(filter(str.isalnum, i.lower())) for i in str(val).split() if len(i)>3]\n# pos_score = len(np.intersect1d(np.array(val_strings).astype(object), df_pos_array, assume_unique =True))\n# neg_score = len(np.intersect1d(np.array(val_strings).astype(object), df_neg_array, assume_unique =True))\n# return pos_score - neg_score +1\n# df['score']= df[col].apply(get_score)\n# return df\n\n# b_rev_score = create_scores(b_rev_trans, 'comments', b_pos, b_neg)\n# s_rev_score = create_scores(s_rev_trans, 'comments', s_pos, s_neg)",
"_____no_output_____"
],
[
"# b_rev_score.to_csv('b_rev_score.csv')\n# s_rev_score.to_csv('s_rev_score.csv')",
"_____no_output_____"
]
],
[
[
"<font size = '3' >*As this function takes a while as well, let's write the results into to csv files and read the frames again and then show some samples.*<font/>",
"_____no_output_____"
]
],
[
[
"b_rev_score = pd.read_csv('b_rev_score.csv', index_col = 0)\ns_rev_score = pd.read_csv('s_rev_score.csv', index_col = 0)\nsub_b_rev = b_rev_score.iloc[:,[5,6,7]]\nsub_s_rev = s_rev_score.iloc[:,[5,6,7]]\ndisplay_side_by_side(sub_b_rev.head(3), sub_s_rev.head(3), titles= ['Boston Reviews', 'Seatle_reviews'])",
"_____no_output_____"
]
],
[
[
" _______________________________________________________________________________________________________________________\n\n### Comments: \n[Boston & Seatle Reviews]\n- Boston reviews size : (68275, 6)\n- Seatle reviews size : (84849, 6)\n- Nulls are only in `comments`columns in both Datasets: \n- Null percentage in Boston Reviews: 0.08%\n- Null percentage in Seatle Reviews: 0.02%\n- I added a score column to both tables to reflect positive or negative reviews numerically with the aid of an external resource.",
"_____no_output_____"
],
[
" _______________________________________________________________________________________________________________________",
"_____no_output_____"
],
[
"### Step 2: Formulating Questions\n<font size = '3' >*After going through the data I think those questions would be of interest:*<font/>",
"_____no_output_____"
],
[
"### *Q: How can you compare the reviews in both cities ?*\n### *Q: What aspects of a listing influences the price in both cities?*\n### *Q: How can we predict the price?*\n### *Q: How do prices vary through the year in both cities ? when is the season and off season in both cities?*",
"_____no_output_____"
],
[
"_______________________________________________________________________________________________________________________",
"_____no_output_____"
],
[
"### *Q: How can you compare the reviews in both cities ?*",
"_____no_output_____"
],
[
"<font size = '3' >*Let's attempt to statistically describe the reviews in both cities*<font/>",
"_____no_output_____"
]
],
[
[
"print_side_by_side(' Boston: ', ' Seattle: ', b = 0)\nprint_side_by_side(' Maximum score : ', b_rev_score.iloc[b_rev_score.score.idxmax()].score,\n ' Maximum Score : ', s_rev_score.iloc[s_rev_score.score.idxmax()].score)\nprint_side_by_side(' Minimum Score : ', b_rev_score.iloc[b_rev_score.score.idxmin()].score,\n ' Minimum Score : ', s_rev_score.iloc[s_rev_score.score.idxmin()].score)\nprint_side_by_side(' Most common score: ', b_rev_score['score'].mode().to_string(),\n' Most common score: ', s_rev_score['score'].mode().to_string())\nprint_side_by_side(' Mean score: ', round(b_rev_score['score'].mean(),2)\n,' Mean score: ', round(s_rev_score['score'].mean(),2))\nprint_side_by_side(' Median score: ',round( b_rev_score['score'].median(),2),\n' Median score: ', s_rev_score['score'].median())\nprint_side_by_side(' Standard deviation: ', round(b_rev_score['score'].std(),2)\n,' Standard deviation: ', round(s_rev_score['score'].std(),2))\n# print_side_by_side(' Z score of -2: ', round(b_rev_score['score'].mean()-2*round(b_rev_score['score'].std(),2),1)\n# ,' Z score of -2: ', round(s_rev_score['score'].mean()-2*round(s_rev_score['score'].std(),2)),1)\n# print('Score: ', b_rev_score.iloc[b_rev_score.score.idxmax()].score)\n# b_rev_score.iloc[b_rev_score.score.idxmax()].comments\n\n\nplt.figure(figsize = (14,8))\nplt.subplot(2,1,1)\nplt.title('Boston Reviews', fontsize = 18)\nsns.kdeplot(b_rev_score.score, bw_adjust=2)\nplt.axvline(x= b_rev_score['score'].mean(), color = 'orange', alpha = 0.6)\nplt.axvline(x= b_rev_score['score'].median(), color = 'gray', alpha = 0.6)\nplt.xlim(-15,30)\nplt.xlabel('', fontsize = 14)\nplt.ylabel('Count', fontsize = 14)\nplt.legend(['Scores','mean', 'median'])\norder = np.arange(-15,31,3)\nplt.xticks(order,order, fontsize = 12)\n\nplt.subplot(2,1,2)\nplt.title('Seattle Reviews', fontsize = 18)\nsns.kdeplot(s_rev_score.score, bw_adjust=2)\nplt.axvline(x= s_rev_score['score'].mean(), color = 'orange', alpha = 0.6)\nplt.axvline(x= s_rev_score['score'].median(), color = 'gray', alpha = 0.6)\nplt.xlim(-15,30)\nplt.xlabel('Scores', fontsize = 18)\nplt.ylabel('Count', fontsize = 14)\nplt.legend(['Scores','mean','median'])\nplt.xticks(order,order, fontsize = 12)\n\nplt.tight_layout();",
" Boston: Seattle: \n Maximum score : 34 Maximum Score : 39\n Minimum Score : -15 Minimum Score : -15\n Most common score: 0 5 Most common score: 0 5\n Mean score: 5.84 Mean score: 6.55\n Median score: 5.0 Median score: 6.0\n Standard deviation: 3.19 Standard deviation: 3.28\n"
]
],
[
[
">* <font size = '3'>**The scores clearly follow a normal distribution in both cities, with close standard deviations**</font>\n>* <font size = '3'>**The mean score of Seattle (6.55) is a bit higher than Boston (5.84)**</font>\n>* <font size = '3'>**The median score in both cities is a bit less than the mean which indicates a slight right skew**</font>",
"_____no_output_____"
],
[
"<font size = '3' >*Let'stake a look on the boxplots to have more robust insights*<font/>",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize = (15,6))\nplt.subplot(2,1,1)\nplt.title('Boston Reviews', fontsize = 18)\nsns.boxplot(data = b_rev_score, x = b_rev_score.score)\nplt.axvline(x= b_rev_score['score'].mean(), color = 'orange', alpha = 0.6)\n# plt.axvline(x= b_rev_score['score'].mean()+2*round(b_rev_score['score'].std(),2), color = 'red', alpha = 0.6)\n# plt.axvline(x= b_rev_score['score'].mean()-2*round(b_rev_score['score'].std(),2), color = 'red', alpha = 0.6)\nplt.xlim(-3,15)\nplt.ylabel('Count', fontsize = 16)\norder = np.arange(-3,15,1)\nplt.xticks(order,order, fontsize = 13)\nplt.xlabel('')\n\nplt.subplot(2,1,2)\nplt.title('Seattle Reviews', fontsize = 18)\nsns.boxplot(data = s_rev_score, x = s_rev_score.score)\nplt.axvline(x= s_rev_score['score'].mean(), color = 'orange', alpha = 0.6)\n# plt.axvline(x= s_rev_score['score'].mean()+2*round(s_rev_score['score'].std(),2), color = 'red', alpha = 0.6)\n# plt.axvline(x= s_rev_score['score'].mean()-2*round(s_rev_score['score'].std(),2), color = 'red', alpha = 0.6)\nplt.xlim(-3,15)\nplt.xlabel('Scores', fontsize = 18)\nplt.ylabel('Count', fontsize = 16)\nplt.xticks(order,order, fontsize = 13)\nplt.tight_layout();",
"_____no_output_____"
]
],
[
[
">* <font size = '3'>**50% of The scores in both cities lies between 4 and 8**</font>\n>* <font size = '3'>**The IQR of the scores in both cities lies between -2 to 14**</font>",
"_____no_output_____"
],
[
"<font size = '3' >*Finally, what's the proportion of negative scores in both cities*<font/>",
"_____no_output_____"
]
],
[
[
"b_rev_score['grade']= b_rev_score['score'].apply(lambda x: 1 if x >0 else 0)\ns_rev_score['grade']= s_rev_score['score'].apply(lambda x: 1 if x >0 else 0)\nprint_side_by_side('Boston: ', 'Seattle: ', b=0)\nprint_side_by_side('Negative reviews proportion: ',\nround(b_rev_score['grade'][b_rev_score.grade == 0].count()/b_rev_score.shape[0],3),\n'Negative reviews proportion: ',\nround(s_rev_score['grade'][s_rev_score.grade == 0].count()/s_rev_score.shape[0],3))",
"Boston: Seattle: \nNegative reviews proportion: 0.012 Negative reviews proportion: 0.005\n"
]
],
[
[
"><font size = '3'>**Further exploration:**</font>\n<br>\n>* <font size = '3'>**Use an NLP model be used to better classify the sentiment in the reviews**</font>\n>* <font size = '3'>**Explore how to predict reviews using aspects of a listing**</font>\n>* <font size = '3'>**Explore the relatioship between average price per meter in each city's estates/ temperature trends and reviews**</font>",
"_____no_output_____"
],
[
"_______________________________________________________________________________________________________________________",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d095f44b58ef447320cb9243d6610f0930aa7fb9 | 144,466 | ipynb | Jupyter Notebook | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum | d94005e1e4d46024d901be0bd8e554908e53dc05 | [
"MIT"
] | 1 | 2020-05-13T20:17:00.000Z | 2020-05-13T20:17:00.000Z | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum | d94005e1e4d46024d901be0bd8e554908e53dc05 | [
"MIT"
] | null | null | null | mtcplv_discrepancy.ipynb | HaoLu-a/cPCA-erratum | d94005e1e4d46024d901be0bd8e554908e53dc05 | [
"MIT"
] | null | null | null | 371.377892 | 63,772 | 0.941689 | [
[
[
"# Appendix\n\nHao Lu 04/04/2020",
"_____no_output_____"
],
[
"In this notebook, we simulated EEG data with the method described in the paper by Bharadwaj and Shinn-Cunningham (2014) and analyzed the data with the toolbox proposed in the same paper.\n\nThe function was modifed so the values of thee variables within the function can be extracted and studied.\n\nReference: \nBharadwaj, H. M., & Shinn-Cunningham, B. G. (2014). Rapid acquisition of auditory subcortical steady state responses using multichannel recordings. Clinical Neurophysiology, 125(9), 1878-1888.",
"_____no_output_____"
]
],
[
[
"# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pickle\nimport random\n\nfrom scipy import linalg\nfrom anlffr import spectral,dpss",
"/home/luxx0489/.conda/envs/mne/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject\n return f(*args, **kwds)\n"
],
[
"sfreq = 10000\n\nrandom.seed(2020)\nphase_list = [random.uniform(-np.pi,np.pi) for i in range(32)]",
"_____no_output_____"
]
],
[
[
"The phase of the signal from 32 channels were randomly sampled from a uniform distribution",
"_____no_output_____"
]
],
[
[
"plt.plot(phase_list)\nplt.xlabel('Number of Channel')\nplt.ylabel('Phase of signal')",
"_____no_output_____"
]
],
[
[
"The signal is defined as 100 Hz SSSR",
"_____no_output_____"
]
],
[
[
"signal = np.zeros((32,200,int(sfreq*0.2)))\nxt = np.linspace(0,0.2,sfreq*0.2)\n\nfor iChannel in range(32):\n for iTrial in range(200):\n signal[iChannel,iTrial,:] = np.sin(xt*100*2*np.pi+phase_list[iChannel])\n \n# plot first two channels to show the phase differences\nplt.plot(xt,signal[0:2,0,:].transpose())",
"<string>:6: DeprecationWarning: object of type <class 'float'> cannot be safely interpreted as an integer.\n"
]
],
[
[
"The signal to noise ratio (SNR) in the simulated data was set as -40 dB for all channels",
"_____no_output_____"
]
],
[
[
"std = 10**(40/20)*np.sqrt((signal**2).mean())\n\nnoise = np.random.normal(0,std,signal.shape)",
"_____no_output_____"
]
],
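[
[
"As a quick sanity check (a minimal sketch using the `signal` and `noise` arrays defined above), the realised SNR can be computed directly and should come out close to -40 dB:\n\n```python\nsignal_rms = np.sqrt((signal**2).mean())\nnoise_rms = np.sqrt((noise**2).mean())\nsnr_db = 20*np.log10(signal_rms/noise_rms)  # expected to be approximately -40 dB\nprint(snr_db)\n```",
"_____no_output_____"
]
],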
[
[
"The simulated data was analyzed through the code from the function anlffr.spectral.mtcplv ",
"_____no_output_____"
]
],
[
[
"params = dict(Fs = sfreq, tapers = [1,1], fpass = [80, 120], itc = 0, pad = 1)\n\nx=signal + noise\n\n\n#codes from the dpss tool of anlffr to make sure the multitaper part is consistent\n\nif(len(x.shape) == 3):\n timedim = 2\n trialdim = 1\n ntrials = x.shape[trialdim]\n nchans = x.shape[0]\n \nnfft, f, fInd = spectral._get_freq_vector(x, params, timedim)\nntaps = params['tapers'][1]\nTW = params['tapers'][0]\nw, conc = dpss.dpss_windows(x.shape[timedim], TW, ntaps)\n",
"_____no_output_____"
],
[
"# the original version of mtcplv\n\nplv = np.zeros((ntaps, len(fInd)))\nfor k, tap in enumerate(w):\n xw = np.fft.rfft(tap * x, n=nfft, axis=timedim)\n\n if params['itc']:\n C = (xw.mean(axis=trialdim) /\n (abs(xw).mean(axis=trialdim))).squeeze()\n else:\n C = (xw / abs(xw)).mean(axis=trialdim).squeeze()\n\n for fi in np.arange(0, C.shape[1]):\n Csd = np.outer(C[:, fi], C[:, fi].conj())\n vals = linalg.eigh(Csd, eigvals_only=True)\n plv[k, fi] = vals[-1] / nchans\n\n# Average over tapers and squeeze to pretty shapes\nplv = (plv.mean(axis=0)).squeeze()\nplv = plv[fInd]",
"_____no_output_____"
]
],
[
[
"The mtcplv did capture the 100 Hz component ",
"_____no_output_____"
]
],
[
[
"plt.plot(f,plv)\nplt.xlabel('frequency')\nplt.ylabel('output of mtcPLV')",
"_____no_output_____"
]
],
[
[
"However, the output of mtcplv perfectly overlaps with the average of squared single-channel PLV stored in matrix C",
"_____no_output_____"
]
],
[
[
"plt.plot(f,abs(C**2).mean(0)[fInd], label='average of square', alpha=0.5)\nplt.plot(f,plv,label = 'mtcplv', alpha = 0.5)\nplt.plot(f,abs(C**2).mean(0)[fInd] - plv, label='difference')\nplt.legend()\nplt.xlabel('frequency')\nplt.ylabel('PLV')",
"_____no_output_____"
]
],
[
[
"We then check the eigen value decomposition around the 100 Hz peak and there is only one non-zero eigen value as expected",
"_____no_output_____"
]
],
[
[
"fi = np.argmax(plv)+np.argwhere(fInd==True).min()\n\nCsd = np.outer(C[:, fi], C[:, fi].conj())\nvals = linalg.eigh(Csd, eigvals_only=True)\n\nplt.bar(np.arange(32),vals[::-1])\nplt.xlabel('Principle Components')\nplt.ylabel('Eigen Values')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0960c2fe53b4948be8e490b757a12dd6482374e | 32,862 | ipynb | Jupyter Notebook | econmt-statistics.ipynb | lnsongxf/coding-for-economists | d54f47ec98a38a1b45de183a1f5c746f38941a8a | [
"MIT"
] | 1 | 2021-06-12T05:18:08.000Z | 2021-06-12T05:18:08.000Z | econmt-statistics.ipynb | kwrahman/coding-for-economists | ceff786189a1f35ce843a59d88e8988df0a5fbff | [
"MIT"
] | null | null | null | econmt-statistics.ipynb | kwrahman/coding-for-economists | ceff786189a1f35ce843a59d88e8988df0a5fbff | [
"MIT"
] | 1 | 2021-10-29T22:20:08.000Z | 2021-10-29T22:20:08.000Z | 39.592771 | 1,040 | 0.613322 | [
[
[
"# Statistics\n",
"_____no_output_____"
],
[
"## Introduction\n\nIn this chapter, you'll learn about how to do statistics with code. We already saw some statistics in the chapter on probability and random processes: here we'll focus on computing basic statistics and using statistical tests. We'll make use of the excellent [*pingouin*](https://pingouin-stats.org/index.html) statistics package and its documentation for many of the examples and methods in this chapter {cite}`vallat2018pingouin`. This chapter also draws on Open Intro Statistics {cite}`diez2012openintro`.\n\n### Notation and basic definitions\n\nGreek letters, like $\\beta$, are the truth and represent parameters. Modified Greek letters are an estimate of the truth, for example $\\hat{\\beta}$. Sometimes Greek letters will stand in for vectors of parameters. Most of the time, upper case Latin characters such as $X$ will represent random variables (which could have more than one dimension). Lower case letters from the Latin alphabet denote realised data, for instance $x$ (which again could be multi-dimensional). Modified Latin alphabet letters denote computations performed on data, for instance $\\bar{x} = \\frac{1}{n} \\displaystyle\\sum_{i} x_i$ where $n$ is number of samples. Parameters are given following a vertical bar, for example if $f(x|\\mu, \\sigma)$ is a probability density function, the vertical line indicates that its parameters are $\\mu$ and $\\sigma$. The set of distributions with densities $f_\\theta(x)$, $\\theta \\in \\Theta$ is called a parametric family, eg there is a family of different distributions that are parametrised by $\\theta$.\n\nA **statistic** $T(x)$ is a function of the data $x=(x_1, \\dots, x_n)$. \n\nAn **estimator** of a parameter $\\theta$ is a function $T=T(x)$ which is used to estimate $\\theta$ based on observations of data. $T$ is an unbiased estimator if $\\mathbb{E}(T) = \\theta$.\n\nIf $X$ has PDF $f(x|\\theta)$ then, given the observed value $x$ of $X$, the **likelihood** of $\\theta$ is defined by $\\text{lik}(\\theta) = f(x | \\theta)$. For independent and identically distributed observed values, then $\\text{lik}(\\theta) = f(x_1, \\dots, x_n| \\theta) = \\Pi_{i=1}^n f(x_i | \\theta)$. The $\\hat{\\theta}$ such that this function attains its maximum value is the **maximum likelihood estimator (MLE)** of $\\theta$.\n\nGiven an MLE $\\hat{\\theta}$ of $\\theta$, $\\hat{\\theta}$ is said to be **consistent** if $\\mathbb{P}(\\hat{\\theta} - \\theta > \\epsilon) \\rightarrow 0$ as $n\\rightarrow \\infty$.\n\nAn estimator *W* is **efficient** relative to another estimator $V$ if $\\text{Var}(W) < \\text{Var}(V)$.\n\nLet $\\alpha$ be the 'significance level' of a test statistic $T$.\n\nLet $\\gamma(X)$ and $\\delta(X)$ be two statistics satisfying $\\gamma(X) < \\delta(X)$ for all $X$. If on observing $X = x$, the inference can be made that $\\gamma(x) \\leq \\theta \\leq \\delta(x)$. Then $[\\delta(x), \\gamma(x)]$ is an **interval estimate** and $[\\delta(X), \\gamma(X)]$ is an **interval estimator**. The random interval (random because the *endpoints* are random variables) $[\\delta(X), \\gamma(X)]$ is called a $100\\cdot\\alpha \\%$ **confidence interval** for $\\theta$. Of course, there is a true $\\theta$, so either it is in this interval or it is not. 
But if the confidence interval was constructed many times over using samples, $\\theta$ would be contained within it $100\\cdot\\alpha \\%$ of the time.\n\nA **hypothesis test** is a conjecture about the distribution of one or more random variables, and a test of a hypothesis is a procedure for deciding whether or not to reject that conjecture. The **null hypothesis**, $H_0$, is only ever conservatively rejected and represents the default position. The **alternative hypothesis**, $H_1$, is the conclusion contrary to this.\n\nA type I error occurs when $H_0$ is rejected when it is true, ie when a *true* null hypothesis is rejected. Mistakenly failing to reject a false null hypothesis is called a type II error.\n\n\nIn the simplest situations, the upper bound on the probability of a type I error is called the size or **significance level** of the *test*. The **p-value** of a random variable $X$ is the smallest value of the significance level (denoted $\\alpha$) for which $H_0$ would be rejected on the basis of seeing $x$. The p-value is sometimes called the significance level of $X$. The probability that a test will reject the null is called the power of the test. The probability of a type II error is equal to 1 minus the power of the test.\n\n\nRecall that there are two types of statistics out there: parametrised, eg by $\\theta$, and non-parametrised. The latter are often distribution free (ie don't involve a PDF) or don't require parameters to be specified.\n\n### Imports\n\nFirst we need to import the packages we'll be using",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport pingouin as pg\nimport statsmodels.formula.api as smf\nfrom numpy.random import Generator, PCG64\n\n# Set seed for random numbers\nseed_for_prng = 78557\nprng = Generator(PCG64(seed_for_prng))",
"_____no_output_____"
]
],
[
[
"## Basic statistics\n\nLet's start with computing the simplest statistics you can think of using some synthetic data. Many of the functions have lots of extra options that we won't explore here (like weights or normalisation); remember that you can see these using the `help()` method. \n\nWe'll generate a vector with 100 entries:",
"_____no_output_____"
]
],
[
[
"data = np.array(range(100))\ndata",
"_____no_output_____"
],
[
"from myst_nb import glue\nimport sympy\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\ndict_fns = {'mean': np.mean(data),\n 'std': np.std(data),\n 'mode': stats.mode([0, 1, 2, 3, 3, 3, 5])[0][0],\n 'median': np.median(data)}\n\nfor name, eval_fn in dict_fns.items():\n glue(name, f'{eval_fn:.1f}')\n\n\n# Set max rows displayed for readability\npd.set_option('display.max_rows', 6)\n# Plot settings\nplt.style.use('plot_style.txt')",
"_____no_output_____"
]
],
[
[
"Okay, let's see how some basic statistics are computed. The mean is `np.mean(data)=` {glue:}`mean`, the standard deviation is `np.std(data)=` {glue:}`std`, and the median is given by `np.median(data)= `{glue:}`median`. The mode is given by `stats.mode([0, 1, 2, 3, 3, 3, 5])[0]=` {glue:}`mode` (access the counts using `stats.mode(...)[1]`).\n\nLess famous quantiles than the median are given by, for example for $q=0.25$,",
"_____no_output_____"
]
],
[
[
"np.quantile(data, 0.25)",
"_____no_output_____"
]
],
[
[
"As with **pandas**, **numpy** and **scipy** work on scalars, vectors, matrices, and tensors: you just need to specify the axis that you'd like to apply a function to:",
"_____no_output_____"
]
],
[
[
"data = np.fromfunction(lambda i, j: i + j, (3, 6), dtype=int)\ndata",
"_____no_output_____"
],
[
"np.mean(data, axis=0)",
"_____no_output_____"
]
],
[
[
"Remember that, for discrete data points, the $k$th (unnormalised) moment is\n\n$$\n\\hat{m}_k = \\frac{1}{n}\\displaystyle\\sum_{i=1}^{n} \\left(x_i - \\bar{x}\\right)^k\n$$\n\nTo compute this use scipy's `stats.moment(a, moment=1)`. For instance for the kurtosis ($k=4$), it's",
"_____no_output_____"
]
],
[
[
"stats.moment(data, moment=4, axis=1)",
"_____no_output_____"
]
],
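[
[
"To connect the formula to the code, here is a minimal sketch that computes the fourth central moment by hand for one row of `data` and checks it against **scipy**:\n\n```python\nrow = data[0]\nk = 4\nmanual_moment = ((row - row.mean())**k).mean()  # apply the formula directly\nprint(manual_moment, stats.moment(row, moment=k))  # the two values should match\n```",
"_____no_output_____"
]
],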
[
[
"Covariances are found using `np.cov`.",
"_____no_output_____"
]
],
[
[
"np.cov(np.array([[0, 1, 2], [2, 1, 0]]))",
"_____no_output_____"
]
],
[
[
"Note that, as expected, the $C_{01}$ term is -1 as the vectors are anti-correlated.",
"_____no_output_____"
],
[
"## Parametric tests\n\nReminder: parametric tests assume that data are effectively drawn a probability distribution that can be described with fixed parameters.",
"_____no_output_____"
],
[
"### One-sample t-test\n\nThe one-sample t-test tells us whether a given parameter for the mean, i.e. a suspected $\\mu$, is likely to be consistent with the sample mean. The null hypothesis is that $\\mu = \\bar{x}$. Let's see an example using the default `tail='two-sided'` option. Imagine we have data on the number of hours people spend working each day and we want to test the (alternative) hypothesis that $\\bar{x}$ is not $\\mu=$8 hours:",
"_____no_output_____"
]
],
[
[
"x = [8.5, 5.4, 6.8, 9.6, 4.2, 7.2, 8.8, 8.1]\n\npg.ttest(x, 8).round(2)",
"_____no_output_____"
]
],
[
[
"(The returned object is a **pandas** dataframe.) We only have 8 data points, and so that is a great big confidence interval! It's worth remembering what a t-statistic and t-test really are. In this case, the statistic that is constructed to test whether the sample mean is different from a known parameter $\\mu$ is\n\n$$\nT = \\frac{\\sqrt{n}(\\bar{x}-\\mu)}{\\hat{\\sigma}} \\thicksim t_{n-1}\n$$\n\nwhere $t_{n-1}$ is the student's t-distribution and $n-1$ is the number of degrees of freedom. The $100\\cdot(1-\\alpha)\\%$ test interval in this case is given by\n\n$$\n1 - \\alpha = \\mathbb{P}\\left(-t_{n-1, \\alpha/2} \\leq \\frac{\\sqrt{n}(\\bar{x} - \\mu)}{\\hat{\\sigma}} \\leq t_{n-1,\\alpha/2}\\right)\n$$\n\nwhere we define $t_{n-1, \\alpha/2}$ such that $\\mathbb{P}(T > t_{n-1, \\alpha/2}) = \\alpha/2$. For $\\alpha=0.05$, implying confidence intervals of 95%, this looks like:\n\n",
"_____no_output_____"
]
],
[
[
"import scipy.stats as st\n\ndef plot_t_stat(x, mu):\n T = np.linspace(-7, 7, 500)\n pdf_vals = st.t.pdf(T, len(x)-1)\n\n sigma_hat = np.sqrt(np.sum( (x-np.mean(x))**2)/(len(x)-1))\n actual_T_stat = (np.sqrt(len(x))*(np.mean(x) - mu))/sigma_hat\n\n alpha = 0.05\n T_alpha_over_2 = st.t.ppf(1.0-alpha/2, len(x)-1)\n\n interval_T = T[((T>-T_alpha_over_2) & (T<T_alpha_over_2))]\n interval_y = pdf_vals[((T>-T_alpha_over_2) & (T<T_alpha_over_2))]\n\n fig, ax = plt.subplots()\n ax.plot(T, pdf_vals, label=f'Student t: dof={len(x)-1}', zorder=2)\n ax.fill_between(interval_T, 0, interval_y, alpha=0.2, label=r'95% interval', zorder=1)\n ax.plot(actual_T_stat, st.t.pdf(actual_T_stat, len(x)-1), 'bo', ms=15, label=r'$\\sqrt{n}(\\bar{x} - \\mu)/\\hat{\\sigma}}$',\n color='orchid', zorder=4)\n ax.vlines(actual_T_stat, 0, st.t.pdf(actual_T_stat, len(x)-1), color='orchid', zorder=3)\n ax.set_xlabel('Value of statistic T')\n ax.set_ylabel('PDF')\n ax.set_xlim(-7, 7)\n ax.set_ylim(0., 0.4)\n ax.legend(frameon=False)\n plt.show()\n\nmu = 8\nplot_t_stat(x, mu)",
"_____no_output_____"
]
],
[
[
"In this case, we would reject the alternative hypothesis. You can see why from the plot; the test statistic we have constructed lies within the interval where we cannot reject the null hypothesis. $\\bar{x}-\\mu$ is close enough to zero to give us cause for concern. (You can also see from the plot why this is a two-tailed test: we don't care if $\\bar{x}$ is greater or less than $\\mu$, just that it's different--and so the test statistic could appear in either tail of the distribution for us to accept $H_1$.)\n\nWe accept the null here, but about if there were many more data points? Let's try adding some generated data (pretend it is from making extra observations).\n",
"_____no_output_____"
]
],
[
[
"# 'Observe' extra data\nextra_data = prng.uniform(5.5, 8.5, size=(30))\n# Add it in to existing vector\nx_prime = np.concatenate((np.array(x), extra_data), axis=None)\n# Run t-test\npg.ttest(x_prime, 8).round(2)",
"_____no_output_____"
]
],
[
[
"Okay, what happened? Our extra observations have seen the confidence interval shrink considerably, and the p-value is effectively 0. There's a large negative t-statistic too. Unsurprisingly, as we chose a uniform distribution that only just included 8 but was centered on $(8-4.5)/2$ *and* we had more points, the test now rejects the null hypothesis that $\\mu=8$ . Because the alternative hypothesis is just $\\mu\\neq8$, and these tests are conservative, we haven't got an estimate of what the mean actually is; we just know that our test rejects that it's $8$.\n\nWe can see this in a new version of the chart that uses the extra data:",
"_____no_output_____"
]
],
[
[
"plot_t_stat(x_prime, mu)",
"_____no_output_____"
]
],
[
[
"Now our test statistic is safely outside the interval.\n\n#### Connection to linear regression\n\nNote that testing if $\\mu\\neq0$ is equivalent to having the alternative hypothesis that a single, non-zero scalar value is a good expected value for $x$, i.e. that $\\mathbb{E}(x) \\neq 0$. Which may sound familiar if you've run **linear regression** and, indeed, this t-test has an equivalent linear model! It's just regressing $X$ on a constant--a single, non-zero scalar value. In general, t-tests appear in linear regression to test whether any coefficient $\\beta \\neq 0$. \n\nWe can see this connection by running a hypothesis test of whether the sample mean is not zero. Note the confidence interval, t-statistic, and p-value.",
"_____no_output_____"
]
],
[
[
"pg.ttest(x, 0).round(3)",
"_____no_output_____"
]
],
[
[
"And, as an alternative, regressing x on a constant, again noting the interval, t-stat, and p-value:",
"_____no_output_____"
]
],
[
[
"import statsmodels.formula.api as smf\n\ndf = pd.DataFrame(x, columns=['x'])\n\nres = smf.ols(formula='x ~ 1', data=df).fit()\n# Show only the info relevant to the intercept (there are no other coefficients)\nprint(res.summary().tables[1])",
"_____no_output_____"
]
],
[
[
"Many tests have an equivalent linear model.",
"_____no_output_____"
],
[
"#### Other information provided by **Pingouin** tests\n\nWe've covered the degrees of freedom, the T statistic, the p-value, and the confidence interval. So what's all that other gunk in our t-test? Cohen's d is a measure of whether the difference being measured in our test is large or not (this is important; you can have statistically significant differences that are so small as to be inconsequential). Cohen suggested that $d = 0.2$ be considered a 'small' effect size, 0.5 represents a 'medium' effect size and 0.8 a 'large' effect size. BF10 represents the Bayes factor, the ratio (given the data) of the likelihood of the alternative hypothesis relative to the null hypothesis. Values greater than unity therefore favour the alternative hypothesis. Finally, power is the achieved power of the test, which is $1 - \\mathbb{P}(\\text{type II error})$. A common default to have in mind is a power greater than 0.8.",
"_____no_output_____"
],
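[
"Because **pingouin** returns its results as a **pandas** dataframe, these extra quantities can be pulled out by column name. A minimal sketch (the exact column names are an assumption and may differ slightly across **pingouin** versions):\n\n```python\ntt = pg.ttest(x, 8)\n# Effect size, Bayes factor, and achieved power from the earlier one-sample test\nprint(tt[['cohen-d', 'BF10', 'power']])\n```",
"_____no_output_____"
],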
[
"### Two-sample t-test\n\nThe two-sample t-test is used to determine if two population means are equal (with the null being that they *are* equal). Let's look at an example with synthetic data of equal length, which means we can use the *paired* version of this. We'll imagine we are looking at an intervention with a pre- and post- dataset.",
"_____no_output_____"
]
],
[
[
"pre = [5.5, 2.4, 6.8, 9.6, 4.2, 5.9]\npost = [6.4, 3.4, 6.4, 11., 4.8, 6.2]\npg.ttest(pre, post, paired=True, tail='two-sided').round(2)",
"_____no_output_____"
]
],
[
[
"In this case, we cannot reject the null hypothesis that the means are the same pre- and post-intervention.",
"_____no_output_____"
],
[
"### Pearson correlation\n\nThe Pearson correlation coefficient measures the linear relationship between two datasets. Strictly speaking, it requires that each dataset be normally distributed. ",
"_____no_output_____"
]
],
[
[
"mean, cov = [4, 6], [(1, .5), (.5, 1)]\nx, y = prng.multivariate_normal(mean, cov, 30).T\n# Compute Pearson correlation\npg.corr(x, y).round(3)",
"_____no_output_____"
]
],
[
[
"### Welch's t-test\n\nIn the case where you have two samples with unequal variances (or, really, unequal sample sizes too), Welch's t-test is appropriate. With `correction='true'`, it assumes that variances are not equal.\n",
"_____no_output_____"
]
],
[
[
"x = prng.normal(loc=7, size=20)\ny = prng.normal(loc=6.5, size=15)\npg.ttest(x, y, correction='true')",
"_____no_output_____"
]
],
[
[
"### One-way ANOVA\n\nAnalysis of variance (ANOVA) is a technique for testing hypotheses about means, for example testing the equality of the means of $k>2$ groups. The model would be\n\n$$\nX_{ij} = \\mu_i + \\epsilon_{ij} \\quad j=1, \\dots, n_i \\quad i=1, \\dots, k.\n$$\n\nso that the $i$th group has $n_i$ observations. The null hypothesis of one-way ANOVA is that $H_0: \\mu_1 = \\mu_2 = \\dots = \\mu_k$, with the alternative hypothesis that this is *not* true.",
"_____no_output_____"
]
],
[
[
"df = pg.read_dataset('mixed_anova')\ndf.head()",
"_____no_output_____"
],
[
"# Run the ANOVA\npg.anova(data=df, dv='Scores', between='Group', detailed=True)",
"_____no_output_____"
]
],
[
[
"### Multiple pairwise t-tests\n\nThere's a problem with running multiple t-tests: if you run enough of them, something is bound to come up as significant! As such, some *post-hoc* adjustments exist that correct for the fact that multiple tests are occurring simultaneously. In the example below, multiple pairwise comparisons are made between the scores by time group. There is a corrected p-value, `p-corr`, computed using the Benjamini/Hochberg FDR correction.",
"_____no_output_____"
]
],
[
[
"pg.pairwise_ttests(data=df, dv='Scores', within='Time', subject='Subject',\n parametric=True, padjust='fdr_bh', effsize='hedges').round(3)",
"_____no_output_____"
]
],
[
[
"### One-way ANCOVA\n\nAnalysis of covariance (ANCOVA) is a general linear model which blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable (dv) are equal across levels of a categorical independent variable (between) often called a treatment, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates or nuisance variables (covar).",
"_____no_output_____"
]
],
[
[
"df = pg.read_dataset('ancova')\ndf.head()",
"_____no_output_____"
],
[
"pg.ancova(data=df, dv='Scores', covar='Income', between='Method')",
"_____no_output_____"
]
],
[
[
"### Power calculations\n\nOften, it's quite useful to know what sample size is needed to avoid certain types of testing errors. **Pingouin** offers ways to compute effect sizes and test powers to help with these questions.\n\nAs an example, let's assume we have a new drug (`x`) and an old drug (`y`) that are both intended to reduce blood pressure. The standard deviation of the reduction in blood pressure of those receiving the old drug is 12 units. The null hypothesis is that the new drug is no more effective than the new drug. But it will only be worth switching production to the new drug if it reduces blood pressure by more than 3 units versus the old drug. In this case, the effect size of interest is 3 units.\n\nLet's assume for a moment that the true difference is 3 units and we want to perform a test with $\\alpha=0.05$. The problem is that, for small differences in the effect, the distribution of effects under the null and the distribution of effects under the alternative have a great deal of overlap. So the chances of making a Type II error - accepting the null hypothesis when it is actually false - is quite high. Let's say we'd ideally have at most a 20% chance of making a Type II error: what sample size do we need?\n\nWe can compute this, but we need an extra piece of information first: a normalised version of the effect size called Cohen's $d$. We need to transform the difference in means to compute this. For independent samples, $d$ is:\n\n$$ d = \\frac{\\overline{X} - \\overline{Y}}{\\sqrt{\\frac{(n_{1} - 1)\\sigma_{1}^{2} + (n_{2} - 1)\\sigma_{2}^{2}}{n_1 + n_2 - 2}}}$$\n\n(If you have real data samples, you can compute this using `pg.compute_effsize`.)\n\nFor this case, $d$ is $-3/12 = -1/4$ if we assume the standard deviations are the same across the old (`y`) and new (`x`) drugs. So we will plug that $d$ in and look at a range of possible sample sizes along with a standard value for $alpha$ of 0.05. In the below `tail=less` tests the alternative that `x` has a smaller mean than `y`.",
"_____no_output_____"
]
],
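[
[
"Here is a quick numerical check of the Cohen's $d$ formula above using the assumed numbers (a 3 unit difference and a common standard deviation of 12; the group sizes are hypothetical and only illustrative):\n\n```python\ndiff_in_means = -3          # new drug assumed to lower blood pressure by 3 more units\nsd_new, sd_old = 12, 12     # assume both drugs have the same spread\nn1, n2 = 100, 100           # hypothetical equal group sizes\npooled_sd = np.sqrt(((n1 - 1)*sd_new**2 + (n2 - 1)*sd_old**2) / (n1 + n2 - 2))\nprint(diff_in_means / pooled_sd)  # -0.25, the d used below\n```",
"_____no_output_____"
]
],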
[
[
"cohen_d = -0.25 # Fixed effect size\nsample_size_array = np.arange(1, 500, 50) # Incrementing sample size\n# Compute the achieved power\npwr = pg.power_ttest(d=cohen_d, n=sample_size_array, alpha=0.05,\n contrast='two-samples', tail='less')\nfig, ax = plt.subplots()\nax.plot(sample_size_array, pwr, 'ko-.')\nax.axhline(0.8, color='r', ls=':')\nax.set_xlabel('Sample size')\nax.set_ylabel('Power (1 - type II error)')\nax.set_title('Achieved power of a T-test')\nplt.show()",
"_____no_output_____"
]
],
[
[
"From this, we can see we need a sample size of at least 200 in order to have a power of 0.8. \n\nThe `pg.power_ttest` function takes any three of the four of `d`, `n`, `power`, and `alpha` (ie leave one of these out), and then returns what the missing parameter should be. We passed in `d`, `n`, and `alpha`, and so the `power` was returned.",
"_____no_output_____"
],
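[
"For example, leaving out `n` asks the function to solve for the sample size needed to hit a target power. A minimal sketch (under the same assumptions as above, this should return roughly 200 per group, matching the plot):\n\n```python\nn_required = pg.power_ttest(d=-0.25, power=0.8, alpha=0.05,\n                            contrast='two-samples', tail='less')\nprint(n_required)\n```",
"_____no_output_____"
],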
[
"## Non-parametric tests\n\nReminder: non-parametrics tests do not make any assumptions about the distribution from which data are drawn or that it can be described by fixed parameters.",
"_____no_output_____"
],
[
"### Wilcoxon Signed-rank Test\n\nThis tests the null hypothesis that two related paired samples come from the same distribution. It is the non-parametric equivalent of the t-test.",
"_____no_output_____"
]
],
[
[
"x = [20, 22, 19, 20, 22, 18, 24, 20, 19, 24, 26, 13]\ny = [38, 37, 33, 29, 14, 12, 20, 22, 17, 25, 26, 16]\npg.wilcoxon(x, y, tail='two-sided').round(2)",
"_____no_output_____"
]
],
[
[
"### Mann-Whitney U Test (aka Wilcoxon rank-sum test)\n\nThe Mann–Whitney U test is a non-parametric test of the null hypothesis that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from a second sample. It is the non-parametric version of the two-sample T-test.\n\nLike many non-parametric **pingouin** tests, it can take values of tail that are 'two-sided', 'one-sided', 'greater', or 'less'. Below, we ask if a randomly selected value from `x` is greater than one from `y`, with the null that it is not.",
"_____no_output_____"
]
],
[
[
"x = prng.uniform(low=0, high=1, size=20)\ny = prng.uniform(low=0.2, high=1.2, size=20)\npg.mwu(x, y, tail='greater')",
"_____no_output_____"
]
],
[
[
"### Spearman Correlation\n\nThe Spearman correlation coefficient is the Pearson correlation coefficient between the rank variables, and does not assume normality of data.",
"_____no_output_____"
]
],
[
[
"mean, cov = [4, 6], [(1, .5), (.5, 1)]\nx, y = prng.multivariate_normal(mean, cov, 30).T\npg.corr(x, y, method=\"spearman\").round(2)",
"_____no_output_____"
]
],
[
[
"### Kruskal-Wallace\n\nThe Kruskal-Wallis H-test tests the null hypothesis that the population median of all of the groups are equal. It is a non-parametric version of ANOVA. The test works on 2 or more independent samples, which may have different sizes.",
"_____no_output_____"
]
],
[
[
"df = pg.read_dataset('anova')\ndf.head()",
"_____no_output_____"
],
[
"pg.kruskal(data=df, dv='Pain threshold', between='Hair color')",
"_____no_output_____"
]
],
[
[
"### The Chi-Squared Test\n\nThe chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. This test can be used to evaluate the quality of a categorical variable in a classification problem or to check the similarity between two categorical variables.\n\nThere are two conditions for a chi-squared test:\n\n- Independence: Each case that contributes a count to the table must be independent of all the other cases in the table.\n\n- Sample size or distribution: Each particular case (ie cell count) must have at least 5 expected cases.\n\nLet's see an example from the **pingouin** docs: whether gender is a good predictor of heart disease. First, let's load the data and look at the gender split in total:",
"_____no_output_____"
]
],
[
[
"chi_data = pg.read_dataset('chi2_independence')\nchi_data['sex'].value_counts(ascending=True)",
"_____no_output_____"
]
],
[
[
"If gender is *not* a predictor, we would expect a roughly similar split between those who have heart disease and those who do not. Let's look at the observerd versus the expected split once we categorise by gender and 'target' (heart disease or not).",
"_____no_output_____"
]
],
[
[
"expected, observed, stats = pg.chi2_independence(chi_data, x='sex', y='target')\nobserved - expected",
"_____no_output_____"
]
],
[
[
"So we have fewer in the 0, 0 and 1, 1 buckets than expected but more in the 0, 1 and 1, 0 buckets. Let's now see how the test interprets this:",
"_____no_output_____"
]
],
[
[
"stats.round(3)",
"_____no_output_____"
]
],
[
[
"From these, it is clear we can reject the null and therefore it seems like gender is a good predictor of heart disease.",
"_____no_output_____"
],
[
"### Shapiro-Wilk Test for Normality\n\nNote that the null here is that the distribution *is* normal, so normality is only rejected when the p-value is sufficiently small.",
"_____no_output_____"
]
],
[
[
"x = prng.normal(size=20)\npg.normality(x)",
"_____no_output_____"
]
],
[
[
"The test can also be run on multiple variables in a dataframe:",
"_____no_output_____"
]
],
[
[
"df = pg.read_dataset('ancova')\npg.normality(df[['Scores', 'Income', 'BMI']], method='normaltest').round(3)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0960e87824644e36b18a78464e1d84d24caeadf | 7,357 | ipynb | Jupyter Notebook | RGB_to_GRAY_scale_Autoencoder.ipynb | NeoBoy/RGB_to_GRAYSCALE_Autoencoder- | b49e89ed6d22a3dfdbb77badbc16190b7297ff9c | [
"CC-BY-2.0"
] | 4 | 2019-10-09T08:27:31.000Z | 2021-06-02T20:32:31.000Z | RGB_to_GRAY_scale_Autoencoder.ipynb | NeoBoy/RGB_to_GRAYSCALE_Autoencoder- | b49e89ed6d22a3dfdbb77badbc16190b7297ff9c | [
"CC-BY-2.0"
] | 1 | 2019-08-03T09:48:02.000Z | 2020-01-01T08:00:58.000Z | RGB_to_GRAY_scale_Autoencoder.ipynb | NeoBoy/RGB_to_GRAYSCALE_Autoencoder- | b49e89ed6d22a3dfdbb77badbc16190b7297ff9c | [
"CC-BY-2.0"
] | 5 | 2019-03-28T01:50:51.000Z | 2019-12-10T19:00:30.000Z | 25.814035 | 299 | 0.535544 | [
[
[
"import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import savefig\nimport cv2\n\nnp.set_printoptions(threshold=np.inf)",
"C:\\Users\\Akshath\\AppData\\Local\\Continuum\\Anaconda3\\envs\\rl\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
],
[
"num_images = 3670",
"_____no_output_____"
],
[
"dataset = []\n\nfor i in range(1, num_images+1):\n img = cv2.imread(\"color_images/color_\" +str(i) +\".jpg\" )\n dataset.append(np.array(img))\n\ndataset_source = np.asarray(dataset)\nprint(dataset_source.shape)\n\ndataset_tar = []\n\nfor i in range(1, num_images+1):\n img = cv2.imread(\"gray_images/gray_\" +str(i) +\".jpg\", 0) \n dataset_tar.append(np.array(img))\n\ndataset_target = np.asarray(dataset_tar)\nprint(dataset_target.shape)",
"(3670, 128, 128, 3)\n(3670, 128, 128)\n"
],
[
"dataset_target = dataset_target[:, :, :, np.newaxis]",
"_____no_output_____"
],
[
"def autoencoder(inputs): # Undercomplete Autoencoder\n \n # Encoder\n \n net = tf.layers.conv2d(inputs, 128, 2, activation = tf.nn.relu)\n print(net.shape)\n net = tf.layers.max_pooling2d(net, 2, 2, padding = 'same')\n print(net.shape)\n\n # Decoder\n \n net = tf.image.resize_nearest_neighbor(net, tf.constant([129, 129]))\n net = tf.layers.conv2d(net, 1, 2, activation = None, name = 'outputOfAuto')\n\n print(net.shape)\n \n return net",
"_____no_output_____"
],
[
"ae_inputs = tf.placeholder(tf.float32, (None, 128, 128, 3), name = 'inputToAuto')\nae_target = tf.placeholder(tf.float32, (None, 128, 128, 1))\n\nae_outputs = autoencoder(ae_inputs)\nlr = 0.001\n\nloss = tf.reduce_mean(tf.square(ae_outputs - ae_target))\ntrain_op = tf.train.AdamOptimizer(learning_rate = lr).minimize(loss)\n# Intialize the network \ninit = tf.global_variables_initializer()",
"(?, 127, 127, 128)\n(?, 64, 64, 128)\n(?, 128, 128, 1)\n"
]
],
[
[
"#### If you don't want to train the network skip the cell righ below and dowload the pre-trained model. After downloading the pre-trained model run the cell below to the immediate below cell. ",
"_____no_output_____"
]
],
[
[
"batch_size = 32\nepoch_num = 50\n\nsaving_path = 'K:/autoencoder_color_to_gray/SavedModel/AutoencoderColorToGray.ckpt'\n\nsaver_ = tf.train.Saver(max_to_keep = 3)\n\nbatch_img = dataset_source[0:batch_size]\nbatch_out = dataset_target[0:batch_size]\n\nnum_batches = num_images//batch_size\n\nsess = tf.Session()\nsess.run(init)\n\nfor ep in range(epoch_num):\n batch_size = 0\n for batch_n in range(num_batches): # batches loop\n\n _, c = sess.run([train_op, loss], feed_dict = {ae_inputs: batch_img, ae_target: batch_out})\n print(\"Epoch: {} - cost = {:.5f}\" .format((ep+1), c))\n \n batch_img = dataset_source[batch_size: batch_size+32]\n batch_out = dataset_target[batch_size: batch_size+32]\n \n batch_size += 32\n \n saver_.save(sess, saving_path, global_step = ep)\nrecon_img = sess.run([ae_outputs], feed_dict = {ae_inputs: batch_img})\n\nsess.close()",
"_____no_output_____"
],
[
"saver = tf.train.Saver()\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)\n\nsaver.restore(sess, 'K:/autoencoder_color_to_gray/SavedModel/AutoencoderColorToGray.ckpt-49')",
"INFO:tensorflow:Restoring parameters from K:/autoencoder_color_to_gray/SavedModel/AutoencoderColorToGray.ckpt-49\n"
],
[
"import glob as gl \n\nfilenames = gl.glob('flower_images/*.png')\n\ntest_data = []\nfor file in filenames[0:100]:\n test_data.append(np.array(cv2.imread(file)))\n\ntest_dataset = np.asarray(test_data)\nprint(test_dataset.shape)\n\n# Running the test data on the autoencoder\nbatch_imgs = test_dataset\ngray_imgs = sess.run(ae_outputs, feed_dict = {ae_inputs: batch_imgs})",
"(100, 128, 128, 3)\n"
],
[
"print(gray_imgs.shape)\n\nfor i in range(gray_imgs.shape[0]):\n cv2.imwrite('gen_gray_images/gen_gray_' +str(i) +'.jpeg', gray_imgs[i])",
"(100, 128, 128, 1)\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d09620a7a95d7b8b989d47946bde98d6e88909d1 | 30,062 | ipynb | Jupyter Notebook | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab | 7e0a0e2284c47a1666f2ae9fb2d4458e6bedcdca | [
"MIT"
] | null | null | null | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab | 7e0a0e2284c47a1666f2ae9fb2d4458e6bedcdca | [
"MIT"
] | null | null | null | CarND-Object-Detection-Lab.ipynb | 48cfu/Object-Detection-Lab | 7e0a0e2284c47a1666f2ae9fb2d4458e6bedcdca | [
"MIT"
] | null | null | null | 36.527339 | 496 | 0.602289 | [
[
[
"# CarND Object Detection Lab\n\nLet's get started!",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nfrom PIL import ImageDraw\nfrom PIL import ImageColor\nimport time\nfrom scipy.stats import norm\n\n%matplotlib inline\nplt.style.use('ggplot')",
"/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n"
]
],
[
[
"## MobileNets\n\n[*MobileNets*](https://arxiv.org/abs/1704.04861), as the name suggests, are neural networks constructed for the purpose of running very efficiently (high FPS, low memory footprint) on mobile and embedded devices. *MobileNets* achieve this with 3 techniques:\n\n1. Perform a depthwise convolution followed by a 1x1 convolution rather than a standard convolution. The 1x1 convolution is called a pointwise convolution if it's following a depthwise convolution. The combination of a depthwise convolution followed by a pointwise convolution is sometimes called a separable depthwise convolution.\n2. Use a \"width multiplier\" - reduces the size of the input/output channels, set to a value between 0 and 1.\n3. Use a \"resolution multiplier\" - reduces the size of the original input, set to a value between 0 and 1.\n\nThese 3 techniques reduce the size of cummulative parameters and therefore the computation required. Of course, generally models with more paramters achieve a higher accuracy. *MobileNets* are no silver bullet, while they perform very well larger models will outperform them. ** *MobileNets* are designed for mobile devices, NOT cloud GPUs**. The reason we're using them in this lab is automotive hardware is closer to mobile or embedded devices than beefy cloud GPUs.",
"_____no_output_____"
],
[
"### Convolutions\n\n#### Vanilla Convolution\n\nBefore we get into the *MobileNet* convolution block let's take a step back and recall the computational cost of a vanilla convolution. There are $N$ kernels of size $D_k * D_k$. Each of these kernels goes over the entire input which is a $D_f * D_f * M$ sized feature map or tensor (if that makes more sense). The computational cost is:\n\n$$\nD_g * D_g * M * N * D_k * D_k\n$$\n\nLet $D_g * D_g$ be the size of the output feature map. Then a standard convolution takes in a $D_f * D_f * M$ input feature map and returns a $D_g * D_g * N$ feature map as output.\n\n(*Note*: In the MobileNets paper, you may notice the above equation for computational cost uses $D_f$ instead of $D_g$. In the paper, they assume the output and input are the same spatial dimensions due to stride of 1 and padding, so doing so does not make a difference, but this would want $D_g$ for different dimensions of input and output.)\n\n\n\n\n\n#### Depthwise Convolution\n\nA depthwise convolution acts on each input channel separately with a different kernel. $M$ input channels implies there are $M$ $D_k * D_k$ kernels. Also notice this results in $N$ being set to 1. If this doesn't make sense, think about the shape a kernel would have to be to act upon an individual channel.\n\nComputation cost:\n\n$$\nD_g * D_g * M * D_k * D_k\n$$\n\n\n\n\n\n#### Pointwise Convolution\n\nA pointwise convolution performs a 1x1 convolution, it's the same as a vanilla convolution except the kernel size is $1 * 1$.\n\nComputation cost:\n\n$$\nD_k * D_k * D_g * D_g * M * N =\n1 * 1 * D_g * D_g * M * N =\nD_g * D_g * M * N\n$$\n\n\n\n\n\nThus the total computation cost is for separable depthwise convolution:\n\n$$\nD_g * D_g * M * D_k * D_k + D_g * D_g * M * N\n$$\n\nwhich results in $\\frac{1}{N} + \\frac{1}{D_k^2}$ reduction in computation:\n\n$$\n\\frac {D_g * D_g * M * D_k * D_k + D_g * D_g * M * N} {D_g * D_g * M * N * D_k * D_k} = \n\\frac {D_k^2 + N} {D_k^2*N} = \n\\frac {1}{N} + \\frac{1}{D_k^2}\n$$\n\n*MobileNets* use a 3x3 kernel, so assuming a large enough $N$, separable depthwise convnets are ~9x more computationally efficient than vanilla convolutions!",
"_____no_output_____"
],
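[
"To make the savings concrete, here is a small back-of-the-envelope sketch (the feature map and channel sizes are illustrative, borrowed from the exercise constants used further below):\n\n```python\n# Rough multiply-add counts for a single layer\nDg, M, N, Dk = 128, 32, 512, 3\nvanilla_cost = Dg * Dg * M * N * Dk * Dk\nseparable_cost = Dg * Dg * M * Dk * Dk + Dg * Dg * M * N\nprint(vanilla_cost / separable_cost)  # ~8.8x fewer operations\nprint(1 / (1 / N + 1 / Dk**2))        # same ratio from the formula above\n```",
"_____no_output_____"
],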
[
"### Width Multiplier\n\nThe 2nd technique for reducing the computational cost is the \"width multiplier\" which is a hyperparameter inhabiting the range [0, 1] denoted here as $\\alpha$. $\\alpha$ reduces the number of input and output channels proportionally:\n\n$$\nD_f * D_f * \\alpha M * D_k * D_k + D_f * D_f * \\alpha M * \\alpha N\n$$",
"_____no_output_____"
],
[
"### Resolution Multiplier\n\nThe 3rd technique for reducing the computational cost is the \"resolution multiplier\" which is a hyperparameter inhabiting the range [0, 1] denoted here as $\\rho$. $\\rho$ reduces the size of the input feature map:\n\n$$\n\\rho D_f * \\rho D_f * M * D_k * D_k + \\rho D_f * \\rho D_f * M * N\n$$",
"_____no_output_____"
],
[
"Combining the width and resolution multipliers results in a computational cost of:\n\n$$\n\\rho D_f * \\rho D_f * a M * D_k * D_k + \\rho D_f * \\rho D_f * a M * a N\n$$\n\nTraining *MobileNets* with different values of $\\alpha$ and $\\rho$ will result in different speed vs. accuracy tradeoffs. The folks at Google have run these experiments, the result are shown in the graphic below:\n\n",
"_____no_output_____"
],
[
"MACs (M) represents the number of multiplication-add operations in the millions.",
"_____no_output_____"
],
[
"### Exercise 1 - Implement Separable Depthwise Convolution\n\nIn this exercise you'll implement a separable depthwise convolution block and compare the number of parameters to a standard convolution block. For this exercise we'll assume the width and resolution multipliers are set to 1.\n\nDocs:\n\n* [depthwise convolution](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d)",
"_____no_output_____"
]
],
[
[
"def vanilla_conv_block(x, kernel_size, output_channels):\n \"\"\"\n Vanilla Conv -> Batch Norm -> ReLU\n \"\"\"\n x = tf.layers.conv2d(\n x, output_channels, kernel_size, (2, 2), padding='SAME')\n x = tf.layers.batch_normalization(x)\n return tf.nn.relu(x)\n\n# TODO: implement MobileNet conv block\ndef mobilenet_conv_block(x, kernel_size, output_channels):\n \"\"\"\n Depthwise Conv -> Batch Norm -> ReLU -> Pointwise Conv -> Batch Norm -> ReLU\n \"\"\"\n pass",
"_____no_output_____"
]
],
[
[
"**[Sample solution](./exercise-solutions/e1.py)**\n\nLet's compare the number of parameters in each block.",
"_____no_output_____"
]
],
[
[
"# constants but you can change them so I guess they're not so constant :)\nINPUT_CHANNELS = 32\nOUTPUT_CHANNELS = 512\nKERNEL_SIZE = 3\nIMG_HEIGHT = 256\nIMG_WIDTH = 256\n\nwith tf.Session(graph=tf.Graph()) as sess:\n # input\n x = tf.constant(np.random.randn(1, IMG_HEIGHT, IMG_WIDTH, INPUT_CHANNELS), dtype=tf.float32)\n\n with tf.variable_scope('vanilla'):\n vanilla_conv = vanilla_conv_block(x, KERNEL_SIZE, OUTPUT_CHANNELS)\n with tf.variable_scope('mobile'):\n mobilenet_conv = mobilenet_conv_block(x, KERNEL_SIZE, OUTPUT_CHANNELS)\n\n vanilla_params = [\n (v.name, np.prod(v.get_shape().as_list()))\n for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'vanilla')\n ]\n mobile_params = [\n (v.name, np.prod(v.get_shape().as_list()))\n for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'mobile')\n ]\n\n print(\"VANILLA CONV BLOCK\")\n total_vanilla_params = sum([p[1] for p in vanilla_params])\n for p in vanilla_params:\n print(\"Variable {0}: number of params = {1}\".format(p[0], p[1]))\n print(\"Total number of params =\", total_vanilla_params)\n print()\n\n print(\"MOBILENET CONV BLOCK\")\n total_mobile_params = sum([p[1] for p in mobile_params])\n for p in mobile_params:\n print(\"Variable {0}: number of params = {1}\".format(p[0], p[1]))\n print(\"Total number of params =\", total_mobile_params)\n print()\n\n print(\"{0:.3f}x parameter reduction\".format(total_vanilla_params /\n total_mobile_params))",
"_____no_output_____"
]
],
[
[
"Your solution should show the majority of the parameters in *MobileNet* block stem from the pointwise convolution.",
"_____no_output_____"
],
[
"## *MobileNet* SSD\n\nIn this section you'll use a pretrained *MobileNet* [SSD](https://arxiv.org/abs/1512.02325) model to perform object detection. You can download the *MobileNet* SSD and other models from the [TensorFlow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) (*note*: we'll provide links to specific models further below). [Paper](https://arxiv.org/abs/1611.10012) describing comparing several object detection models.\n\nAlright, let's get into SSD!",
"_____no_output_____"
],
[
"### Single Shot Detection (SSD)\n\nMany previous works in object detection involve more than one training phase. For example, the [Faster-RCNN](https://arxiv.org/abs/1506.01497) architecture first trains a Region Proposal Network (RPN) which decides which regions of the image are worth drawing a box around. RPN is then merged with a pretrained model for classification (classifies the regions). The image below is an RPN:\n\n",
"_____no_output_____"
],
[
"The SSD architecture is a single convolutional network which learns to predict bounding box locations and classify the locations in one pass. Put differently, SSD can be trained end to end while Faster-RCNN cannot. The SSD architecture consists of a base network followed by several convolutional layers: \n\n\n\n**NOTE:** In this lab the base network is a MobileNet (instead of VGG16.)\n\n#### Detecting Boxes\n\nSSD operates on feature maps to predict bounding box locations. Recall a feature map is of size $D_f * D_f * M$. For each feature map location $k$ bounding boxes are predicted. Each bounding box carries with it the following information:\n\n* 4 corner bounding box **offset** locations $(cx, cy, w, h)$\n* $C$ class probabilities $(c_1, c_2, ..., c_p)$\n\nSSD **does not** predict the shape of the box, rather just where the box is. The $k$ bounding boxes each have a predetermined shape. This is illustrated in the figure below:\n\n\n\nThe shapes are set prior to actual training. For example, In figure (c) in the above picture there are 4 boxes, meaning $k$ = 4.",
"_____no_output_____"
],
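[
"As a quick illustration of the bookkeeping (not from the original lab): the number of predictions for one feature map is the number of locations times $k$, and each prediction carries $4 + C$ values. The grid sizes, $k$ values and class count below are made-up example numbers.\n\n```python\n# Illustrative only: count SSD predictions over a few feature maps.\n# Each location predicts k default boxes, each with 4 offsets + C class scores.\nC = 21  # e.g. 20 classes plus background (assumed value)\nfeature_maps = [  # (grid size, k default boxes per location) -- example values\n    (38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4),\n]\ntotal_boxes = sum(size * size * k for size, k in feature_maps)\nprint('default boxes:', total_boxes)\nprint('predicted values:', total_boxes * (4 + C))\n```",
"_____no_output_____"
],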
[
"### Exercise 2 - SSD Feature Maps\n\nIt would be a good exercise to read the SSD paper prior to a answering the following questions.\n\n***Q: Why does SSD use several differently sized feature maps to predict detections?***",
"_____no_output_____"
],
[
"A: Your answer here\n\n**[Sample answer](./exercise-solutions/e2.md)**",
"_____no_output_____"
],
[
"The current approach leaves us with thousands of bounding box candidates, clearly the vast majority of them are nonsensical.\n\n### Exercise 3 - Filtering Bounding Boxes\n\n***Q: What are some ways which we can filter nonsensical bounding boxes?***",
"_____no_output_____"
],
[
"A: Your answer here\n\n**[Sample answer](./exercise-solutions/e3.md)**",
"_____no_output_____"
],
[
"#### Loss\n\nWith the final set of matched boxes we can compute the loss:\n\n$$\nL = \\frac {1} {N} * ( L_{class} + L_{box})\n$$\n\nwhere $N$ is the total number of matched boxes, $L_{class}$ is a softmax loss for classification, and $L_{box}$ is a L1 smooth loss representing the error of the matched boxes with the ground truth boxes. L1 smooth loss is a modification of L1 loss which is more robust to outliers. In the event $N$ is 0 the loss is set 0.\n\n",
"_____no_output_____"
],
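[
"For reference, here is a minimal NumPy sketch (not part of the original lab) of the smooth L1 term applied to box offsets, just to make the piecewise definition explicit.\n\n```python\nimport numpy as np\n\n# Smooth L1: quadratic for small errors (|x| < 1), linear otherwise,\n# which makes it less sensitive to outliers than plain L2 loss.\ndef smooth_l1(pred, target):\n    diff = np.abs(pred - target)\n    return np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5).sum()\n\npred = np.array([0.2, 0.1, 1.5, -0.3])\ntarget = np.array([0.0, 0.0, 0.2, -0.2])\nprint(smooth_l1(pred, target))\n```",
"_____no_output_____"
],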
[
"### SSD Summary\n\n* Starts from a base model pretrained on ImageNet. \n* The base model is extended by several convolutional layers.\n* Each feature map is used to predict bounding boxes. Diversity in feature map size allows object detection at different resolutions.\n* Boxes are filtered by IoU metrics and hard negative mining.\n* Loss is a combination of classification (softmax) and dectection (smooth L1)\n* Model can be trained end to end.",
"_____no_output_____"
],
[
"## Object Detection Inference\n\nIn this part of the lab you'll detect objects using pretrained object detection models. You can download the latest pretrained models from the [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md), although do note that you may need a newer version of TensorFlow (such as v1.8) in order to use the newest models.\n\nWe are providing the download links for the below noted files to ensure compatibility between the included environment file and the models.\n\n[SSD_Mobilenet 11.6.17 version](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz)\n\n[RFCN_ResNet101 11.6.17 version](http://download.tensorflow.org/models/object_detection/rfcn_resnet101_coco_11_06_2017.tar.gz)\n\n[Faster_RCNN_Inception_ResNet 11.6.17 version](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017.tar.gz)\n\nMake sure to extract these files prior to continuing!",
"_____no_output_____"
]
],
[
[
"# Frozen inference graph files. NOTE: change the path to where you saved the models.\nSSD_GRAPH_FILE = 'ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb'\nRFCN_GRAPH_FILE = 'rfcn_resnet101_coco_11_06_2017/frozen_inference_graph.pb'\nFASTER_RCNN_GRAPH_FILE = 'faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/frozen_inference_graph.pb'",
"_____no_output_____"
]
],
[
[
"Below are utility functions. The main purpose of these is to draw the bounding boxes back onto the original image.",
"_____no_output_____"
]
],
[
[
"# Colors (one for each class)\ncmap = ImageColor.colormap\nprint(\"Number of colors =\", len(cmap))\nCOLOR_LIST = sorted([c for c in cmap.keys()])\n\n#\n# Utility funcs\n#\n\ndef filter_boxes(min_score, boxes, scores, classes):\n \"\"\"Return boxes with a confidence >= `min_score`\"\"\"\n n = len(classes)\n idxs = []\n for i in range(n):\n if scores[i] >= min_score:\n idxs.append(i)\n \n filtered_boxes = boxes[idxs, ...]\n filtered_scores = scores[idxs, ...]\n filtered_classes = classes[idxs, ...]\n return filtered_boxes, filtered_scores, filtered_classes\n\ndef to_image_coords(boxes, height, width):\n \"\"\"\n The original box coordinate output is normalized, i.e [0, 1].\n \n This converts it back to the original coordinate based on the image\n size.\n \"\"\"\n box_coords = np.zeros_like(boxes)\n box_coords[:, 0] = boxes[:, 0] * height\n box_coords[:, 1] = boxes[:, 1] * width\n box_coords[:, 2] = boxes[:, 2] * height\n box_coords[:, 3] = boxes[:, 3] * width\n \n return box_coords\n\ndef draw_boxes(image, boxes, classes, thickness=4):\n \"\"\"Draw bounding boxes on the image\"\"\"\n draw = ImageDraw.Draw(image)\n for i in range(len(boxes)):\n bot, left, top, right = boxes[i, ...]\n class_id = int(classes[i])\n color = COLOR_LIST[class_id]\n draw.line([(left, top), (left, bot), (right, bot), (right, top), (left, top)], width=thickness, fill=color)\n \ndef load_graph(graph_file):\n \"\"\"Loads a frozen inference graph\"\"\"\n graph = tf.Graph()\n with graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(graph_file, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')\n return graph",
"_____no_output_____"
]
],
[
[
"Below we load the graph and extract the relevant tensors using [`get_tensor_by_name`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). These tensors reflect the input and outputs of the graph, or least the ones we care about for detecting objects.",
"_____no_output_____"
]
],
[
[
"detection_graph = load_graph(SSD_GRAPH_FILE)\n# detection_graph = load_graph(RFCN_GRAPH_FILE)\n# detection_graph = load_graph(FASTER_RCNN_GRAPH_FILE)\n\n# The input placeholder for the image.\n# `get_tensor_by_name` returns the Tensor with the associated name in the Graph.\nimage_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n\n# Each box represents a part of the image where a particular object was detected.\ndetection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n\n# Each score represent how level of confidence for each of the objects.\n# Score is shown on the result image, together with the class label.\ndetection_scores = detection_graph.get_tensor_by_name('detection_scores:0')\n\n# The classification of the object (integer id).\ndetection_classes = detection_graph.get_tensor_by_name('detection_classes:0')",
"_____no_output_____"
]
],
[
[
"Run detection and classification on a sample image.",
"_____no_output_____"
]
],
[
[
"# Load a sample image.\nimage = Image.open('./assets/sample1.jpg')\nimage_np = np.expand_dims(np.asarray(image, dtype=np.uint8), 0)\n\nwith tf.Session(graph=detection_graph) as sess: \n # Actual detection.\n (boxes, scores, classes) = sess.run([detection_boxes, detection_scores, detection_classes], \n feed_dict={image_tensor: image_np})\n\n # Remove unnecessary dimensions\n boxes = np.squeeze(boxes)\n scores = np.squeeze(scores)\n classes = np.squeeze(classes)\n\n confidence_cutoff = 0.8\n # Filter boxes with a confidence score less than `confidence_cutoff`\n boxes, scores, classes = filter_boxes(confidence_cutoff, boxes, scores, classes)\n\n # The current box coordinates are normalized to a range between 0 and 1.\n # This converts the coordinates actual location on the image.\n width, height = image.size\n box_coords = to_image_coords(boxes, height, width)\n\n # Each class with be represented by a differently colored box\n draw_boxes(image, box_coords, classes)\n\n plt.figure(figsize=(12, 8))\n plt.imshow(image) ",
"_____no_output_____"
]
],
[
[
"## Timing Detection\n\nThe model zoo comes with a variety of models, each its benefits and costs. Below you'll time some of these models. The general tradeoff being sacrificing model accuracy for seconds per frame (SPF).",
"_____no_output_____"
]
],
[
[
"def time_detection(sess, img_height, img_width, runs=10):\n image_tensor = sess.graph.get_tensor_by_name('image_tensor:0')\n detection_boxes = sess.graph.get_tensor_by_name('detection_boxes:0')\n detection_scores = sess.graph.get_tensor_by_name('detection_scores:0')\n detection_classes = sess.graph.get_tensor_by_name('detection_classes:0')\n\n # warmup\n gen_image = np.uint8(np.random.randn(1, img_height, img_width, 3))\n sess.run([detection_boxes, detection_scores, detection_classes], feed_dict={image_tensor: gen_image})\n \n times = np.zeros(runs)\n for i in range(runs):\n t0 = time.time()\n sess.run([detection_boxes, detection_scores, detection_classes], feed_dict={image_tensor: image_np})\n t1 = time.time()\n times[i] = (t1 - t0) * 1000\n return times",
"_____no_output_____"
],
[
"with tf.Session(graph=detection_graph) as sess:\n times = time_detection(sess, 600, 1000, runs=10)",
"_____no_output_____"
],
[
"# Create a figure instance\nfig = plt.figure(1, figsize=(9, 6))\n\n# Create an axes instance\nax = fig.add_subplot(111)\nplt.title(\"Object Detection Timings\")\nplt.ylabel(\"Time (ms)\")\n\n# Create the boxplot\nplt.style.use('fivethirtyeight')\nbp = ax.boxplot(times)",
"_____no_output_____"
]
],
[
[
"### Exercise 4 - Model Tradeoffs\n\nDownload a few models from the [model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) and compare the timings.",
"_____no_output_____"
],
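[
"One way to structure the comparison is sketched below. It reuses `load_graph` and `time_detection` from earlier cells and assumes the three graph files listed above have been downloaded and extracted.\n\n```python\n# Sketch: time each downloaded model with the same synthetic input size.\nresults = {}\nfor name, path in [('ssd_mobilenet', SSD_GRAPH_FILE),\n                   ('rfcn_resnet101', RFCN_GRAPH_FILE),\n                   ('faster_rcnn', FASTER_RCNN_GRAPH_FILE)]:\n    graph = load_graph(path)\n    with tf.Session(graph=graph) as sess:\n        results[name] = time_detection(sess, 600, 1000, runs=10)\n\nfor name, times in results.items():\n    print('{}: median {:.1f} ms per frame'.format(name, np.median(times)))\n```",
"_____no_output_____"
],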
[
"## Detection on a Video\n\nFinally run your pipeline on [this short video](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/advanced_deep_learning/driving.mp4).",
"_____no_output_____"
]
],
[
[
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"600\" controls>\n <source src=\"{0}\" type=\"video/mp4\">\n</video>\n\"\"\".format('driving.mp4'))",
"_____no_output_____"
]
],
[
[
"### Exercise 5 - Object Detection on a Video\n\nRun an object detection pipeline on the above clip.",
"_____no_output_____"
]
],
[
[
"clip = VideoFileClip('driving.mp4')",
"_____no_output_____"
],
[
"# TODO: Complete this function.\n# The input is an NumPy array.\n# The output should also be a NumPy array.\ndef pipeline(img):\n pass",
"_____no_output_____"
]
],
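[
[
"If you would like a starting point before checking the sample solution, here is one possible sketch of the pipeline. It relies on `sess`, `image_tensor`, `detection_boxes`, `detection_scores` and `detection_classes` being in scope, as they are inside the session cell further below, and it reuses the drawing helpers defined earlier; it may differ from the provided solution.\n\n```python\n# A possible sketch: run detection on a single frame and draw the\n# filtered boxes back onto it before returning the annotated frame.\ndef pipeline(img):\n    boxes, scores, classes = sess.run(\n        [detection_boxes, detection_scores, detection_classes],\n        feed_dict={image_tensor: np.expand_dims(img, 0)})\n    boxes, scores, classes = filter_boxes(\n        0.8, np.squeeze(boxes), np.squeeze(scores), np.squeeze(classes))\n    height, width = img.shape[0], img.shape[1]\n    box_coords = to_image_coords(boxes, height, width)\n    frame = Image.fromarray(img)\n    draw_boxes(frame, box_coords, classes)\n    return np.array(frame)\n```",
"_____no_output_____"
]
],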
[
[
"**[Sample solution](./exercise-solutions/e5.py)**",
"_____no_output_____"
]
],
[
[
"with tf.Session(graph=detection_graph) as sess:\n image_tensor = sess.graph.get_tensor_by_name('image_tensor:0')\n detection_boxes = sess.graph.get_tensor_by_name('detection_boxes:0')\n detection_scores = sess.graph.get_tensor_by_name('detection_scores:0')\n detection_classes = sess.graph.get_tensor_by_name('detection_classes:0')\n \n new_clip = clip.fl_image(pipeline)\n \n # write to file\n new_clip.write_videofile('result.mp4')",
"_____no_output_____"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"600\" controls>\n <source src=\"{0}\" type=\"video/mp4\">\n</video>\n\"\"\".format('result.mp4'))",
"_____no_output_____"
]
],
[
[
"## Further Exploration\n\nSome ideas to take things further:\n\n* Finetune the model on a new dataset more relevant to autonomous vehicles. Instead of loading the frozen inference graph you'll load the checkpoint.\n* Optimize the model and get the FPS as low as possible.\n* Build your own detector. There are several base model pretrained on ImageNet you can choose from. [Keras](https://keras.io/applications/) is probably the quickest way to get setup in this regard.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d09628b24fbeddf932854acdf5b7f0c78dc1ae5a | 398,678 | ipynb | Jupyter Notebook | examples/random_stuff/tests.ipynb | cshenry/cobrakbase | 9ab3db059171e7532082646302db22338ab61a55 | [
"MIT"
] | 3 | 2018-11-28T12:48:54.000Z | 2022-02-28T22:20:32.000Z | examples/random_stuff/tests.ipynb | cshenry/cobrakbase | 9ab3db059171e7532082646302db22338ab61a55 | [
"MIT"
] | 2 | 2020-06-26T20:13:16.000Z | 2020-10-27T05:10:34.000Z | examples/random_stuff/tests.ipynb | cshenry/cobrakbase | 9ab3db059171e7532082646302db22338ab61a55 | [
"MIT"
] | 1 | 2020-09-02T17:40:34.000Z | 2020-09-02T17:40:34.000Z | 33.364968 | 2,630 | 0.560665 | [
[
[
"import sys\nimport json\nsys.path.insert(0, \"../\")\nprint(sys.path)\nimport pymongo",
"['../', '/Users/fliu/workspace/jupyter/python3/cobrakbase/examples/random_stuff', '/Users/fliu/opt/anaconda3/lib/python37.zip', '/Users/fliu/opt/anaconda3/lib/python3.7', '/Users/fliu/opt/anaconda3/lib/python3.7/lib-dynload', '', '/Users/fliu/opt/anaconda3/lib/python3.7/site-packages', '/Users/fliu/opt/anaconda3/lib/python3.7/site-packages/aeosa', '/Users/fliu/opt/anaconda3/lib/python3.7/site-packages/IPython/extensions', '/Users/fliu/.ipython']\n"
],
[
"#31470/5/1",
"_____no_output_____"
],
[
"import sys\nimport json\nimport cobrakbase\nimport cobrakbase.core.model\nimport cobra\nimport logging\n#from cobra.core import Gene, Metabolite, Model, Reaction\n#from pyeda import *\n#from pyeda.inter import *\n#from pyeda.boolalg import expr\nimport pandas as pd",
"_____no_output_____"
],
[
"fbamodel = None\ndata = None\nwith open('community_model.json', 'r') as fh:\n data = json.loads(fh.read())\n fbamodel = cobrakbase.core.KBaseFBAModel(data)",
"_____no_output_____"
],
[
"for r in data['modelreactions']:\n if 'rxn0020_c0' in r['id']:\n print(r)",
"{'aliases': [], 'dblinks': {}, 'direction': '>', 'edits': {}, 'gapfill_data': {}, 'id': 'rxn0020_c0', 'maxforflux': 1000000, 'maxrevflux': 1000000, 'modelReactionProteins': [{'complex_ref': '', 'modelReactionProteinSubunits': [{'feature_refs': [], 'note': 'Imported GPR', 'optionalSubunit': 0, 'role': '', 'triggering': 0}], 'note': 'Imported GPR', 'source': ''}, {'complex_ref': '', 'modelReactionProteinSubunits': [{'feature_refs': [], 'note': 'Imported GPR', 'optionalSubunit': 0, 'role': '', 'triggering': 0}], 'note': 'Imported GPR', 'source': ''}, {'complex_ref': '', 'modelReactionProteinSubunits': [{'feature_refs': [], 'note': 'Imported GPR', 'optionalSubunit': 0, 'role': '', 'triggering': 0}], 'note': 'Imported GPR', 'source': ''}], 'modelReactionReagents': [{'coefficient': -1, 'modelcompound_ref': '~/modelcompounds/id/cpd00001e0_e0'}, {'coefficient': -1, 'modelcompound_ref': '~/modelcompounds/id/cpd00158e0_e0'}, {'coefficient': 2, 'modelcompound_ref': '~/modelcompounds/id/cpd00027e0_e0'}], 'modelcompartment_ref': '~/modelcompartments/id/c0', 'name': 'CustomReaction_c0', 'numerical_attributes': {}, 'probability': 0, 'protons': 0, 'reaction_ref': '~/template/reactions/id/rxn00000_c', 'string_attributes': {}}\n"
],
[
"for r in fbamodel.get_reactions():\n r = fbamodel.get_reaction(r['id'])\n o = b.convert_modelreaction(r)\n if 'rxn0020_c0' in o.id:\n print(o, r.id)",
"rxn0020_c0: cpd00001e0_e0 + cpd00158e0_e0 --> 2 cpd00027e0_e0 rxn0020_c0\n"
],
[
"kbase_api = cobrakbase.KBaseAPI(\"64XQ7SABQILQWSEW3CQKZXJA63DXZBGH\")\n#ref = kbase_api.get_object_info_from_ref(fbamodel.data['genome_ref'])",
"_____no_output_____"
],
[
"modelseed = cobrakbase.modelseed.from_local('../../../../ModelSEEDDatabase')",
"load: ../../../../ModelSEEDDatabase/Biochemistry/reactions.tsv\nload: ../../../../ModelSEEDDatabase/Biochemistry/compounds.tsv\nload: ../../../../ModelSEEDDatabase/Biochemistry/Structures/Unique_ModelSEED_Structures.txt\nload: ../../../../ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Compound_Aliases.txt\nload: ../../../../ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_Aliases.txt\nload: ../../../../ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_ECs.txt\n"
],
[
"for seed_id in modelseed.reactions:\n seed_rxn = modelseed.get_seed_reaction(seed_id)\n ec_numbers = set()\n if 'Enzyme Class' in seed_rxn.ec_numbers:\n for ec in seed_rxn.ec_numbers['Enzyme Class']:\n if ec.startswith('EC-'):\n ec_numbers.add(ec[3:])\n else:\n ec_numbers.add(ec)",
"_____no_output_____"
],
[
"def annotate_model_reactions_with_modelseed(model, modelseed):\n for r in model.reactions:\n seed_id = None\n if 'seed.reaction' in r.annotation:\n annotation = {}\n seed_id = r.annotation['seed.reaction']\n seed_rxn = modelseed.get_seed_reaction(seed_id)\n if not seed_rxn == None:\n 1\n else:\n print('!', r.id)\n print(seed_id)\n break\n return r\nr= annotate_model_reactions_with_modelseed(model, modelseed)",
"! rxn0020_c0\nrxn0020\n"
],
[
"r",
"_____no_output_____"
],
[
"r.annotation",
"_____no_output_____"
],
[
"b = cobrakbase.core.converters.KBaseFBAModelToCobraBuilder(fbamodel)\nif 'genome_ref' in fbamodel.data:\n logging.info(f\"Annotating model with genome information: {fbamodel.data['genome_ref']}\")\n #ref = kbase_api.get_object_info_from_ref(fbamodel.data['genome_ref'])\n #genome_data = kbase_api.get_object(ref.id, ref.ws)\n #genome = self.dfu.get_objects(\n # {'object_refs': [ret['data']['genome_ref']]})['data'][0]['data']\n # #adding Genome to the Builder\n # builder.with_genome(KBaseGenome(genome))\nmodel = b.build()\nprint(cobrakbase.annotate_model_with_modelseed(model, modelseed))",
"None\n"
],
[
"model.summary()",
"_____no_output_____"
],
[
"model = cobra.io.read_sbml_model('../../../../data/sbml/saccharomyces.xml')",
"_____no_output_____"
],
[
"cobra.io.write_sbml_model",
"_____no_output_____"
],
[
"solution = model.optimize()\nsolution",
"_____no_output_____"
],
[
"o_data = kbase_dev.get_object('GCF_000005845.2.beta.fba', 'filipeliu:narrative_1556512034170')\n",
"_____no_output_____"
],
[
"fba = cobrakbase.core.KBaseFBA(o_data)\nfba.data.keys('FBAReactionVariables')",
"_____no_output_____"
],
[
"with open('../../../../data/www/mpa19/flux.txt', 'w+') as f:\n for o in fba.data['FBAReactionVariables']:\n v = o['value']\n rxn_id = o['modelreaction_ref'].split('/')[-1]\n #print(rxn_id, v)\n f.write(\"{},{}\\n\".format(rxn_id, v))",
"_____no_output_____"
],
[
"#model = cobra.io.read_sbml_model('/Users/fliu/Downloads/iML1515.kb.SBML/iML1515.kb.xml')",
"_____no_output_____"
],
[
"kbase = cobrakbase.KBaseAPI(\"YAFOCRSMRNDXZ7KMW7GCK5AC3SBNTEFD\")\nkbase_dev = cobrakbase.KBaseAPI(\"YAFOCRSMRNDXZ7KMW7GCK5AC3SBNTEFD\", dev=True)",
"_____no_output_____"
],
[
"#12998\nkbase.ws_client.ver.get_workspace_info({'id' : 23938})",
"_____no_output_____"
],
[
"kbase.ws_client.get_workspace_info({'id' : 12998})",
"_____no_output_____"
],
[
"ref_info = kbase.get_object_info_from_ref('12998/1/2')\nprint(ref_info.id, ref_info.workspace_id, ref_info.workspace_uid, ref_info.uid, ref_info)",
"GramNegModelTemplate NewKBaseModelTemplates 12998 1 12998/1/2\n"
],
[
"a = kbase_dev.ws_client.get_workspace_info({'workspace' : 'NewKBaseModelTemplates'})",
"_____no_output_____"
],
[
"kbase_dev.list_objects('NewKBaseModelTemplates')",
"_____no_output_____"
],
[
"kmodel = kbase_dev.get_object('Escherichia_coli_K-12_MG1655_output', 'filipeliu:narrative_1564175222344')\nkmodel.keys()",
"_____no_output_____"
],
[
"kmodel['genome_ref']",
"_____no_output_____"
],
[
"kmodel['template_ref'] = '50/1/2'\ngenome_ref = '31470/4/1'\nfor mr in kmodel['modelreactions']:\n #print(mr)\n for mrp in mr['modelReactionProteins']:\n #print(modelReactionProtein)\n for mrps in mrp['modelReactionProteinSubunits']:\n for i in range(len(mrps['feature_refs'])):\n a, b = mrps['feature_refs'][i].split('features')\n #print(a, b)\n mrps['feature_refs'][i] = kmodel['genome_ref'] + '/features' + b\n #mrps['feature_refs'][i] = kmodel['genome_ref'] + f_block\n #print(i, mrps['feature_refs'][i], f_block)\n\n\nkbase_dev.save_object('Escherichia_coli_K-12_MG1655_output', 'filipeliu:narrative_1564175222344', 'KBaseFBA.FBAModel', kmodel)",
"_____no_output_____"
],
[
"kmodel['genome_ref']",
"_____no_output_____"
],
[
"kbase_dev.ws_client.get_object_info3({'objects' : [{'ref' : '31470/5/2'}]})",
"_____no_output_____"
],
[
"%run ../../../scripts/bios_utils.py\nwith open('aww.json', 'w') as f:\n f.write(json.dumps(kmodel, indent=4, sort_keys=True))\n",
"_____no_output_____"
],
[
"kbase_dev.list_objects('filipeliu:narrative_1564175222344')",
"_____no_output_____"
],
[
"os = kbase.list_objects('filipeliu:narrative_1564417971147')",
"_____no_output_____"
],
[
"kmodel = kbase.get_object('test', 'filipeliu:narrative_1564417971147')",
"_____no_output_____"
],
[
"kmodel['gapfillings']",
"_____no_output_____"
],
[
"os[4] #31470/3/1",
"_____no_output_____"
],
[
"o_data['gapfillings']",
"_____no_output_____"
],
[
"ref = kbase.get_object_info_from_ref('262/34/1')\nref.id",
"_____no_output_____"
],
[
"for o in os:\n if not o[2].startswith('KBaseNarrative.Narrative') and not o[2].startswith('KBaseGenomes.Genome'):\n print(o)\n\n o_data = kbase.get_object(o[1], 'zahmeeth:narrative_1561761748173')\n if 'genome_ref' in o_data:\n o_data['genome_ref'] = '31470/2/1'\n if 'gapfillings' in o_data:\n for gapfilling in o_data['gapfillings']:\n print(o[1], gapfilling['gapfill_id'])\n if 'media_ref' in gapfilling:\n gapfilling['media_ref'] = '31470/3/1'\n #kbase_dev.save_object(o[1], 'filipeliu:narrative_1564175222344', o[2].split('-')[0], o_data)\n ",
"[3, 'Escherichia_coli_K-12_MG1655_output', 'KBaseFBA.FBAModel-11.0', '2019-06-28T22:55:10+0000', 1, 'mlee', 45678, 'zahmeeth:narrative_1561761748173', 'fc83d29622d8c6ae4dc03e363dc3bc9f', 827042, None]\nEscherichia_coli_K-12_MG1655_output Escherichia_coli_K-12_MG1655_output.gf.0\n[6, 'iML1515.kb', 'KBaseFBA.FBAModel-11.0', '2019-07-24T14:42:14+0000', 1, 'zahmeeth', 45678, 'zahmeeth:narrative_1561761748173', 'bd8bf7cc9e067e5644f2322e0f71058d', 3314981, None]\n[7, 'Carbon-D-Glucose-iML1515', 'KBaseBiochem.Media-4.1', '2019-07-24T14:46:27+0000', 1, 'zahmeeth', 45678, 'zahmeeth:narrative_1561761748173', 'd383ea1f8ad16e9f60879f989c02bb1f', 2669, None]\n"
],
[
"kmodel = kbase.get_object('iML1515.kb', 'zahmeeth:narrative_1561761748173')",
"_____no_output_____"
],
[
"ref = kbase.get_object_info_from_ref(kmodel['genome_ref'])\nkgenome = kbase.get_object(ref.id, ref.workspace_id)",
"_____no_output_____"
],
[
"genome = cobrakbase.core.KBaseGenome(kgenome)",
"_____no_output_____"
],
[
"builder = cobrakbase.core.converters.KBaseFBAModelToCobraBuilder(cobrakbase.core.model.KBaseFBAModel(kmodel))",
"_____no_output_____"
],
[
"builder = builder.with_genome(genome)",
"_____no_output_____"
],
[
"model = builder.build()",
"WARNING:cobrakbase.core.converters:duplicate reaction: DM_cpd02701_c0\nWARNING:cobrakbase.core.converters:copy reaction: [DM_cpd02701_c0] -> [DM_cpd02701_c0_copy1]\n"
],
[
"gene = model.genes[5]\ngene.annotation",
"_____no_output_____"
],
[
"cobrakbase.COBRA_DEFAULT_LB = -1000\ncobrakbase.COBRA_DEFAULT_UB = 1000",
"_____no_output_____"
],
[
"kmodel = kbase.get_object('iAF1260.fix2.kb', 'filipeliu:narrative_1504192868437')",
"_____no_output_____"
],
[
"exprvar",
"_____no_output_____"
],
[
"os = kbase.list_objects('jplfaria:narrative_1524466549180')",
"_____no_output_____"
],
[
"genomes = set()\nfor o in os:\n if o[1].endswith('RAST'):\n genomes.add(o[1])\nprint(len(genomes))",
"1675\n"
],
[
"os = kbase.list_objects('filipeliu:narrative_1549385719110')",
"_____no_output_____"
],
[
"genomes2 = set()\nfor o in os:\n if o[1].endswith('RAST.mdl.gfrelease.Carbon-D-Glucose'):\n id = o[1].split('.mdl.')[0]\n genomes2.add(id)\n\nprint(len(genomes2))\n#genomes2",
"1672\n"
],
[
"genomes3 = set()\nfor o in os:\n if o[1].endswith('RAST.mdl.gfrelease.Carbon-D-Glucose.fba'):\n id = o[1].split('.mdl.')[0]\n genomes3.add(id)\n\nprint(len(genomes3))",
"1346\n"
],
[
"kmedia = kbase.get_object('Carbon-D-Glucose', 'filipeliu:narrative_1549385719110')\nmedia_const = cobrakbase.convert_media(kmedia)",
"_____no_output_____"
],
[
"genome_id = 'GCF_000005845.2.RAST'",
"_____no_output_____"
],
[
"def eval_fba(genome_id):\n kmodel = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose', 'filipeliu:narrative_1549385719110')\n #genome = kbase.get_object('GCF_000005845.2.RAST', 'jplfaria:narrative_1524466549180')\n enforce_direaction_bounds(kmodel)\n kmodel_fba = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose.fba', 'filipeliu:narrative_1549385719110')\n fbamodel = cobrakbase.core.model.KBaseFBAModel(kmodel)\n\n kbase_fba = kmodel_fba['objectiveValue']\n\n model = cobrakbase.convert_kmodel(kmodel, media_const)\n solution = model.optimize()\n cobra_fba = solution.objective_value\n print(kbase_fba, cobra_fba, cobra_fba - cobra_fba)\n return kbase_fba, cobra_fba, cobra_fba - cobra_fba",
"_____no_output_____"
],
[
"kmodel = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose', 'filipeliu:narrative_1549385719110')\n#genome = kbase.get_object('GCF_000005845.2.RAST', 'jplfaria:narrative_1524466549180')\nenforce_direaction_bounds(kmodel)\nkmodel_fba = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose.fba', 'filipeliu:narrative_1549385719110')\nfbamodel = cobrakbase.core.model.KBaseFBAModel(kmodel)",
"_____no_output_____"
],
[
"cpd_ref_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Compound_Aliases.txt'\nrxn_ref_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_Aliases.txt'\nrxn_ec_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_ECs.txt'\ncpd_stru_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Structures/ModelSEED_Structures.txt'\n\ncpd_df = pd.read_csv(cpd_ref_file, sep='\\t')\nrxn_df = pd.read_csv(rxn_ref_file, sep='\\t')\nrxn_ec_df = pd.read_csv(rxn_ec_file, sep='\\t')\nstru_df = pd.read_csv(cpd_stru_file, sep='\\t')\n \nstructures = cobrakbase.read_modelseed_compound_structures(stru_df)\nrxn_aliases = cobrakbase.read_modelseed_reaction_aliases2(rxn_df)\ncpd_aliases = cobrakbase.read_modelseed_compound_aliases2(cpd_df)\ngene_aliases = cobrakbase.read_genome_aliases(genome)",
"_____no_output_____"
],
[
"def annotate_model(model, cpd_aliases, rxn_aliases, gene_aliases, structures):\n for m in model.metabolites:\n seed_id = None\n if 'seed.compound' in m.annotation:\n seed_id = m.annotation['seed.compound']\n if seed_id in structures:\n m.annotation.update(structures[seed_id])\n if seed_id in cpd_aliases:\n m.annotation.update(cpd_aliases[seed_id])\n\n for r in model.reactions:\n seed_id = None\n if 'seed.reaction' in r.annotation:\n seed_id = r.annotation['seed.reaction']\n if seed_id in rxn_aliases:\n r.annotation.update(rxn_aliases[seed_id])\n\n for g in model.genes:\n if g.id in gene_aliases:\n g.annotation.update(gene_aliases[g.id])\n\n for r in model.reactions:\n if cobrakbase.is_translocation(r):\n if cobrakbase.is_transport(r):\n r.annotation['sbo'] = 'SBO:0000655'\n else:\n r.annotation['sbo'] = 'SBO:0000185'\n\n kbase_sinks = ['rxn13783_c0', 'rxn13784_c0', 'rxn13782_c0']\n\n for r in model.reactions:\n #r.annotation['ec-code'] = '1.1.1.1'\n #r.annotation['metanetx.reaction'] = 'MNXR103371'\n if r.id in kbase_sinks:\n r.annotation['sbo'] = 'SBO:0000632'\n if r.id.startswith('DM_'):\n r.annotation['sbo'] = 'SBO:0000628'",
"_____no_output_____"
],
[
"#",
"_____no_output_____"
],
[
"cpd_ref_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Compound_Aliases.txt'\nrxn_ref_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_Aliases.txt'\nrxn_ec_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Aliases/Unique_ModelSEED_Reaction_ECs.txt'\ncpd_stru_file = '../../../../kbase/ModelSEEDDatabase/Biochemistry/Structures/ModelSEED_Structures.txt'\n\ncpd_df = pd.read_csv(cpd_ref_file, sep='\\t')\nrxn_df = pd.read_csv(rxn_ref_file, sep='\\t')\nrxn_ec_df = pd.read_csv(rxn_ec_file, sep='\\t')\nstru_df = pd.read_csv(cpd_stru_file, sep='\\t')\n \nstructures = cobrakbase.read_modelseed_compound_structures(stru_df)\nrxn_aliases = cobrakbase.read_modelseed_reaction_aliases2(rxn_df)\ncpd_aliases = cobrakbase.read_modelseed_compound_aliases2(cpd_df)\n\n\nexclude = genomes - genomes2\ni = 0\nfor genome_id in genomes:\n if not genome_id in exclude:\n kmodel = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose', 'filipeliu:narrative_1549385719110')\n genome = kbase.get_object(genome_id, 'jplfaria:narrative_1524466549180')\n enforce_direaction_bounds(kmodel)\n #kmodel_fba = kbase.get_object(genome_id + '.mdl.gfrelease.Carbon-D-Glucose.fba', 'filipeliu:narrative_1549385719110')\n fbamodel = cobrakbase.core.model.KBaseFBAModel(kmodel)\n \n gene_aliases = cobrakbase.read_genome_aliases(genome)\n \n model = cobrakbase.convert_kmodel(kmodel, media_const)\n \n annotate_model(model, cpd_aliases, rxn_aliases, gene_aliases, structures)\n\n for r in model.reactions:\n ub = r.upper_bound\n lb = r.lower_bound\n if ub == 1000000:\n ub = 1000\n if lb == -1000000:\n lb = -1000\n r.upper_bound = ub\n r.lower_bound = lb\n \n cobra.io.write_sbml_model(model, '../../data/memote_models/' + genome_id.split('.RAST')[0] + '.xml')\n print(i, genome_id)\n i += 1",
"Add Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n0 GCF_000143965.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1 GCF_000027325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 125 SK: 3\n2 GCF_001050115.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n3 GCF_000023445.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n4 GCF_000240165.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 2\n5 GCF_001578105.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 1\n6 GCF_000829395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n7 GCF_000246985.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n8 GCF_000024385.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 54 SK: 1\n9 GCF_000069945.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 98 SK: 2\n10 GCF_000626675.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 1\n11 GCF_000022725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 3\n12 GCF_000012505.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 2\n13 GCF_000019525.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 3\n14 GCF_000832605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 3\n15 GCF_001653755.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n16 GCF_000014185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 3\n17 GCF_000195955.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n18 GCF_001507665.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n19 GCF_000803645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 2\n20 GCF_000018945.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 37 SK: 1\n21 GCF_000021965.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 2\n22 GCF_000009365.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 134 SK: 3\n23 GCF_001543265.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 2\n24 GCF_000016605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 3\n25 GCF_000737535.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n26 GCF_000277125.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 3\n27 GCF_001190945.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 23 SK: 2\n28 GCF_000026005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n29 GCF_000025185.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n30 GCF_000219605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 3\n31 GCF_000015745.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 119 SK: 2\n32 GCF_000747345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 75 SK: 2\n33 GCF_000317495.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n34 GCF_000340865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 2\n35 GCF_000238995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n36 GCF_000013405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n37 GCF_000012685.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n38 GCF_001553955.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 1\n39 GCF_000019685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n40 GCF_000179915.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n41 GCF_000247715.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n42 GCF_000835165.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 3\n43 GCF_000012725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 3\n44 GCF_000025705.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n45 GCF_000317635.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n46 GCF_000018205.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 1\n47 GCF_001611135.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n48 GCF_000970285.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n49 GCF_000024545.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 3\n50 GCF_000025725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 154 SK: 3\n51 GCF_000300455.3.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n52 GCF_000024345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 3\n53 GCF_000800395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 113 SK: 3\n54 GCF_000019725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n55 GCF_000008565.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n56 GCF_000525655.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 2\n57 GCF_000237995.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 1\n58 GCF_000328725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 113 SK: 2\n59 GCF_000020365.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 1\n60 GCF_001262075.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n61 GCF_000093165.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n62 GCF_000237085.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 47 SK: 1\n63 GCF_000017945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 120 SK: 3\n64 GCF_000196255.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n65 GCF_000163895.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 131 SK: 3\n66 GCF_000271665.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n67 GCF_000069185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 102 SK: 2\n68 GCF_000177235.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 2\n69 GCF_000463355.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 2\n70 GCF_000812665.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 1\n71 GCF_001021065.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n72 GCF_000284075.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n73 GCF_000024845.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n74 GCF_000512205.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 38 SK: 2\n75 GCF_000093065.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n76 GCF_000830985.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 43 SK: 1\n77 GCF_001547975.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n78 GCF_000281175.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 1\n79 GCF_000164695.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 34 SK: 1\n80 GCF_000499765.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 3\n81 GCF_001447335.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n82 GCF_000010405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n83 GCF_000204415.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n84 GCF_000006965.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 1\n85 GCF_001685435.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 2\n86 GCF_000159155.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 45 SK: 1\n87 GCF_000025665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n88 GCF_001618685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 152 SK: 3\n89 GCF_000759475.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 2\n90 GCF_000347595.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 3\n91 GCF_000242455.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 1\n92 GCF_001006045.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 45 SK: 3\n93 GCF_000730385.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n94 GCF_000284155.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 2\n95 GCF_000470655.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n96 GCF_000816185.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 3\n97 GCF_000317675.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 49 SK: 3\n98 GCF_000012825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 3\n99 GCF_000020625.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n100 GCF_000348785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 159 SK: 3\n101 GCF_001514455.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 54 SK: 2\n102 GCF_000204925.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 70 SK: 3\n103 GCF_000347635.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 2\n104 GCF_000013945.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 64 SK: 2\n105 GCF_000007905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n106 GCF_001046955.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n107 GCF_000723465.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 3\n108 GCF_000021485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n109 GCF_000970085.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 1\n110 GCF_000014425.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n111 GCF_001682385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 111 SK: 3\n112 GCF_000021325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 2\n113 GCF_001020985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n114 GCF_000008345.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 43 SK: 2\n115 GCF_000006175.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n116 GCF_000970265.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n117 GCF_000196135.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 1\n118 GCF_000063605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n119 GCF_000217815.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 2\n120 GCF_000829315.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 2\n121 GCF_000504125.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n122 GCF_000953195.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n123 GCF_000193375.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n124 GCF_000214155.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 1\n125 GCF_001922545.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 2\n126 GCF_000012905.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 3\n127 GCF_001021935.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 3\n128 GCF_000023025.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n129 GCF_000027345.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n130 GCF_000508225.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 2\n131 GCF_000185885.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n132 GCF_000418365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 2\n133 GCF_000092905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n134 GCF_000725425.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n135 GCF_001499615.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n136 GCF_000019845.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 63 SK: 3\n137 GCF_000341395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 2\n138 GCF_000331995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n139 GCF_000953695.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 3\n140 GCF_000091125.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 121 SK: 3\n141 GCF_000015005.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 1\n142 GCF_000063585.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 3\n143 GCF_000024785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n144 GCF_000186365.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n145 GCF_000218545.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 3\n146 GCF_001314995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n147 GCF_000463505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n148 GCF_000008545.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n149 GCF_000949425.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 2\n150 GCF_000304355.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n151 GCF_000023225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n152 GCF_000017305.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 1\n153 GCF_000328765.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 98 SK: 3\n154 GCF_000266905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 172 SK: 3\n155 GCF_000005845.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 3\n156 GCF_000968135.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 2\n157 GCF_001267925.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 129 SK: 2\n158 GCF_001953195.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 3\n159 GCF_000015305.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n160 GCF_001941345.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n161 GCF_000092985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n162 GCF_001483385.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n163 GCF_000055785.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 2\n164 GCF_001886695.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 1\n165 GCF_000341355.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 3\n166 GCF_000325665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n167 GCF_000550785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n168 GCF_000020905.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 40 SK: 1\n169 GCF_001305655.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 1\n170 GCF_000230955.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 67 SK: 2\n171 GCF_000828475.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 3\n172 GCF_000183725.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 2\n173 GCF_000230695.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 131 SK: 3\n174 GCF_000007565.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 2\n175 GCF_000940845.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 1\n176 GCF_000069925.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 47 SK: 3\n177 GCF_000284315.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 2\n178 GCF_000006865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n179 GCF_000316605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 3\n180 GCF_000185965.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 27 SK: 1\n181 GCF_000800805.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 2\n182 GCF_000412695.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 3\n183 GCF_000011065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 2\n184 GCF_000831645.3.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 3\n185 GCF_000014825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n186 GCF_000317105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 2\n187 GCF_000177535.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 1\n188 GCF_000750535.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n189 GCF_000008745.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n190 GCF_001889105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n191 GCF_000178975.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n192 GCF_000016285.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n193 GCF_001620305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 2\n194 GCF_000017845.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n195 GCF_000831005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 3\n196 GCF_000316515.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 2\n197 GCF_001314225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 2\n198 GCF_000493735.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n199 GCF_000236685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 3\n200 GCF_000317475.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n201 GCF_001787335.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n202 GCF_000513295.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 2\n203 GCF_000024945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 110 SK: 2

[Log entries 204–915 follow the same pattern: genome index and RAST genome ID, one "Add Sink cpdNNNNN_c0" line per sink compound added (cpd11416_c0 appears in every entry; cpd15302_c0 and cpd02701_c0 appear only where needed), then "Setup Drains. EX: <exchange count> SK: <sink count>". First and last entries shown.]

204 GCF_000255115.2.RAST
Add Sink cpd11416_c0
Setup Drains. EX: 68 SK: 1
...
915 GCF_001011155.1.RAST
Add Sink cpd02701_c0
Add Sink cpd15302_c0
Add Sink cpd11416_c0
Setup Drains. 
EX: 91 SK: 3\n916 GCF_000980835.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 1\n917 GCF_000025505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 168 SK: 3\n918 GCF_000215745.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n919 GCF_000018285.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 1\n920 GCF_001281045.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n921 GCF_001652565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 126 SK: 3\n922 GCF_000006765.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 1\n923 GCF_001314945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n924 GCF_001017435.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n925 GCF_000215995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 2\n926 GCF_001878675.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 3\n927 GCF_001553605.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n928 GCF_000236405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 2\n929 GCF_000447675.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n930 GCF_000208405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 3\n931 GCF_000801315.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 1\n932 GCF_001267865.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n933 GCF_000494755.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 20 SK: 1\n934 GCF_000090965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 46 SK: 3\n935 GCF_000015665.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 54 SK: 1\n936 GCF_000015345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 3\n937 GCF_000192865.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n938 GCF_000012305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 109 SK: 3\n939 GCF_001686985.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 2\n940 GCF_000147335.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n941 GCF_000317575.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n942 GCF_000246855.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 1\n943 GCF_000020065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 151 SK: 3\n944 GCF_000757785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n945 GCF_000152245.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 119 SK: 2\n946 GCF_001652485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 3\n947 GCF_000179395.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 3\n948 GCF_001610955.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n949 GCF_000763575.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 73 SK: 1\n950 GCF_000016825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 2\n951 GCF_000011385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 2\n952 GCF_000960975.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 127 SK: 2\n953 GCF_000013425.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 2\n954 GCF_000008725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n955 GCF_000219105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 2\n956 GCF_000092405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n957 GCF_000013865.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n958 GCF_000299335.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 3\n959 GCF_000011345.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n960 GCF_001698225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n961 GCF_000185805.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 98 SK: 1\n962 GCF_000014465.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 46 SK: 1\n963 GCF_000294715.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 2\n964 GCF_000015725.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 1\n965 GCF_000008205.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n966 GCF_000970045.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 3\n967 GCF_000012605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n968 GCF_000243115.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 1\n969 GCF_000815065.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 45 SK: 1\n970 GCF_001029265.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 1\n971 GCF_000026065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 2\n972 GCF_000831485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n973 GCF_000585495.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n974 GCF_000092505.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 1\n975 GCF_000011185.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 1\n976 GCF_000026045.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n977 GCF_000769655.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n978 GCF_000025945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n979 GCF_000317025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n980 GCF_001007935.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n981 GCF_001021045.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 2\n982 GCF_001484935.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n983 GCF_000328705.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 3\n984 GCF_000950575.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 67 SK: 2\n985 GCF_000218875.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 3\n986 GCF_001078055.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n987 GCF_000184435.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 2\n988 GCF_000024285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 3\n989 GCF_000092105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n990 GCF_000325705.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 45 SK: 1\n991 GCF_000007485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n992 GCF_000010185.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n993 GCF_000147695.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 2\n994 GCF_000970305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n995 GCF_001263175.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 2\n996 GCF_000196235.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 2\n997 GCF_000577895.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 3\n998 GCF_001027285.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 1\n999 GCF_000013725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 2\n1000 GCF_000484535.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1001 GCF_000743945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 2\n1002 GCF_001308145.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 2\n1003 GCF_000153405.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 3\n1004 GCF_000016645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 120 SK: 3\n1005 GCF_001593245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 1\n1006 GCF_000439455.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 2\n1007 GCF_000143145.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 2\n1008 GCF_000019405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 158 SK: 3\n1009 GCF_000147055.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 1\n1010 GCF_000196855.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 3\n1011 GCF_000012885.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1012 GCF_000724605.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 1\n1013 GCF_000014505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 2\n1014 GCF_001705175.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 34 SK: 2\n1015 GCF_000240075.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1016 GCF_000055945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 2\n1017 GCF_000018365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n1018 GCF_000981505.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 96 SK: 3\n1019 GCF_000250675.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1020 GCF_000186345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 125 SK: 3\n1021 GCF_001941945.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1022 GCF_000178955.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1023 GCF_001583415.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 160 SK: 3\n1024 GCF_001572725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 3\n1025 GCF_000019665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1026 GCF_000517605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 2\n1027 GCF_000612055.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n1028 GCF_000441555.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 2\n1029 GCF_001042635.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n1030 GCF_000982715.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 1\n1031 GCF_000010525.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1032 GCF_000767275.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 2\n1033 GCF_900011245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 1\n1034 GCF_000422165.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 49 SK: 2\n1035 GCF_000953015.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 2\n1036 GCF_000016665.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1037 GCF_000022005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1038 GCF_000495935.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 3\n1039 GCF_000015865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 2\n1040 GCF_000092865.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n1041 GCF_001442785.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 3\n1042 GCF_000062885.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 2\n1043 GCF_000018665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n1044 GCF_001642655.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n1045 GCF_000144675.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1046 GCF_000172635.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1047 GCF_000143845.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1048 GCF_000018605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 3\n1049 GCF_000468615.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1050 GCF_000091325.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 87 SK: 2\n1051 GCF_000009905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n1052 GCF_000012665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 39 SK: 3\n1053 GCF_000092305.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 1\n1054 GCF_000875755.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 3\n1055 GCF_000147815.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1056 GCF_000024365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 2\n1057 GCF_001043175.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 2\n1058 GCF_000018265.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 2\n1059 GCF_000007765.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n1060 GCF_000736415.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 115 SK: 3\n1061 GCF_000968195.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 155 SK: 2\n1062 GCF_000972245.3.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 64 SK: 2\n1063 GCF_000009965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n1064 GCF_000473305.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 2\n1065 GCF_900087055.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 3\n1066 GCF_000010325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n1067 GCF_000025905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1068 GCF_000015285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1069 GCF_000317515.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 2\n1070 GCF_001187845.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 1\n1071 GCF_000183385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n1072 GCF_000972765.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 3\n1073 GCF_001511815.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n1074 GCF_000981525.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 3\n1075 GCF_000355675.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 2\n1076 GCF_000626635.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 2\n1077 GCF_000270245.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 36 SK: 2\n1078 GCF_000828815.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1079 GCF_000233435.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n1080 GCF_000019285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 134 SK: 3\n1081 GCF_001420855.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 3\n1082 GCF_000013365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 2\n1083 GCF_000195715.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 125 SK: 2\n1084 GCF_000746645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 2\n1085 GCF_000013225.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 1\n1086 GCF_000304215.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1087 GCF_000091545.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 3\n1088 GCF_000764535.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 3\n1089 GCF_000010785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1090 GCF_000233915.3.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n1091 GCF_000166295.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 3\n1092 GCF_000021865.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 42 SK: 1\n1093 GCF_000970325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n1094 GCF_000022905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 40 SK: 2\n1095 GCF_000023725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1096 GCF_000817955.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 2\n1097 GCF_000092185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 32 SK: 2\n1098 GCF_001280225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 2\n1099 GCF_000756615.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1100 GCF_000019365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 2\n1101 GCF_000017405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 43 SK: 3\n1102 GCF_000321415.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 2\n1103 GCF_001465835.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1104 GCF_000226565.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 1\n1105 GCF_000230715.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1106 GCF_000568815.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 3\n1107 GCF_000022065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 102 SK: 3\n1108 GCF_000092925.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 3\n1109 GCF_001677275.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 2\n1110 GCF_000247605.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 1\n1111 GCF_001477655.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1112 GCF_000219585.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 1\n1113 GCF_001922025.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 2\n1114 GCF_000010085.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 125 SK: 2\n1115 GCF_000484505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 2\n1116 GCF_000264495.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 3\n1117 GCF_000007985.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 66 SK: 1\n1118 GCF_000023845.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n1119 GCF_000006905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1120 GCF_001315015.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 2\n1121 GCF_000953535.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1122 GCF_000993785.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 64 SK: 2\n1123 GCF_000953215.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 3\n1124 GCF_000021045.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1125 GCF_000196675.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 2\n1126 GCF_001421015.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 3\n1127 GCF_001548015.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1128 GCF_000299355.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1129 GCF_000237205.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 3\n1130 GCF_000184685.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 31 SK: 2\n1131 GCF_000346595.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 109 SK: 2\n1132 GCF_000789395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1133 GCF_000828615.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n1134 GCF_000019165.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1135 GCF_000011985.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 1\n1136 GCF_001263395.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 1\n1137 GCF_000166395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 2\n1138 GCF_000015945.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 131 SK: 2\n1139 GCF_000959245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1140 GCF_000348805.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1141 GCF_000963865.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 1\n1142 GCF_000973545.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1143 GCF_000092585.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n1144 GCF_000011105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 34 SK: 2\n1145 GCF_000328665.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 1\n1146 GCF_000471965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 3\n1147 GCF_000284415.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 47 SK: 1\n1148 GCF_000512895.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 2\n1149 GCF_000018305.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 3\n1150 GCF_000023245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 1\n1151 GCF_001281465.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 64 SK: 2\n1152 GCF_000238215.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n1153 GCF_000305935.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 2\n1154 GCF_000008885.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 3\n1155 GCF_000093025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 3\n1156 GCF_000178875.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 3\n1157 GCF_000184745.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 23 SK: 1\n1158 GCF_000008385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 160 SK: 3\n1159 GCF_000025565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1160 GCF_000204255.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1161 GCF_001040945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1162 GCF_000020025.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 1\n1163 GCF_000313635.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1164 GCF_000012425.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 49 SK: 2\n1165 GCF_000006725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n1166 GCF_000009125.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1167 GCF_000732925.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 3\n1168 GCF_000195975.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n1169 GCF_000014005.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n1170 GCF_000265295.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 1\n1171 GCF_000023965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 3\n1172 GCF_000023585.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 3\n1173 GCF_000009145.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1174 GCF_000017865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 3\n1175 GCF_000092205.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1176 GCF_000307105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 36 SK: 2\n1177 GCF_000023985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 2\n1178 GCF_000696485.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 3\n1179 GCF_000091305.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 120 SK: 1\n1180 GCF_000009825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1181 GCF_001548075.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 1\n1182 GCF_000026205.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1183 GCF_000020725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 61 SK: 2\n1184 GCF_000279145.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n1185 GCF_001051995.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1186 GCF_000196355.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 3\n1187 GCF_000340435.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 3\n1188 GCF_000196695.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n1189 GCF_001278055.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 124 SK: 2\n1190 GCF_001902315.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 2\n1191 GCF_000020945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n1192 GCF_000317045.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 1\n1193 GCF_000400935.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 3\n1194 GCF_000953735.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 139 SK: 3\n1195 GCF_000517425.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n1196 GCF_000023865.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 48 SK: 1\n1197 GCF_000008665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 3\n1198 GCF_000011805.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 165 SK: 3\n1199 GCF_000008865.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 2\n1200 GCF_000236705.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 2\n1201 GCF_000767055.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n1202 GCF_000828675.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n1203 GCF_000973725.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 113 SK: 3\n1204 GCF_000739085.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n1205 GCF_001610975.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 1\n1206 GCF_000599985.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 109 SK: 2\n1207 GCF_000164985.3.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n1208 GCF_001548055.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n1209 GCF_000230735.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 130 SK: 3\n1210 GCF_000007845.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 42 SK: 3\n1211 GCF_000021985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n1212 GCF_000008525.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 1\n1213 GCF_000010125.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1214 GCF_000009985.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1215 GCF_000022605.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 1\n1216 GCF_000224475.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 110 SK: 3\n1217 GCF_000550765.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n1218 GCF_000024505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 111 SK: 3\n1219 GCF_000284095.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1220 GCF_000334405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 2\n1221 GCF_001887595.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 3\n1222 GCF_000017685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 3\n1223 GCF_000767465.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n1224 GCF_000025645.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 1\n1225 GCF_000022205.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n1226 GCF_000021645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 2\n1227 GCF_000011245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1228 GCF_000830885.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n1229 GCF_000018105.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 1\n1230 GCF_001698145.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 1\n1231 GCF_000025325.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n1232 GCF_000503895.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 1\n1233 GCF_000011025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 46 SK: 3\n1234 GCF_000012645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1235 GCF_000014865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 3\n1236 GCF_001693385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 2\n1237 GCF_000190435.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 3\n1238 GCF_000283515.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 126 SK: 3\n1239 GCF_000354175.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n1240 GCF_000015025.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n1241 GCF_000012325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 3\n1242 GCF_000072485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 3\n1243 GCF_000023745.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 3\n1244 GCF_001750785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n1245 GCF_000258405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 138 SK: 3\n1246 GCF_001578185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n1247 GCF_000007945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 168 SK: 3\n1248 GCF_000183345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 3\n1249 GCF_000214825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 67 SK: 2\n1250 GCF_000967915.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n1251 GCF_000828655.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1252 GCF_000313175.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 1\n1253 GCF_000265385.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 3\n1254 GCF_000157895.3.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 125 SK: 2\n1255 GCF_001941825.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n1256 GCF_000218565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n1257 GCF_000243135.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n1258 GCF_000015045.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n1259 GCF_000092245.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 2\n1260 GCF_000093085.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1261 GCF_000015205.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n1262 GCF_001443605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 3\n1263 GCF_000227745.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 1\n1264 GCF_000069025.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n1265 GCF_000340905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n1266 GCF_000007525.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 2\n1267 GCF_001465295.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1268 GCF_000017005.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 3\n1269 GCF_000242635.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 23 SK: 1\n1270 GCF_000008025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 2\n1271 GCF_000829035.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1272 GCF_000092785.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 98 SK: 1\n1273 GCF_000007785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 3\n1274 GCF_000016185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 2\n1275 GCF_000724775.3.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 2\n1276 GCF_000018785.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 36 SK: 1\n1277 GCF_000350305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 2\n1278 GCF_000025005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n1279 GCF_000973105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 2\n1280 GCF_001597285.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 2\n1281 GCF_000011685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 3\n1282 GCF_000012485.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 1\n1283 GCF_000024265.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 109 SK: 3\n1284 GCF_001042715.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 43 SK: 1\n1285 GCF_000085865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1286 GCF_000143165.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 36 SK: 1\n1287 GCF_000300255.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 2\n1288 GCF_001887285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 137 SK: 3\n1289 GCF_000091565.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n1290 GCF_000007865.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n1291 GCF_000012145.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1292 GCF_000164675.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1293 GCF_000525635.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 20 SK: 1\n1294 GCF_000063545.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1295 GCF_000186985.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 1\n1296 GCF_000014385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n1297 GCF_000023065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 3\n1298 GCF_000202835.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n1299 GCF_001042675.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 3\n1300 GCF_000220625.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 2\n1301 GCF_000214355.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1302 GCF_000092825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 120 SK: 3\n1303 GCF_000250655.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 3\n1304 GCF_000185905.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 124 SK: 3\n1305 GCF_000027225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 121 SK: 3\n1306 GCF_000180175.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 1\n1307 GCF_000299095.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 120 SK: 3\n1308 GCF_001457475.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 3\n1309 GCF_001021975.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 2\n1310 GCF_000014525.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 2\n1311 GCF_000230275.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 1\n1312 GCF_000015145.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 3\n1313 GCF_000009765.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n1314 GCF_000523235.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 1\n1315 GCF_000024985.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 3\n1316 GCF_000012285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 3\n1317 GCF_000738435.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 107 SK: 3\n1318 GCF_000317305.3.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n1319 GCF_000165505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 43 SK: 3\n1320 GCF_001750165.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 54 SK: 1\n1321 GCF_000152825.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 3\n1322 GCF_000063525.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1323 GCF_000196495.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 1\n1324 GCF_000196175.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 45 SK: 1\n1325 GCF_000019345.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 1\n1326 GCF_000174395.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n1327 GCF_000027165.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 2\n1328 GCF_001854245.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n1329 GCF_000015105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 2\n1330 GCF_000148645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 2\n1331 GCF_000022965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n1332 GCF_000284335.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 2\n1333 GCF_000144915.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 3\n1334 GCF_000024325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1335 GCF_000214435.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 33 SK: 3\n1336 GCF_000024625.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 3\n1337 GCF_001693675.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 3\n1338 GCF_001477625.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 1\n1339 GCF_001951175.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 1\n1340 GCF_000283555.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 128 SK: 3\n1341 GCF_000294775.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1342 GCF_000828975.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 99 SK: 2\n1343 GCF_001543175.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 21 SK: 1\n1344 GCF_000953435.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 3\n1345 GCF_000020505.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 37 SK: 1\n1346 GCF_000019745.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 1\n1347 GCF_000025865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1348 GCF_000186245.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 3\n1349 GCF_001021025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n1350 GCF_001402875.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 2\n1351 GCF_001421015.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 112 SK: 2\n1352 GCF_001191005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 3\n1353 GCF_000063505.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n1354 GCF_000231405.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 1\n1355 GCF_000009845.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 3\n1356 GCF_001456065.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n1357 GCF_000973085.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 47 SK: 2\n1358 GCF_000017585.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1359 GCF_000178115.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n1360 GCF_000260985.4.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 1\n1361 GCF_000348725.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1362 GCF_000818035.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1363 GCF_001636015.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n1364 GCF_000283595.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n1365 GCF_000208385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 2\n1366 GCF_001277995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 3\n1367 GCF_000807255.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 2\n1368 GCF_000300235.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 3\n1369 GCF_000214415.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 1\n1370 GCF_000194625.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 1\n1371 GCF_000185985.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 3\n1372 GCF_000014885.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 1\n1373 GCF_001488575.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 3\n1374 GCF_000252445.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 2\n1375 GCF_000193395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 3\n1376 GCF_000219045.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 111 SK: 2\n1377 GCF_000294515.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1378 GCF_000009705.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 2\n1379 GCF_000013285.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 1\n1380 GCF_000800295.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n1381 GCF_001886815.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 2\n1382 GCF_000021945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 3\n1383 GCF_000953475.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 2\n1384 GCF_001584185.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n1385 GCF_000316685.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 121 SK: 2\n1386 GCF_001188915.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n1387 GCF_000007305.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 1\n1388 GCF_000063445.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 1\n1389 GCF_000016525.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n1390 GCF_000284115.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 40 SK: 3\n1391 GCF_000517565.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 3\n1392 GCF_000576555.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 3\n1393 GCF_000954135.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 2\n1394 GCF_000217995.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 2\n1395 GCF_001586255.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 2\n1396 GCF_000092565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 2\n1397 GCF_000215105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 2\n1398 GCF_001027025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 2\n1399 GCF_000092425.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 123 SK: 3\n1400 GCF_000284515.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 115 SK: 2\n1401 GCF_000723165.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 1\n1402 GCF_000814825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 2\n1403 GCF_000242595.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 64 SK: 2\n1404 GCF_000187005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 158 SK: 3\n1405 GCF_000006945.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1406 GCF_000012405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 109 SK: 3\n1407 GCF_001484605.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 125 SK: 1\n1408 GCF_001432245.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1409 GCF_000012585.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 1\n1410 GCF_001702135.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 51 SK: 1\n1411 GCF_000695835.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n1412 GCF_000265525.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 3\n1413 GCF_000214375.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 3\n1414 GCF_001420995.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 131 SK: 3\n1415 GCF_000959725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 31 SK: 3\n1416 GCF_000091665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n1417 GCF_000284015.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 39 SK: 1\n1418 GCF_000762265.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 41 SK: 3\n1419 GCF_000017225.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 34 SK: 2\n1420 GCF_000225445.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n1421 GCF_000014965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 47 SK: 3\n1422 GCF_000212735.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 27 SK: 1\n1423 GCF_000145295.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1424 GCF_000827005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 2\n1425 GCF_000069225.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 3\n1426 GCF_000022145.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 1\n1427 GCF_001719165.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1428 GCF_000349945.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 106 SK: 3\n1429 GCF_000006745.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 1\n1430 GCF_000015765.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 98 SK: 3\n1431 GCF_000027305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 122 SK: 3\n1432 GCF_000759535.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 1\n1433 GCF_000022305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 32 SK: 3\n1434 GCF_000179575.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 1\n1435 GCF_000253015.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 2\n1436 GCF_000008485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1437 GCF_000007345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 3\n1438 GCF_000819565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 2\n1439 GCF_000263195.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 83 SK: 3\n1440 GCF_000024165.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 2\n1441 GCF_000832305.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 2\n1442 GCF_001187785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 70 SK: 2\n1443 GCF_000309885.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1444 GCF_000011905.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 130 SK: 3\n1445 GCF_000026105.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 128 SK: 3\n1446 GCF_000007805.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1447 GCF_000019485.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 2\n1448 GCF_000224985.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 110 SK: 3\n1449 GCF_000389635.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 132 SK: 3\n1450 GCF_000772105.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 1\n1451 GCF_000227705.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 50 SK: 1\n1452 GCF_000510265.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 31 SK: 2\n1453 GCF_000183665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 96 SK: 2\n1454 GCF_000344785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 3\n1455 GCF_000961095.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 38 SK: 1\n1456 GCF_000341385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 3\n1457 GCF_000014765.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 2\n1458 GCF_000026605.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 46 SK: 2\n1459 GCF_000215975.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 1\n1460 GCF_001042405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 105 SK: 3\n1461 GCF_000754275.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1462 GCF_000340795.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 75 SK: 3\n1463 GCF_000186265.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 89 SK: 3\n1464 GCF_000020685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 3\n1465 GCF_000196395.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n1466 GCF_000199675.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 169 SK: 3\n1467 GCF_000648515.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 40 SK: 1\n1468 GCF_000565195.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 3\n1469 GCF_000007325.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 103 SK: 2\n1470 GCF_000144625.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 133 SK: 3\n1471 GCF_000008165.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 43 SK: 1\n1472 GCF_000828855.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 1\n1473 GCF_000007625.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 2\n1474 GCF_001484065.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 2\n1475 GCF_000317615.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 1\n1476 GCF_000328685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 130 SK: 3\n1477 GCF_000017425.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 1\n1478 GCF_000737865.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 3\n1479 GCF_000175575.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 1\n1480 GCF_001936235.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 115 SK: 2\n1481 GCF_000011705.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 2\n1482 GCF_000725405.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 113 SK: 1\n1483 GCF_000025085.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 1\n1484 GCF_002214645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 3\n1485 GCF_000144695.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 3\n1486 GCF_000017145.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 59 SK: 1\n1487 GCF_001274875.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 156 SK: 3\n1488 GCF_000025405.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 32 SK: 1\n1489 GCF_000195085.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n1490 GCF_001042695.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 1\n1491 GCF_000026405.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1492 GCF_000092845.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1493 GCF_000011305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 88 SK: 2\n1494 GCF_000092365.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 84 SK: 1\n1495 GCF_000145615.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 3\n1496 GCF_000021885.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1497 GCF_001865855.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1498 GCF_000068585.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 2\n1499 GCF_000196515.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 2\n1500 GCF_000008365.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 104 SK: 3\n1501 GCF_000318015.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 2\n1502 GCF_000013185.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 3\n1503 GCF_000230895.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 61 SK: 1\n1504 GCF_000012765.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n1505 GCF_001908275.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 30 SK: 1\n1506 GCF_000152265.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 119 SK: 3\n1507 GCF_000473995.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 38 SK: 3\n1508 GCF_000190535.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 3\n1509 GCF_000940805.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 76 SK: 1\n1510 GCF_000147875.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n1511 GCF_000225345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1512 GCF_000015125.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 32 SK: 2\n1513 GCF_002211765.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 1\n1514 GCF_001553195.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 1\n1515 GCF_001028645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 2\n1516 GCF_000023125.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 2\n1517 GCF_000024105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 34 SK: 2\n1518 GCF_000007365.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 94 SK: 3\n1519 GCF_001274895.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 97 SK: 3\n1520 GCF_000007465.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 48 SK: 3\n1521 GCF_000190575.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 68 SK: 1\n1522 GCF_000385565.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 87 SK: 3\n1523 GCF_000018405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 52 SK: 3\n1524 GCF_001318345.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 92 SK: 2\n1525 GCF_001307195.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 1\n1526 GCF_002222635.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1527 GCF_000021685.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n1528 GCF_000340825.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n1529 GCF_001641285.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1530 GCF_000265465.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 41 SK: 2\n1531 GCF_000021725.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 96 SK: 3\n1532 GCF_001187595.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 3\n1533 GCF_000023825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 93 SK: 2\n1534 GCF_000243155.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 3\n1535 GCF_000590925.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 25 SK: 1\n1536 GCF_000007025.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 86 SK: 3\n1537 GCF_000024005.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1538 GCF_000231015.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 148 SK: 3\n1539 GCF_000832985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 79 SK: 2\n1540 GCF_000026125.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 3\n1541 GCF_000092125.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 131 SK: 2\n1542 GCF_000196735.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 95 SK: 2\n1543 GCF_000212695.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 81 SK: 3\n1544 GCF_002234495.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 3\n1545 GCF_000016745.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 90 SK: 3\n1546 GCF_000737325.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 23 SK: 1\n1547 GCF_000012385.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 2\n1548 GCF_000144405.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 78 SK: 2\n1549 GCF_000511385.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 55 SK: 1\n1550 GCF_000025285.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 2\n1551 GCF_000091785.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 107 SK: 2\n1552 GCF_000284255.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n1553 GCF_000213825.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 77 SK: 2\n1554 GCF_000800475.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 82 SK: 3\n1555 GCF_001189295.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 78 SK: 3\n1556 GCF_000010665.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 137 SK: 3\n1557 GCF_001518815.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 121 SK: 2\n1558 GCF_000833105.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 175 SK: 3\n1559 GCF_000026345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 87 SK: 2\n1560 GCF_000026745.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 1\n1561 GCF_000012345.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 3\n1562 GCF_000011365.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 101 SK: 3\n1563 GCF_000005825.2.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 72 SK: 2\n1564 GCF_000024465.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 60 SK: 2\n1565 GCF_000970025.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 56 SK: 2\n1566 GCF_000024185.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1567 GCF_000153485.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 80 SK: 2\n1568 GCF_000092645.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1569 GCF_000020525.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 32 SK: 2\n1570 GCF_000455605.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 24 SK: 1\n1571 GCF_000262715.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 69 SK: 3\n1572 GCF_000260965.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n1573 GCF_000015585.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 108 SK: 3\n1574 GCF_001655245.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 53 SK: 1\n1575 GCF_000304735.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 85 SK: 2\n1576 GCF_001185205.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 73 SK: 1\n1577 GCF_000270205.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 118 SK: 2\n1578 GCF_000196015.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 44 SK: 2\n1579 GCF_000981765.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 111 SK: 3\n1580 GCF_000196835.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 3\n1581 GCF_001007995.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n1582 GCF_000092265.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 66 SK: 2\n1583 GCF_000025225.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 3\n1584 GCF_001676785.2.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 58 SK: 1\n1585 GCF_000524555.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 100 SK: 3\n1586 GCF_000269985.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 117 SK: 3\n1587 GCF_000014565.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 41 SK: 2\n1588 GCF_001444365.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. 
EX: 57 SK: 3\n1589 GCF_000317795.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 91 SK: 2\n1590 GCF_000236665.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 112 SK: 2\n1591 GCF_000767615.3.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 74 SK: 2\n1592 GCF_000276685.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 57 SK: 3\n1593 GCF_000018425.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 21 SK: 2\n1594 GCF_000013145.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 114 SK: 3\n1595 GCF_001302585.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 71 SK: 3\n1596 GCF_000175095.2.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 65 SK: 2\n1597 GCF_000010625.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 126 SK: 2\n1598 GCF_000017245.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 111 SK: 3\n1599 GCF_002215215.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 46 SK: 3\n1600 GCF_000253275.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 62 SK: 1\n1601 GCF_000224105.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 122 SK: 2\n1602 GCF_001618885.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 63 SK: 2\n1603 GCF_000020145.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 35 SK: 1\n1604 GCF_000973505.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 22 SK: 2\n1605 GCF_000020305.1.RAST\nAdd Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 67 SK: 3\n1606 GCF_000008985.1.RAST\nAdd Sink cpd11416_c0\nSetup Drains. EX: 64 SK: 1\n1607 GCF_001267155.1.RAST\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 116 SK: 2\n1608 GCF_000264455.2.RAST\n"
],
[
"data = {\n 'genome_id' : [],\n 'cobra' : []\n}\nfor genome_id in genomes:\n if not genome_id in exclude:\n print(genome_id)\n model = cobra.io.read_sbml_model('../../data/memote_models/' + genome_id.split('.RAST')[0] + '.xml')\n solution = model.optimize()\n cobra_fba = solution.objective_value\n data['genome_id'].append(genome_id)\n data['cobra'].append(cobra_fba)\ndf = pd.DataFrame(data)\ndf = df.set_index('genome_id')",
"GCF_000143965.1.RAST\nGCF_000027325.1.RAST\nGCF_001050115.1.RAST\nGCF_000023445.1.RAST\nGCF_000240165.1.RAST\nGCF_001578105.1.RAST\nGCF_000829395.1.RAST\nGCF_000246985.2.RAST\nGCF_000024385.1.RAST\nGCF_000069945.1.RAST\nGCF_000626675.1.RAST\nGCF_000022725.1.RAST\nGCF_000012505.1.RAST\nGCF_000019525.1.RAST\nGCF_000832605.1.RAST\nGCF_001653755.1.RAST\nGCF_000014185.1.RAST\nGCF_000195955.2.RAST\nGCF_001507665.1.RAST\nGCF_000803645.1.RAST\nGCF_000018945.1.RAST\nGCF_000021965.1.RAST\nGCF_000009365.1.RAST\nGCF_001543265.1.RAST\nGCF_000016605.1.RAST\nGCF_000737535.1.RAST\nGCF_000277125.1.RAST\nGCF_001190945.1.RAST\nGCF_000026005.1.RAST\nGCF_000025185.1.RAST\nGCF_000219605.1.RAST\nGCF_000015745.1.RAST\nGCF_000747345.1.RAST\nGCF_000317495.1.RAST\nGCF_000340865.1.RAST\nGCF_000238995.1.RAST\nGCF_000013405.1.RAST\nGCF_000012685.1.RAST\nGCF_001553955.1.RAST\nGCF_000019685.1.RAST\nGCF_000179915.2.RAST\nGCF_000247715.1.RAST\nGCF_000835165.1.RAST\nGCF_000012725.1.RAST\nGCF_000025705.1.RAST\nGCF_000317635.1.RAST\nGCF_000018205.1.RAST\nGCF_001611135.1.RAST\nGCF_000970285.1.RAST\nGCF_000024545.1.RAST\nGCF_000025725.1.RAST\nGCF_000300455.3.RAST\nGCF_000024345.1.RAST\nGCF_000800395.1.RAST\nGCF_000019725.1.RAST\nGCF_000008565.1.RAST\nGCF_000525655.1.RAST\nGCF_000237995.1.RAST\nGCF_000328725.1.RAST\nGCF_000020365.1.RAST\nGCF_001262075.1.RAST\nGCF_000093165.1.RAST\nGCF_000237085.1.RAST\nGCF_000017945.1.RAST\nGCF_000196255.1.RAST\nGCF_000163895.2.RAST\nGCF_000271665.2.RAST\nGCF_000069185.1.RAST\nGCF_000177235.2.RAST\nGCF_000463355.1.RAST\nGCF_000812665.2.RAST\nGCF_001021065.1.RAST\nGCF_000284075.1.RAST\nGCF_000024845.1.RAST\nGCF_000512205.2.RAST\nGCF_000093065.1.RAST\nGCF_000830985.1.RAST\nGCF_001547975.1.RAST\nGCF_000281175.1.RAST\nGCF_000164695.2.RAST\nGCF_000499765.1.RAST\nGCF_001447335.1.RAST\nGCF_000010405.1.RAST\nGCF_000204415.1.RAST\nGCF_000006965.1.RAST\nGCF_001685435.2.RAST\nGCF_000159155.2.RAST\nGCF_000025665.1.RAST\nGCF_001618685.1.RAST\nGCF_000759475.1.RAST\nGCF_000347595.1.RAST\nGCF_000242455.2.RAST\nGCF_001006045.1.RAST\nGCF_000730385.1.RAST\nGCF_000284155.1.RAST\nGCF_000470655.1.RAST\nGCF_000816185.1.RAST\nGCF_000317675.1.RAST\nGCF_000012825.1.RAST\nGCF_000020625.1.RAST\nGCF_000348785.1.RAST\nGCF_001514455.1.RAST\nGCF_000204925.1.RAST\nGCF_000347635.1.RAST\nGCF_000013945.1.RAST\nGCF_000007905.1.RAST\nGCF_001046955.1.RAST\nGCF_000723465.1.RAST\nGCF_000021485.1.RAST\nGCF_000970085.1.RAST\nGCF_000014425.1.RAST\nGCF_001682385.1.RAST\nGCF_000021325.1.RAST\nGCF_001020985.1.RAST\nGCF_000008345.1.RAST\nGCF_000006175.1.RAST\nGCF_000970265.1.RAST\nGCF_000196135.1.RAST\nGCF_000063605.1.RAST\nGCF_000217815.1.RAST\nGCF_000829315.1.RAST\nGCF_000504125.1.RAST\nGCF_000953195.1.RAST\nGCF_000193375.1.RAST\nGCF_000214155.1.RAST\nGCF_001922545.1.RAST\nGCF_000012905.2.RAST\nGCF_001021935.1.RAST\nGCF_000023025.1.RAST\nGCF_000027345.1.RAST\nGCF_000508225.1.RAST\nGCF_000185885.1.RAST\nGCF_000418365.1.RAST\nGCF_000092905.1.RAST\nGCF_000725425.1.RAST\nGCF_001499615.1.RAST\nGCF_000019845.1.RAST\nGCF_000341395.1.RAST\nGCF_000331995.1.RAST\nGCF_000953695.1.RAST\nGCF_000091125.1.RAST\nGCF_000015005.1.RAST\nGCF_000063585.1.RAST\nGCF_000024785.1.RAST\nGCF_000186365.1.RAST\nGCF_000218545.1.RAST\nGCF_001314995.1.RAST\nGCF_000463505.1.RAST\nGCF_000008545.1.RAST\nGCF_000949425.1.RAST\nGCF_000304355.2.RAST\nGCF_000023225.1.RAST\nGCF_000017305.1.RAST\nGCF_000328765.2.RAST\nGCF_000266905.1.RAST\nGCF_000005845.2.RAST\nGCF_000968135.1.RAST\nGCF_001267925.1.RAST\nGCF_001953195.1.RAST\nGCF_000015305.1.RAST\nGCF_001941345.1.RAST\nGCF_00009298
5.1.RAST\nGCF_001483385.1.RAST\nGCF_000055785.1.RAST\nGCF_001886695.1.RAST\nGCF_000341355.1.RAST\nGCF_000325665.1.RAST\nGCF_000550785.1.RAST\nGCF_000020905.1.RAST\nGCF_001305655.1.RAST\nGCF_000230955.2.RAST\nGCF_000828475.1.RAST\nGCF_000183725.1.RAST\nGCF_000230695.2.RAST\nGCF_000007565.2.RAST\nGCF_000940845.1.RAST\nGCF_000069925.1.RAST\nGCF_000284315.1.RAST\nGCF_000006865.1.RAST\nGCF_000316605.1.RAST\nGCF_000185965.1.RAST\nGCF_000800805.1.RAST\nGCF_000412695.1.RAST\nGCF_000011065.1.RAST\nGCF_000831645.3.RAST\nGCF_000014825.1.RAST\nGCF_000317105.1.RAST\nGCF_000177535.2.RAST\nGCF_000750535.1.RAST\nGCF_000008745.1.RAST\nGCF_001889105.1.RAST\nGCF_000178975.2.RAST\nGCF_000016285.1.RAST\nGCF_001620305.1.RAST\nGCF_000017845.1.RAST\nGCF_000831005.1.RAST\nGCF_000316515.1.RAST\nGCF_001314225.1.RAST\nGCF_000493735.1.RAST\nGCF_000236685.1.RAST\nGCF_000317475.1.RAST\nGCF_001787335.1.RAST\nGCF_000513295.1.RAST\nGCF_000024945.1.RAST\nGCF_000255115.2.RAST\nGCF_000189415.1.RAST\nGCF_000191585.1.RAST\nGCF_001269425.1.RAST\nGCF_000218625.1.RAST\nGCF_000253395.1.RAST\nGCF_001278035.1.RAST\nGCF_000344805.1.RAST\nGCF_000020565.1.RAST\nGCF_000340885.1.RAST\nGCF_000024765.1.RAST\nGCF_000316665.1.RAST\nGCF_000287355.1.RAST\nGCF_000015825.1.RAST\nGCF_000404225.1.RAST\nGCF_000146185.1.RAST\nGCF_000017165.1.RAST\nGCF_001702175.1.RAST\nGCF_000092025.1.RAST\nGCF_000699505.1.RAST\nGCF_000024565.1.RAST\nGCF_000013985.1.RAST\nGCF_000008465.1.RAST\nGCF_000019965.1.RAST\nGCF_000020785.1.RAST\nGCF_000982825.1.RAST\nGCF_000203855.3.RAST\nGCF_000020385.1.RAST\nGCF_000015265.1.RAST\nGCF_000022085.1.RAST\nGCF_000591055.1.RAST\nGCF_000026505.1.RAST\nGCF_000014805.1.RAST\nGCF_000816085.1.RAST\nGCF_000238255.3.RAST\nGCF_001586215.1.RAST\nGCF_000023785.1.RAST\nGCF_000225465.1.RAST\nGCF_000020165.1.RAST\nGCF_000785495.1.RAST\nGCF_000213255.1.RAST\nGCF_000442645.1.RAST\nGCF_000737575.1.RAST\nGCF_000025345.1.RAST\nGCF_000632985.1.RAST\nGCF_001941805.1.RAST\nGCF_000007005.1.RAST\nGCF_000014245.1.RAST\nGCF_000827125.1.RAST\nGCF_001431725.1.RAST\nGCF_000253175.1.RAST\nGCF_001676765.1.RAST\nGCF_000969645.2.RAST\nGCF_000815025.1.RAST\nGCF_000317835.1.RAST\nGCF_000305785.2.RAST\nGCF_000196315.1.RAST\nGCF_000195915.1.RAST\nGCF_001687565.2.RAST\nGCF_000969965.1.RAST\nGCF_001308105.1.RAST\nGCF_000583875.1.RAST\nGCF_000012965.1.RAST\nGCF_000143085.1.RAST\nGCF_000328625.1.RAST\nGCF_000969905.1.RAST\nGCF_001514435.1.RAST\nGCF_000217715.1.RAST\nGCF_000007605.1.RAST\nGCF_000012005.1.RAST\nGCF_000183425.1.RAST\nGCF_000025605.1.RAST\nGCF_000009045.1.RAST\nGCF_000259275.1.RAST\nGCF_000689415.1.RAST\nGCF_001460635.1.RAST\nGCF_000397205.1.RAST\nGCF_000064305.2.RAST\nGCF_000145275.1.RAST\nGCF_000023325.1.RAST\nGCF_000270305.1.RAST\nGCF_000215705.1.RAST\nGCF_000025885.1.RAST\nGCF_000016085.1.RAST\nGCF_000219535.2.RAST\nGCF_000590475.1.RAST\nGCF_000812185.1.RAST\nGCF_000025925.1.RAST\nGCF_000192845.1.RAST\nGCF_000297055.2.RAST\nGCF_000190555.1.RAST\nGCF_000014225.1.RAST\nGCF_001412615.1.RAST\nGCF_001652465.1.RAST\nGCF_000242335.1.RAST\nGCF_000195335.1.RAST\nGCF_001456315.1.RAST\nGCF_000327505.1.RAST\nGCF_000165905.1.RAST\nGCF_000196615.1.RAST\nGCF_000023705.1.RAST\nGCF_000964565.1.RAST\nGCF_000231385.2.RAST\nGCF_000317695.1.RAST\nGCF_001648175.1.RAST\nGCF_002222615.2.RAST\nGCF_000025305.1.RAST\nGCF_000023205.1.RAST\nGCF_000007265.1.RAST\nGCF_000196655.1.RAST\nGCF_000226315.1.RAST\nGCF_000024025.1.RAST\nGCF_001021085.1.RAST\nGCF_000204135.1.RAST\nGCF_000014025.1.RAST\nGCF_000327045.1.RAST\nGCF_000761155.1.RAST\nGCF_000011465.1.RAST\nGCF_000477435.1.RAST\nGCF
_000215085.1.RAST\nGCF_001412575.1.RAST\nGCF_000011205.1.RAST\nGCF_000478885.1.RAST\nGCF_001542565.1.RAST\nGCF_000179035.2.RAST\nGCF_000695095.2.RAST\nGCF_000009745.1.RAST\nGCF_000016785.1.RAST\nGCF_000204155.1.RAST\nGCF_000091465.1.RAST\nGCF_001457455.1.RAST\nGCF_000974425.1.RAST\nGCF_000260985.3.RAST\nGCF_000306785.1.RAST\nGCF_000731315.1.RAST\nGCF_000376585.1.RAST\nGCF_000439435.1.RAST\nGCF_000212395.1.RAST\nGCF_001189535.1.RAST\nGCF_000023605.1.RAST\nGCF_000953635.1.RAST\nGCF_001558415.1.RAST\nGCF_000184705.1.RAST\nGCF_000195775.1.RAST\nGCF_000758685.1.RAST\nGCF_000967305.2.RAST\nGCF_000017105.1.RAST\nGCF_000828515.1.RAST\nGCF_000698785.1.RAST\nGCF_000196895.1.RAST\nGCF_000816345.1.RAST\nGCF_000969765.1.RAST\nGCF_000011445.1.RAST\nGCF_001190755.1.RAST\nGCF_000255535.1.RAST\nGCF_000016405.1.RAST\nGCF_000734015.1.RAST\nGCF_000253055.1.RAST\nGCF_000016065.1.RAST\nGCF_000833575.1.RAST\nGCF_000017505.1.RAST\nGCF_000316625.1.RAST\nGCF_000590555.1.RAST\nGCF_000010305.1.RAST\nGCF_001483965.1.RAST\nGCF_000013045.1.RAST\nGCF_000165465.1.RAST\nGCF_000739475.1.RAST\nGCF_000017185.1.RAST\nGCF_000010165.1.RAST\nGCF_002240355.1.RAST\nGCF_001275345.1.RAST\nGCF_001278075.1.RAST\nGCF_001729525.1.RAST\nGCF_001262015.1.RAST\nGCF_000176855.2.RAST\nGCF_000021925.1.RAST\nGCF_000025265.1.RAST\nGCF_001644565.1.RAST\nGCF_000014345.1.RAST\nGCF_001444445.1.RAST\nGCF_000010505.1.RAST\nGCF_000013565.1.RAST\nGCF_001586195.1.RAST\nGCF_000265425.1.RAST\nGCF_000016425.1.RAST\nGCF_000009065.1.RAST\nGCF_000550805.1.RAST\nGCF_000632845.1.RAST\nGCF_000300295.4.RAST\nGCF_000242255.2.RAST\nGCF_000477415.1.RAST\nGCF_000739455.1.RAST\nGCF_000016765.1.RAST\nGCF_001482365.1.RAST\nGCF_000018545.1.RAST\nGCF_000023905.1.RAST\nGCF_000695235.1.RAST\nGCF_001908775.1.RAST\nGCF_000010065.1.RAST\nGCF_000017545.1.RAST\nGCF_000006845.1.RAST\nGCF_000025125.1.RAST\nGCF_000092465.1.RAST\nGCF_000007645.1.RAST\nGCF_000195935.2.RAST\nGCF_000828835.1.RAST\nGCF_000237065.1.RAST\nGCF_000020485.1.RAST\nGCF_000214215.1.RAST\nGCF_000507245.1.RAST\nGCF_000007765.1.RAST\nGCF_000511405.1.RAST\nGCF_000319575.2.RAST\nGCF_000212375.1.RAST\nGCF_000521565.1.RAST\nGCF_000012925.1.RAST\nGCF_000014145.1.RAST\nGCF_000014745.1.RAST\nGCF_000317935.1.RAST\nGCF_000941055.1.RAST\nGCF_000183745.1.RAST\nGCF_000022045.1.RAST\nGCF_000006925.2.RAST\nGCF_000010985.1.RAST\nGCF_000525675.1.RAST\nGCF_900086555.1.RAST\nGCF_000009865.1.RAST\nGCF_000007745.1.RAST\nGCF_000178835.2.RAST\nGCF_000742835.1.RAST\nGCF_000007825.1.RAST\nGCF_000008325.1.RAST\nGCF_000023285.1.RAST\nGCF_000013705.1.RAST\nGCF_000226295.1.RAST\nGCF_000191545.1.RAST\nGCF_000754265.1.RAST\nGCF_000195295.1.RAST\nGCF_000987835.1.RAST\nGCF_001456215.1.RAST\nGCF_000024225.1.RAST\nGCF_000284035.1.RAST\nGCF_000010825.1.RAST\nGCF_000802245.2.RAST\nGCF_000445425.4.RAST\nGCF_000512735.1.RAST\nGCF_001314305.1.RAST\nGCF_000300075.1.RAST\nGCF_000011645.1.RAST\nGCF_000632805.1.RAST\nGCF_000214665.1.RAST\nGCF_000325745.1.RAST\nGCF_001027545.1.RAST\nGCF_000832905.1.RAST\nGCF_001578205.1.RAST\nGCF_000317975.2.RAST\nGCF_000008505.1.RAST\nGCF_000012845.1.RAST\nGCF_000013005.1.RAST\nGCF_001543285.1.RAST\nGCF_001558255.1.RAST\nGCF_000092225.1.RAST\nGCF_000011745.1.RAST\nGCF_000248095.2.RAST\nGCF_000312705.1.RAST\nGCF_001274535.1.RAST\nGCF_000016985.1.RAST\nGCF_000166415.1.RAST\nGCF_000220645.1.RAST\nGCF_000521505.1.RAST\nGCF_000011965.2.RAST\nGCF_001262035.1.RAST\nGCF_000017705.1.RAST\nGCF_001010285.1.RAST\nGCF_000746585.1.RAST\nGCF_000213805.1.RAST\nGCF_000010425.1.RAST\nGCF_000211855.2.RAST\nGCF_000833025.1.RAST\nGCF_002075285.2.
RAST\nGCF_000299235.1.RAST\nGCF_000330885.1.RAST\nGCF_000056065.1.RAST\nGCF_000022565.1.RAST\nGCF_000217795.1.RAST\nGCF_000284615.1.RAST\nGCF_000007145.1.RAST\nGCF_001314325.1.RAST\nGCF_000014265.1.RAST\nGCF_000723425.2.RAST\nGCF_000009085.1.RAST\nGCF_000341345.1.RAST\nGCF_001499655.1.RAST\nGCF_000014905.1.RAST\nGCF_000013765.1.RAST\nGCF_001955715.1.RAST\nGCF_000183135.1.RAST\nGCF_001606005.1.RAST\nGCF_000013165.1.RAST\nGCF_000017045.1.RAST\nGCF_000214785.1.RAST\nGCF_000014785.1.RAST\nGCF_000612685.1.RAST\nGCF_001660045.1.RAST\nGCF_000014585.1.RAST\nGCF_001307545.1.RAST\nGCF_000172995.2.RAST\nGCF_000784965.1.RAST\nGCF_000376545.2.RAST\nGCF_000008625.1.RAST\nGCF_000024405.1.RAST\nGCF_000224005.2.RAST\nGCF_000259255.1.RAST\nGCF_000022385.1.RAST\nGCF_000953655.1.RAST\nGCF_000972865.1.RAST\nGCF_000284295.1.RAST\nGCF_000969885.1.RAST\nGCF_000217635.1.RAST\nGCF_000219805.1.RAST\nGCF_000009725.1.RAST\nGCF_000299115.1.RAST\nGCF_000212415.1.RAST\nGCF_000404165.1.RAST\nGCF_000758725.1.RAST\nGCF_000195855.1.RAST\nGCF_000935025.1.RAST\nGCF_000021805.1.RAST\nGCF_000724625.1.RAST\nGCF_000165715.2.RAST\nGCF_000025485.1.RAST\nGCF_000016325.1.RAST\nGCF_000304455.1.RAST\nGCF_000015225.1.RAST\nGCF_000204645.1.RAST\nGCF_001305595.1.RAST\nGCF_000015565.1.RAST\nGCF_000046705.1.RAST\nGCF_000157355.2.RAST\nGCF_000006685.1.RAST\nGCF_000299455.1.RAST\nGCF_000014045.1.RAST\nGCF_000021745.1.RAST\nGCF_000953135.1.RAST\nGCF_000020985.1.RAST\nGCF_000147355.1.RAST\nGCF_000725365.1.RAST\nGCF_000311765.1.RAST\nGCF_000012245.1.RAST\nGCF_000025205.1.RAST\nGCF_001020955.1.RAST\nGCF_000819445.1.RAST\nGCF_001547995.1.RAST\nGCF_001028625.1.RAST\nGCF_000317855.1.RAST\nGCF_000154785.2.RAST\nGCF_000266885.1.RAST\nGCF_001880325.1.RAST\nGCF_000732945.1.RAST\nGCF_000317065.1.RAST\nGCF_000010285.1.RAST\nGCF_000158275.2.RAST\nGCF_000017265.1.RAST\nGCF_001605965.1.RAST\nGCF_000513475.1.RAST\nGCF_001709315.1.RAST\nGCF_000785105.2.RAST\nGCF_000196095.1.RAST\nGCF_000013025.1.RAST\nGCF_000306725.1.RAST\nGCF_000235405.2.RAST\nGCF_000022365.1.RAST\nGCF_000816145.1.RAST\nGCF_000019785.1.RAST\nGCF_000008925.1.RAST\nGCF_000349845.1.RAST\nGCF_000018885.1.RAST\nGCF_000006605.1.RAST\nGCF_001482385.1.RAST\nGCF_001688625.1.RAST\nGCF_000730245.1.RAST\nGCF_000500935.1.RAST\nGCF_000191045.1.RAST\nGCF_000389965.1.RAST\nGCF_000146165.2.RAST\nGCF_000022325.1.RAST\nGCF_000144645.1.RAST\nGCF_000283575.1.RAST\nGCF_000941075.1.RAST\nGCF_000235605.1.RAST\nGCF_000266945.1.RAST\nGCF_001010805.1.RAST\nGCF_000612485.1.RAST\nGCF_000225325.1.RAST\nGCF_000204565.1.RAST\nGCF_000307165.1.RAST\nGCF_000195755.1.RAST\nGCF_000830005.1.RAST\nGCF_000013085.1.RAST\nGCF_000196435.1.RAST\nGCF_000306885.1.RAST\nGCF_000317125.1.RAST\nGCF_001688625.2.RAST\nGCF_000801295.1.RAST\nGCF_001483845.1.RAST\nGCF_000182745.2.RAST\nGCF_000332735.1.RAST\nGCF_000196295.1.RAST\nGCF_000025685.1.RAST\nGCF_000016165.1.RAST\nGCF_000190315.1.RAST\nGCF_000196535.1.RAST\nGCF_000222305.1.RAST\nGCF_000223905.1.RAST\nGCF_000067205.1.RAST\nGCF_000058485.1.RAST\nGCF_000298115.2.RAST\nGCF_000219725.1.RAST\nGCF_000706685.1.RAST\nGCF_000661895.1.RAST\nGCF_000023925.1.RAST\nGCF_001439585.2.RAST\nGCF_000020965.1.RAST\nGCF_000018025.1.RAST\nGCF_000022265.1.RAST\nGCF_000186885.1.RAST\nGCF_000619905.2.RAST\nGCF_000013105.1.RAST\nGCF_000265405.1.RAST\nGCF_000454045.1.RAST\nGCF_000214725.1.RAST\nGCF_000024605.1.RAST\nGCF_000210915.2.RAST\nGCF_002211785.1.RAST\nGCF_000024205.1.RAST\nGCF_000968175.1.RAST\nGCF_000007085.1.RAST\nGCF_000092385.1.RAST\nGCF_000511305.1.RAST\nGCF_000970205.1.RAST\nGCF_001025175.1.RAST\nGCF_000
195875.1.RAST\nGCF_000317085.1.RAST\nGCF_000184345.1.RAST\nGCF_000023105.1.RAST\nGCF_001606025.1.RAST\nGCF_000023945.1.RAST\nGCF_001562115.1.RAST\nGCF_001028705.1.RAST\nGCF_000012545.1.RAST\nGCF_000008645.1.RAST\nGCF_000981545.1.RAST\nGCF_000250635.1.RAST\nGCF_000007725.1.RAST\nGCF_001420915.1.RAST\nGCF_000306765.2.RAST\nGCF_000008305.1.RAST\nGCF_000227685.2.RAST\nGCF_001013905.1.RAST\nGCF_000252855.1.RAST\nGCF_000284235.1.RAST\nGCF_000376645.1.RAST\nGCF_000327485.1.RAST\nGCF_000737595.1.RAST\nGCF_000195995.1.RAST\nGCF_000024065.1.RAST\nGCF_000829415.1.RAST\nGCF_000018325.1.RAST\nGCF_000011545.1.RAST\nGCF_000183405.1.RAST\nGCF_000212675.2.RAST\nGCF_000512915.1.RAST\nGCF_000263735.1.RAST\nGCF_001483865.1.RAST\nGCF_000347695.1.RAST\nGCF_000177635.2.RAST\nGCF_000270285.1.RAST\nGCF_000007385.1.RAST\nGCF_000190735.1.RAST\nGCF_000940995.1.RAST\nGCF_001025155.1.RAST\nGCF_000179635.2.RAST\nGCF_001411495.1.RAST\nGCF_000213215.1.RAST\nGCF_000146505.1.RAST\nGCF_000196455.1.RAST\nGCF_001022135.1.RAST\nGCF_000164865.1.RAST\nGCF_000993805.1.RAST\nGCF_000013745.1.RAST\nGCF_000015885.1.RAST\nGCF_000801275.2.RAST\nGCF_000020645.1.RAST\nGCF_000018865.1.RAST\nGCF_000177615.2.RAST\nGCF_000144605.1.RAST\nGCF_000829195.1.RAST\nGCF_000020005.1.RAST\nGCF_000600005.1.RAST\nGCF_000747315.1.RAST\nGCF_001559115.1.RAST\nGCF_000967425.1.RAST\nGCF_000283655.1.RAST\nGCF_000444875.1.RAST\nGCF_000009345.1.RAST\nGCF_000025625.1.RAST\nGCF_000520015.2.RAST\nGCF_000511355.1.RAST\nGCF_000203835.1.RAST\nGCF_000785705.2.RAST\nGCF_000025845.1.RAST\nGCF_000021285.1.RAST\nGCF_000007045.1.RAST\nGCF_001273775.1.RAST\nGCF_000565175.1.RAST\nGCF_000010145.1.RAST\nGCF_000166095.1.RAST\nGCF_001558935.1.RAST\nGCF_000015445.1.RAST\nGCF_000018145.1.RAST\nGCF_000026325.1.RAST\nGCF_000828895.1.RAST\nGCF_001594265.1.RAST\nGCF_000024925.1.RAST\nGCF_001444405.1.RAST\nGCF_000012865.1.RAST\nGCF_000277795.1.RAST\nGCF_000240185.1.RAST\nGCF_000012085.2.RAST\nGCF_000512355.1.RAST\nGCF_000015405.1.RAST\nGCF_001729485.1.RAST\nGCF_001705565.1.RAST\nGCF_900093775.1.RAST\nGCF_000014725.1.RAST\nGCF_000196815.1.RAST\nGCF_000021565.1.RAST\nGCF_000006985.1.RAST\nGCF_000189775.2.RAST\nGCF_000764555.1.RAST\nGCF_000008805.1.RAST\nGCF_000828635.1.RAST\nGCF_000024865.1.RAST\nGCF_000022525.1.RAST\nGCF_000219355.1.RAST\nGCF_000747525.1.RAST\nGCF_000734895.2.RAST\nGCF_001077715.1.RAST\nGCF_001005905.1.RAST\nGCF_001011035.1.RAST\nGCF_000223395.1.RAST\nGCF_001310255.1.RAST\nGCF_000014705.1.RAST\nGCF_000209655.1.RAST\nGCF_001038625.1.RAST\nGCF_000194135.1.RAST\nGCF_000147835.2.RAST\nGCF_000022125.1.RAST\nGCF_000196115.1.RAST\nGCF_000015645.1.RAST\nGCF_000166055.1.RAST\nGCF_000021025.1.RAST\nGCF_001314975.1.RAST\nGCF_000024905.1.RAST\nGCF_000019945.1.RAST\nGCF_000724485.1.RAST\nGCF_000807275.1.RAST\nGCF_000496595.1.RAST\nGCF_000013885.1.RAST\nGCF_000155735.2.RAST\nGCF_000565215.1.RAST\nGCF_000175215.2.RAST\nGCF_000237865.1.RAST\nGCF_000214095.2.RAST\nGCF_000253035.1.RAST\nGCF_000012745.1.RAST\nGCF_000020545.1.RAST\nGCF_000023885.1.RAST\nGCF_000813245.1.RAST\nGCF_000242935.2.RAST\nGCF_001444425.1.RAST\nGCF_000014205.1.RAST\nGCF_000014125.1.RAST\nGCF_000023465.1.RAST\nGCF_000013125.1.RAST\nGCF_000012985.1.RAST\nGCF_000023145.1.RAST\nGCF_000328565.1.RAST\nGCF_000255135.1.RAST\nGCF_001021385.1.RAST\nGCF_000013665.1.RAST\nGCF_000224085.1.RAST\nGCF_001454945.1.RAST\nGCF_000194605.1.RAST\nGCF_000019225.1.RAST\nGCF_000145255.1.RAST\nGCF_000217675.1.RAST\nGCF_000508245.1.RAST\nGCF_000470775.1.RAST\nGCF_000016545.1.RAST\nGCF_000190595.1.RAST\nGCF_000145035.1.RAST\nGCF_001008165.2.RAST
\nGCF_000176915.2.RAST\nGCF_000008045.1.RAST\nGCF_000015245.1.RAST\nGCF_000015505.1.RAST\nGCF_000012225.1.RAST\nGCF_000473245.1.RAST\nGCF_000478905.1.RAST\nGCF_000013205.1.RAST\nGCF_002073715.2.RAST\nGCF_000265365.1.RAST\nGCF_000024085.1.RAST\nGCF_001281385.1.RAST\nGCF_000218895.1.RAST\nGCF_000222485.1.RAST\nGCF_000010605.1.RAST\nGCF_000016565.1.RAST\nGCF_000191145.1.RAST\nGCF_000269945.1.RAST\nGCF_000092045.1.RAST\nGCF_000018685.1.RAST\nGCF_001412595.1.RAST\nGCF_000195275.1.RAST\nGCF_000006785.2.RAST\nGCF_000007125.1.RAST\nGCF_000319385.1.RAST\nGCF_000017565.1.RAST\nGCF_000227665.2.RAST\nGCF_001553935.1.RAST\nGCF_000011325.1.RAST\nGCF_000403645.1.RAST\nGCF_000400955.1.RAST\nGCF_001042595.1.RAST\nGCF_001262715.1.RAST\nGCF_000012465.1.RAST\nGCF_000145235.1.RAST\nGCF_000471025.2.RAST\nGCF_000151105.2.RAST\nGCF_000755145.1.RAST\nGCF_000300005.1.RAST\nGCF_000013445.1.RAST\nGCF_000024965.1.RAST\nGCF_000195735.1.RAST\nGCF_000815185.1.RAST\nGCF_000209675.1.RAST\nGCF_001586165.1.RAST\nGCF_000287335.1.RAST\nGCF_000737515.1.RAST\nGCF_000968055.1.RAST\nGCF_000270145.1.RAST\nGCF_000233715.2.RAST\nGCF_000255295.1.RAST\nGCF_000697965.2.RAST\nGCF_001305615.1.RAST\nGCF_000010645.1.RAST\nGCF_000153165.2.RAST\nGCF_000012805.1.RAST\nGCF_000283615.1.RAST\nGCF_001050475.1.RAST\nGCF_000816845.1.RAST\nGCF_001767275.1.RAST\nGCF_000146065.2.RAST\nGCF_000009605.1.RAST\nGCF_001182745.1.RAST\nGCF_000008765.1.RAST\nGCF_000195835.1.RAST\nGCF_001190745.1.RAST\nGCF_900116045.1.RAST\nGCF_000024125.1.RAST\nGCF_000332115.1.RAST\nGCF_000020205.1.RAST\nGCF_000014285.1.RAST\nGCF_000355765.4.RAST\nGCF_000023265.1.RAST\nGCF_000300295.3.RAST\nGCF_000967895.1.RAST\nGCF_000012565.1.RAST\nGCF_000011225.1.RAST\nGCF_000828715.1.RAST\nGCF_000230555.1.RAST\nGCF_001700965.1.RAST\nGCF_000968535.2.RAST\nGCF_000189295.2.RAST\nGCF_000284375.1.RAST\nGCF_000233775.1.RAST\nGCF_000027145.1.RAST\nGCF_001023575.1.RAST\nGCF_000021385.1.RAST\nGCF_000202635.1.RAST\nGCF_000196035.1.RAST\nGCF_001458475.1.RAST\nGCF_000237145.1.RAST\nGCF_000196275.1.RAST\nGCF_000331735.1.RAST\nGCF_000195315.1.RAST\nGCF_001189495.1.RAST\nGCF_001078275.1.RAST\nGCF_001050435.1.RAST\nGCF_000242895.2.RAST\nGCF_000789255.1.RAST\nGCF_001442535.1.RAST\nGCF_001685465.1.RAST\nGCF_000251105.1.RAST\nGCF_000024725.1.RAST\nGCF_001267175.1.RAST\nGCF_001708485.1.RAST\nGCF_000316575.1.RAST\nGCF_000444995.1.RAST\nGCF_001011155.1.RAST\nGCF_000980835.1.RAST\nGCF_000025505.1.RAST\nGCF_000215745.1.RAST\nGCF_000018285.1.RAST\nGCF_001281045.1.RAST\nGCF_001652565.1.RAST\nGCF_000006765.1.RAST\nGCF_001314945.1.RAST\nGCF_001017435.1.RAST\nGCF_000215995.1.RAST\nGCF_001878675.1.RAST\nGCF_001553605.1.RAST\nGCF_000236405.1.RAST\nGCF_000447675.1.RAST\nGCF_000208405.1.RAST\nGCF_000801315.1.RAST\nGCF_001267865.1.RAST\nGCF_000494755.1.RAST\nGCF_000090965.1.RAST\nGCF_000015665.1.RAST\nGCF_000015345.1.RAST\nGCF_000192865.1.RAST\nGCF_000012305.1.RAST\nGCF_001686985.1.RAST\nGCF_000147335.1.RAST\nGCF_000317575.1.RAST\nGCF_000246855.1.RAST\nGCF_000020065.1.RAST\nGCF_000757785.1.RAST\nGCF_000152245.2.RAST\nGCF_001652485.1.RAST\nGCF_000179395.2.RAST\nGCF_001610955.1.RAST\nGCF_000763575.1.RAST\nGCF_000016825.1.RAST\nGCF_000011385.1.RAST\nGCF_000960975.1.RAST\nGCF_000013425.1.RAST\nGCF_000008725.1.RAST\nGCF_000219105.1.RAST\nGCF_000092405.1.RAST\nGCF_000013865.1.RAST\nGCF_000299335.2.RAST\nGCF_000011345.1.RAST\nGCF_001698225.1.RAST\nGCF_000185805.1.RAST\nGCF_000014465.1.RAST\nGCF_000294715.1.RAST\nGCF_000015725.1.RAST\nGCF_000008205.1.RAST\nGCF_000970045.1.RAST\nGCF_000012605.1.RAST\nGCF_000243115.2.RAST\nGCF_0008150
65.1.RAST\nGCF_001029265.1.RAST\nGCF_000026065.1.RAST\nGCF_000831485.1.RAST\nGCF_000585495.1.RAST\nGCF_000092505.1.RAST\nGCF_000011185.1.RAST\nGCF_000026045.1.RAST\nGCF_000769655.1.RAST\nGCF_000025945.1.RAST\nGCF_000317025.1.RAST\nGCF_001007935.1.RAST\nGCF_001021045.1.RAST\nGCF_001484935.1.RAST\nGCF_000328705.1.RAST\nGCF_000950575.1.RAST\nGCF_000218875.1.RAST\nGCF_001078055.1.RAST\nGCF_000184435.1.RAST\nGCF_000024285.1.RAST\nGCF_000092105.1.RAST\nGCF_000325705.1.RAST\nGCF_000007485.1.RAST\nGCF_000010185.1.RAST\nGCF_000147695.2.RAST\nGCF_000970305.1.RAST\nGCF_001263175.1.RAST\nGCF_000196235.1.RAST\nGCF_000577895.1.RAST\nGCF_001027285.1.RAST\nGCF_000013725.1.RAST\nGCF_000484535.1.RAST\nGCF_000743945.1.RAST\nGCF_001308145.2.RAST\nGCF_000153405.2.RAST\nGCF_000016645.1.RAST\nGCF_001593245.1.RAST\nGCF_000439455.1.RAST\nGCF_000143145.1.RAST\nGCF_000019405.1.RAST\nGCF_000147055.1.RAST\nGCF_000196855.1.RAST\nGCF_000012885.1.RAST\nGCF_000724605.1.RAST\nGCF_000014505.1.RAST\nGCF_001705175.1.RAST\nGCF_000240075.2.RAST\nGCF_000055945.1.RAST\nGCF_000018365.1.RAST\nGCF_000981505.1.RAST\nGCF_000250675.2.RAST\nGCF_000186345.1.RAST\nGCF_001941945.1.RAST\nGCF_000178955.2.RAST\nGCF_001583415.1.RAST\nGCF_001572725.1.RAST\nGCF_000019665.1.RAST\nGCF_000517605.1.RAST\nGCF_000612055.1.RAST\nGCF_000441555.1.RAST\nGCF_001042635.1.RAST\nGCF_000982715.1.RAST\nGCF_000010525.1.RAST\nGCF_000767275.2.RAST\nGCF_900011245.1.RAST\nGCF_000422165.1.RAST\nGCF_000953015.1.RAST\nGCF_000016665.1.RAST\nGCF_000022005.1.RAST\nGCF_000495935.2.RAST\nGCF_000015865.1.RAST\nGCF_000092865.1.RAST\nGCF_001442785.1.RAST\nGCF_000062885.1.RAST\nGCF_000018665.1.RAST\nGCF_001642655.1.RAST\nGCF_000144675.1.RAST\nGCF_000172635.2.RAST\nGCF_000143845.1.RAST\nGCF_000018605.1.RAST\nGCF_000468615.2.RAST\nGCF_000091325.1.RAST\nGCF_000009905.1.RAST\nGCF_000012665.1.RAST\nGCF_000092305.1.RAST\nGCF_000875755.1.RAST\nGCF_000147815.2.RAST\nGCF_000024365.1.RAST\nGCF_001043175.1.RAST\nGCF_000018265.1.RAST\nGCF_000007765.2.RAST\nGCF_000736415.1.RAST\nGCF_000968195.1.RAST\nGCF_000972245.3.RAST\nGCF_000009965.1.RAST\nGCF_000473305.1.RAST\nGCF_900087055.1.RAST\nGCF_000010325.1.RAST\nGCF_000025905.1.RAST\nGCF_000015285.1.RAST\nGCF_000317515.1.RAST\nGCF_001187845.1.RAST\nGCF_000183385.1.RAST\nGCF_000972765.1.RAST\nGCF_001511815.1.RAST\nGCF_000981525.1.RAST\nGCF_000355675.1.RAST\nGCF_000626635.1.RAST\nGCF_000270245.1.RAST\nGCF_000828815.1.RAST\nGCF_000233435.1.RAST\nGCF_000019285.1.RAST\nGCF_001420855.1.RAST\nGCF_000013365.1.RAST\nGCF_000195715.1.RAST\nGCF_000746645.1.RAST\nGCF_000013225.1.RAST\nGCF_000304215.1.RAST\nGCF_000091545.1.RAST\nGCF_000764535.1.RAST\nGCF_000010785.1.RAST\nGCF_000233915.3.RAST\nGCF_000166295.1.RAST\nGCF_000021865.1.RAST\nGCF_000970325.1.RAST\nGCF_000022905.1.RAST\nGCF_000023725.1.RAST\nGCF_000817955.1.RAST\nGCF_000092185.1.RAST\nGCF_001280225.1.RAST\nGCF_000756615.1.RAST\nGCF_000019365.1.RAST\nGCF_000017405.1.RAST\nGCF_000321415.2.RAST\nGCF_001465835.2.RAST\nGCF_000226565.1.RAST\nGCF_000230715.2.RAST\nGCF_000568815.1.RAST\nGCF_000022065.1.RAST\nGCF_000092925.1.RAST\nGCF_001677275.1.RAST\nGCF_000247605.1.RAST\nGCF_001477655.1.RAST\nGCF_000219585.1.RAST\nGCF_001922025.1.RAST\nGCF_000010085.1.RAST\nGCF_000484505.1.RAST\nGCF_000264495.1.RAST\nGCF_000007985.2.RAST\nGCF_000023845.1.RAST\nGCF_000006905.1.RAST\nGCF_001315015.1.RAST\nGCF_000953535.1.RAST\nGCF_000993785.2.RAST\nGCF_000953215.1.RAST\nGCF_000021045.1.RAST\nGCF_000196675.1.RAST\nGCF_001421015.2.RAST\nGCF_001548015.1.RAST\nGCF_000299355.1.RAST\nGCF_000237205.1.RAST\nGCF_000184685.1.RAST\nGC
F_000346595.1.RAST\nGCF_000789395.1.RAST\nGCF_000828615.1.RAST\nGCF_000019165.1.RAST\nGCF_000011985.1.RAST\nGCF_001263395.1.RAST\nGCF_000166395.1.RAST\nGCF_000015945.1.RAST\nGCF_000959245.1.RAST\nGCF_000348805.1.RAST\nGCF_000963865.1.RAST\nGCF_000973545.1.RAST\nGCF_000092585.1.RAST\nGCF_000011105.1.RAST\nGCF_000328665.1.RAST\nGCF_000471965.1.RAST\nGCF_000284415.1.RAST\nGCF_000512895.1.RAST\nGCF_000018305.1.RAST\nGCF_000023245.1.RAST\nGCF_001281465.1.RAST\nGCF_000238215.1.RAST\nGCF_000305935.1.RAST\nGCF_000008885.1.RAST\nGCF_000093025.1.RAST\nGCF_000178875.2.RAST\nGCF_000184745.1.RAST\nGCF_000008385.1.RAST\nGCF_000025565.1.RAST\nGCF_000204255.1.RAST\nGCF_001040945.1.RAST\nGCF_000020025.1.RAST\nGCF_000313635.1.RAST\nGCF_000012425.1.RAST\nGCF_000006725.1.RAST\nGCF_000009125.1.RAST\nGCF_000732925.1.RAST\nGCF_000195975.1.RAST\nGCF_000014005.1.RAST\nGCF_000265295.1.RAST\nGCF_000023965.1.RAST\nGCF_000023585.1.RAST\nGCF_000009145.1.RAST\nGCF_000017865.1.RAST\nGCF_000092205.1.RAST\nGCF_000307105.1.RAST\nGCF_000023985.1.RAST\nGCF_000696485.1.RAST\nGCF_000091305.1.RAST\nGCF_000009825.1.RAST\nGCF_001548075.1.RAST\nGCF_000026205.1.RAST\nGCF_000020725.1.RAST\nGCF_000279145.1.RAST\nGCF_001051995.2.RAST\nGCF_000196355.1.RAST\nGCF_000340435.2.RAST\nGCF_000196695.1.RAST\nGCF_001278055.1.RAST\nGCF_001902315.1.RAST\nGCF_000020945.1.RAST\nGCF_000317045.1.RAST\nGCF_000400935.1.RAST\nGCF_000953735.1.RAST\nGCF_000517425.1.RAST\nGCF_000023865.1.RAST\nGCF_000008665.1.RAST\nGCF_000011805.1.RAST\nGCF_000008865.1.RAST\nGCF_000236705.1.RAST\nGCF_000767055.1.RAST\nGCF_000828675.1.RAST\nGCF_000973725.1.RAST\nGCF_000739085.1.RAST\nGCF_001610975.1.RAST\nGCF_000599985.1.RAST\nGCF_000164985.3.RAST\nGCF_001548055.1.RAST\nGCF_000230735.2.RAST\nGCF_000007845.1.RAST\nGCF_000021985.1.RAST\nGCF_000008525.1.RAST\nGCF_000010125.1.RAST\nGCF_000009985.1.RAST\nGCF_000022605.2.RAST\nGCF_000224475.1.RAST\nGCF_000550765.1.RAST\nGCF_000024505.1.RAST\nGCF_000284095.1.RAST\nGCF_000334405.1.RAST\nGCF_001887595.1.RAST\nGCF_000017685.1.RAST\nGCF_000767465.1.RAST\nGCF_000025645.1.RAST\nGCF_000022205.1.RAST\nGCF_000021645.1.RAST\nGCF_000011245.1.RAST\nGCF_000830885.1.RAST\nGCF_000018105.1.RAST\nGCF_001698145.1.RAST\nGCF_000025325.1.RAST\nGCF_000503895.1.RAST\nGCF_000011025.1.RAST\nGCF_000012645.1.RAST\nGCF_000014865.1.RAST\nGCF_001693385.1.RAST\nGCF_000190435.1.RAST\nGCF_000283515.1.RAST\nGCF_000354175.2.RAST\nGCF_000015025.1.RAST\nGCF_000012325.1.RAST\nGCF_000072485.1.RAST\nGCF_000023745.1.RAST\nGCF_001750785.1.RAST\nGCF_000258405.1.RAST\nGCF_001578185.1.RAST\nGCF_000007945.1.RAST\nGCF_000183345.1.RAST\nGCF_000214825.1.RAST\nGCF_000967915.1.RAST\nGCF_000828655.1.RAST\nGCF_000313175.2.RAST\nGCF_000265385.1.RAST\nGCF_000157895.3.RAST\nGCF_001941825.1.RAST\nGCF_000218565.1.RAST\nGCF_000243135.2.RAST\nGCF_000015045.1.RAST\nGCF_000092245.1.RAST\nGCF_000093085.1.RAST\nGCF_000015205.1.RAST\nGCF_001443605.1.RAST\nGCF_000227745.2.RAST\nGCF_000069025.1.RAST\nGCF_000340905.1.RAST\nGCF_000007525.1.RAST\nGCF_001465295.1.RAST\nGCF_000017005.1.RAST\nGCF_000242635.2.RAST\nGCF_000008025.1.RAST\nGCF_000829035.1.RAST\nGCF_000092785.1.RAST\nGCF_000007785.1.RAST\nGCF_000016185.1.RAST\nGCF_000724775.3.RAST\nGCF_000018785.1.RAST\nGCF_000350305.1.RAST\nGCF_000025005.1.RAST\nGCF_000973105.1.RAST\nGCF_001597285.1.RAST\nGCF_000011685.1.RAST\nGCF_000012485.1.RAST\nGCF_000024265.1.RAST\nGCF_001042715.1.RAST\nGCF_000085865.1.RAST\nGCF_000143165.1.RAST\nGCF_000300255.2.RAST\nGCF_001887285.1.RAST\nGCF_000091565.1.RAST\nGCF_000007865.1.RAST\nGCF_000012145.1.RAST\nGCF_000164675.2
.RAST\nGCF_000525635.1.RAST\nGCF_000063545.1.RAST\nGCF_000186985.1.RAST\nGCF_000014385.1.RAST\nGCF_000023065.1.RAST\nGCF_000202835.1.RAST\nGCF_001042675.1.RAST\nGCF_000220625.1.RAST\nGCF_000214355.1.RAST\nGCF_000092825.1.RAST\nGCF_000250655.1.RAST\nGCF_000185905.1.RAST\nGCF_000027225.1.RAST\nGCF_000180175.2.RAST\nGCF_000299095.1.RAST\nGCF_001457475.1.RAST\nGCF_001021975.1.RAST\nGCF_000014525.1.RAST\nGCF_000230275.1.RAST\nGCF_000015145.1.RAST\nGCF_000009765.2.RAST\nGCF_000523235.1.RAST\nGCF_000024985.1.RAST\nGCF_000012285.1.RAST\nGCF_000738435.1.RAST\nGCF_000317305.3.RAST\nGCF_000165505.1.RAST\nGCF_001750165.1.RAST\nGCF_000152825.2.RAST\nGCF_000063525.1.RAST\nGCF_000196495.1.RAST\nGCF_000196175.1.RAST\nGCF_000019345.1.RAST\nGCF_000174395.2.RAST\nGCF_000027165.1.RAST\nGCF_001854245.1.RAST\nGCF_000015105.1.RAST\nGCF_000148645.1.RAST\nGCF_000022965.1.RAST\nGCF_000284335.1.RAST\nGCF_000144915.1.RAST\nGCF_000024325.1.RAST\nGCF_000214435.1.RAST\nGCF_000024625.1.RAST\nGCF_001693675.1.RAST\nGCF_001477625.1.RAST\nGCF_001951175.1.RAST\nGCF_000283555.1.RAST\nGCF_000294775.2.RAST\nGCF_000828975.1.RAST\nGCF_001543175.1.RAST\nGCF_000953435.1.RAST\nGCF_000020505.1.RAST\nGCF_000019745.1.RAST\nGCF_000025865.1.RAST\nGCF_000186245.1.RAST\nGCF_001021025.1.RAST\nGCF_001402875.1.RAST\nGCF_001421015.1.RAST\nGCF_001191005.1.RAST\nGCF_000063505.1.RAST\nGCF_000231405.2.RAST\nGCF_000009845.1.RAST\nGCF_001456065.2.RAST\nGCF_000973085.1.RAST\nGCF_000017585.1.RAST\nGCF_000178115.2.RAST\nGCF_000260985.4.RAST\nGCF_000348725.1.RAST\nGCF_000818035.1.RAST\nGCF_001636015.1.RAST\nGCF_000283595.1.RAST\nGCF_000208385.1.RAST\nGCF_001277995.1.RAST\nGCF_000807255.1.RAST\nGCF_000300235.2.RAST\nGCF_000214415.1.RAST\nGCF_000194625.1.RAST\nGCF_000185985.2.RAST\nGCF_000014885.1.RAST\nGCF_001488575.1.RAST\nGCF_000252445.1.RAST\nGCF_000193395.1.RAST\nGCF_000219045.1.RAST\nGCF_000294515.1.RAST\nGCF_000009705.1.RAST\nGCF_000013285.1.RAST\nGCF_000800295.1.RAST\nGCF_001886815.1.RAST\nGCF_000021945.1.RAST\nGCF_000953475.1.RAST\nGCF_001584185.1.RAST\nGCF_000316685.1.RAST\nGCF_001188915.1.RAST\nGCF_000007305.1.RAST\nGCF_000063445.1.RAST\nGCF_000016525.1.RAST\nGCF_000284115.1.RAST\nGCF_000517565.1.RAST\nGCF_000576555.1.RAST\nGCF_000954135.1.RAST\nGCF_000217995.1.RAST\nGCF_001586255.1.RAST\nGCF_000092565.1.RAST\nGCF_000215105.1.RAST\nGCF_001027025.1.RAST\nGCF_000092425.1.RAST\nGCF_000284515.1.RAST\nGCF_000723165.1.RAST\nGCF_000814825.1.RAST\nGCF_000242595.2.RAST\nGCF_000187005.1.RAST\nGCF_000006945.2.RAST\nGCF_000012405.1.RAST\nGCF_001484605.1.RAST\nGCF_001432245.1.RAST\nGCF_000012585.1.RAST\nGCF_001702135.1.RAST\nGCF_000695835.1.RAST\nGCF_000265525.1.RAST\nGCF_000214375.1.RAST\nGCF_001420995.1.RAST\nGCF_000959725.1.RAST\nGCF_000091665.1.RAST\nGCF_000284015.1.RAST\nGCF_000762265.1.RAST\nGCF_000017225.1.RAST\nGCF_000225445.1.RAST\nGCF_000014965.1.RAST\nGCF_000212735.1.RAST\nGCF_000145295.1.RAST\nGCF_000827005.1.RAST\nGCF_000069225.1.RAST\nGCF_000022145.1.RAST\nGCF_001719165.1.RAST\nGCF_000349945.1.RAST\nGCF_000006745.1.RAST\nGCF_000015765.1.RAST\nGCF_000027305.1.RAST\nGCF_000759535.1.RAST\nGCF_000022305.1.RAST\nGCF_000179575.2.RAST\nGCF_000253015.1.RAST\nGCF_000008485.1.RAST\nGCF_000007345.1.RAST\nGCF_000819565.1.RAST\nGCF_000263195.1.RAST\nGCF_000024165.1.RAST\nGCF_000832305.1.RAST\nGCF_001187785.1.RAST\nGCF_000309885.1.RAST\nGCF_000011905.1.RAST\nGCF_000026105.1.RAST\nGCF_000007805.1.RAST\nGCF_000019485.1.RAST\nGCF_000224985.1.RAST\nGCF_000389635.1.RAST\nGCF_000772105.1.RAST\nGCF_000227705.2.RAST\nGCF_000510265.1.RAST\nGCF_000183665.1.RAST\nGCF_00
0344785.1.RAST\nGCF_000961095.1.RAST\nGCF_000341385.1.RAST\nGCF_000014765.1.RAST\nGCF_000026605.1.RAST\nGCF_000215975.1.RAST\nGCF_001042405.1.RAST\nGCF_000754275.1.RAST\nGCF_000340795.1.RAST\nGCF_000186265.1.RAST\nGCF_000020685.1.RAST\nGCF_000196395.1.RAST\nGCF_000199675.1.RAST\nGCF_000648515.1.RAST\nGCF_000565195.1.RAST\nGCF_000007325.1.RAST\nGCF_000144625.1.RAST\nGCF_000008165.1.RAST\nGCF_000828855.1.RAST\nGCF_000007625.1.RAST\nGCF_001484065.1.RAST\nGCF_000317615.1.RAST\nGCF_000328685.1.RAST\nGCF_000017425.1.RAST\nGCF_000737865.1.RAST\nGCF_000175575.2.RAST\nGCF_001936235.1.RAST\nGCF_000011705.1.RAST\nGCF_000725405.1.RAST\nGCF_000025085.1.RAST\nGCF_002214645.1.RAST\nGCF_000144695.1.RAST\nGCF_000017145.1.RAST\nGCF_001274875.1.RAST\nGCF_000025405.2.RAST\nGCF_000195085.1.RAST\nGCF_001042695.1.RAST\nGCF_000026405.1.RAST\nGCF_000092845.1.RAST\nGCF_000011305.1.RAST\nGCF_000092365.1.RAST\nGCF_000145615.1.RAST\nGCF_000021885.1.RAST\nGCF_001865855.1.RAST\nGCF_000068585.1.RAST\nGCF_000196515.1.RAST\nGCF_000008365.1.RAST\nGCF_000318015.1.RAST\nGCF_000013185.1.RAST\nGCF_000230895.2.RAST\nGCF_000012765.1.RAST\nGCF_001908275.1.RAST\nGCF_000152265.2.RAST\nGCF_000473995.1.RAST\nGCF_000190535.1.RAST\nGCF_000940805.1.RAST\nGCF_000147875.1.RAST\nGCF_000225345.1.RAST\nGCF_000015125.1.RAST\nGCF_002211765.1.RAST\nGCF_001553195.1.RAST\nGCF_001028645.1.RAST\nGCF_000023125.1.RAST\nGCF_000024105.1.RAST\nGCF_000007365.1.RAST\nGCF_001274895.1.RAST\nGCF_000007465.2.RAST\nGCF_000190575.1.RAST\nGCF_000385565.1.RAST\nGCF_000018405.1.RAST\nGCF_001318345.1.RAST\nGCF_001307195.1.RAST\nGCF_002222635.1.RAST\nGCF_000021685.1.RAST\nGCF_000340825.1.RAST\nGCF_001641285.1.RAST\nGCF_000265465.1.RAST\nGCF_000021725.1.RAST\nGCF_001187595.1.RAST\nGCF_000023825.1.RAST\nGCF_000243155.2.RAST\nGCF_000590925.1.RAST\nGCF_000007025.1.RAST\nGCF_000024005.1.RAST\nGCF_000231015.2.RAST\nGCF_000832985.1.RAST\nGCF_000026125.1.RAST\nGCF_000092125.1.RAST\nGCF_000196735.1.RAST\nGCF_000212695.1.RAST\nGCF_002234495.1.RAST\nGCF_000016745.1.RAST\nGCF_000737325.1.RAST\nGCF_000012385.1.RAST\nGCF_000144405.1.RAST\nGCF_000511385.1.RAST\nGCF_000025285.1.RAST\nGCF_000091785.1.RAST\nGCF_000284255.1.RAST\nGCF_000213825.1.RAST\nGCF_000800475.2.RAST\nGCF_001189295.1.RAST\nGCF_000010665.1.RAST\nGCF_001518815.1.RAST\nGCF_000833105.2.RAST\nGCF_000026345.1.RAST\nGCF_000026745.1.RAST\nGCF_000012345.1.RAST\nGCF_000011365.1.RAST\nGCF_000005825.2.RAST\nGCF_000024465.1.RAST\nGCF_000970025.1.RAST\nGCF_000024185.1.RAST\nGCF_000153485.2.RAST\nGCF_000092645.1.RAST\nGCF_000020525.1.RAST\nGCF_000455605.1.RAST\nGCF_000262715.1.RAST\nGCF_000260965.1.RAST\nGCF_000015585.1.RAST\nGCF_001655245.1.RAST\nGCF_000304735.1.RAST\nGCF_001185205.1.RAST\nGCF_000270205.1.RAST\nGCF_000196015.1.RAST\nGCF_000981765.1.RAST\nGCF_000196835.1.RAST\nGCF_001007995.1.RAST\nGCF_000092265.1.RAST\nGCF_000025225.2.RAST\nGCF_001676785.2.RAST\nGCF_000524555.1.RAST\nGCF_000269985.1.RAST\nGCF_000014565.1.RAST\nGCF_001444365.1.RAST\nGCF_000317795.1.RAST\nGCF_000236665.1.RAST\nGCF_000767615.3.RAST\nGCF_000276685.1.RAST\nGCF_000018425.1.RAST\nGCF_000013145.1.RAST\nGCF_001302585.1.RAST\nGCF_000175095.2.RAST\nGCF_000010625.1.RAST\nGCF_000017245.1.RAST\nGCF_002215215.1.RAST\nGCF_000253275.1.RAST\nGCF_000224105.1.RAST\nGCF_001618885.1.RAST\nGCF_000020145.1.RAST\nGCF_000973505.1.RAST\nGCF_000020305.1.RAST\nGCF_000008985.1.RAST\nGCF_001267155.1.RAST\nGCF_000264455.2.RAST\nGCF_000010585.1.RAST\nGCF_000024885.1.RAST\nGCF_000763535.1.RAST\nGCF_000739065.1.RAST\nGCF_000009545.1.RAST\nGCF_000828915.1.RAST\nGCF_000192745.1.RAS
T\nGCF_000757825.1.RAST\nGCF_900078695.1.RAST\nGCF_000517445.1.RAST\nGCF_000226625.1.RAST\nGCF_000007705.1.RAST\nGCF_000008185.1.RAST\nGCF_000517625.1.RAST\nGCF_000067165.1.RAST\nGCF_000300135.1.RAST\nGCF_001281115.1.RAST\nGCF_000008685.2.RAST\nGCF_000017805.1.RAST\nGCF_000013325.1.RAST\nGCF_000021545.1.RAST\nGCF_000017625.1.RAST\nGCF_000022885.2.RAST\nGCF_000007925.1.RAST\nGCF_000219175.1.RAST\nGCF_000013625.1.RAST\nGCF_000165485.1.RAST\nGCF_000009925.1.RAST\nGCF_000019605.1.RAST\nGCF_001262055.1.RAST\nGCF_000785515.1.RAST\nGCF_000013605.1.RAST\nGCF_001026985.1.RAST\nGCF_000190635.1.RAST\nGCF_000008605.1.RAST\nGCF_000025965.1.RAST\nGCF_000015965.1.RAST\nGCF_000147715.2.RAST\nGCF_000021825.1.RAST\nGCF_000023405.1.RAST\nGCF_000014445.1.RAST\nGCF_000020465.1.RAST\nGCF_000723505.1.RAST\nGCF_001886715.1.RAST\nGCF_000317435.1.RAST\nGCF_000019905.1.RAST\nGCF_000253295.1.RAST\nGCF_000016905.1.RAST\nGCF_000008265.1.RAST\nGCF_000223375.1.RAST\nGCF_000980815.1.RAST\nGCF_000270085.1.RAST\nGCF_001465855.1.RAST\nGCF_000043285.1.RAST\nGCF_000022545.1.RAST\nGCF_000316645.1.RAST\nGCF_000011485.1.RAST\nGCF_000060345.1.RAST\nGCF_000016505.1.RAST\nGCF_000247565.1.RAST\nGCF_000175255.2.RAST\nGCF_000020225.1.RAST\nGCF_000198515.1.RAST\n"
],
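[
"# Illustrative sketch, assuming the same `genomes`, `exclude` and '../../data/memote_models/'\n# layout as the previous cell: the same per-genome FBA sweep, but wrapped in try/except so a\n# missing or unreadable SBML file records float('nan') instead of aborting the whole loop.\nrobust = {'genome_id': [], 'cobra': []}\nfor genome_id in genomes:\n    if genome_id in exclude:\n        continue\n    try:\n        m = cobra.io.read_sbml_model('../../data/memote_models/' + genome_id.split('.RAST')[0] + '.xml')\n        value = m.optimize().objective_value\n    except Exception as e:\n        print('failed', genome_id, e)\n        value = float('nan')\n    robust['genome_id'].append(genome_id)\n    robust['cobra'].append(value)\nrobust_df = pd.DataFrame(robust).set_index('genome_id')",
"_____no_output_____"
],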
[
"df.to_csv('../../data/export_fba.tsv', sep='\\t')",
"_____no_output_____"
],
[
"data = {\n 'genome_id' : [],\n 'kbase' : [],\n 'cobra' : [],\n 'error' : [],\n}\nexclude = genomes - genomes3\nfor genome_id in genomes:\n break\n if not genome_id in exclude:\n kbase_fba, cobra_fba, e = eval_fba(genome_id)\n data['genome_id'].append(genome_id)\n data['kbase'].append(kbase_fba)\n data['cobra'].append(cobra_fba)\n data['error'].append(e)\n \ndf = pd.DataFrame(data)\ndf = df.set_index('genome_id')\n#df.to_csv('../../data/cobrakbase_fba.tsv', sep='\\t')",
"_____no_output_____"
],
[
"print()\nkmodel_fba.keys()",
"0.697833\n"
],
[
"def enforce_direaction_bounds(kmodel):\n for r in kmodel['modelreactions']:\n direction = r['direction']\n if direction == '>':\n r['maxrevflux'] = 0\n r['maxforflux'] = 1000\n elif direction == '=':\n r['maxrevflux'] = 1000\n r['maxforflux'] = 1000\n elif direction == '<':\n r['maxrevflux'] = 1000\n r['maxforflux'] = 0\n\n \n",
"_____no_output_____"
],
[
"\nfor r in kmodel_fba['FBAReactionVariables']:\n break\n r_id = r['modelreaction_ref'].split('/')[-1]\n frxn = fbamodel.get_reaction(r_id)\n #print(frxn.data)\n rxn = model.reactions.get_by_id(r_id)\n cobra_bound = (rxn.lower_bound, rxn.upper_bound)\n lb_ub = (r['lowerBound'], r['upperBound'])\n min_max = (r['min'], r['max'])\n direction = frxn.data['direction'] \n if direction == '>':\n rxn.lower_bound = 0\n rxn.upper_bound = 1000\n elif direction == '=':\n rxn.lower_bound = -1000\n rxn.upper_bound = 1000\n elif direction == '<':\n rxn.lower_bound = -1000\n rxn.upper_bound = 0\n print(frxn.data['direction'], cobra_bound, lb_ub, min_max, rxn.flux, r['value'], rxn)\n break",
"_____no_output_____"
],
[
"solution = model.optimize()",
"_____no_output_____"
],
[
"print(solution.objective_value)",
"0.6978327749368358\n"
],
[
"b0002 = exprvar('b0002')",
"_____no_output_____"
],
[
"f1 = b0002 & z\nf1",
"_____no_output_____"
],
[
"expr.expr(\"(b0078 & b0077) | (b3670 & (b3671 | k))\").to_dnf().cover",
"_____no_output_____"
],
[
"f10 = Or(And(Not(a), b), And(c, Not(d)))\nf10",
"_____no_output_____"
],
[
"a, b, c, d, k, z, w = map(exprvar, \"abcdkzw\")",
"_____no_output_____"
],
[
"f0 = a & (b | c) | k & (z | w)",
"_____no_output_____"
],
[
"dnf = f0.to_dnf()\ndnf",
"_____no_output_____"
],
[
"def get_protein_sets(dnf):\n print('get_protein_sets', dnf)\n protein_sets = []\n for k in dnf.iter_dfs():\n print(type(k))\n if type(k) == expr.AndOp:\n protein_set = set()\n for gene in k.iter_dfs():\n if type(gene) == expr.Variable:\n protein_set.add(gene)\n #print(k, gene)\n protein_sets.append(protein_set)\n elif type(k) == expr.Variable:\n 1\n #protein_sets.append(set([k]))\n elif type(k) == expr.OrOp:\n for k_childs in k.iter_dfs():\n 1\n #print(k_childs)\n #protein_sets.append(set([k]))\n return protein_sets\n\ndef get_protein_sets2(dnf):\n protein_sets = []\n for k in dnf.iter_dfs():\n if type(k) == expr.Variable:\n protein_set = set()\n for gene in k.iter_dfs():\n if type(gene) == expr.Variable:\n protein_set.add(gene)\n #print(k, gene)\n protein_sets.append(protein_set)\n return protein_sets\nprotein_sets = get_protein_sets(dnf)\nprint(dnf, protein_sets)\nf100 = expr.expr(\"(b0078 | b0077) | (b3670 & b3671)\").to_dnf()\nprotein_sets = get_protein_sets(f100)\nprint(f100, protein_sets)",
"get_protein_sets Or(And(a, b), And(a, c), And(k, z), And(k, w))\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.AndOp'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.AndOp'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.AndOp'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.AndOp'>\n<class 'pyeda.boolalg.expr.OrOp'>\nOr(And(a, b), And(a, c), And(k, z), And(k, w)) [{a, b}, {c, a}, {k, z}, {k, w}]\nget_protein_sets Or(b0078, b0077, And(b3670, b3671))\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.Variable'>\n<class 'pyeda.boolalg.expr.AndOp'>\n<class 'pyeda.boolalg.expr.OrOp'>\nOr(b0078, b0077, And(b3670, b3671)) [{b3670, b3671}]\n"
],
[
"ast = f100.to_ast()\ndef get_protein_sets(ast, protein_sets):\n print('get_protein_sets', ast)\n #protein_sets = []\n t = ast[0]\n if t == 'or':\n for child in ast:\n if type(child) == tuple:\n get_protein_sets(child, protein_sets)\n elif t == 'and':\n get_protein_set = set()\n for child in ast:\n if type(child) == tuple:\n get_protein_set.add(child[1])\n protein_sets.append(get_protein_set)\n elif t == 'lit':\n protein_sets.append(set([ast[1]]))\n else:\n print('invalid type', t)\n return protein_sets\nf100.NAME\nprint(f100, get_protein_sets(ast, []))",
"get_protein_sets ('or', ('lit', 10), ('lit', 11), ('and', ('lit', 12), ('lit', 13)))\nget_protein_sets ('lit', 10)\nget_protein_sets ('lit', 11)\nget_protein_sets ('and', ('lit', 12), ('lit', 13))\nOr(b0078, b0077, And(b3670, b3671)) [{10}, {11}, {12, 13}]\n"
],
[
"def get_protein_sets(e, protein_sets):\n for var in e.xs:\n if var.depth == 0:\n for a in var.cover:\n #print(type(a.))\n print(a)\n protein_sets.append(set([a]))\n else:\n for a in var.xs:\n print(a)\n print(var, var.depth)\n return protein_sets\n\nprint(f100, get_protein_sets(f100, []))",
"frozenset({b0078})\nfrozenset({b0077})\nb3670\nb3671\nAnd(b3670, b3671) 1\nOr(b0078, b0077, And(b3670, b3671)) [{frozenset({b0078})}, {frozenset({b0077})}]\n"
],
[
"for var in ast:\n print(var)",
"or\n('lit', 10)\n('lit', 11)\n('and', ('lit', 12), ('lit', 13))\n"
],
[
"dnf.cover",
"_____no_output_____"
],
[
"f0.to_cnf()",
"_____no_output_____"
],
[
"model = cobrakbase.convert_kmodel(kmodel)",
"Add Sink cpd02701_c0\nSetup Drains. EX: 305 SK: 1\n"
],
[
"\"(b0078 and b0077) or (b3670 and b3671)\".replace('and', '&').replace('or', '|')",
"_____no_output_____"
],
[
"a = ['b0241', 'b0002']\nprint(a)\na.sort()\nprint(a)",
"['b0241', 'b0002']\n['b0002', 'b0241']\n"
],
[
"mapping = pd.read_csv('/Volumes/My Passport/var/argonne/annotation/manual/iAF1260_rxn_pred.tsv', sep='\\t')",
"_____no_output_____"
],
[
"to_seed = {}\nfor _, row in mapping.iterrows():\n if not pd.isna(row['ModelSeedReaction']):\n to_seed[row['iAF1260'][2:]] = row['ModelSeedReaction']",
"_____no_output_____"
],
[
"prot_to_rxn = {}\nfor r in model_bigg.reactions:\n gpr = r.gene_name_reaction_rule\n gpr = gpr.replace('and', '&').replace('or', '|')\n if len(gpr) > 0:\n gpr_expression = expr.expr(gpr)\n gpr_expression = gpr_expression.to_dnf()\n psets = gpr_expression.cover\n for pset in psets:\n prot = []\n for p in pset:\n #print(type(p), str(p))\n prot.append(str(p))\n prot.sort()\n prot = ';'.join(prot)\n if not prot in prot_to_rxn:\n prot_to_rxn[prot] = set()\n prot_to_rxn[prot].add(r.id)\n #print(gpr_expression, psets)\n \n",
"_____no_output_____"
],
[
"data = []\n\nfor gene in prot_to_rxn:\n #print(gene)\n rxn_ids = []\n for rxn_id in prot_to_rxn[gene]:\n seed_id = to_seed[rxn_id]\n rxn_ids.append(seed_id)\n data.append([gene, ';'.join(rxn_ids)])\n \ndf = pd.DataFrame(data, columns=['genes', 'reactions'])\ndf.to_csv('iAF1260.csv')\n",
"_____no_output_____"
],
[
"model_bigg = cobra.io.read_sbml_model('iAF1260.xml')",
"_____no_output_____"
],
[
"media = None\nwith open('glucose_media.json', 'r') as f:\n data = json.loads(f.read())\n media = cobrakbase.convert_media(data)",
"_____no_output_____"
],
[
"model = None\nwith open('test_model.json', 'r') as f:\n data = json.loads(f.read())\n model = cobrakbase.convert_kmodel(data, media)\nmodel.summary()",
"Add Sink cpd15302_c0\nAdd Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 172 SK: 3\nIN FLUXES OUT FLUXES OBJECTIVES\n-------------------- --------------------- -------------------\ncpd00013_e0 6.53 cpd00001_e0 16.6 bio1_biomass 0.762\ncpd00027_e0 5 cpd00067_e0 5.31\ncpd00009_e0 0.618 cpd00007_e0 2.8\ncpd00048_e0 0.169 cpd11416_c0 0.762\ncpd10516_e0 0.00943 cpd15378_e0 0.00236\ncpd00030_e0 0.00236\ncpd00034_e0 0.00236\ncpd00058_e0 0.00236\ncpd00063_e0 0.00236\ncpd00099_e0 0.00236\ncpd00149_e0 0.00236\ncpd00205_e0 0.00236\ncpd00254_e0 0.00236\n"
],
[
"for m in model.metabolites:\n print(m.id)\n print(m.annotation)\n break",
"cpd00443_c0\n{'SBO': 'SBO:0000247', 'seed.compound': 'cpd00443'}\n"
],
[
"\"cpd11\".startswith('cpd')",
"_____no_output_____"
],
[
"#https://raw.githubusercontent.com/ModelSEED/ModelSEEDDatabase/dev/Biochemistry/Aliases/Compounds_Aliases.tsv\nimport pandas as pd\nfrom urllib.request import urlopen\ndata = urlopen('https://raw.githubusercontent.com/ModelSEED/ModelSEEDDatabase/dev/Biochemistry/Aliases/Compounds_Aliases.tsv')\n\ndf = pd.read_csv(data, sep='\\t')\ndata = urlopen('https://raw.githubusercontent.com/ModelSEED/ModelSEEDDatabase/dev/Biochemistry/Aliases/Reactions_Aliases.tsv')\n\nrxn_df = pd.read_csv(data, sep='\\t')\n\ndata = urlopen('https://raw.githubusercontent.com/ModelSEED/ModelSEEDDatabase/dev/Biochemistry/Structures/ModelSEED_Structures.txt')\nstru_df = pd.read_csv(data, sep='\\t')",
"_____no_output_____"
],
[
"def read_modelseed_compound_aliases(df):\n aliases = {}\n for a, row in df.iterrows():\n if row[3] == 'BiGG':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['bigg.metabolite'] = row[2]\n if row[3] == 'MetaCyc':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['biocyc'] = row[2]\n if row[3] == 'KEGG' and row[2][0] == 'C':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['kegg.compound'] = row[2]\n return aliases\n\ndef read_modelseed_reaction_aliases(df):\n aliases = {}\n for a, row in df.iterrows():\n if row[3] == 'BiGG':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['bigg.reaction'] = row[2]\n if row[3] == 'MetaCyc':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['biocyc'] = row[2]\n if row[3] == 'KEGG' and row[2][0] == 'R':\n if not row[0] in aliases:\n aliases[row[0]] = {}\n aliases[row[0]]['kegg.reaction'] = row[2]\n return aliases\n\ndef read_modelseed_compound_structures(df):\n structures = {}\n for _, row in df.iterrows():\n #print(row[0], row[1], row[3])\n if row[1] == 'InChIKey':\n if not row[0] in structures:\n structures[row[0]] = {}\n structures[row[0]]['inchikey'] = row[3]\n return structures\n",
"_____no_output_____"
],
[
"structures = read_modelseed_compound_structures(stru_df)",
"_____no_output_____"
],
[
"structures['cpd00001']",
"_____no_output_____"
],
[
"rxn_aliases = read_modelseed_reaction_aliases(rxn_df)\nprint(len(rxn_aliases))",
"23865\n"
],
[
"print(aliases['cpd00001'])\nprint(rxn_aliases['rxn00001'])",
"{'kegg.compound': 'C01328', 'biocyc': 'WATER', 'bigg.metabolite': 'oh1'}\n{'bigg.reaction': 'PPA_1', 'kegg.reaction': 'R00004', 'biocyc': 'INORGPYROPHOSPHAT-RXN.c'}\n"
],
[
"#jplfaria:narrative_1492808527866\n#jplfaria:narrative_1524466549180\nkbase = cobrakbase.KBaseAPI('SHO64Q2X7HKU4PP4BV7XQMY3WYIK2QRJ')",
"_____no_output_____"
],
[
"#31045/4997/1\nkbase.get_object_info_from_ref('31045/4997/1')",
"_____no_output_____"
],
[
"wsos = kbase.list_objects('jplfaria:narrative_1492808527866')",
"_____no_output_____"
],
[
"#KBaseGenomes.Genome\n#KBaseFBA.FBAModel\nmodels = set()\nfor o in wsos:\n if 'KBaseFBA.FBAModel' in o[2]:\n models.add(o[1])",
"_____no_output_____"
],
[
"kmodel = kbase.get_object('GCF_000005845.2.RAST.mdl', 'jplfaria:narrative_1492808527866')",
"_____no_output_____"
],
[
"for m_id in models:\n kmodel = kbase.get_object(m_id, 'jplfaria:narrative_1492808527866')\n save_model_mongo(kmodel)\n",
"_____no_output_____"
],
[
"mclient = pymongo.MongoClient('mongodb://localhost:27017/')\ndatabase = mclient['Models']\nkbasemodels = database['TemplateV1']",
"_____no_output_____"
],
[
"a = set()\na.add(1)\na.update([2, 3])\na",
"_____no_output_____"
],
[
"def save_model_mongo(kmodel):\n\n model_id = kmodel['id']\n genome_info = kbase.get_object_info_from_ref(kmodel['genome_ref'])\n genome = genome_info['infos'][0][1]\n rxn_to_genes = {}\n\n for modelreactions in kmodel['modelreactions']:\n rxn_id = modelreactions['reaction_ref'].split('/')[-1].split('_')[0]\n\n genes = []\n for modelReactionProteins in modelreactions['modelReactionProteins']:\n for modelReactionProteinSubunits in modelReactionProteins['modelReactionProteinSubunits']:\n for feature_refs in modelReactionProteinSubunits['feature_refs']:\n gene = feature_refs.split('/')[-1]\n genes.append(gene)\n if len(genes) > 0:\n if not rxn_id in rxn_to_genes:\n rxn_to_genes[rxn_id] = set()\n rxn_to_genes[rxn_id].update(genes)\n #break\n\n for k in rxn_to_genes:\n rxn_to_genes[k] = list(rxn_to_genes[k])\n\n data = {'genome' : genome, 'ws' : 'jplfaria:narrative_1492808527866', 'rxn_to_genes' : rxn_to_genes}\n kbasemodels.update_one({'_id' : model_id}, {'$set' : data}, upsert=True)",
"_____no_output_____"
],
[
"%%HTML\n<b>Why not R-r0317 in master_fungal_template_fix mapped</b><br>\n<i>H2O[c0] + LACT[c0] <=> D-glucose[c0] + Galactose[c0]</i><br>\n<i>rxn00816 Lactose galactohydrolase 1 H2O [0] + 1 LACT [0] 1 D-Glucose [0] + 1 Galactose [0]</i><br>\n<b>Answer: 1 compound is not integrated:</b> D-glucose[c0] > '~/modelcompounds/id/M-dglc-c_c0'<br>\n<br>\n<b>ATP Synthases! MERGE</b>",
"_____no_output_____"
],
[
"Asppeni1_model = cobrakbase.API.get_object(id=\"Asppeni1_model\", ws=\"janakakbase:narrative_1540435363582\")",
"_____no_output_____"
],
[
"for r in Asppeni1_model['modelreactions']:\n if '/' in r['id']:\n id = r['id']\n if id[:2] == 'R-':\n id = id[2:]\n id = id.replace('/','-')\n print(r['id'], '->', id)\n r['id'] = id\n\ndef save_object(wsc, o, id, ws, t):\n wsc.save_objects(\n {'workspace': ws,\n 'objects' : [{'data' : o, 'name' : id, 'type' : t}]\n })\n \nsave_object(cobrakbase.API.wsClient, Asppeni1_model, \"Asppeni1_model_fix\", \"janakakbase:narrative_1540435363582\", \"KBaseFBA.FBAModel\")",
"_____no_output_____"
],
[
"template = cobrakbase.API.get_object(id=\"Fungi\", ws=\"NewKBaseModelTemplates\")\nmaster_fungal_template_fix = cobrakbase.API.get_object(id=\"master_fungal_template_fix\", ws=\"jplfaria:narrative_1510597445008\")",
"_____no_output_____"
],
[
"#rxn08617 GLCtex\n#rxn08606\nlookup = [\"rxn08617\", \"rxn08606\", \"rxn05226\"]\nfor r in template['reactions']:\n #print(r['id'], r['name'])\n if r['id'] in lookup:\n print(r)\n #print(r)\n #break",
"_____no_output_____"
],
[
"print(template.keys())\nfor r in template['reactions']:\n atp = False\n adp = False\n h = False\n pi = False\n h2o = False\n for c in r['templateReactionReagents']:\n if 'cpd00002' in c['templatecompcompound_ref']:\n atp = True\n if 'cpd00008' in c['templatecompcompound_ref']:\n adp = True\n if 'cpd00009' in c['templatecompcompound_ref']:\n pi = True\n if 'cpd00067' in c['templatecompcompound_ref']:\n h = True\n if 'cpd00001' in c['templatecompcompound_ref']:\n h2o = True\n if atp and adp and h and pi and h2o and 'rxf' and len(r['templateReactionReagents']) == 5:\n print(r)",
"dict_keys(['__VERSION__', 'biochemistry_ref', 'biomasses', 'compartments', 'compcompounds', 'complexes', 'compounds', 'domain', 'id', 'name', 'pathways', 'reactions', 'roles', 'subsystems', 'type'])\n{'GapfillDirection': '=', 'base_cost': 2, 'deltaG': -6.82, 'deltaGErr': 0.72, 'direction': '>', 'forward_penalty': 0, 'id': 'rxn00062_c', 'maxforflux': 100, 'maxrevflux': -100, 'name': 'ATP phosphohydrolase', 'reaction_ref': 'kbase/default/reactions/id/rxn00062', 'reverse_penalty': 1, 'status': 'OK', 'templateReactionReagents': [{'coefficient': -1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00001_c'}, {'coefficient': -1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00002_c'}, {'coefficient': 1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00008_c'}, {'coefficient': 1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00009_c'}, {'coefficient': 1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00067_c'}], 'templatecompartment_ref': '~/compartments/id/c', 'templatecomplex_refs': ['~/complexes/id/cpx34580', '~/complexes/id/cpx34596', '~/complexes/id/cpx34618', '~/complexes/id/cpx34632', '~/complexes/id/cpx34636', '~/complexes/id/cpx34637', '~/complexes/id/cpx34638', '~/complexes/id/cpx34639', '~/complexes/id/cpx34641', '~/complexes/id/cpx34642', '~/complexes/id/cpx34643', '~/complexes/id/cpx34644', '~/complexes/id/cpx34645', '~/complexes/id/cpx34646', '~/complexes/id/cpx34647', '~/complexes/id/cpx34648', '~/complexes/id/cpx34649', '~/complexes/id/cpx34650', '~/complexes/id/cpx34651', '~/complexes/id/cpx34652', '~/complexes/id/cpx34653', '~/complexes/id/cpx34654', '~/complexes/id/cpx34655', '~/complexes/id/cpx34656', '~/complexes/id/cpx34657', '~/complexes/id/cpx34658', '~/complexes/id/cpx34659', '~/complexes/id/cpx34660', '~/complexes/id/cpx34661', '~/complexes/id/cpx34662', '~/complexes/id/cpx34663', '~/complexes/id/cpx34664', '~/complexes/id/cpx34665', '~/complexes/id/cpx34666', '~/complexes/id/cpx34667', '~/complexes/id/cpx34668', '~/complexes/id/cpx34669', '~/complexes/id/cpx34670', '~/complexes/id/cpx34671', '~/complexes/id/cpx34672', '~/complexes/id/cpx34673', '~/complexes/id/cpx34674', '~/complexes/id/cpx34676', '~/complexes/id/cpx34677', '~/complexes/id/cpx34678', '~/complexes/id/cpx34679', '~/complexes/id/cpx34680', '~/complexes/id/cpx34681', '~/complexes/id/cpx34682', '~/complexes/id/cpx34683', '~/complexes/id/cpx34684', '~/complexes/id/cpx34685', '~/complexes/id/cpx34686', '~/complexes/id/cpx34687', '~/complexes/id/cpx34688', '~/complexes/id/cpx34689', '~/complexes/id/cpx34690', '~/complexes/id/cpx34691', '~/complexes/id/cpx34694', '~/complexes/id/cpx34695', '~/complexes/id/cpx34696', '~/complexes/id/cpx34697', '~/complexes/id/cpx34698', '~/complexes/id/cpx34699', '~/complexes/id/cpx34700', '~/complexes/id/cpx34701'], 'type': 'conditional'}\n{'GapfillDirection': '=', 'base_cost': 7, 'deltaG': -6.82, 'deltaGErr': 0.72, 'direction': '=', 'forward_penalty': 0, 'id': 'rxn09694_c', 'maxforflux': 100, 'maxrevflux': -100, 'name': 'H+-exporting ATPase', 'reaction_ref': 'kbase/default/reactions/id/rxn09694', 'reverse_penalty': 1, 'status': 'OK', 'templateReactionReagents': [{'coefficient': -1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00001_c'}, {'coefficient': -1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00002_c'}, {'coefficient': 1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00008_c'}, {'coefficient': 1, 'templatecompcompound_ref': '~/compcompounds/id/cpd00009_c'}, {'coefficient': 1, 'templatecompcompound_ref': 
'~/compcompounds/id/cpd00067_e'}], 'templatecompartment_ref': '~/compartments/id/c', 'templatecomplex_refs': [], 'type': 'gapfilling'}\n"
],
[
"for r in template['reactions']:\n #print(r['id'])\n 1\n\nfor r in master_fungal_template_fix['modelreactions']:\n for c in r['modelReactionReagents']:\n if not 'modelcompounds/id/cpd' in c['modelcompound_ref']:\n 1 #print(c)\n #if 'R-r0317' in r['id']:\n # print(r)",
"_____no_output_____"
],
[
"fmodel = cobrakbase.API.get_object(id=\"Asppeni1_model_fix_GP_GMM\", ws=\"janakakbase:narrative_1540435363582\")",
"_____no_output_____"
],
[
"template.keys()\nfor r in template['modelreactions']:\n #print(r['id'])\n if 'R-r0317' in r['id']:\n print(r)",
"{'aliases': [], 'dblinks': {}, 'direction': '=', 'edits': {}, 'gapfill_data': {}, 'id': 'R-r0317_c0', 'maxforflux': 1000000.0, 'maxrevflux': 1000000.0, 'modelReactionProteins': [{'complex_ref': '', 'modelReactionProteinSubunits': [{'feature_refs': ['~/genome/features/id/ATEG_07446', '~/genome/features/id/ATEG_04784', '~/genome/features/id/ATEG_10243', '~/genome/features/id/ATEG_00712'], 'note': 'Imported GPR', 'optionalSubunit': 0, 'role': '', 'triggering': 0}], 'note': '', 'source': 'SBML'}], 'modelReactionReagents': [{'coefficient': -1.0, 'modelcompound_ref': '~/modelcompounds/id/cpd00001_c0'}, {'coefficient': -1.0, 'modelcompound_ref': '~/modelcompounds/id/cpd00208_c0'}, {'coefficient': 1.0, 'modelcompound_ref': '~/modelcompounds/id/M-dglc-c_c0'}, {'coefficient': 1.0, 'modelcompound_ref': '~/modelcompounds/id/cpd00108_c0'}], 'modelcompartment_ref': '~/modelcompartments/id/c0', 'name': 'CustomReaction_c0', 'numerical_attributes': {}, 'probability': 1.0, 'protons': 1.0, 'reaction_ref': '~/template/reactions/id/rxn00000_c', 'string_attributes': {}}\n"
],
[
"gmedia = cobrakbase.API.get_object(id=\"Carbon-D-Glucose\", ws=\"janakakbase:narrative_1540435363582\")\nmedia = cobrakbase.convert_media(gmedia)",
"_____no_output_____"
],
[
"#cobrakbase.\nmodel = cobrakbase.convert_kmodel(fmodel, media=media)",
"Add Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 697 SK: 2\n"
],
[
"for r in model.sinks:\n print(\"SK\", r)\nfor r in model.demands:\n print(\"DM\", r)\nfor r in model.exchanges:\n #print(\"EX\", r, r.lower_bound)\n 1\nfor r in model.reactions:\n #print(r)\n 1",
"SK rxn13782_c0: <=> cpd17041_c0\nSK rxn13783_c0: <=> cpd17042_c0\nSK rxn13784_c0: <=> cpd17043_c0\n"
],
[
"def demand(cpd, value, model):\n dm = Reaction(id=\"DM_\" + cpd, name=\"Demand for \" + cpd, lower_bound=value, upper_bound=1000)\n dm.add_metabolites({model.metabolites.get_by_id(cpd) : -1})\n print(cpd, value, dm)\n model.add_reaction(dm)\n 1\n \nbio = model.reactions.get_by_id('bio1_biomass')\nprint(bio)\nfor a in bio.metabolites:\n #print(a.id, a.name, bio.metabolites[a])\n #demand(a.id, -1, model)\n 1\n\nfor a in bio.metabolites:\n model.reactions.get_by_id(\"DM_\" + a.id).lower_bound = -1\n\nmodel.reactions.get_by_id('DM_cpd00030_c0').lower_bound = 0.0 #Mn2+\nmodel.reactions.get_by_id('DM_cpd00205_c0').lower_bound = 0.0 #K+\nmodel.reactions.get_by_id('DM_cpd00149_c0').lower_bound = 0.0 #Co2+\nmodel.reactions.get_by_id('DM_cpd00063_c0').lower_bound = 0.0 #Ca2+\n\nmodel.reactions.get_by_id('DM_cpd11416_c0').lower_bound = 0.0 #Biomass\nmodel.reactions.get_by_id('DM_cpd00107_c0').lower_bound = -0.1 #L-Leucine\nmodel.reactions.get_by_id('DM_cpd00069_c0').lower_bound = 0.0 #L-Tyrosine\nmodel.reactions.get_by_id('DM_cpd12370_c0').lower_bound = 0.0 #apo-ACP\n\nmodel.reactions.get_by_id('DM_cpd00003_c0').lower_bound = 0.0 #NAD\nmodel.reactions.get_by_id('DM_cpd00006_c0').lower_bound = 0.0 #NADP\n\nmodel.summary()\nfor a in bio.metabolites:\n coef = bio.get_coefficient(a)\n z = \"+\"\n if coef < 0:\n z = \"-\"\n flux = model.reactions.get_by_id(\"DM_\" + a.id).flux\n if not flux == 0.0:\n print(a, z, a.name, flux)\n 1\n#print(model.reactions.DM_cpd00053_c0.flux)\n#model.metabolites.get_by_id(\"cpd00205_c0\").summary()",
"bio1_biomass: 60.029641491222 cpd00002_c0 + 0.00489685351114816 cpd00003_c0 + 0.00489685351114816 cpd00006_c0 + 0.00489685351114816 cpd00010_c0 + 0.00489685351114816 cpd00015_c0 + 0.00489685351114816 cpd00016_c0 + 0.00489685351114816 cpd00017_c0 + 0.264993755075485 cpd00023_c0 + 0.00489685351114816 cpd00028_c0 + 0.00489685351114816 cpd00030_c0 + 0.321625484990811 cpd00033_c0 + 0.00489685351114816 cpd00034_c0 + 0.353325006767314 cpd00035_c0 + 0.0296414912220477 cpd00038_c0 + 0.236855977318814 cpd00039_c0 + 0.169539015343993 cpd00041_c0 + 0.00489685351114816 cpd00042_c0 + 0.00489685351114816 cpd00048_c0 + 0.134277749547658 cpd00051_c0 + 0.0290054077194286 cpd00052_c0 + 0.264993755075485 cpd00053_c0 + 0.250746778996158 cpd00054_c0 + 0.00489685351114816 cpd00056_c0 + 0.00489685351114816 cpd00058_c0 + 0.0495082418756619 cpd00060_c0 + 0.0389283103602858 cpd00062_c0 + 0.00489685351114816 cpd00063_c0 + 0.027781603354688 cpd00065_c0 + 0.113263459830651 cpd00066_c0 + 60 cpd00067_c0 + 0.095454739731492 cpd00069_c0 + 0.0423847538359983 cpd00084_c0 + 0.00489685351114816 cpd00087_c0 + 0.00489685351114816 cpd00099_c0 + 0.247541209378309 cpd00107_c0 + 0.00255750751455551 cpd00115_c0 + 0.00489685351114816 cpd00118_c0 + 0.0740842756125012 cpd00119_c0 + 0.127154261507995 cpd00129_c0 + 0.169539015343993 cpd00132_c0 + 0.00489685351114816 cpd00149_c0 + 0.347863247863248 cpd00155_c0 + 0.25430852301599 cpd00156_c0 + 0.194471223482816 cpd00161_c0 + 0.00489685351114816 cpd00201_c0 + 0.00489685351114816 cpd00205_c0 + 0.00489685351114816 cpd00220_c0 + 0.00157736594362068 cpd00241_c0 + 0.00489685351114816 cpd00254_c0 + 0.00489685351114816 cpd00264_c0 + 0.169539015343993 cpd00322_c0 + 0.00489685351114816 cpd00345_c0 + 0.00157736594362068 cpd00356_c0 + 0.00255750751455551 cpd00357_c0 + 0.00489685351114816 cpd00557_c0 + 0.347863247863248 cpd00794_c0 + 0.0144855144855145 cpd01170_c0 + 0.0144855144855145 cpd01188_c0 + 0.0144855144855145 cpd02755_c0 + 0.0144855144855145 cpd03221_c0 + 0.00489685351114816 cpd10515_c0 + 0.00489685351114816 cpd10516_c0 + 0.00489685351114816 cpd11493_c0 + 0.0144855144855145 cpd14514_c0 + 0.00489685351114816 cpd15560_c0 + cpd17041_c0 + cpd17042_c0 + cpd17043_c0 --> 63.5613878454298 cpd00001_c0 + 60 cpd00008_c0 + 59.9951031464889 cpd00009_c0 + 0.135486447440162 cpd00012_c0 + cpd11416_c0 + 0.00489685351114816 cpd12370_c0\nIN FLUXES OUT FLUXES OBJECTIVES\n---------------------- -------------------- -------------------\ncpd00067_e0 100 cpd00179_e0 37.4 bio1_biomass 0.404\ncpd00030_e0 0.00198 cpd00053_e0 1.75\ncpd00058_e0 0.00198 cpd00161_e0 1.58\ncpd00063_e0 0.00198 cpd00041_e0 1\ncpd00149_e0 0.00198 cpd00132_e0 0.932\ncpd00205_e0 0.00198 cpd00558_e0 0.499\n cpd11416_c0 0.404\n cpd11791_e0 0.374\n cpd00083_e0 0.0561\n cpd03198_e0 0.0476\ncpd00107_c0 - L-Leucine -0.1\ncpd00156_c0 - L-Valine -0.031013318978161952\ncpd00042_c0 - GSH -0.0019781972962992223\ncpd03221_c0 - Zymosterol -1.0\ncpd00010_c0 - CoA -0.0756768885640795\ncpd00220_c0 - Riboflavin 0.24746484120004886\ncpd11493_c0 - ACP 3.1086244689504383e-15\ncpd00356_c0 - dCTP -1.0\ncpd15560_c0 - Ubiquinone-8 -0.4984939417449589\ncpd00051_c0 - L-Arginine -1.0\ncpd00161_c0 - L-Threonine -1.0\ncpd00264_c0 - Spermidine -1.0\ncpd00065_c0 - L-Tryptophan 1.63574474543032\ncpd00129_c0 - L-Proline -0.05136690647482023\ncpd00048_c0 - Sulfate -1.0\ncpd00028_c0 - Heme -0.05903443828560018\ncpd00038_c0 - GTP 0.3767101957416271\ncpd02755_c0 - Fecosterol -0.005851758792766004\ncpd10516_c0 - fe3 -0.0019781972962992444\ncpd00009_c0 + Phosphate -1.0\ncpd00557_c0 - 
Siroheme 0.0913267013783492\ncpd00066_c0 - L-Phenylalanine -0.045755395683453326\ncpd00056_c0 - TPP -0.0019781972962992444\ncpd00012_c0 + PPi -1.0\ncpd00115_c0 - dATP -0.0010331643450310901\ncpd00201_c0 - 10-Formyltetrahydrofolate -1.0\ncpd00008_c0 + ADP -1.0\ncpd00118_c0 - Putrescine 1.4427880997390048\ncpd00054_c0 - L-Serine -1.0\ncpd01170_c0 - Ergosterol -1.0\ncpd00039_c0 - L-Lysine -0.09568345323741022\ncpd00015_c0 - FAD -0.0019781972962993554\ncpd01188_c0 - Lanosterol 0.6561692338054469\ncpd00119_c0 - L-Histidine -0.5801764171566699\ncpd00033_c0 - Glycine 0.3754590005718763\ncpd00034_c0 - Zn2+_c0 -0.0019781972962992444\ncpd00099_c0 - Cl- -0.0019781972962992444\ncpd00254_c0 - Mg_c0 -0.0019781972962992444\ncpd00016_c0 - Pyridoxal phosphate 0.04938408531371752\ncpd00345_c0 - 5-Methyltetrahydrofolate -0.02197819729629924\ncpd00023_c0 - L-Glutamate -1.0\ncpd00041_c0 - L-Aspartate -1.0\ncpd00357_c0 - TTP 0.27501762819262876\ncpd00002_c0 - ATP 1.3548478479350585\ncpd14514_c0 - Episterol 1.9824447236217022\ncpd00052_c0 - CTP 1.3693520239072638\ncpd10515_c0 - Fe2+ -0.03822685498164679\ncpd00132_c0 - L-Asparagine -1.0\ncpd00322_c0 - L-Isoleucine -1.0\ncpd00087_c0 - Tetrahydrofolate -0.22902862445701733\ncpd00035_c0 - L-Alanine -1.0\ncpd00084_c0 - L-Cysteine 0.7754951579087623\ncpd00062_c0 - UTP -0.673483427425984\ncpd00017_c0 - S-Adenosyl-L-methionine 0.2551466393118438\n"
],
[
"def get_flux_distribution(fba):\n fdist = {}\n for a in fba['FBAReactionVariables']:\n flux = a['value']\n #if '~/fbamodel/modelreactions/id/pi_m0' == a['modelreaction_ref']:\n # a['modelreaction_ref'] = '~/fbamodel/modelreactions/id/tr-succ/pi_m0'\n id = cobrakbase.get_id_from_ref(a['modelreaction_ref'], stok='/')\n #print(a['modelreaction_ref'], id, flux)\n fdist[id] = flux\n\n biomass = \"bio1_biomass\"\n if not fba['objectiveValue'] == 0:\n flux = fba['objectiveValue']\n fdist[biomass] = flux\n \n return fdist\n\ndef get_net_convertion(model, fdist):\n net = {}\n for rxnId in fdist:\n flux = fdist[rxnId]\n rselect = rxnId\n #print(rselect)\n if \"R-\" in rselect:\n rselect = rselect[2:]\n #print(rselect)\n id = rxnId\n #id = cobrakbase.get_id_from_ref(rxnId)\n #print(id)\n if \"R-\" in id:\n id = id[2:]\n #print(id)\n #print(id, a['value'])\n if not flux == 0:\n r = model.reactions.get_by_id(id)\n #print(r, flux)\n #print(dir(r))\n for k in r.reactants:\n if not k in net:\n net[k] = 0\n net[k] += r.get_coefficient(k) * flux\n for k in r.products:\n if not k in net:\n net[k] = 0\n net[k] += r.get_coefficient(k) * flux\n return net",
"_____no_output_____"
],
[
"cobrakbase.login(\"TUEVGXRO3JJUJCEPAHBSGW67ZM7UURGC\", dev=False)",
"_____no_output_____"
],
[
"fba = cobrakbase.API.get_object(\"Asppeni1_model_fix_GP_GMM.gf.1\", \"janakakbase:narrative_1540435363582\")\nprint(fba['objectiveValue'])",
"18.4432\n"
],
[
"fdist = get_flux_distribution(fba)\n#model.reactions.get_by_id(\"r0516_m0\")",
"_____no_output_____"
],
[
"print(model.reactions.get_by_id(\"tr-succ-pi_m0\"))\nnet = get_net_convertion(model, fdist)\n\n#cpd11416\ne = 1e-3\nfor cpd in net:\n flux = net[cpd]\n if flux > e or flux < -e:\n if False or \"_c0\" in cpd.id:\n print(cpd, flux)",
"tr-succ-pi_m0: cpd00009_m0 + cpd00036_c0 <=> cpd00009_c0 + cpd00036_m0\ncpd00001_c0 -4.637142189169481\ncpd00008_c0 -0.001806299999998373\ncpd00067_c0 3.296329200000855\ncpd00020_c0 -0.090733300000013\ncpd00002_c0 0.0010571490943220851\ncpd00264_c0 -2.566823648676808\ncpd00011_c0 -1.1595638000000383\ncpd00005_c0 3.206933999999822\ncpd00006_c0 -3.206926648677181\ncpd00007_c0 3.117351099999948\ncpd00029_c0 -0.09005369999999857\ndglc-c_c0 -0.00100000000009004\ncpd00025_c0 0.08912670000011327\ncpd00084_c0 0.09046250805184985\ncpd00790_c0 0.0903137\ncpd00100_c0 0.001139000000043966\ncpd00281_c0 -0.0903137\ncpd03221_c0 -1.068638240759241\ncpd00123_c0 -4.870897\ncpd00198_c0 0.09030999999999967\ncpd00726_c0 2.56683\nH-PO_c0 -0.002479999999991378\ncpd03035_c0 1.06864\ncpd02654_c0 -0.0903137\ncpd01646_c0 -4.56546\ncpd00040_c0 0.0903137\ncpd11416_c0 18.4432\n"
],
[
"cobra_model = cobrakbase.read_model_with_media(\"GCF_000005845.2\", \"Carbon-D-Glucose\", \"filipeliu:narrative_1504192868437\")\n#jsonMedia = cobrakbase.API.get_object(\"Carbon-D-Glucose\", \"filipeliu:narrative_1504192868437\")\n#jsonModel = cobrakbase.API.get_object(\"GCF_000005845.2\", \"filipeliu:narrative_1504192868437\")\n\n#for r in jsonModel['modelreactions']:\n# if \"rxn00159_c0\" in r['id']:\n# print(r)\n\ncobra_model.reactions.get_by_id(\"rxn00159_c0\")\n#cobra_model.medium\n#met = cobra_model.metabolites.get_by_id(\"cpd00011_e0\")\n#object_stoichiometry = {met : -1}\n#reaction = Reaction(id=\"EX_cpd00011_e0\", name=\"Exchange for \" + met.name, lower_bound=-8, upper_bound=1000)\n#cobra_model.add_reaction(reaction)\n#with open('iMR1_799.json', 'w') as outfile:\n# json.dump(model, outfile)\n#cobra_model.summary()",
"Add Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 178 SK: 2\n"
],
[
"for r in model.reactions:\n if False or \"EX_\" in r.id and r.lower_bound == 0:\n #print(r)\n #if r.lower_bound == 0:\n #r.lower_bound = -1\n #print(r, \":\", r.lower_bound, r.upper_bound)\n 1\n#cobra_model.reactions.get_by_id(\"EX_cpd00011_e0\").lower_bound = 7.99\n#cobra_model.objective = \"bio1_biomass\"\n\n#cobra_model.summary()\n#cobra_model.metabolites.cpd00011_c0.summary()",
"_____no_output_____"
],
[
"CONSUMING REACTIONS -- CO2_c0 (cpd0001...)\n------------------------------------------\n% FLUX RXN ID REACTION\n--- ------ ---------- --------------------------------------------------\n87% 7.99 rxn0546... cpd00011_e0 <=> cpd00011_c0\n15% 1.39 rxn0534... cpd11466_c0 + cpd11492_c0 <=> cpd00011_c0 + cpd...\n3% 0.255 rxn0292... 2 cpd00001_c0 + 2 cpd00067_c0 + cpd02103_c0 <=>...\n2% 0.206 rxn0920... 2 cpd00067_c0 + cpd15555_c0 <=> cpd00011_c0 + c...\n2% 0.175 rxn0293... cpd00067_c0 + cpd02893_c0 <=> cpd00011_c0 + cpd...\n\nCONSUMING REACTIONS -- CO2_c0 (cpd0001...)\n------------------------------------------\n% FLUX RXN ID REACTION\n--- ------ ---------- --------------------------------------------------\n32% 1.9 rxn0534... cpd11466_c0 + cpd11492_c0 <=> cpd00011_c0 + cpd...\n28% 1.67 rxn0916... cpd00001_c0 + cpd00020_c0 + cpd15560_c0 <=> cpd...\n20% 1.21 rxn0015... cpd00003_c0 + cpd00130_c0 <=> cpd00004_c0 + cpd...\n6% 0.347 rxn0292... 2 cpd00001_c0 + 2 cpd00067_c0 + cpd02103_c0 <=>...\n5% 0.321 rxn0637... cpd00033_c0 + cpd00067_c0 + cpd12005_c0 <=> cpd...\n5% 0.281 rxn0920... 2 cpd00067_c0 + cpd15555_c0 <=> cpd00011_c0 + c...\n4% 0.239 rxn0293... cpd00067_c0 + cpd02893_c0 <=> cpd00011_c0 + cpd...",
"_____no_output_____"
],
[
"#jsonModel = cobrakbase.API.get_object(\"GCF_000005845.2\", \"filipeliu:narrative_1504192868437\")\n#jsonMedia = cobrakbase.API.get_object(\"Carbon-D-Glucose\", \"filipeliu:narrative_1504192868437\")\ndef fix_flux_bounds(m):\n for r in m['modelreactions']:\n lb = -1 * r['maxrevflux']\n ub = r['maxforflux']\n di = r['direction']\n cdi = \"=\"\n if lb == 0 and ub > 0:\n cdi = \">\"\n elif ub == 0 and lb < 0:\n cdi = '<'\n if not cdi == di:\n if di == '>':\n r['maxrevflux'] = 0\n elif di == '<':\n r['maxforflux'] = 0\n else:\n 1\n print(r['id'], di, cdi, lb, ub)\nfix_flux_bounds(jsonModel)\ncobra_model = cobrakbase.convert_kmodel(jsonModel, media=cobrakbase.convert_media(jsonMedia))",
"Add Sink cpd02701_c0\nAdd Sink cpd11416_c0\nSetup Drains. EX: 178 SK: 2\n"
],
[
"from memote.suite.cli.reports import report\nimport memote.suite.api as api\nfrom memote.suite.reporting import ReportConfiguration\n#a, results = api.test_model(cobra_model, results=True)",
"_____no_output_____"
],
[
"config = ReportConfiguration.load()\nhtml = api.snapshot_report(results, config)\nwith open(\"report.html\", \"w\") as text_file:\n print(html, file=text_file)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0962dfc7673679128a7281a8157b50bc386fd97 | 32,849 | ipynb | Jupyter Notebook | Code/.ipynb_checkpoints/HASOC_task1_hin-checkpoint.ipynb | punyajoy/HateMonitors-HASOC | b91003025eeeb55137def4d887d6d9a6517fd1ad | [
"MIT"
] | 5 | 2019-09-30T20:24:52.000Z | 2020-05-26T06:46:28.000Z | Code/.ipynb_checkpoints/HASOC_task1_hin-checkpoint.ipynb | hate-alert/HateALERT-HASOC | b91003025eeeb55137def4d887d6d9a6517fd1ad | [
"MIT"
] | null | null | null | Code/.ipynb_checkpoints/HASOC_task1_hin-checkpoint.ipynb | hate-alert/HateALERT-HASOC | b91003025eeeb55137def4d887d6d9a6517fd1ad | [
"MIT"
] | 2 | 2019-12-05T07:22:01.000Z | 2020-01-07T08:48:39.000Z | 38.464871 | 1,558 | 0.549393 | [
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import metrics\nfrom sklearn.metrics import classification_report, confusion_matrix, f1_score\nfrom sklearn.metrics import make_scorer, f1_score, accuracy_score, recall_score, precision_score, classification_report, precision_recall_fscore_support\n\nimport itertools\n",
"_____no_output_____"
],
[
"# file used to write preserve the results of the classfier\n# confusion matrix and precision recall fscore matrix\n\ndef plot_confusion_matrix(cm,\n target_names,\n title='Confusion matrix',\n cmap=None,\n normalize=True):\n \"\"\"\n given a sklearn confusion matrix (cm), make a nice plot\n\n Arguments\n ---------\n cm: confusion matrix from sklearn.metrics.confusion_matrix\n\n target_names: given classification classes such as [0, 1, 2]\n the class names, for example: ['high', 'medium', 'low']\n\n title: the text to display at the top of the matrix\n\n cmap: the gradient of the values displayed from matplotlib.pyplot.cm\n see http://matplotlib.org/examples/color/colormaps_reference.html\n plt.get_cmap('jet') or plt.cm.Blues\n\n normalize: If False, plot the raw numbers\n If True, plot the proportions\n\n Usage\n -----\n plot_confusion_matrix(cm = cm, # confusion matrix created by\n # sklearn.metrics.confusion_matrix\n normalize = True, # show proportions\n target_names = y_labels_vals, # list of names of the classes\n title = best_estimator_name) # title of graph\n\n Citiation\n ---------\n http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html\n\n \"\"\"\n\n accuracy = np.trace(cm) / float(np.sum(cm))\n misclass = 1 - accuracy\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n\n if cmap is None:\n cmap = plt.get_cmap('Blues')\n\n plt.figure(figsize=(8, 6))\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n\n if target_names is not None:\n tick_marks = np.arange(len(target_names))\n plt.xticks(tick_marks, target_names, rotation=45)\n plt.yticks(tick_marks, target_names)\n\n \n\n thresh = cm.max() / 1.5 if normalize else cm.max() / 2\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n if normalize:\n plt.text(j, i, \"{:0.4f}\".format(cm[i, j]),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n else:\n plt.text(j, i, \"{:,}\".format(cm[i, j]),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n\n plt.ylabel('True label')\n plt.xlabel('Predicted label\\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))\n plt.tight_layout()\n return plt",
"_____no_output_____"
],
[
"##saving the classification report\ndef pandas_classification_report(y_true, y_pred):\n metrics_summary = precision_recall_fscore_support(\n y_true=y_true, \n y_pred=y_pred)\n cm = confusion_matrix(y_true, y_pred)\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n \n \n avg = list(precision_recall_fscore_support(\n y_true=y_true, \n y_pred=y_pred,\n average='macro'))\n avg.append(accuracy_score(y_true, y_pred, normalize=True))\n metrics_sum_index = ['precision', 'recall', 'f1-score', 'support','accuracy']\n list_all=list(metrics_summary)\n list_all.append(cm.diagonal())\n class_report_df = pd.DataFrame(\n list_all,\n index=metrics_sum_index)\n\n support = class_report_df.loc['support']\n total = support.sum() \n avg[-2] = total\n\n class_report_df['avg / total'] = avg\n\n return class_report_df.T",
"_____no_output_____"
],
[
"from commen_preprocess import *",
".....start_cleaning.........\nhashtag britain exit hashtag rape refugee\n"
],
[
"from sklearn.metrics import accuracy_score\nimport joblib\nfrom sklearn.model_selection import StratifiedKFold as skf\n\n\n###all classifier \nfrom catboost import CatBoostClassifier\nfrom xgboost.sklearn import XGBClassifier\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import tree\nfrom sklearn import neighbors\nfrom sklearn import ensemble\nfrom sklearn import neural_network\nfrom sklearn import linear_model\nimport lightgbm as lgbm\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import LogisticRegression\nfrom lightgbm import LGBMClassifier\nfrom nltk.classify.scikitlearn import SklearnClassifier\n",
"_____no_output_____"
],
[
"eng_train_dataset = pd.read_csv('../Data/hindi_dataset/hindi_dataset.tsv', sep='\\t')\n",
"_____no_output_____"
],
[
"eng_train_dataset.head()",
"_____no_output_____"
],
[
"l=eng_train_dataset['task_1'].value_counts()\nprint(\"the total dataset size:\",len(eng_train_dataset),'\\n',l)",
"the total dataset size: 4665 \n HOF 2469\nNOT 2196\nName: task_1, dtype: int64\n"
],
[
"import numpy as np\nfrom tqdm import tqdm\nimport pickle\n####loading laser embeddings for english dataset\ndef load_laser_embeddings():\n dim = 1024\n engX_commen = np.fromfile(\"../Data/hindi_dataset/embeddings_hin_task1_commen.raw\", dtype=np.float32, count=-1) \n engX_lib = np.fromfile(\"../Data/hindi_dataset/embeddings_hin_task1_lib.raw\", dtype=np.float32, count=-1) \n engX_commen.resize(engX_commen.shape[0] // dim, dim) \n engX_lib.resize(engX_lib.shape[0] // dim, dim) \n return engX_commen,engX_lib\n \ndef load_bert_embeddings():\n file = open('../Data/hindi_dataset/no_preprocess_bert_embed_task1.pkl', 'rb')\n embeds = pickle.load(file)\n return np.array(embeds)\n \ndef merge_feature(*args):\n feat_all=[]\n print(args[0].shape)\n for i in tqdm(range(args[0].shape[0])):\n feat=[]\n for arg in args:\n feat+=list(arg[i])\n feat_all.append(feat)\n return feat_all\n ",
"_____no_output_____"
],
[
"convert_label={\n 'HOF':1,\n 'NOT':0\n}\n\n\nconvert_reverse_label={\n 1:'HOF',\n 0:'NOT'\n}\n",
"_____no_output_____"
],
[
"labels=eng_train_dataset['task_1'].values\nengX_commen,engX_lib=load_laser_embeddings()\nbert_embeds =load_bert_embeddings()",
"_____no_output_____"
],
[
"feat_all=merge_feature(engX_commen,engX_lib,bert_embeds)\n#feat_all=merge_feature(engX_lib)\n\n\n# feat_all=[]\n# for i in range(len(labels)):\n# feat=list(engX_commen[i])+list(engX_lib[i])\n# feat_all.append(feat)",
" 7%|▋ | 314/4665 [00:00<00:01, 3132.91it/s]"
],
[
"len(feat_all[0])",
"_____no_output_____"
],
[
"from sklearn.utils.multiclass import type_of_target\n\nClassifier_Train_X=np.array(feat_all)\nlabels_int=[]\nfor i in range(len(labels)):\n labels_int.append(convert_label[labels[i]])\n\nClassifier_Train_Y=np.array(labels_int,dtype='float64')\n ",
"_____no_output_____"
],
[
"print(type_of_target(Classifier_Train_Y))\nClassifier_Train_Y",
"binary\n"
],
[
"\n",
"_____no_output_____"
],
[
"def train_model_no_ext(Classifier_Train_X,Classifier_Train_Y,model_type,save_model=False):\n kf = skf(n_splits=10,shuffle=True)\n y_total_preds=[] \n y_total=[]\n count=0\n img_name = 'cm.png'\n report_name = 'report.csv'\n \n scale=list(Classifier_Train_Y).count(0)/list(Classifier_Train_Y).count(1)\n print(scale)\n \n if(save_model==True):\n Classifier=get_model(scale,m_type=model_type)\n Classifier.fit(Classifier_Train_X,Classifier_Train_Y)\n filename = model_type+'_hin_task_1.joblib.pkl'\n joblib.dump(Classifier, filename, compress=9)\n# filename1 = model_name+'select_features_eng_task1.joblib.pkl'\n# joblib.dump(model_featureSelection, filename1, compress=9)\n else:\n for train_index, test_index in kf.split(Classifier_Train_X,Classifier_Train_Y):\n X_train, X_test = Classifier_Train_X[train_index], Classifier_Train_X[test_index]\n y_train, y_test = Classifier_Train_Y[train_index], Classifier_Train_Y[test_index]\n\n classifier=get_model(scale,m_type=model_type)\n print(type(y_train))\n classifier.fit(X_train,y_train)\n y_preds = classifier.predict(X_test)\n for ele in y_test:\n y_total.append(ele)\n for ele in y_preds:\n y_total_preds.append(ele)\n y_pred_train = classifier.predict(X_train)\n print(y_pred_train)\n print(y_train)\n count=count+1 \n print('accuracy_train:',accuracy_score(y_train, y_pred_train),'accuracy_test:',accuracy_score(y_test, y_preds))\n print('TRAINING:')\n print(classification_report( y_train, y_pred_train ))\n print(\"TESTING:\")\n print(classification_report( y_test, y_preds ))\n\n report = classification_report( y_total, y_total_preds )\n cm=confusion_matrix(y_total, y_total_preds)\n plt=plot_confusion_matrix(cm,normalize= True,target_names = ['NOT','HOF'],title = \"Confusion Matrix\")\n plt.savefig('hin_task1'+model_type+'_'+img_name)\n print(classifier)\n print(report)\n print(accuracy_score(y_total, y_total_preds))\n df_result=pandas_classification_report(y_total,y_total_preds)\n df_result.to_csv('hin_task1'+model_type+'_'+report_name, sep=',')\n",
"_____no_output_____"
],
[
"def get_model(scale,m_type=None):\n if not m_type:\n print(\"ERROR: Please specify a model type!\")\n return None\n if m_type == 'decision_tree_classifier':\n logreg = tree.DecisionTreeClassifier(max_features=1000,max_depth=3,class_weight='balanced')\n elif m_type == 'gaussian':\n logreg = GaussianNB()\n elif m_type == 'logistic_regression':\n logreg = LogisticRegression(n_jobs=10, random_state=42,class_weight='balanced',solver='liblinear')\n elif m_type == 'MLPClassifier':\n# logreg = neural_network.MLPClassifier((500))\n logreg = neural_network.MLPClassifier((100),random_state=42,early_stopping=True)\n elif m_type == 'KNeighborsClassifier':\n# logreg = neighbors.KNeighborsClassifier(n_neighbors = 10)\n logreg = neighbors.KNeighborsClassifier()\n elif m_type == 'ExtraTreeClassifier':\n logreg = tree.ExtraTreeClassifier()\n elif m_type == 'ExtraTreeClassifier_2':\n logreg = ensemble.ExtraTreesClassifier()\n elif m_type == 'RandomForestClassifier':\n logreg = ensemble.RandomForestClassifier(n_estimators=100, class_weight='balanced', n_jobs=12, max_depth=7)\n elif m_type == 'SVC':\n #logreg = LinearSVC(dual=False,max_iter=200)\n logreg = SVC(kernel='linear',random_state=1526)\n elif m_type == 'Catboost':\n logreg = CatBoostClassifier(iterations=100,learning_rate=0.2,l2_leaf_reg=500,depth=10,use_best_model=False, random_state=42,scale_pos_weight=SCALE_POS_WEIGHT)\n# logreg = CatBoostClassifier(scale_pos_weight=0.8, random_seed=42,);\n elif m_type == 'XGB_classifier':\n# logreg=XGBClassifier(silent=False,eta=0.1,objective='binary:logistic',max_depth=5,min_child_weight=0,gamma=0.2,subsample=0.8, colsample_bytree = 0.8,scale_pos_weight=1,n_estimators=500,reg_lambda=3,nthread=12)\n logreg=XGBClassifier(silent=False,objective='binary:logistic',scale_pos_weight=SCALE_POS_WEIGHT,reg_lambda=3,nthread=12, random_state=42)\n elif m_type == 'light_gbm':\n logreg = LGBMClassifier(objective='binary',max_depth=3,learning_rate=0.2,num_leaves=20,scale_pos_weight=scale,boosting_type='gbdt',\n metric='binary_logloss',random_state=5,reg_lambda=20,silent=False)\n else:\n print(\"give correct model\")\n print(logreg)\n return logreg",
"_____no_output_____"
],
[
"models_name=['decision_tree_classifier','gaussian','logistic_regression','MLPClassifier','RandomForestClassifier',\n 'SVC','light_gbm']",
"_____no_output_____"
],
[
"for model in models_name:\n train_model_no_ext(Classifier_Train_X,Classifier_Train_Y,model)",
"_____no_output_____"
],
[
"train_model_no_ext(Classifier_Train_X,Classifier_Train_Y,models_name[-1],save_model=True)\n",
"0.8894289185905225\nLGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,\n importance_type='split', learning_rate=0.2, max_depth=3,\n metric='binary_logloss', min_child_samples=20,\n min_child_weight=0.001, min_split_gain=0.0, n_estimators=100,\n n_jobs=-1, num_leaves=20, objective='binary', random_state=5,\n reg_alpha=0.0, reg_lambda=20, scale_pos_weight=0.8894289185905225,\n silent=False, subsample=1.0, subsample_for_bin=200000,\n subsample_freq=0)\n"
],
[
"train_model_no_ext(Classifier_Train_X,Classifier_Train_Y,'SVC')\n",
"1.588235294117647\nSVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma='auto_deprecated',\n kernel='linear', max_iter=-1, probability=False, random_state=1526,\n shrinking=True, tol=0.001, verbose=False)\n<class 'numpy.ndarray'>\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0963c5fbe1eef71fd98d6722798e72362a233d1 | 2,217 | ipynb | Jupyter Notebook | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks | 6d5e7383b60cc7fd476f6e85ab93e239c9c32330 | [
"BSD-3-Clause"
] | null | null | null | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks | 6d5e7383b60cc7fd476f6e85ab93e239c9c32330 | [
"BSD-3-Clause"
] | null | null | null | notebooks/converted_notebooks/shifting_time_offset.ipynb | mabrahamdevops/python_notebooks | 6d5e7383b60cc7fd476f6e85ab93e239c9c32330 | [
"BSD-3-Clause"
] | null | null | null | 21.317308 | 147 | 0.548489 | [
[
[
"[](https://neutronimaging.pages.ornl.gov/tutorial/notebooks/shifting_time_offset/)\n\n<img src='__docs/__all/notebook_rules.png' />",
"_____no_output_____"
],
[
"# Select Your IPTS ",
"_____no_output_____"
]
],
[
[
"from __code.select_files_and_folders import SelectFiles, SelectFolder\nfrom __code.shifting_time_offset import ShiftTimeOffset\n\nfrom __code import system\nsystem.System.select_working_dir()\nfrom __code.__all import custom_style\ncustom_style.style()",
"_____no_output_____"
]
],
[
[
"# Select Folder",
"_____no_output_____"
]
],
[
[
"o_shift = ShiftTimeOffset()\no_select = SelectFolder(system=system, is_input_folder=True, next_function=o_shift.display_counts_vs_time)",
"_____no_output_____"
]
],
[
[
"# Repeat on other folders?",
"_____no_output_____"
]
],
[
[
"o_other_folders = SelectFolder(working_dir=o_shift.working_dir,\n is_input_folder=True,\n multiple_flags=True,\n next_function=o_shift.selected_other_folders)",
"_____no_output_____"
]
],
[
[
"# Output Images ",
"_____no_output_____"
]
],
[
[
"o_shift.offset_images()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0963f42f9b6b10d53ba6923187bef4e33c1bfac | 115,196 | ipynb | Jupyter Notebook | Malariya_infection_detection.ipynb | amystar101/Malariya-Infection-Detection | 28f58e08ec5e3a1c256855f7c62ace197a0ee712 | [
"Unlicense"
] | 2 | 2021-05-27T17:26:00.000Z | 2021-05-27T17:26:19.000Z | Malariya_infection_detection.ipynb | amystar101/Malariya-Infection-Detection | 28f58e08ec5e3a1c256855f7c62ace197a0ee712 | [
"Unlicense"
] | null | null | null | Malariya_infection_detection.ipynb | amystar101/Malariya-Infection-Detection | 28f58e08ec5e3a1c256855f7c62ace197a0ee712 | [
"Unlicense"
] | null | null | null | 257.133929 | 42,534 | 0.906125 | [
[
[
"# importing\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nimport os",
"_____no_output_____"
],
[
"# loading images\npath_dir = \"/content/drive/MyDrive/Dataset/malariya_cell_data_set/cell_images/\"\n\nloaded = 0\npath = path_dir+\"Uninfected/\"\nuninfected_list = os.listdir(path)\n\npath = path_dir + \"Parasitized\"\ninfected_list = os.listdir(path)",
"_____no_output_____"
],
[
"img = plt.imread(path_dir+\"Uninfected/\"+uninfected_list[0])\nplt.imshow(img)",
"_____no_output_____"
],
[
"img = plt.imread(path_dir+\"Parasitized/\"+infected_list[0])\nplt.imshow(img)",
"_____no_output_____"
],
[
"# Kearas implementation",
"_____no_output_____"
],
[
"print(\"uninfected count: \",len(os.listdir(path_dir+\"/Uninfected\")))\nprint(\"Parasitized count: \",len(os.listdir(path_dir+\"/Parasitized\")))",
"uninfected count: 13780\nParasitized count: 13789\n"
],
[
"dataGen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255.0,validation_split=0.2)",
"_____no_output_____"
],
[
"dataset_train = dataGen.flow_from_directory(path_dir,target_size=(128,128),batch_size=32,class_mode=\"binary\",shuffle=True,seed=10,subset=\"training\")\ndataset_test = dataGen.flow_from_directory(path_dir,target_size=(128,128),batch_size=32,class_mode=\"binary\",shuffle=True,seed=10,subset=\"validation\")",
"Found 22055 images belonging to 2 classes.\nFound 5512 images belonging to 2 classes.\n"
],
[
"# printintg the loaded classes\ndataset_train",
"_____no_output_____"
],
[
"# designing the model\nmodel = tf.keras.Sequential()\n\nmodel.add(tf.keras.layers.Conv2D(32,(3,3),padding=\"same\",input_shape=(128,128,3),activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding=\"same\"))\n\nmodel.add(tf.keras.layers.Conv2D(64,(3,3),strides=(1,1),padding=\"same\",input_shape=(128,128,3),activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.MaxPool2D(strides=2,padding=\"same\"))\n\nmodel.add(tf.keras.layers.Conv2D(128,(3,3),strides=(1,1),padding=\"same\",input_shape=(128,128,3),activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.MaxPool2D(strides=2,padding=\"same\"))\n\nmodel.add(tf.keras.layers.Conv2D(256,3,strides=(1,1),padding=\"same\",input_shape=(128,128,3),activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.MaxPool2D(strides=2,padding=\"same\"))\n\nmodel.add(tf.keras.layers.Flatten())\n\nmodel.add(tf.keras.layers.Dense(128,activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.Dropout(0.2))\n\nmodel.add(tf.keras.layers.Dense(64,activation=tf.keras.layers.LeakyReLU()))\nmodel.add(tf.keras.layers.Dropout(0.2))\n\nmodel.add(tf.keras.layers.Dense(1,activation='sigmoid'))\n",
"_____no_output_____"
],
[
"# compiling the model\nmodel.compile(optimizer=\"adam\",loss=\"binary_crossentropy\",metrics=[\"accuracy\"])",
"_____no_output_____"
],
[
"# model summary\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 128, 128, 32) 896 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 64, 64, 32) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 64, 64, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 32, 32, 64) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 32, 32, 128) 73856 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 16, 16, 128) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 16, 16, 256) 295168 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 8, 8, 256) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 16384) 0 \n_________________________________________________________________\ndense (Dense) (None, 128) 2097280 \n_________________________________________________________________\ndropout (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 64) 8256 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 64) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 65 \n=================================================================\nTotal params: 2,494,017\nTrainable params: 2,494,017\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# defining early stoping\nearly_stop = tf.keras.callbacks.EarlyStopping(monitor=\"val_loss\",patience=2,verbose=1)",
"_____no_output_____"
],
[
"model_history = model.fit(dataset_train,epochs=20,callbacks=early_stop,validation_data=dataset_test)",
"Epoch 1/20\n690/690 [==============================] - 92s 123ms/step - loss: 0.4031 - accuracy: 0.7927 - val_loss: 0.1806 - val_accuracy: 0.9445\nEpoch 2/20\n690/690 [==============================] - 85s 123ms/step - loss: 0.1460 - accuracy: 0.9566 - val_loss: 0.1883 - val_accuracy: 0.9459\nEpoch 3/20\n690/690 [==============================] - 84s 122ms/step - loss: 0.1453 - accuracy: 0.9556 - val_loss: 0.1593 - val_accuracy: 0.9452\nEpoch 4/20\n690/690 [==============================] - 85s 123ms/step - loss: 0.1248 - accuracy: 0.9592 - val_loss: 0.1941 - val_accuracy: 0.9454\nEpoch 5/20\n690/690 [==============================] - 84s 122ms/step - loss: 0.1063 - accuracy: 0.9647 - val_loss: 0.2351 - val_accuracy: 0.9409\nEpoch 00005: early stopping\n"
],
[
"plt.plot(model_history.history[\"accuracy\"])\nplt.plot(model_history.history[\"val_accuracy\"])\nplt.title(\"Model Accuracy\")\nplt.xlabel('epochs')\nplt.ylabel('acuracy')\nplt.show()",
"_____no_output_____"
],
[
"model.save(\"./malariya_classification_acc94.h5\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d09646484fa2ace908fd5ffebf877e38e4dd6877 | 2,447 | ipynb | Jupyter Notebook | locale/examples/03-widgets/spline-widget.ipynb | tkoyama010/pyvista-doc-translations | 23bb813387b7f8bfe17e86c2244d5dd2243990db | [
"MIT"
] | 4 | 2020-08-07T08:19:19.000Z | 2020-12-04T09:51:11.000Z | locale/examples/03-widgets/spline-widget.ipynb | tkoyama010/pyvista-doc-translations | 23bb813387b7f8bfe17e86c2244d5dd2243990db | [
"MIT"
] | 19 | 2020-08-06T00:24:30.000Z | 2022-03-30T19:22:24.000Z | locale/examples/03-widgets/spline-widget.ipynb | tkoyama010/pyvista-doc-translations | 23bb813387b7f8bfe17e86c2244d5dd2243990db | [
"MIT"
] | 1 | 2021-03-09T07:50:40.000Z | 2021-03-09T07:50:40.000Z | 37.646154 | 598 | 0.572538 | [
[
[
"%matplotlib inline\nfrom pyvista import set_plot_theme\nset_plot_theme('document')",
"_____no_output_____"
]
],
[
[
"\n# Spline Widget\n\n\nA spline widget can be enabled and disabled by the\n:func:`pyvista.WidgetHelper.add_spline_widget` and\n:func:`pyvista.WidgetHelper.clear_spline_widgets` methods respectively.\nThis widget allows users to interactively create a poly line (spline) through\na scene and use that spline.\n\nA common task with splines is to slice a volumetric dataset using an irregular\npath. To do this, we have added a convenient helper method which leverages the\n:func:`pyvista.DataSetFilters.slice_along_line` filter named\n:func:`pyvista.WidgetHelper.add_mesh_slice_spline`.\n\n",
"_____no_output_____"
]
],
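[
[
"The cell below is a minimal sketch (not taken from this example itself) of the generic spline widget API described above, assuming only a working PyVista plotting window: `add_spline_widget` passes the interactively drawn spline to a callback as a `PolyData` poly line, and `clear_spline_widgets` removes the widget again. The callback name `use_spline` and the handle count are arbitrary illustrative choices.",
"_____no_output_____"
],
[
"import pyvista as pv\n\nexample_mesh = pv.Wavelet()\npl = pv.Plotter()\npl.add_mesh(example_mesh.outline(), color='black')\n\ndef use_spline(spline):\n    # the widget passes the current spline as a pyvista.PolyData poly line\n    print(spline.n_points)\n\n# attach the interactive spline widget; the callback runs whenever a handle is moved\npl.add_spline_widget(use_spline, n_handles=4)\n# pl.clear_spline_widgets()  # removes the widget when it is no longer needed\npl.show()",
"_____no_output_____"
]
],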
[
[
"import pyvista as pv\nimport numpy as np",
"_____no_output_____"
],
[
"mesh = pv.Wavelet()\n\n# initial spline to seed the example\npoints = np.array([[-8.64208925, -7.34294559, -9.13803458],\n [-8.25601497, -2.54814702, 0.93860914],\n [-0.30179377, -3.21555997, -4.19999019],\n [ 3.24099167, 2.05814768, 3.39041509],\n [ 4.39935227, 4.18804542, 8.96391132]])\n\np = pv.Plotter()\np.add_mesh(mesh.outline(), color='black')\np.add_mesh_slice_spline(mesh, initial_points=points, n_handles=5)\np.camera_position = [(30, -42, 30),\n (0.0, 0.0, 0.0),\n (-0.09, 0.53, 0.84)]\np.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d09646ed3f1dfc7bfe6acd857ab52b9becd626b9 | 302,148 | ipynb | Jupyter Notebook | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT | b0ccafda34d1ea2b1187186081ed50f17c10ba7f | [
"BSD-3-Clause"
] | null | null | null | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT | b0ccafda34d1ea2b1187186081ed50f17c10ba7f | [
"BSD-3-Clause"
] | null | null | null | docs/notebooks/neuralnet/neural_network_formulations.ipynb | fracek/OMLT | b0ccafda34d1ea2b1187186081ed50f17c10ba7f | [
"BSD-3-Clause"
] | null | null | null | 155.505919 | 81,072 | 0.810268 | [
[
[
"# Using Neural Network Formulations in OMLT\n\nIn this notebook we show how OMLT can be used to build different optimization formulations of neural networks within Pyomo. It specifically demonstrates the following examples:<br>\n1.) A neural network with smooth sigmoid activation functions represented using full-space and reduced-space formulations <br>\n2.) A neural network with non-smooth ReLU activation functions represented using complementarity and mixed integer formulations <br>\n3.) A neural network with mixed ReLU and sigmoid activation functions represented using complementarity (for ReLU) and full-space (for sigmoid) formulations <br>\n<br>\nAfter building the OMLT formulations, we minimize each representation of the function and compare the results.",
"_____no_output_____"
],
[
"## Library Setup\nThis notebook assumes you have a working Tensorflow environment in addition to the necessary Python packages described here. We use Keras to train the neural networks of interest for our example, which requires the Python Tensorflow package. The neural networks are then formulated in Pyomo using OMLT, which therefore requires working Pyomo and OMLT installations.\n\nThe required Python libraries used in this notebook are as follows: <br>\n- `pandas`: used for data import and management <br>\n- `matplotlib`: used for plotting the results in this example\n- `tensorflow`: the machine learning framework we use to train our neural networks\n- `pyomo`: an algebraic modeling language for Python; it is used to define the optimization model passed to the solver\n- `omlt`: the package this notebook demonstrates. OMLT can formulate machine learning models (such as neural networks) within Pyomo",
"_____no_output_____"
]
],
[
[
"#Start by importing the following libraries\n#data manipulation and plotting\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\nmatplotlib.rc('font', size=24)\nplt.rc('axes', titlesize=24)\n\n#tensorflow objects\nfrom tensorflow.keras.models import Sequential, Model\nfrom tensorflow.keras.layers import Dense, Input\nfrom tensorflow.keras.optimizers import Adam\n\n#pyomo for optimization\nimport pyomo.environ as pyo\n\n#omlt for interfacing our neural network with pyomo\nfrom omlt import OmltBlock\nfrom omlt.neuralnet import NetworkDefinition, NeuralNetworkFormulation, ReducedSpaceNeuralNetworkFormulation\nfrom omlt.neuralnet.activations import ComplementarityReLUActivation\nfrom omlt.io import keras_reader\nimport omlt",
"_____no_output_____"
]
],
[
[
"## Import the Data",
"_____no_output_____"
],
[
"We begin by training neural networks that learn from the data in the imported dataframe below. In practice, this data could represent the output of a simulation, real sensor measurements, or some other external data source. The data contains a single input `x` and a single output `y`, with 10,000 total samples.",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"../data/sin_quadratic.csv\",index_col=[0]);",
"_____no_output_____"
]
],
[
[
"The data we use for training is plotted below (on the left figure). We also scale the training data to a mean of zero with unit standard deviation. The scaled inputs and outputs are added to the dataframe and plotted next to the original data values (on the right).",
"_____no_output_____"
]
],
[
[
"#retrieve input 'x' and output 'y' from the dataframe\nx = df[\"x\"]\ny = df[\"y\"]\n\n#calculate mean and standard deviation, add scaled 'x' and scaled 'y' to the dataframe\nmean_data = df.mean(axis=0)\nstd_data = df.std(axis=0)\ndf[\"x_scaled\"] = (df['x'] - mean_data['x']) / std_data['x']\ndf[\"y_scaled\"] = (df['y'] - mean_data['y']) / std_data['y']\n\n#create plots for unscaled and scaled data\nf, (ax1, ax2) = plt.subplots(1, 2,figsize = (16,8))\n\nax1.plot(x, y)\nax1.set_xlabel(\"x\")\nax1.set_ylabel(\"y\");\nax1.set_title(\"Training Data\")\n\nax2.plot(df[\"x_scaled\"], df[\"y_scaled\"])\nax2.set_xlabel(\"x_scaled\")\nax2.set_ylabel(\"y_scaled\");\nax2.set_title(\"Scaled Training Data\")\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## Train the Neural Networks\nAfter importing the dataset we use Tensorflow (with Keras) to train three neural network models. Each neural network contains 2 hidden layers with 100 nodes per layer, followed by a single output layer. <br>\n1.) The first network (`nn1`) uses sigmoid activation functions for both layers.<br>\n2.) The second network (`nn2`) uses ReLU activations for both layers.<br>\n3.) The last network (`nn3`) mixes ReLU and sigmoid activation functions: the first layer is sigmoid, the second layer is ReLU. <br>\nWe use the Adam optimizer and train the first two neural networks for 50 epochs. We train `nn3` for 150 epochs since we observe difficulty obtaining a good fit with the mixed network.",
"_____no_output_____"
]
],
[
[
"#sigmoid neural network\nnn1 = Sequential(name='sin_wave_sigmoid')\nnn1.add(Input(1))\nnn1.add(Dense(100, activation='sigmoid'))\nnn1.add(Dense(100, activation='sigmoid'))\nnn1.add(Dense(1))\nnn1.compile(optimizer=Adam(), loss='mse')\n\n#relu neural network\nnn2 = Sequential(name='sin_wave_relu')\nnn2.add(Input(1))\nnn2.add(Dense(100, activation='relu'))\nnn2.add(Dense(100, activation='relu'))\nnn2.add(Dense(1))\nnn2.compile(optimizer=Adam(), loss='mse')\n\n#mixed neural network\nnn3 = Sequential(name='sin_wave_mixed')\nnn3.add(Input(1))\nnn3.add(Dense(100, activation='sigmoid'))\nnn3.add(Dense(100, activation='relu'))\nnn3.add(Dense(1))\nnn3.compile(optimizer=Adam(), loss='mse')",
"_____no_output_____"
],
[
"#train all three neural networks\nhistory1 = nn1.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=50)\nhistory2 = nn2.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=50)\nhistory3 = nn3.fit(x=df['x_scaled'], y=df['y_scaled'],verbose=1, epochs=150)",
"Epoch 1/50\n313/313 [==============================] - 1s 2ms/step - loss: 1.0197\nEpoch 2/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.9949\nEpoch 3/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.9749\nEpoch 4/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.7148\nEpoch 5/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.3070\nEpoch 6/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.2495\nEpoch 7/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.2226\nEpoch 8/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.2064\nEpoch 9/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.1886\nEpoch 10/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.1675\nEpoch 11/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.1411\nEpoch 12/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.1205\nEpoch 13/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.1049\nEpoch 14/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0952\nEpoch 15/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0891\nEpoch 16/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0846\nEpoch 17/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0819\nEpoch 18/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0780\nEpoch 19/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0742\nEpoch 20/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0669\nEpoch 21/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0592\nEpoch 22/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0508\nEpoch 23/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0423\nEpoch 24/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0328\nEpoch 25/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0244\nEpoch 26/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0160\nEpoch 27/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0098\nEpoch 28/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0058\nEpoch 29/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0036\nEpoch 30/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0024\nEpoch 31/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0019\nEpoch 32/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0016\nEpoch 33/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0015\nEpoch 34/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0013\nEpoch 35/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0013\nEpoch 36/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0014\nEpoch 37/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0012A: 1\nEpoch 38/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0011\nEpoch 39/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0011\nEpoch 40/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0011\nEpoch 41/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0010\nEpoch 42/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0010\nEpoch 43/50\n313/313 
[==============================] - 1s 4ms/step - loss: 9.4469e-04\nEpoch 44/50\n313/313 [==============================] - 1s 3ms/step - loss: 9.1601e-04\nEpoch 45/50\n313/313 [==============================] - 1s 3ms/step - loss: 9.2864e-04\nEpoch 46/50\n313/313 [==============================] - 1s 3ms/step - loss: 9.2708e-04\nEpoch 47/50\n313/313 [==============================] - 1s 2ms/step - loss: 9.0207e-04\nEpoch 48/50\n313/313 [==============================] - 1s 2ms/step - loss: 8.6175e-04\nEpoch 49/50\n313/313 [==============================] - 1s 2ms/step - loss: 8.6889e-04\nEpoch 50/50\n313/313 [==============================] - 1s 2ms/step - loss: 8.4783e-04\nEpoch 1/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.3035\nEpoch 2/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.1054\nEpoch 3/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0889\nEpoch 4/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0765\nEpoch 5/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0719\nEpoch 6/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0698\nEpoch 7/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0689\nEpoch 8/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0667\nEpoch 9/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0680\nEpoch 10/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0670\nEpoch 11/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0665\nEpoch 12/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0651\nEpoch 13/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0584\nEpoch 14/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0439\nEpoch 15/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0230\nEpoch 16/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0091\nEpoch 17/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0040\nEpoch 18/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0019\nEpoch 19/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0016\nEpoch 20/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0013\nEpoch 21/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0014\nEpoch 22/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0014\nEpoch 23/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0013\nEpoch 24/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0014\nEpoch 25/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0015\nEpoch 26/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0013\nEpoch 27/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0012\nEpoch 28/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0012\nEpoch 29/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0012\nEpoch 30/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0014\nEpoch 31/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0012\nEpoch 32/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0013\nEpoch 33/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0012\nEpoch 34/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0012\nEpoch 35/50\n313/313 
[==============================] - 1s 2ms/step - loss: 0.0011\nEpoch 36/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0010\nEpoch 37/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0011\nEpoch 38/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0012\nEpoch 39/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0012\nEpoch 40/50\n313/313 [==============================] - 1s 2ms/step - loss: 0.0013\nEpoch 41/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0011\nEpoch 42/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0011\nEpoch 43/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0011\nEpoch 44/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0011\nEpoch 45/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0010\nEpoch 46/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0011\nEpoch 47/50\n313/313 [==============================] - 1s 3ms/step - loss: 0.0010\nEpoch 48/50\n313/313 [==============================] - 1s 4ms/step - loss: 0.0010\nEpoch 49/50\n313/313 [==============================] - 2s 6ms/step - loss: 0.0011\nEpoch 50/50\n313/313 [==============================] - 2s 5ms/step - loss: 0.0010\nEpoch 1/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.9082\nEpoch 2/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.3519\nEpoch 3/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.2221\nEpoch 4/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1921\nEpoch 5/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1861\nEpoch 6/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1855\nEpoch 7/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1876\nEpoch 8/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1834\nEpoch 9/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1845\nEpoch 10/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.1874\nEpoch 11/150\n313/313 [==============================] - 2s 5ms/step - loss: 0.1827\nEpoch 12/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1810\nEpoch 13/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1837\nEpoch 14/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.1837\nEpoch 15/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1817\nEpoch 16/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1785\nEpoch 17/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1801\nEpoch 18/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1745\nEpoch 19/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1732\nEpoch 20/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1670\nEpoch 21/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1593\nEpoch 22/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.1529\nEpoch 23/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.1430\nEpoch 24/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1325\nEpoch 25/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1229\nEpoch 26/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1184\nEpoch 27/150\n313/313 
[==============================] - 1s 2ms/step - loss: 0.1151\nEpoch 28/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.1099\nEpoch 29/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.1064\nEpoch 30/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0950\nEpoch 31/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0754\nEpoch 32/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0585\nEpoch 33/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0461\nEpoch 34/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0385\nEpoch 35/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0326\nEpoch 36/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0302\nEpoch 37/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0264\nEpoch 38/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0267\nEpoch 39/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0243\nEpoch 40/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0256\nEpoch 41/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0251\nEpoch 42/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0252\nEpoch 43/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0233\nEpoch 44/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0237\nEpoch 45/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0230\nEpoch 46/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0218\nEpoch 47/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0223\nEpoch 48/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0212\nEpoch 49/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0215\nEpoch 50/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0211A: 0s - los\nEpoch 51/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0196\nEpoch 52/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0202\nEpoch 53/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0189\nEpoch 54/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0186\nEpoch 55/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0185\nEpoch 56/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0176\nEpoch 57/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0174\nEpoch 58/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0168\nEpoch 59/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0165\nEpoch 60/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0160\nEpoch 61/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0154\nEpoch 62/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0147\nEpoch 63/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0142\nEpoch 64/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0137\nEpoch 65/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0107\nEpoch 66/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0080\nEpoch 67/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0056\nEpoch 68/150\n313/313 [==============================] - 1s 2ms/step - loss: 
0.0042\nEpoch 69/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0036\nEpoch 70/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0030\nEpoch 71/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0030\nEpoch 72/150\n313/313 [==============================] - 1s 5ms/step - loss: 0.0029\nEpoch 73/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0031\nEpoch 74/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0029\nEpoch 75/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0031\nEpoch 76/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0028\nEpoch 77/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0027\nEpoch 78/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0039\nEpoch 79/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0034\nEpoch 80/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0032\nEpoch 81/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0029\nEpoch 82/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0036\nEpoch 83/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0026\nEpoch 84/150\n313/313 [==============================] - 1s 2ms/step - loss: 0.0035\nEpoch 85/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0032\nEpoch 86/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0033\nEpoch 87/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0032\nEpoch 88/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0030\nEpoch 89/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0025\nEpoch 90/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0032\nEpoch 91/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0028\nEpoch 92/150\n313/313 [==============================] - 2s 5ms/step - loss: 0.0026\nEpoch 93/150\n313/313 [==============================] - 1s 5ms/step - loss: 0.0031\nEpoch 94/150\n313/313 [==============================] - 1s 4ms/step - loss: 0.0027\nEpoch 95/150\n313/313 [==============================] - 2s 5ms/step - loss: 0.0028\nEpoch 96/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0030\nEpoch 97/150\n313/313 [==============================] - 1s 3ms/step - loss: 0.0023\nEpoch 98/150\n"
]
],
[
[
"## Check the predictions\nBefore we formulate our trained neural networks in OMLT, we check that they adequately represent the data. While we would normally compute a quantitative accuracy measure, a visual plot of the fits suffices here (a simple numerical check is sketched after the plot below).",
"_____no_output_____"
]
],
[
[
"#note: we calculate the unscaled output for each neural network to check the predictions\n#nn1\ny_predict_scaled_sigmoid = nn1.predict(x=df['x_scaled'])\ny_predict_sigmoid = y_predict_scaled_sigmoid*(std_data['y']) + mean_data['y']\n\n#nn2\ny_predict_scaled_relu = nn2.predict(x=df['x_scaled'])\ny_predict_relu = y_predict_scaled_relu*(std_data['y']) + mean_data['y']\n\n#nn3\ny_predict_scaled_mixed = nn3.predict(x=df['x_scaled'])\ny_predict_mixed = y_predict_scaled_mixed*(std_data['y']) + mean_data['y']",
"_____no_output_____"
],
[
"#create a single plot with the original data and each neural network's predictions\nfig,ax = plt.subplots(1,figsize = (8,8))\nax.plot(x,y,linewidth = 3.0,label = \"data\", alpha = 0.5)\nax.plot(x,y_predict_relu,linewidth = 3.0,linestyle=\"dotted\",label = \"relu\")\nax.plot(x,y_predict_sigmoid,linewidth = 3.0,linestyle=\"dotted\",label = \"sigmoid\")\nax.plot(x,y_predict_mixed,linewidth = 3.0,linestyle=\"dotted\",label = \"mixed\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.legend();",
"_____no_output_____"
]
],
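[
[
"As a rough numerical check to complement the plot above, the following cell (a sketch added for illustration; it is not required by OMLT) computes the mean squared error of each network's unscaled predictions using NumPy.",
"_____no_output_____"
],
[
"import numpy as np\n\n# mean squared error of each network's unscaled predictions against the training data\nfor name, pred in [('sigmoid', y_predict_sigmoid), ('relu', y_predict_relu), ('mixed', y_predict_mixed)]:\n    mse = np.mean((y.values - pred.ravel())**2)\n    print(f'{name} MSE: {mse:.4f}')",
"_____no_output_____"
]
],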
[
[
"## Formulating Neural Networks with OMLT\nWe now show how OMLT can formulate neural networks within Pyomo. We specifically show how to specify and build different neural network optimization formulations and how to connect them with a broader Pyomo model. In these examples we use Pyomo solvers to find the input that minimizes each neural network output.\n<br><br>\nOMLT can formulate what we call full-space and reduced-space neural network representations using the `NeuralNetworkFormulation` object (for full-space) and `ReducedSpaceNeuralNetworkFormulation` object (for reduced-space). The reduced-space representation can be represented more compactly than the full-space within an optimization setting (i.e. it produces less variables and constraints), but we will see that full-space representation is necessary to represent non-smooth activation formulations (e.g. ReLU with binary variables).\n\n### Reduced Space (supports smooth activations) <br>\nThe reduced-space representation (`ReducedSpaceNeuralNetworkFormulation`) provided by OMLT hides intermediate neural network variables and activation functions from the underlying optimizer and represents the neural network using one constraint as following:\n\n$\\hat{y} = N(x)$\n\nHere, $\\hat{y}$ is a vector of outputs from the neural network, $x$ is a vector of inputs, and $N(\\cdot)$ represents the encoded neural network function that internally uses weights, biases, and activation functions to map $x \\rightarrow \\hat{y}$. From an implementation standpoint, OMLT builds the reduced-space formulation by encoding the sequential layer logic and activation functions as Pyomo `Expression` objects that depend only on the input variables.\n\n### Full Space (supports smooth and non-smooth activations) <br>\nThe full space formulation (`NeuralNetworkFormulation`) creates intermediate variables associated with the neural network nodes and activation functions and exposes them to the optimizer. This is represented by the following set of equations where $x$ and $\\hat{y}$ are again the neural network input and output vectors, and we introduce $\\hat{z}_{\\ell}$ and $z_{\\ell}$ to represent pre-activation and post-activation vectors for each each layer $\\ell$. We further use the notation $\\hat z_{\\ell,i}$ to denote node $i$ in layer $\\ell$ where $N_\\ell$ is the number of nodes in layer $\\ell$ and $N_L$ is the number of layers in the neural network. As such, the first equation maps the input to the first layer values $z_0$, the second equation represents the pre-activation values obtained from the weights, biases, and outputs of the previous layer, the third equation applies the activation function, and the last equation maps the final layer to the output. Note that the reduced-space formulation effectively captures these equations using a single constraint.\n\n$\\begin{align*}\n& x = z_0 &\\\\\n& \\hat z_{\\ell,i} = \\sum_{j{=}1}^{N_{\\ell-1}} w_{ij} z_j + b_i & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\} \\\\\n& z_{\\ell,i} = \\sigma(\\hat z_{\\ell}) & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\} \\\\\n& \\hat{y} = z_{N_L} &\n\\end{align*}\n$\n\n### Full Space ReLU with Binary Variables\nThe full space formulation supports non-smooth ReLU activation functions (i.e. the function $z_i = max(0,\\hat{z}_i)$) by using binary indicator variables. 
When using `NeuralNetworkFormulation` with a neural network that contains ReLU activations, OMLT will formulate the below set of variables and constraints for each node in a ReLU layer. Here, $q_{\\ell,i}$ is a binary indicator variable that determines whether the output from node $i$ on layer $\\ell$ is $0$ or whether it is $\\hat{z}_{\\ell,i}$. $M_{\\ell,i}^U$ and $M_{\\ell,i}^L$ are 'BigM' constants used to enforce the ReLU logic. Values for 'BigM' are often taken to be arbitrarily large numbers, but OMLT will automatically determine values by propagating the bounds on the input variables.\n\n$\n\\begin{align*}\n& z_{\\ell,i} \\ge \\hat{z}_{\\ell,i} & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\}\\\\\n& z_{\\ell,i} \\ge 0 & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\}\\\\\n& z_{\\ell,i} \\le M_{\\ell,i}^L q_{\\ell,i} & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\} \\\\\n& z_{\\ell,i} \\le \\hat{z}_{\\ell,i} - M_{\\ell,i}^U(1-q_{\\ell,i}) & \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\}\n\\end{align*} \n$\n\n\n### Full Space ReLU with Complementarity Constraints\nReLU activation functions can also be represented using the following complementarity condition:\n$\n\\begin{align*}\n0 \\le (z_{\\ell,i} - \\hat{z}_{\\ell,i}) \\perp z_{\\ell,i} \\ge 0 & \\quad \\forall i \\in \\{1,...,N_\\ell \\}, \\quad \\ell \\in \\{1,...N_L\\}\n\\end{align*}\n$\n\nThis condition means that both of the expressions must be satisfied, where exactly one expression must be satisfied with equality. Hence, we must have that $z_{\\ell,i} \\ge \\hat{z}_{\\ell,i}$ and $z_{\\ell,i} \\ge 0$ with either $z_{\\ell,i} = \\hat{z}_{\\ell,i}$, or $z_{\\ell,i} = 0$.\n\nOMLT uses a `ComplementarityReLUActivation` object to specify that ReLU activation functions should be formulated using complementarity conditions. Within the formulation code, it uses `pyomo.mpec` to transform this complementarity condition into nonlinear constraints which facilitates using smooth optimization solvers (such as Ipopt) to optimize over ReLU activation functions.",
"_____no_output_____"
],
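[
"To make the ReLU constraints above concrete, the next cell is a small standalone sketch (purely illustrative; OMLT generates equivalent constraints automatically inside the `OmltBlock`) that writes both the big-M and the complementarity versions of $z = \\max(0, \\hat z)$ for a single node in Pyomo. The bounds $\\hat z \\in [-2, 2]$ are an assumption made only to fix the big-M values.",
"_____no_output_____"
],
[
"# illustrative only: hand-written ReLU logic for one node (OMLT builds these for every ReLU node)\nimport pyomo.environ as pyo\nfrom pyomo.mpec import Complementarity, complements\n\n# assumed (illustrative) bounds on the pre-activation value zhat of one ReLU node\nbigM_upper = 2.0   # M^L: largest value zhat can take (used in z <= M^L * q)\nbigM_lower = -2.0  # M^U: smallest value zhat can take (used in z <= zhat - M^U*(1-q))\n\n# big-M / binary-variable version of z = max(0, zhat)\nm_bigm = pyo.ConcreteModel()\nm_bigm.zhat = pyo.Var(bounds=(bigM_lower, bigM_upper))\nm_bigm.z = pyo.Var(bounds=(0, bigM_upper))\nm_bigm.q = pyo.Var(domain=pyo.Binary)\nm_bigm.relu_1 = pyo.Constraint(expr=m_bigm.z >= m_bigm.zhat)\nm_bigm.relu_2 = pyo.Constraint(expr=m_bigm.z <= bigM_upper * m_bigm.q)\nm_bigm.relu_3 = pyo.Constraint(expr=m_bigm.z <= m_bigm.zhat - bigM_lower * (1 - m_bigm.q))\n\n# complementarity version of the same logic (no binary variable)\nm_comp = pyo.ConcreteModel()\nm_comp.zhat = pyo.Var(bounds=(bigM_lower, bigM_upper))\nm_comp.z = pyo.Var(bounds=(0, bigM_upper))\nm_comp.relu = Complementarity(expr=complements(m_comp.z - m_comp.zhat >= 0, m_comp.z >= 0))",
"_____no_output_____"
],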
[
"## Solving Optimization Problems with Neural Networks using OMLT\n\nWe now show how to use the above neural network formulations in OMLT for our trained neural networks: `nn1`, `nn2`, and `nn3`. For each formulation we solve the simple optimization problem below using Pyomo where we find the input $x$ that minimizes the output $\\hat y$ of the neural network. \n\n$\n\\begin{align*} \n& \\min_x \\ \\hat{y}\\\\\n& s.t. \\hat{y} = N(x) \n\\end{align*}\n$\n\nFor each neural network we trained, we instantiate a Pyomo `ConcreteModel` and create variables that represent the neural network input $x$ and output $\\hat y$. We also create an objective function that seeks to minimize the output $\\hat y$.\n\nEach example uses the same general workflow:\n- Use the `keras_reader` to import the neural network into an OMLT `NetworkDefinition` object.\n- Create a Pyomo model with variables `x` and `y` where we intend to minimize `y`.\n- Create an `OmltBlock`.\n- Create a formulation object. Note that we use `ReducedSpaceNeuralNetworkFormulation` for the reduced-space formulation and `NeuralNetworkFormulation` for the full-space and ReLU formulations. \n- Build the formulation object on the `OmltBlock`.\n- Add constraints connecting `x` to the neural network input and `y` to the neural network output.\n- Solve with an optimization solver (this example uses Ipopt).\n- Query the solution.\n\nWe also print model size and solution time following each cell where we optimize the Pyomo model. ",
"_____no_output_____"
],
[
"### Setup scaling and input bounds\nWe assume that our Pyomo model operates in the unscaled space with respect to our neural network inputs and outputs. We additionally assume input bounds to our neural networks are given by the limits of our training data. \n\nTo handle this, OMLT can be given scaling information (in the form of an OMLT scaling object) and input bounds (in the form of a dictionary whose keys are neural network input indices and whose values are 2-length tuples of lower and upper bounds). This keeps the optimization problem in the original, unscaled space while OMLT applies the scaling internally. The scaling object and input bounds are passed to the keras reader method `load_keras_sequential` when importing the associated neural networks. ",
"_____no_output_____"
]
],
[
[
"#create an omlt scaling object\nscaler = omlt.scaling.OffsetScaling(offset_inputs=[mean_data['x']],\n factor_inputs=[std_data['x']],\n offset_outputs=[mean_data['y']],\n factor_outputs=[std_data['y']])\n\n#create the input bounds. note that the key `0` corresponds to input `0` and that we also scale the input bounds\ninput_bounds={0:((min(df['x']) - mean_data['x'])/std_data['x'],\n (max(df['x']) - mean_data['x'])/std_data['x'])};\nprint(scaler)\nprint(\"Scaled input bounds: \",input_bounds)",
"<omlt.scaling.OffsetScaling object at 0x7fdd940bc850>\nScaled input bounds: {0: (-1.731791015101997, 1.731791015101997)}\n"
]
],
[
[
"## Neural Network 1: Sigmoid Activations with Full-Space and Reduced-Space Formulations\nThe first neural network contains sigmoid activation functions which we formulate with full-space and reduced-space representations and solve with Ipopt.\n\n### Reduced Space Model\nWe begin with the reduced-space formulation and build the Pyomo model according to the above workflow. Note that the reduced-space model only contains 6 variables (`x` and `y` created on the Pyomo model, and the `OmltBlock` scaled and unscaled input and output which get created internally). The full-space formulation (shown next) will contain many more.",
"_____no_output_____"
]
],
[
[
"#create a network definition\nnet_sigmoid = keras_reader.load_keras_sequential(nn1,scaler,input_bounds)\n\n#create a pyomo model with variables x and y\nmodel1_reduced = pyo.ConcreteModel()\nmodel1_reduced.x = pyo.Var(initialize = 0)\nmodel1_reduced.y = pyo.Var(initialize = 0)\nmodel1_reduced.obj = pyo.Objective(expr=(model1_reduced.y))\n\n#create an OmltBlock\nmodel1_reduced.nn = OmltBlock()\n\n#use the reduced-space formulation\nformulation1_reduced = ReducedSpaceNeuralNetworkFormulation(net_sigmoid)\nmodel1_reduced.nn.build_formulation(formulation1_reduced)\n\n#connect pyomo variables to the neural network\n@model1_reduced.Constraint()\ndef connect_inputs(mdl):\n return mdl.x == mdl.nn.inputs[0]\n\n@model1_reduced.Constraint()\ndef connect_outputs(mdl):\n return mdl.y == mdl.nn.outputs[0]\n\n#solve the model and query the solution\nstatus_1_reduced = pyo.SolverFactory('ipopt').solve(model1_reduced, tee=True)\nsolution_1_reduced = (pyo.value(model1_reduced.x),pyo.value(model1_reduced.y))",
"Ipopt 3.13.3: \n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit https://github.com/coin-or/Ipopt\n******************************************************************************\n\nThis is Ipopt version 3.13.3, running with linear solver ma27.\n\nNumber of nonzeros in equality constraint Jacobian...: 10\nNumber of nonzeros in inequality constraint Jacobian.: 0\nNumber of nonzeros in Lagrangian Hessian.............: 1\n\nTotal number of variables............................: 6\n variables with only lower bounds: 0\n variables with lower and upper bounds: 2\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 5\nTotal number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 1.38e+00 3.79e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 -9.8893905e+00 1.02e+01 1.08e+01 -1.0 1.21e+01 - 4.57e-01 8.14e-01f 1\n 2 2.9626647e+00 1.13e-01 5.96e+00 -1.0 1.29e+01 - 1.57e-01 1.00e+00h 1\n 3 -2.4674035e+00 3.17e+00 1.81e+00 -1.0 5.43e+00 - 1.00e+00 1.00e+00f 1\n 4 -2.2644992e+00 2.77e+00 2.83e+02 -1.0 1.70e+00 2.0 1.00e+00 2.45e-01h 2\n 5 1.5487116e+00 6.43e-05 3.93e+00 -1.0 3.81e+00 - 1.00e+00 1.00e+00h 1\n 6 1.1507197e+00 1.39e-01 5.85e-01 -1.0 3.98e-01 - 1.00e+00 1.00e+00f 1\n 7 1.3383820e+00 1.61e-03 2.22e-03 -1.7 1.88e-01 - 1.00e+00 1.00e+00h 1\n 8 1.3404666e+00 4.98e-05 8.87e-05 -3.8 2.65e-03 - 1.00e+00 1.00e+00h 1\n 9 1.3405352e+00 1.20e-08 2.06e-08 -5.7 6.86e-05 - 1.00e+00 1.00e+00h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 10 1.3405352e+00 5.47e-13 9.46e-13 -8.6 2.79e-07 - 1.00e+00 1.00e+00h 1\n\nNumber of Iterations....: 10\n\n (scaled) (unscaled)\nObjective...............: 1.3405352390223917e+00 1.3405352390223917e+00\nDual infeasibility......: 9.4615009298140281e-13 9.4615009298140281e-13\nConstraint violation....: 5.4733995114020217e-13 5.4733995114020217e-13\nComplementarity.........: 2.5068055461199896e-09 2.5068055461199896e-09\nOverall NLP error.......: 2.5068055461199896e-09 2.5068055461199896e-09\n\n\nNumber of objective function evaluations = 13\nNumber of objective gradient evaluations = 11\nNumber of equality constraint evaluations = 13\nNumber of inequality constraint evaluations = 0\nNumber of equality constraint Jacobian evaluations = 11\nNumber of inequality constraint Jacobian evaluations = 0\nNumber of Lagrangian Hessian evaluations = 10\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.010\nTotal CPU secs in NLP function evaluations = 0.057\n\nEXIT: Optimal Solution Found.\n\b\b\b\b\b\b\b\b\b\b\b\b\b\b"
],
[
"#print out model size and solution values\nprint(\"Reduced Space Solution:\")\nprint(\"# of variables: \",model1_reduced.nvariables())\nprint(\"# of constraints: \",model1_reduced.nconstraints())\nprint(\"x = \", solution_1_reduced[0])\nprint(\"y = \", solution_1_reduced[1])\nprint(\"Solve Time: \", status_1_reduced['Solver'][0]['Time'])",
"Reduced Space Solution:\n# of variables: 6\n# of constraints: 5\nx = -1.4257385602216635\ny = 1.3405352390223917\nSolve Time: 0.16946029663085938\n"
]
],
[
[
"### Full Space Model\nFor the full-space representation we use `NeuralNetworkFormulation` instead of `ReducedSpaceNeuralNetworkFormulation`. The key difference is that this formulation creates additional variables and constraints to represent each node and activation function in the neural network.\n\nNote that when we print this model there are over 400 variables and over 400 constraints, owing to the number of neural network nodes. The solution consequently takes longer and requires more iterations (this effect is more pronounced for larger models). The full-space model also finds a different local minimum, but this was by no means guaranteed to happen. ",
"_____no_output_____"
]
],
[
[
"net_sigmoid = keras_reader.load_keras_sequential(nn1,scaler,input_bounds)\n\nmodel1_full = pyo.ConcreteModel()\nmodel1_full.x = pyo.Var(initialize = 0)\nmodel1_full.y = pyo.Var(initialize = 0)\nmodel1_full.obj = pyo.Objective(expr=(model1_full.y))\nmodel1_full.nn = OmltBlock()\n\nformulation2_full = NeuralNetworkFormulation(net_sigmoid)\nmodel1_full.nn.build_formulation(formulation2_full)\n\n@model1_full.Constraint()\ndef connect_inputs(mdl):\n return mdl.x == mdl.nn.inputs[0]\n\n@model1_full.Constraint()\ndef connect_outputs(mdl):\n return mdl.y == mdl.nn.outputs[0]\n\nstatus_1_full = pyo.SolverFactory('ipopt').solve(model1_full, tee=True)\nsolution_1_full = (pyo.value(model1_full.x),pyo.value(model1_full.y))",
"Ipopt 3.13.3: \n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit https://github.com/coin-or/Ipopt\n******************************************************************************\n\nThis is Ipopt version 3.13.3, running with linear solver ma27.\n\nNumber of nonzeros in equality constraint Jacobian...: 10815\nNumber of nonzeros in inequality constraint Jacobian.: 0\nNumber of nonzeros in Lagrangian Hessian.............: 200\n\nTotal number of variables............................: 409\n variables with only lower bounds: 0\n variables with lower and upper bounds: 405\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 408\nTotal number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 8.02e+00 6.90e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 -6.5996891e-02 8.01e+00 6.91e-02 -1.0 5.70e+01 - 1.12e-03 1.16e-03h 1\n 2 -6.8934363e-02 8.01e+00 3.83e+01 -1.0 3.89e+01 - 1.45e-03 8.82e-05h 1\n 3r-6.8934363e-02 8.01e+00 9.99e+02 0.9 0.00e+00 - 0.00e+00 3.03e-07R 3\n 4r-4.5021476e-02 7.72e+00 9.99e+02 0.9 9.30e+02 - 6.21e-04 3.34e-04f 1\n 5r 7.8780532e-04 7.49e+00 9.98e+02 0.9 3.63e+02 - 6.60e-04 6.84e-04f 1\n 6r 1.5183032e-01 6.90e+00 9.95e+02 0.9 2.88e+02 - 1.48e-03 2.68e-03f 1\n 7 1.4278978e-01 6.89e+00 3.93e+00 -1.0 4.18e+01 - 4.35e-05 2.16e-04f 1\n 8 1.3899788e-01 6.89e+00 4.00e+00 -1.0 4.01e+01 - 2.51e-04 1.49e-04h 1\n 9r 1.3899788e-01 6.89e+00 9.99e+02 0.8 0.00e+00 - 0.00e+00 4.53e-07R 3\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 10r 1.4796983e-01 6.87e+00 9.99e+02 0.8 2.27e+02 - 3.78e-03 2.19e-04f 1\n 11r 2.7820045e-01 5.99e+00 9.94e+02 0.8 2.82e+02 - 4.50e-03 5.22e-03f 1\n 12 2.5407474e-01 5.99e+00 2.02e+00 -1.0 2.92e+01 - 2.46e-03 8.26e-04h 1\n 13 2.5341839e-01 5.99e+00 4.19e+02 -1.0 3.49e+01 - 3.96e-03 3.88e-05h 1\n 14r 2.5341839e-01 5.99e+00 9.99e+02 0.8 0.00e+00 - 0.00e+00 4.14e-07R 3\n 15r 2.5325143e-01 5.95e+00 9.99e+02 0.8 4.24e+02 - 5.77e-03 8.27e-05f 1\n 16r 2.4244619e-01 5.64e+00 9.97e+02 0.8 1.46e+02 - 1.47e-02 2.22e-03f 1\n 17r 1.0116850e-01 4.40e+00 9.79e+02 0.8 7.85e+01 - 1.97e-02 1.72e-02f 1\n 18 9.8878175e-02 4.39e+00 1.20e+01 -1.0 2.39e+01 - 2.69e-03 2.24e-04h 1\n 19 9.8851956e-02 4.39e+00 3.86e+02 -1.0 3.61e+01 - 3.57e-03 8.93e-05h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 20r 9.8851956e-02 4.39e+00 9.99e+02 0.6 0.00e+00 - 0.00e+00 3.38e-07R 3\n 21r 9.2620876e-02 4.30e+00 9.98e+02 0.6 2.96e+02 - 7.96e-03 3.34e-04f 1\n 22r-7.6641305e-03 3.10e+00 9.88e+02 0.6 1.22e+02 - 2.06e-02 1.02e-02f 1\n 23 -9.2674922e-03 3.10e+00 1.34e+01 -1.0 2.51e+01 - 2.93e-03 2.56e-04h 1\n 24 -8.8157947e-03 3.10e+00 3.68e+02 -1.0 3.41e+01 - 3.34e-03 1.37e-04h 1\n 25r-8.8157947e-03 3.10e+00 9.99e+02 0.5 0.00e+00 - 0.00e+00 2.63e-07R 4\n 26r-1.0045983e-02 3.08e+00 1.02e+03 0.5 3.84e+02 - 1.36e-02 5.19e-05f 1\n 27r-1.7161051e-01 1.79e+00 1.30e+03 0.5 9.82e+01 - 1.82e-02 1.33e-02f 1\n 28 -1.7355694e-01 1.79e+00 1.21e+01 -1.0 2.71e+01 - 2.74e-03 3.71e-04h 1\n 29 -1.7267514e-01 1.79e+00 4.24e+02 -1.0 3.23e+01 - 3.16e-03 1.90e-04h 1\niter objective 
inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 30r-1.7267514e-01 1.79e+00 9.99e+02 0.3 0.00e+00 - 0.00e+00 4.37e-07R 4\n 31r-1.7386165e-01 1.76e+00 1.18e+03 0.3 4.77e+02 - 6.40e-03 6.71e-05f 1\n 32r-2.7518477e-01 5.68e-01 9.91e+02 0.3 2.12e+02 - 5.98e-03 7.36e-03f 1\n 33 -2.7673613e-01 5.67e-01 2.41e+01 -1.0 2.08e+01 - 2.93e-03 6.36e-04h 1\n 34 -2.7610565e-01 5.67e-01 2.10e+03 -1.0 3.03e+01 - 4.27e-03 1.24e-04h 1\n 35r-2.7610565e-01 5.67e-01 9.99e+02 -0.2 0.00e+00 - 0.00e+00 2.54e-07R 5\n 36r-2.7703492e-01 5.71e-01 1.05e+03 -0.2 3.85e+02 - 1.40e-03 9.21e-04f 1\n 37r-2.8794606e-01 5.80e-01 9.96e+02 -0.2 3.32e+02 - 9.97e-04 2.06e-03f 1\n 38r-2.8504887e-01 5.87e-01 9.95e+02 -0.2 3.91e+02 - 1.19e-03 1.77e-03f 1\n 39r-2.7108517e-01 5.94e-01 9.92e+02 -0.2 4.54e+02 - 2.32e-03 1.93e-03f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 40r-2.7436308e-01 5.97e-01 1.04e+03 -0.2 3.73e+02 - 2.17e-03 7.41e-04f 1\n 41r-2.4178089e-01 6.00e-01 1.04e+03 -0.2 4.90e+02 - 2.01e-03 1.28e-03f 1\n 42r-1.9767771e-01 6.04e-01 1.04e+03 -0.2 4.30e+02 - 2.21e-03 1.62e-03f 1\n 43r-1.1946850e-01 6.08e-01 1.03e+03 -0.2 4.30e+02 - 3.69e-03 2.17e-03f 1\n 44r-5.8247648e-02 6.12e-01 1.03e+03 -0.2 2.52e+02 - 4.95e-03 2.75e-03f 1\n 45r-5.6926163e-02 6.19e-01 1.03e+03 -0.2 1.12e+02 - 1.95e-03 2.91e-03f 1\n 46r-6.5090034e-02 6.26e-01 1.02e+03 -0.2 1.16e+02 - 4.14e-03 4.00e-03f 1\n 47r-1.1932132e-01 6.25e-01 1.02e+03 -0.2 6.34e+02 - 7.13e-04 8.11e-04f 1\n 48r-1.7446905e-01 6.24e-01 1.08e+03 -0.2 4.70e+02 - 1.08e-03 1.25e-03f 1\n 49r-2.6383530e-01 6.20e-01 1.08e+03 -0.2 2.10e+03 - 5.69e-04 3.79e-04f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 50r-2.8505035e-01 6.21e-01 1.08e+03 -0.2 3.05e+02 - 8.33e-04 1.35e-03f 1\n 51r-2.7149041e-01 6.24e-01 1.08e+03 -0.2 1.74e+02 - 2.16e-03 2.01e-03f 1\n 52r-1.7000426e-01 6.16e-01 1.47e+03 -0.2 1.80e+02 0.0 2.93e-04 3.92e-03f 1\n 53r-6.8704183e-02 6.14e-01 1.33e+03 -0.2 5.97e+02 -0.5 1.05e-03 8.66e-04f 1\n 54r 2.5044431e-01 6.10e-01 1.80e+03 -0.2 6.54e+03 -1.0 3.54e-05 2.61e-04f 1\n 55r 2.8504815e-01 6.07e-01 1.74e+03 -0.2 5.47e+02 - 1.76e-03 7.14e-04f 1\n 56r 2.7321802e-01 5.98e-01 1.75e+03 -0.2 4.69e+02 - 7.96e-04 1.36e-03f 1\n 57r 2.2468616e-01 5.91e-01 1.76e+03 -0.2 6.35e+02 - 2.34e-04 1.08e-03f 1\n 58r 2.0739882e-01 5.83e-01 1.76e+03 -0.2 3.65e+02 - 1.49e-03 1.38e-03f 1\n 59r 2.6772641e-01 5.60e-01 1.86e+03 -0.2 4.44e+02 - 7.25e-04 2.32e-03f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 60r 3.3545875e-01 5.51e-01 1.82e+03 -0.2 4.39e+02 - 1.99e-03 1.39e-03f 1\n 61r 3.9836416e-01 5.48e-01 1.83e+03 -0.2 2.56e+02 - 9.28e-04 1.83e-03f 1\n 62r 4.6239818e-01 5.63e-01 1.83e+03 -0.2 2.44e+02 - 9.97e-04 2.04e-03f 1\n 63r 5.5785641e-01 5.53e-01 1.82e+03 -0.2 2.43e+02 - 4.02e-03 3.65e-03f 1\n 64r 5.7463683e-01 5.47e-01 1.82e+03 -0.2 2.06e+02 - 1.26e-03 1.47e-03f 1\n 65r 5.8578857e-01 5.42e-01 1.82e+03 -0.2 1.67e+02 - 2.60e-03 1.17e-03f 1\n 66r 6.5902790e-01 5.23e-01 1.83e+03 -0.2 6.54e+02 - 4.83e-04 1.52e-03f 1\n 67r 7.2109122e-01 5.18e-01 1.81e+03 -0.2 3.29e+02 - 4.60e-03 2.00e-03f 1\n 68r 7.3500182e-01 5.11e-01 1.81e+03 -0.2 1.46e+02 - 3.25e-03 2.32e-03f 1\n 69r 7.4910009e-01 4.76e-01 1.80e+03 -0.2 1.12e+02 - 2.11e-03 4.56e-03f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 70 7.7342994e-01 4.71e-01 2.19e+03 -1.0 1.42e+01 - 7.42e-02 9.20e-03f 1\n 71 7.7524529e-01 4.67e-01 4.33e+03 -1.0 1.92e+00 - 4.46e-03 8.35e-03f 1\n 72 9.2212651e-01 1.24e-02 3.72e+04 -1.0 1.64e+00 - 7.91e-03 
9.90e-01f 1\n 73 8.9986438e-01 6.26e-06 4.36e+03 -1.0 2.96e-02 - 9.93e-01 1.00e+00h 1\n 74 6.2225764e-01 2.44e-02 5.48e+06 -1.0 2.24e+00 - 4.30e-01 1.00e+00F 1\n 75 6.2867884e-01 3.07e-04 1.74e-01 -1.0 1.44e-01 - 1.00e+00 1.00e+00h 1\n 76 3.4410046e-01 3.75e-02 2.58e+06 -5.7 2.55e+00 - 3.54e-01 5.90e-01f 1\n 77 -3.0049576e-02 2.69e-02 5.97e+06 -5.7 1.51e+00 - 8.91e-02 1.00e+00h 1\n 78 -2.1421512e-01 8.95e-03 4.63e+06 -5.7 7.71e-01 - 2.29e-01 1.00e+00h 1\n 79 -3.6759089e-01 1.18e-02 1.71e+06 -5.7 7.48e-01 - 6.26e-01 1.00e+00h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 80 -6.5277202e-01 5.69e-02 9.25e+05 -5.7 1.64e+00 - 4.57e-01 9.13e-01f 1\n 81 -8.8227098e-01 3.17e-02 4.43e+05 -5.7 1.53e+00 - 5.30e-01 1.00e+00h 1\n 82 -9.7025779e-01 5.28e-02 2.52e+05 -5.7 1.49e+00 - 4.45e-01 1.00e+00h 1\n 83 -8.8932296e-01 2.75e-03 1.66e+05 -5.7 3.31e-01 - 2.71e-01 1.00e+00h 1\n 84 -8.8551886e-01 8.68e-05 7.45e+03 -5.7 5.70e-02 - 9.55e-01 1.00e+00h 1\n 85 -8.8538905e-01 3.04e-07 3.47e-08 -5.7 3.36e-03 - 1.00e+00 1.00e+00h 1\n 86 -8.8538859e-01 1.12e-09 3.03e-10 -8.6 2.04e-04 - 1.00e+00 1.00e+00h 1\n\nNumber of Iterations....: 86\n\n (scaled) (unscaled)\nObjective...............: -8.8538859014697113e-01 -8.8538859014697113e-01\nDual infeasibility......: 3.0301215893228920e-10 3.0301215893228920e-10\nConstraint violation....: 1.1227411778058638e-09 1.1227411778058638e-09\nComplementarity.........: 2.7295967427846085e-09 2.7295967427846085e-09\nOverall NLP error.......: 2.7295967427846085e-09 2.7295967427846085e-09\n\n\nNumber of objective function evaluations = 114\nNumber of objective gradient evaluations = 46\nNumber of equality constraint evaluations = 114\nNumber of inequality constraint evaluations = 0\nNumber of equality constraint Jacobian evaluations = 94\nNumber of inequality constraint Jacobian evaluations = 0\nNumber of Lagrangian Hessian evaluations = 86\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.243\nTotal CPU secs in NLP function evaluations = 0.016\n\nEXIT: Optimal Solution Found.\n"
],
[
"#print out model size and solution values\nprint(\"Full Space Solution:\")\nprint(\"# of variables: \",model1_full.nvariables())\nprint(\"# of constraints: \",model1_full.nconstraints())\nprint(\"x = \", solution_1_full[0])\nprint(\"y = \", solution_1_full[1])\nprint(\"Solve Time: \", status_1_full['Solver'][0]['Time'])",
"Full Space Solution:\n# of variables: 409\n# of constraints: 408\nx = -0.27928922891858343\ny = -0.8853885901469711\nSolve Time: 0.3363785743713379\n"
]
],
[
[
"## Neural Network 2: ReLU Neural Network using Complementarity Constraints and Binary Variables\nThe second neural network contains ReLU activation functions which we represent using complementarity constraints and binary variables.\n\n### ReLU Complementarity Constraints\nTo represent ReLU using complementarity constraints we use the `ComplementarityReLUActivation` object which we pass as a keyword argument to a `NeuralNetworkFormulation`. This overrides the default ReLU behavior which uses binary variables (shown in the next model). Importantly, the complementarity formulation allows us to solve the model using a continuous solver (in this case using Ipopt).",
"_____no_output_____"
]
],
[
[
"net_relu = keras_reader.load_keras_sequential(nn2,scaler,input_bounds)\n\nmodel2_comp = pyo.ConcreteModel()\nmodel2_comp.x = pyo.Var(initialize = 0)\nmodel2_comp.y = pyo.Var(initialize = 0)\nmodel2_comp.obj = pyo.Objective(expr=(model2_comp.y))\nmodel2_comp.nn = OmltBlock()\n\nformulation2_comp = NeuralNetworkFormulation(net_relu,activation_constraints={\n \"relu\": ComplementarityReLUActivation()})\nmodel2_comp.nn.build_formulation(formulation2_comp)\n\n@model2_comp.Constraint()\ndef connect_inputs(mdl):\n return mdl.x == mdl.nn.inputs[0]\n\n@model2_comp.Constraint()\ndef connect_outputs(mdl):\n return mdl.y == mdl.nn.outputs[0]\n\nstatus_2_comp = pyo.SolverFactory('ipopt').solve(model2_comp, tee=True)\nsolution_2_comp = (pyo.value(model2_comp.x),pyo.value(model2_comp.y))",
"Ipopt 3.13.3: \n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit https://github.com/coin-or/Ipopt\n******************************************************************************\n\nThis is Ipopt version 3.13.3, running with linear solver ma27.\n\nNumber of nonzeros in equality constraint Jacobian...: 11015\nNumber of nonzeros in inequality constraint Jacobian.: 600\nNumber of nonzeros in Lagrangian Hessian.............: 200\n\nTotal number of variables............................: 609\n variables with only lower bounds: 200\n variables with lower and upper bounds: 103\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 408\nTotal number of inequality constraints...............: 400\n inequality constraints with only lower bounds: 200\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 200\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 1.38e+00 1.54e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 9.1301315e-02 1.30e+00 1.08e+02 -1.0 1.44e+00 - 2.90e-02 6.33e-02f 1\n 2 9.1136017e-01 4.89e-01 1.95e+02 -1.0 1.32e+00 - 3.41e-02 6.22e-01f 1\n 3 9.4397968e-01 4.53e-01 1.75e+02 -1.0 1.76e+00 - 1.64e-01 7.40e-02f 1\n 4 1.2316569e+00 3.19e-01 9.60e+01 -1.0 1.34e+00 - 7.67e-01 2.96e-01h 1\n 5 1.0079536e+00 2.77e-01 8.34e+01 -1.0 1.72e+00 - 1.42e-01 1.30e-01h 1\n 6 6.0563510e-01 2.32e-01 6.98e+01 -1.0 2.46e+00 - 1.60e-01 1.64e-01h 1\n 7 7.5442388e-02 1.73e-01 9.93e+01 -1.0 2.08e+00 - 6.85e-01 2.55e-01h 1\n 8 -3.9382949e-01 9.74e-02 1.53e+02 -1.0 1.07e+00 - 8.40e-01 4.37e-01h 1\n 9 -6.0333599e-01 5.17e-02 4.77e+02 -1.0 4.47e-01 - 1.00e+00 4.69e-01h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 10 -7.3475699e-01 2.15e-02 6.57e+02 -1.0 2.25e-01 - 1.00e+00 5.84e-01h 1\n 11 -7.9682215e-01 9.98e-03 2.08e+03 -1.0 1.16e-01 - 1.00e+00 5.36e-01h 1\n 12 -8.3366792e-01 4.05e-03 3.95e+03 -1.0 6.21e-02 - 1.00e+00 5.94e-01h 1\n 13 -8.5064175e-01 1.73e-03 1.03e+04 -1.0 2.96e-02 - 1.00e+00 5.74e-01h 1\n 14 -8.5877260e-01 7.09e-04 2.33e+04 -1.0 1.38e-02 - 1.00e+00 5.90e-01h 1\n 15 -8.6216090e-01 2.93e-04 5.70e+04 -1.0 5.78e-03 - 1.00e+00 5.86e-01h 1\n 16 -8.6363356e-01 1.19e-04 1.32e+05 -1.0 2.48e-03 - 1.00e+00 5.95e-01h 1\n 17 -8.6414975e-01 4.70e-05 3.02e+05 -1.0 8.53e-04 - 1.00e+00 6.05e-01h 1\n 18 -8.6436377e-01 1.72e-05 6.21e+05 -1.0 3.37e-04 - 1.00e+00 6.35e-01h 1\n 19 -8.6430151e-01 5.00e-06 9.85e+05 -1.0 6.87e-04 - 1.00e+00 7.09e-01h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 20 -8.6428603e-01 1.76e-06 1.77e+06 -1.0 1.95e-04 - 1.00e+00 6.48e-01h 1\n 21 -8.6409586e-01 5.51e-07 2.50e+06 -1.0 8.69e-04 - 1.00e+00 6.87e-01h 1\n 22 -8.6391451e-01 1.91e-07 2.87e+06 -1.0 8.34e-04 - 1.00e+00 6.54e-01h 1\n 23 -8.6373122e-01 1.34e-08 5.71e+05 -1.0 5.92e-04 - 1.00e+00 9.30e-01h 1\n 24 -8.6361524e-01 1.00e-08 3.98e+06 -1.0 1.38e-03 - 1.00e+00 2.50e-01f 3\n 25 -8.6324969e-01 5.83e-16 1.55e+03 -1.0 1.09e-03 - 1.00e+00 1.00e+00h 1\n 26 -8.6425223e-01 2.89e-15 8.56e+06 -2.5 2.72e-03 - 6.25e-01 1.00e+00F 1\n 27 -8.6446326e-01 8.88e-16 2.06e+03 -2.5 6.27e-04 - 1.00e+00 1.00e+00f 1\n 28 -8.6449861e-01 8.88e-16 3.50e+01 -2.5 1.05e-04 2.0 1.00e+00 1.00e+00f 1\n 29 -8.6738863e-01 8.88e-16 5.94e+02 -3.8 3.40e-01 - 2.44e-02 
2.52e-02f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 30 -8.7475850e-01 7.49e-16 8.29e+02 -3.8 1.51e+00 - 1.46e-02 1.45e-02f 1\n 31 -8.7469453e-01 1.55e-15 4.81e+05 -3.8 1.02e-04 - 5.30e-01 1.00e+00f 1\n 32 -8.7469580e-01 1.58e-15 7.77e-01 -3.8 2.16e-06 5.1 1.00e+00 1.00e+00f 1\n 33 -8.7472697e-01 2.22e-15 8.92e-01 -3.8 6.31e-05 - 1.00e+00 1.00e+00f 1\n 34 -8.7471661e-01 2.44e-15 3.92e-02 -3.8 1.75e-05 - 1.00e+00 1.00e+00h 1\n 35 -8.7471986e-01 5.83e-16 1.98e+03 -5.7 2.28e-05 - 9.33e-01 1.00e+00h 1\n 36 -8.7473066e-01 6.66e-16 1.69e+03 -5.7 8.78e-04 - 1.47e-01 5.99e-02f 2\n 37 -8.7473251e-01 1.50e-15 1.20e-01 -5.7 2.64e-06 4.7 1.00e+00 1.00e+00h 1\n 38 -8.7474483e-01 1.69e-15 1.86e+02 -5.7 1.71e-04 - 6.80e-01 1.18e-01f 2\n 39 -8.7475229e-01 2.19e-15 5.33e+00 -5.7 2.80e-05 - 1.00e+00 4.37e-01h 2\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 40 -8.7475638e-01 2.41e-15 1.49e-05 -5.7 5.93e-06 - 1.00e+00 1.00e+00h 1\n 41 -8.7476086e-01 1.89e-15 5.85e+01 -8.6 8.11e-06 - 8.40e-01 1.00e+00h 1\n 42 -8.7476409e-01 1.83e-15 1.69e+01 -8.6 3.22e-06 - 7.06e-01 1.00e+00h 1\n 43 -8.7476691e-01 2.36e-15 4.15e+00 -8.6 2.83e-06 - 7.53e-01 1.00e+00f 1\n 44 -8.7476880e-01 1.30e-15 6.75e-01 -8.6 1.89e-06 - 8.37e-01 1.00e+00f 1\n 45 -8.7476951e-01 6.11e-16 8.49e-08 -8.6 7.08e-07 - 1.00e+00 1.00e+00h 1\n 46 -8.7476954e-01 1.67e-15 1.04e-09 -8.6 3.60e-08 - 1.00e+00 1.00e+00h 1\n\nNumber of Iterations....: 46\n\n (scaled) (unscaled)\nObjective...............: -8.7476954372962457e-01 -8.7476954372962457e-01\nDual infeasibility......: 1.0358442020796943e-09 1.0358442020796943e-09\nConstraint violation....: 1.6653345369377348e-15 1.6653345369377348e-15\nComplementarity.........: 2.5844034635082066e-09 2.5844034635082066e-09\nOverall NLP error.......: 2.5844034635082066e-09 2.5844034635082066e-09\n\n\nNumber of objective function evaluations = 57\nNumber of objective gradient evaluations = 47\nNumber of equality constraint evaluations = 57\nNumber of inequality constraint evaluations = 57\nNumber of equality constraint Jacobian evaluations = 47\nNumber of inequality constraint Jacobian evaluations = 47\nNumber of Lagrangian Hessian evaluations = 46\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.251\nTotal CPU secs in NLP function evaluations = 0.008\n\nEXIT: Optimal Solution Found.\n"
],
[
"#print out model size and solution values\nprint(\"ReLU Complementarity Solution:\")\nprint(\"# of variables: \",model2_comp.nvariables())\nprint(\"# of constraints: \",model2_comp.nconstraints())\nprint(\"x = \", solution_2_comp[0])\nprint(\"y = \", solution_2_comp[1])\nprint(\"Solve Time: \", status_2_comp['Solver'][0]['Time'])",
"ReLU Complementarity Solution:\n# of variables: 609\n# of constraints: 808\nx = -0.2970834231188838\ny = -0.8747695437296246\nSolve Time: 0.3795132637023926\n"
]
],
[
[
"### ReLU with Binary Variables and BigM Constraints\nFor the binary variable formulation of ReLU we use the default activation function settings. These are applied automatically if a `NetworkDefinition` contains ReLU activation functions. \n\nNote that we solve the optimization problem with Cbc which can handle binary decisions. While the solution takes considerably longer than the continuous complementarity formulation, it is guaranteed to find the global minimum.",
"_____no_output_____"
]
],
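[
[
"For reference, a minimal stand-alone sketch of the big-M idea behind the binary-variable ReLU formulation (the bounds $l \\le z \\le u$ below are assumed values, and the snippet is illustrative rather than the exact constraints OMLT generates): a single output $y = \\max(0, z)$ is modeled with a binary variable $\\sigma$.\n\n```python\nimport pyomo.environ as pyo\n\nl, u = -2.0, 3.0                      # assumed pre-activation bounds\nm = pyo.ConcreteModel()\nm.z = pyo.Var(bounds=(l, u))\nm.y = pyo.Var(bounds=(0, max(u, 0)))  # y >= 0 via the lower bound\nm.sigma = pyo.Var(domain=pyo.Binary)\n\n# sigma = 1 forces y = z (active neuron), sigma = 0 forces y = 0 (inactive neuron)\nm.c1 = pyo.Constraint(expr=m.y >= m.z)\nm.c2 = pyo.Constraint(expr=m.y <= m.z - l * (1 - m.sigma))\nm.c3 = pyo.Constraint(expr=m.y <= u * m.sigma)\n```",
"_____no_output_____"
]
],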
[
[
"net_relu = keras_reader.load_keras_sequential(nn2,scaler,input_bounds)\n\nmodel2_bigm = pyo.ConcreteModel()\nmodel2_bigm.x = pyo.Var(initialize = 0)\nmodel2_bigm.y = pyo.Var(initialize = 0)\nmodel2_bigm.obj = pyo.Objective(expr=(model2_bigm.y))\nmodel2_bigm.nn = OmltBlock()\n\nformulation2_bigm = NeuralNetworkFormulation(net_relu)\nmodel2_bigm.nn.build_formulation(formulation2_bigm)\n\n@model2_bigm.Constraint()\ndef connect_inputs(mdl):\n return mdl.x == mdl.nn.inputs[0]\n\n@model2_bigm.Constraint()\ndef connect_outputs(mdl):\n return mdl.y == mdl.nn.outputs[0]\n\nstatus_2_bigm = pyo.SolverFactory('cbc').solve(model2_bigm, tee=True)\nsolution_2_bigm = (pyo.value(model2_bigm.x),pyo.value(model2_bigm.y))",
"Welcome to the CBC MILP Solver \nVersion: 2.10.5 \nBuild Date: Apr 7 2020 \n\ncommand line - /home/jhjalvi/anaconda3/bin/cbc -printingOptions all -import /tmp/tmp9bbul57f.pyomo.lp -stat=1 -solve -solu /tmp/tmp9bbul57f.pyomo.soln (default strategy 1)\nOption for printingOptions changed from normal to all\nPresolve 677 (-332) rows, 483 (-127) columns and 11039 (-977) elements\nStatistics for presolved model\nOriginal problem has 200 integers (200 of which binary)\nPresolved problem has 186 integers (186 of which binary)\n==== 482 zero objective 2 different\n482 variables have objective of 0\n1 variables have objective of 1.37862\n==== absolute objective values 2 different\n482 variables have objective of 0\n1 variables have objective of 1.37862\n==== for integers 186 zero objective 1 different\n186 variables have objective of 0\n==== for integers absolute objective values 1 different\n186 variables have objective of 0\n===== end objective counts\n\n\nProblem has 677 rows, 483 columns (1 with objective) and 11039 elements\nThere are 1 singletons with objective \nColumn breakdown:\n0 of type 0.0->inf, 195 of type 0.0->up, 0 of type lo->inf, \n102 of type lo->up, 0 of type free, 0 of type fixed, \n0 of type -inf->0.0, 0 of type -inf->up, 186 of type 0.0->1.0 \nRow breakdown:\n5 of type E 0.0, 0 of type E 1.0, 0 of type E -1.0, \n96 of type E other, 0 of type G 0.0, 0 of type G 1.0, \n0 of type G other, 286 of type L 0.0, 0 of type L 1.0, \n290 of type L other, 0 of type Range 0.0->1.0, 0 of type Range other, \n0 of type Free \nContinuous objective value is -10.2095 - 0.03 seconds\nCgl0003I 0 fixed, 0 tightened bounds, 255 strengthened rows, 0 substitutions\nCgl0003I 0 fixed, 0 tightened bounds, 118 strengthened rows, 0 substitutions\nCgl0003I 0 fixed, 0 tightened bounds, 50 strengthened rows, 0 substitutions\nCgl0003I 0 fixed, 0 tightened bounds, 47 strengthened rows, 0 substitutions\nCgl0003I 0 fixed, 0 tightened bounds, 44 strengthened rows, 0 substitutions\nCgl0004I processed model has 583 rows, 389 columns (186 integer (186 of which binary)) and 20050 elements\nCbc0038I Initial state - 101 integers unsatisfied sum - 32.4708\nCbc0038I Pass 1: suminf. 10.11361 (61) obj. -0.331421 iterations 130\nCbc0038I Pass 2: suminf. 9.25094 (60) obj. -0.399356 iterations 9\nCbc0038I Pass 3: suminf. 3.66822 (37) obj. -0.484809 iterations 32\nCbc0038I Pass 4: suminf. 0.52900 (13) obj. -0.802558 iterations 64\nCbc0038I Solution found of -0.802558\nCbc0038I Relaxing continuous gives -0.803013\nCbc0038I Before mini branch and bound, 80 integers at bound fixed and 80 continuous\nCbc0038I Full problem 583 rows 389 columns, reduced to 389 rows 222 columns - 13 fixed gives 376, 209 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 266 rows 142 columns\nCbc0038I Mini branch and bound improved solution from -0.803013 to -0.874765 (0.44 seconds)\nCbc0038I Freeing continuous variables gives a solution of -0.874765\nCbc0038I Round again with cutoff of -1.75651\nCbc0038I Pass 5: suminf. 10.40762 (59) obj. -1.75651 iterations 51\nCbc0038I Pass 6: suminf. 9.54589 (58) obj. -1.75651 iterations 5\nCbc0038I Pass 7: suminf. 3.52736 (33) obj. -1.75651 iterations 71\nCbc0038I Pass 8: suminf. 1.43099 (13) obj. -1.75651 iterations 138\nCbc0038I Pass 9: suminf. 0.71429 (12) obj. -1.75651 iterations 12\nCbc0038I Pass 10: suminf. 0.29257 (2) obj. -1.75651 iterations 89\nCbc0038I Pass 11: suminf. 0.17620 (2) obj. -1.75651 iterations 5\nCbc0038I Pass 12: suminf. 8.25790 (37) obj. 
-1.75651 iterations 83\nCbc0038I Pass 13: suminf. 7.85866 (36) obj. -1.75651 iterations 1\nCbc0038I Pass 14: suminf. 0.73278 (16) obj. -1.75651 iterations 55\nCbc0038I Pass 15: suminf. 9.29924 (46) obj. -1.75651 iterations 74\nCbc0038I Pass 16: suminf. 2.03023 (26) obj. -1.75651 iterations 44\nCbc0038I Pass 17: suminf. 0.54140 (16) obj. -1.75651 iterations 70\nCbc0038I Pass 18: suminf. 0.47624 (16) obj. -1.75651 iterations 5\nCbc0038I Pass 19: suminf. 0.27245 (4) obj. -1.75651 iterations 34\nCbc0038I Pass 20: suminf. 0.19170 (2) obj. -1.75651 iterations 9\nCbc0038I Pass 21: suminf. 7.55404 (43) obj. -1.75651 iterations 74\nCbc0038I Pass 22: suminf. 1.67908 (22) obj. -1.75651 iterations 33\nCbc0038I Pass 23: suminf. 0.26750 (4) obj. -1.75651 iterations 59\nCbc0038I Pass 24: suminf. 0.18807 (3) obj. -1.75651 iterations 8\nCbc0038I Pass 25: suminf. 7.76162 (40) obj. -1.75651 iterations 65\nCbc0038I Pass 26: suminf. 7.35789 (39) obj. -1.75651 iterations 4\nCbc0038I Pass 27: suminf. 0.59937 (11) obj. -1.75651 iterations 45\nCbc0038I Pass 28: suminf. 0.53796 (11) obj. -1.75651 iterations 4\nCbc0038I Pass 29: suminf. 0.26750 (4) obj. -1.75651 iterations 45\nCbc0038I Pass 30: suminf. 0.18807 (3) obj. -1.75651 iterations 8\nCbc0038I Pass 31: suminf. 7.84744 (46) obj. -1.75651 iterations 82\nCbc0038I Pass 32: suminf. 7.54623 (45) obj. -1.75651 iterations 8\nCbc0038I Pass 33: suminf. 0.79174 (17) obj. -1.75651 iterations 72\nCbc0038I Pass 34: suminf. 0.66909 (18) obj. -1.75651 iterations 4\nCbc0038I No solution found this major pass\nCbc0038I Before mini branch and bound, 31 integers at bound fixed and 78 continuous\nCbc0038I Full problem 583 rows 389 columns, reduced to 393 rows 226 columns - 43 fixed gives 350, 183 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 244 rows 122 columns\nCbc0038I Mini branch and bound did not improve solution (0.63 seconds)\nCbc0038I After 0.63 seconds - Feasibility pump exiting with objective of -0.874765 - took 0.32 seconds\nCbc0012I Integer solution of -0.87476544 found by feasibility pump after 0 iterations and 0 nodes (0.64 seconds)\nCbc0038I Full problem 583 rows 389 columns, reduced to 383 rows 218 columns - 43 fixed gives 340, 175 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 228 rows 112 columns\nCbc0031I 143 added rows had average density of 62.006993\nCbc0013I At root node, 143 cuts changed objective from -9.69208 to -5.986171 in 100 passes\nCbc0014I Cut generator 0 (Probing) - 35368 row cuts average 2.1 elements, 0 column cuts (0 active) in 1.400 seconds - new frequency is 1\nCbc0014I Cut generator 1 (Gomory) - 4300 row cuts average 260.2 elements, 0 column cuts (0 active) in 1.980 seconds - new frequency is 1\nCbc0014I Cut generator 2 (Knapsack) - 1 row cuts average 2.0 elements, 0 column cuts (0 active) in 0.114 seconds - new frequency is -100\nCbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active) in 0.017 seconds - new frequency is -100\nCbc0014I Cut generator 4 (MixedIntegerRounding2) - 2918 row cuts average 73.6 elements, 0 column cuts (0 active) in 3.409 seconds - new frequency is 1\nCbc0014I Cut generator 5 (FlowCover) - 0 row cuts average 0.0 elements, 0 column cuts (0 active) in 0.423 seconds - new frequency is -100\nCbc0014I Cut generator 6 (TwoMirCuts) - 584 row cuts average 118.0 elements, 0 column cuts (0 active) in 0.280 seconds - new frequency is 1\nCbc0010I After 0 nodes, 1 on tree, -0.87476544 best solution, best possible -5.986171 (24.25 
seconds)\nCbc0038I Full problem 583 rows 389 columns, reduced to 330 rows 187 columns - 21 fixed gives 309, 166 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 219 rows 108 columns\nCbc0038I Full problem 583 rows 389 columns, reduced to 348 rows 178 columns - 18 fixed gives 330, 160 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 186 rows 96 columns\nCbc0038I Full problem 583 rows 389 columns, reduced to 422 rows 228 columns - 38 fixed gives 384, 190 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 219 rows 108 columns\nCbc0038I Full problem 583 rows 389 columns, reduced to 405 rows 242 columns - 40 fixed gives 359, 202 - still too large\nCbc0038I Full problem 583 rows 389 columns, reduced to 233 rows 122 columns\nCbc0001I Search completed - best objective -0.8747654383780675, took 126111 iterations and 442 nodes (76.64 seconds)\nCbc0032I Strong branching done 1930 times (102286 iterations), fathomed 14 nodes and fixed 12 variables\nCbc0035I Maximum depth 46, 53 variables fixed on reduced cost\nCuts at root node changed objective from -9.69208 to -5.98617\nProbing was tried 1066 times and created 40387 cuts of which 0 were active after adding rounds of cuts (2.758 seconds)\nGomory was tried 1063 times and created 6432 cuts of which 0 were active after adding rounds of cuts (5.161 seconds)\nKnapsack was tried 100 times and created 1 cuts of which 0 were active after adding rounds of cuts (0.114 seconds)\nClique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.017 seconds)\nMixedIntegerRounding2 was tried 1063 times and created 23546 cuts of which 0 were active after adding rounds of cuts (8.061 seconds)\nFlowCover was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.423 seconds)\nTwoMirCuts was tried 1063 times and created 2372 cuts of which 0 were active after adding rounds of cuts (1.191 seconds)\nZeroHalf was tried 1 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)\nImplicationCuts was tried 569 times and created 98700 cuts of which 0 were active after adding rounds of cuts (1.221 seconds)\n\nResult - Optimal solution found\n\nObjective value: -0.87476544\nEnumerated nodes: 442\nTotal iterations: 126111\nTime (CPU seconds): 76.77\nTime (Wallclock seconds): 82.60\n\nTotal time (CPU seconds): 76.79 (Wallclock seconds): 82.62\n\n"
],
[
"#print out model size and solution values\nprint(\"ReLU BigM Solution:\")\nprint(\"# of variables: \",model2_bigm.nvariables())\nprint(\"# of constraints: \",model2_bigm.nconstraints())\nprint(\"x = \", solution_2_bigm[0])\nprint(\"y = \", solution_2_bigm[1])\nprint(\"Solve Time: \", status_2_bigm['Solver'][0]['Time'])",
"ReLU BigM Solution:\n# of variables: 609\n# of constraints: 1008\nx = -0.29708481\ny = -0.87476544\nSolve Time: 82.65653038024902\n"
]
],
[
[
"## Neural Network 3: Mixed ReLU and Sigmoid Activation Functions\nThe last neural network contains both ReLU and sigmoid activation functions. These networks can be represented by using the complementarity formulation of relu and mixing it with the full-space formulation for the sigmoid functions.",
"_____no_output_____"
]
],
[
[
"net_mixed = keras_reader.load_keras_sequential(nn3,scaler,input_bounds)\n\nmodel3_mixed = pyo.ConcreteModel()\nmodel3_mixed.x = pyo.Var(initialize = 0)\nmodel3_mixed.y = pyo.Var(initialize = 0)\nmodel3_mixed.obj = pyo.Objective(expr=(model3_mixed.y))\nmodel3_mixed.nn = OmltBlock()\n\nformulation3_mixed = NeuralNetworkFormulation(net_mixed,activation_constraints={\n \"relu\": ComplementarityReLUActivation()})\nmodel3_mixed.nn.build_formulation(formulation3_mixed)\n\n@model3_mixed.Constraint()\ndef connect_inputs(mdl):\n return mdl.x == mdl.nn.inputs[0]\n\n@model3_mixed.Constraint()\ndef connect_outputs(mdl):\n return mdl.y == mdl.nn.outputs[0]\n\nstatus_3_mixed = pyo.SolverFactory('ipopt').solve(model3_mixed, tee=True)\nsolution_3_mixed = (pyo.value(model3_mixed.x),pyo.value(model3_mixed.y))",
"Ipopt 3.13.3: \n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit https://github.com/coin-or/Ipopt\n******************************************************************************\n\nThis is Ipopt version 3.13.3, running with linear solver ma27.\n\nNumber of nonzeros in equality constraint Jacobian...: 10915\nNumber of nonzeros in inequality constraint Jacobian.: 300\nNumber of nonzeros in Lagrangian Hessian.............: 200\n\nTotal number of variables............................: 509\n variables with only lower bounds: 100\n variables with lower and upper bounds: 303\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 408\nTotal number of inequality constraints...............: 200\n inequality constraints with only lower bounds: 100\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 100\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 2.64e+00 9.45e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 -2.4491518e-02 2.64e+00 9.63e-01 -1.0 1.09e+01 - 2.21e-03 2.25e-03f 1\n 2 -3.3431992e-02 2.61e+00 1.04e+01 -1.0 9.93e+00 - 2.41e-03 1.12e-02f 1\n 3 4.8207488e-02 2.44e+00 4.68e+01 -1.0 9.81e+00 - 1.36e-02 6.41e-02f 1\n 4 3.8051454e-01 1.76e+00 6.44e+01 -1.0 8.97e+00 - 8.43e-02 2.80e-01f 1\n 5 4.3631272e-01 1.45e+00 5.50e+01 -1.0 6.02e+00 - 3.22e-01 1.77e-01h 1\n 6 5.5340812e-01 1.01e+00 4.17e+01 -1.0 5.02e+00 - 5.42e-01 3.04e-01h 1\n 7 5.9417264e-01 7.17e-01 3.38e+01 -1.0 3.62e+00 - 5.83e-01 2.89e-01h 1\n 8 3.9816576e-01 5.02e-01 2.02e+02 -1.0 3.91e+00 - 8.15e-01 3.00e-01h 1\n 9 2.0283360e-01 3.56e-01 1.44e+02 -1.0 3.15e+00 - 3.50e-01 2.90e-01h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 10 -7.5798939e-02 2.56e-01 1.62e+03 -1.0 3.16e+00 - 1.00e+00 2.82e-01h 1\n 11 -3.1380671e-01 1.30e-01 1.80e+03 -1.0 1.85e+00 - 1.00e+00 4.91e-01h 1\n 12 -4.3168522e-01 7.05e-02 8.46e+03 -1.0 9.62e-01 - 1.00e+00 4.59e-01h 1\n 13 -4.8684229e-01 2.75e-02 1.22e+04 -1.0 4.31e-01 - 1.00e+00 6.10e-01h 1\n 14 -5.1090134e-01 1.22e-02 3.90e+04 -1.0 1.82e-01 - 1.00e+00 5.56e-01h 1\n 15 -5.2067452e-01 4.91e-03 8.18e+04 -1.0 7.51e-02 - 1.00e+00 5.98e-01h 1\n 16 -5.2464334e-01 2.00e-03 2.01e+05 -1.0 2.99e-02 - 1.00e+00 5.93e-01h 1\n 17 -5.2595311e-01 7.65e-04 4.32e+05 -1.0 1.07e-02 - 1.00e+00 6.17e-01h 1\n 18 -5.2618307e-01 2.60e-04 8.25e+05 -1.0 2.86e-03 - 1.00e+00 6.60e-01h 1\n 19 -5.2593558e-01 5.76e-05 9.63e+05 -1.0 1.64e-03 - 1.00e+00 7.79e-01h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 20 -5.2562620e-01 1.00e-05 8.64e+05 -1.0 9.61e-04 - 1.00e+00 8.26e-01h 1\n 21 -5.2559965e-01 6.41e-06 3.93e+06 -1.0 1.82e-04 - 1.00e+00 3.62e-01h 1\n 22 -5.2490857e-01 1.94e-06 2.37e+06 -1.0 2.23e-03 - 1.00e+00 6.98e-01h 1\n 23 -5.2460610e-01 4.44e-07 1.74e+06 -1.0 8.85e-04 - 1.00e+00 7.71e-01h 1\n 24 -5.2460351e-01 4.40e-07 7.86e+06 -1.0 7.50e-04 - 1.00e+00 7.81e-03f 8\n 25 -5.2422501e-01 6.44e-10 2.56e+03 -1.0 8.55e-04 - 1.00e+00 1.00e+00h 1\n 26 -5.2420028e-01 2.79e-12 3.51e+02 -2.5 5.64e-05 - 1.00e+00 1.00e+00h 1\n 27 -5.2420104e-01 6.22e-15 1.70e+00 -2.5 2.10e-06 4.0 1.00e+00 1.00e+00f 1\n 28 -5.2455960e-01 5.61e-10 4.70e+04 -3.8 4.48e-03 - 2.65e-01 1.78e-01f 2\n 29 -5.2456249e-01 4.62e-14 2.85e+04 -3.8 7.25e-06 3.5 9.38e-01 
1.00e+00h 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 30 -5.2456598e-01 1.59e-13 2.64e+04 -3.8 2.49e-05 3.0 1.00e+00 5.00e-01f 2\n 31 -5.2606619e-01 9.91e-09 9.78e+03 -3.8 9.21e-03 - 2.60e-01 3.65e-01f 1\n 32 -5.2608942e-01 3.05e-12 2.55e+00 -3.8 5.89e-05 2.6 1.00e+00 1.00e+00f 1\n 33 -5.2617417e-01 3.16e-11 5.42e+00 -3.8 1.90e-04 2.1 1.00e+00 1.00e+00f 1\n 34 -5.2642369e-01 2.74e-10 2.30e-02 -3.8 5.58e-04 1.6 1.00e+00 1.00e+00f 1\n 35 -5.2717098e-01 2.48e-09 6.36e+03 -5.7 1.68e-03 1.1 8.03e-01 1.00e+00f 1\n 36 -5.3259075e-01 1.33e-07 1.30e+05 -5.7 1.58e+00 - 1.83e-01 7.73e-03f 1\n 37 -5.4182141e-01 5.12e-07 1.27e+05 -5.7 8.21e-01 - 2.56e-02 2.53e-02f 1\n 38 -5.4144189e-01 5.13e-07 1.27e+05 -5.7 4.05e+00 - 2.10e-04 2.10e-04s 16\n 39 -5.9470446e-01 1.36e-05 1.63e+05 -5.7 9.53e+01 - 1.26e-03 0.00e+00S 16\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 40 -6.0185558e-01 1.38e-05 1.63e+05 -5.7 7.19e+00 - 2.04e-01 2.24e-03f 1\n 41 -6.0184142e-01 1.97e-12 4.71e+05 -5.7 3.88e-05 - 1.18e-01 1.00e+00h 1\n 42 -6.0184614e-01 1.68e-12 3.08e+05 -5.7 5.30e-05 - 8.67e-01 2.05e-01f 2\n 43 -6.0401828e-01 9.16e-07 7.21e+04 -5.7 8.89e-03 1.6 7.65e-03 7.63e-01f 1\n 44 -6.0400892e-01 8.88e-07 6.36e+04 -5.7 1.13e-03 - 4.78e-01 3.12e-02f 6\n 45 -6.0411146e-01 4.20e-08 5.86e+04 -5.7 1.98e-02 3.8 1.00e+00 1.71e-02f 1\n 46 -6.0617713e-01 7.73e-08 5.69e+04 -5.7 2.19e-01 - 5.62e-04 2.81e-02f 1\n 47 -6.0617718e-01 7.72e-08 5.23e+04 -5.7 2.11e-04 3.3 1.00e+00 8.64e-04f 2\n 48 -6.0661540e-01 7.88e-08 5.30e+04 -5.7 1.31e+01 - 3.04e-04 9.93e-05f 1\n 49 -6.0660163e-01 3.37e-07 1.80e+04 -5.7 1.18e-04 2.8 1.00e+00 6.54e-01f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 50 -6.0760465e-01 3.10e-07 1.84e+04 -5.7 9.38e-02 - 1.58e-01 3.17e-02f 1\n 51 -6.1281938e-01 2.63e-07 2.50e+04 -5.7 1.29e-01 - 1.00e+00 1.20e-01f 1\n 52 -6.7216756e-01 3.15e-05 1.74e+05 -5.7 2.21e+01 - 5.80e-04 8.00e-03f 1\n 53 -6.7216668e-01 1.46e-05 8.47e+04 -5.7 9.91e-05 2.4 1.00e+00 5.37e-01f 1\n 54 -7.5275652e-01 7.74e-05 7.13e+04 -5.7 1.49e+01 - 1.40e-02 1.61e-02f 1\n 55 -7.4801720e-01 7.72e-05 5.82e+05 -5.7 2.71e+00 - 1.39e-04 5.22e-03h 4\n 56 -7.4801742e-01 7.07e-05 1.53e+06 -5.7 2.89e-04 1.9 1.00e+00 8.35e-02h 1\n 57 -7.7812815e-01 8.01e-05 1.53e+06 -5.7 7.21e+01 - 1.23e-03 1.25e-03f 1\n 58 -7.7812836e-01 8.01e-05 1.53e+06 -5.7 4.10e-01 - 5.85e-06 3.79e-06h 2\n 59 -7.7810755e-01 7.95e-05 7.02e+05 -5.7 4.63e-03 1.4 1.12e-01 7.81e-03h 8\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 60 -7.7813821e-01 6.43e-05 5.23e+05 -5.7 8.91e-04 - 3.98e-01 1.91e-01h 1\n 61 -8.0347623e-01 1.14e-04 3.68e+05 -5.7 1.43e+01 - 3.11e-05 1.42e-02f 1\n 62 -8.0870819e-01 1.16e-04 3.90e+05 -5.7 2.03e+01 - 5.64e-03 2.06e-03f 1\n 63 -9.1294828e-01 1.10e-03 3.55e+05 -5.7 2.01e+01 - 3.82e-02 4.17e-02f 1\n 64 -8.5741605e-01 3.11e-04 8.00e+04 -5.7 4.46e-01 - 7.87e-01 1.00e+00h 1\n 65 -9.0337508e-01 3.15e-04 9.86e+01 -5.7 5.50e-01 - 9.62e-01 6.80e-01h 1\n 66 -9.0339610e-01 3.15e-04 4.70e+04 -5.7 7.15e-02 - 1.00e+00 2.52e-03h 1\n 67 -9.1162118e-01 8.76e-06 3.93e+03 -5.7 7.06e-02 - 9.23e-01 1.00e+00h 1\n 68 -9.1162089e-01 8.08e-06 3.74e+03 -5.7 7.07e-05 - 1.00e+00 7.81e-02h 1\n 69 -9.1161628e-01 2.81e-11 6.05e-02 -5.7 6.80e-05 - 1.00e+00 1.00e+00f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 70 -9.1161632e-01 5.33e-15 8.20e-06 -5.7 5.53e-08 - 1.00e+00 1.00e+00f 1\n 71 -9.1161999e-01 1.73e-13 9.95e-01 -8.6 1.02e-05 - 9.97e-01 1.00e+00f 1\n 72 
-9.1162011e-01 7.11e-15 3.98e-08 -8.6 1.26e-07 - 1.00e+00 1.00e+00h 1\n 73 -9.1162017e-01 6.22e-15 1.15e-09 -8.6 6.19e-08 - 1.00e+00 1.00e+00h 1\n\nNumber of Iterations....: 73\n\n (scaled) (unscaled)\nObjective...............: -9.1162017257266570e-01 -9.1162017257266570e-01\nDual infeasibility......: 1.1528198673449452e-09 1.1528198673449452e-09\nConstraint violation....: 6.2172489379008766e-15 6.2172489379008766e-15\nComplementarity.........: 2.7941614152620012e-09 2.7941614152620012e-09\nOverall NLP error.......: 2.7941614152620012e-09 2.7941614152620012e-09\n\n\nNumber of objective function evaluations = 126\nNumber of objective gradient evaluations = 74\nNumber of equality constraint evaluations = 126\nNumber of inequality constraint evaluations = 126\nNumber of equality constraint Jacobian evaluations = 74\nNumber of inequality constraint Jacobian evaluations = 74\nNumber of Lagrangian Hessian evaluations = 73\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.405\nTotal CPU secs in NLP function evaluations = 0.017\n\nEXIT: Optimal Solution Found.\n"
],
[
"#print out model size and solution values\nprint(\"Mixed NN Solution:\")\nprint(\"# of variables: \",model3_mixed.nvariables())\nprint(\"# of constraints: \",model3_mixed.nconstraints())\nprint(\"x = \", solution_3_mixed[0])\nprint(\"y = \", solution_3_mixed[1])\nprint(\"Solve Time: \", status_3_mixed['Solver'][0]['Time'])",
"Mixed NN Solution:\n# of variables: 509\n# of constraints: 608\nx = -0.33286905796510236\ny = -0.9116201725726657\nSolve Time: 0.6036348342895508\n"
]
],
[
[
"### Final Plots and Discussion\n\nWe lastly plot the results of each optimization problem. Some of the main take-aways from this notebook are as follows:\n- A broad set of dense neural network architectures can be represented in Pyomo using OMLT. This notebook used the Keras reader to import sequential Keras models but OMLT also supports using ONNX models (see `import_network.ipynb`). OMLT additionally supports Convolutional Neural Networks (see `mnist_example_cnn.ipynb`).\n- The reduced-space formulation provides a computationally tractable means to represent neural networks that contain smooth activation functions and can be used with continuous optimizers to obtain local solutions.\n- The full-space formulation permits representing ReLU activation functions using either complementarity or 'BigM' approaches with binary variables (as well as partition-based approaches not shown in this notebook).\n- The full-space formulation further allows one to optimize over neural networks that contain mixed activation functions by formulating ReLU logic as complementarity conditions.\n- Using binary variables to represent ReLU can attain global solutions (if the rest of the problem is convex), whereas the complementarity formulation provides local solutions but tends to be more scalable.",
"_____no_output_____"
]
],
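[
[
"As a lightweight cross-check of the optima reported above (a minimal sketch; it assumes the grid `x` and the prediction arrays `y_predict_relu` and `y_predict_mixed` computed earlier in the notebook), each optimizer's objective can be compared with the network prediction at the nearest sampled input:\n\n```python\nimport numpy as np\n\nx_grid = np.asarray(x).ravel()\nchecks = [\n    ('relu (bigm)', solution_2_bigm, y_predict_relu),\n    ('mixed', solution_3_mixed, y_predict_mixed),\n]\nfor name, (x_opt, y_opt), y_grid in checks:\n    i = int(np.argmin(np.abs(x_grid - x_opt)))          # nearest grid point to x*\n    y_near = float(np.asarray(y_grid).ravel()[i])\n    print(f'{name}: optimizer y = {y_opt:.4f}, prediction near x* = {y_near:.4f}')\n```",
"_____no_output_____"
]
],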
[
[
"#create a plot with 3 subplots\nfig,axs = plt.subplots(1,3,figsize = (24,8))\n\n#nn1 - sigmoid\naxs[0].plot(x,y_predict_sigmoid,linewidth = 3.0,linestyle=\"dotted\",color = \"orange\")\naxs[0].set_title(\"sigmoid\")\naxs[0].scatter([solution_1_reduced[0]],[solution_1_reduced[1]],color = \"black\",s = 300, label=\"reduced space\")\naxs[0].scatter([solution_1_full[0]],[solution_1_full[1]],color = \"blue\",s = 300, label=\"full space\")\naxs[0].legend()\n\n#nn2 - relu\naxs[1].plot(x,y_predict_relu,linewidth = 3.0,linestyle=\"dotted\",color = \"green\")\naxs[1].set_title(\"relu\")\naxs[1].scatter([solution_2_comp[0]],[solution_2_comp[1]],color = \"black\",s = 300, label=\"complementarity\")\naxs[1].scatter([solution_2_bigm[0]],[solution_2_bigm[1]],color = \"blue\",s = 300, label=\"bigm\")\naxs[1].legend()\n\n#nn3 - mixed\naxs[2].plot(x,y_predict_mixed,linewidth = 3.0,linestyle=\"dotted\", color = \"red\")\naxs[2].set_title(\"mixed\")\naxs[2].scatter([solution_3_mixed[0]],[solution_3_mixed[1]],color = \"black\",s = 300);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d096483e0482874dfcee810b27d96673293c8520 | 46,839 | ipynb | Jupyter Notebook | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring | 66555a1f80208c7bac16355822ac12fd195f5f68 | [
"MIT"
] | null | null | null | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring | 66555a1f80208c7bac16355822ac12fd195f5f68 | [
"MIT"
] | null | null | null | examples/17_TPA_FRF_based.ipynb | anantagrg/FBS_Substructuring | 66555a1f80208c7bac16355822ac12fd195f5f68 | [
"MIT"
] | null | null | null | 102.717105 | 36,440 | 0.865667 | [
[
[
"# transmissibility-based TPA: FRF based",
"_____no_output_____"
],
[
"In this example a numerical example is used to demonstrate a FRF based TPA example.",
"_____no_output_____"
]
],
[
[
"import pyFBS\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom matplotlib.colors import LogNorm\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Example datasets",
"_____no_output_____"
],
[
"Load the required predefined datasets:",
"_____no_output_____"
]
],
[
[
"pyFBS.download_lab_testbench()",
"100%|█████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 12390.85it/s]\n100%|██████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3000.93it/s]\n100%|██████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 4654.43it/s]"
],
[
"xlsx_pos = r\"./lab_testbench/Measurements/TPA_synt.xlsx\"\n\nstl_A = r\"./lab_testbench/STL/A.stl\"\nstl_B = r\"./lab_testbench/STL/B.stl\"\nstl_AB = r\"./lab_testbench/STL/AB.stl\"\n\ndf_acc_AB = pd.read_excel(xlsx_pos, sheet_name='Sensors_AB')\ndf_chn_AB = pd.read_excel(xlsx_pos, sheet_name='Channels_AB')\ndf_imp_AB = pd.read_excel(xlsx_pos, sheet_name='Impacts_AB')\n\ndf_vp = pd.read_excel(xlsx_pos, sheet_name='VP_Channels')\ndf_vpref = pd.read_excel(xlsx_pos, sheet_name='VP_RefChannels')",
"_____no_output_____"
]
],
[
[
"## Numerical model",
"_____no_output_____"
],
[
"Load the corresponding .full and .ress file from the example datasets:",
"_____no_output_____"
]
],
[
[
"full_file_AB = r\"./lab_testbench/FEM/AB.full\"\nress_file_AB = r\"./lab_testbench/FEM/AB.rst\"",
"_____no_output_____"
]
],
[
[
" Create an MK model for each component:",
"_____no_output_____"
]
],
[
[
"MK_AB = pyFBS.MK_model(ress_file_AB, full_file_AB, no_modes=100, recalculate=False)",
"C:\\Users\\tomaz.bregar\\Anaconda3\\lib\\site-packages\\pyvista\\core\\pointset.py:610: UserWarning: VTK 9 no longer accepts an offset array\n warnings.warn('VTK 9 no longer accepts an offset array')\n"
]
],
[
[
"The locations and directions of responses and excitations often do not match exactly with the numerical model, so we need to find the nodes closest to these points. Only the locations are updated, the directions remain the same.",
"_____no_output_____"
]
],
[
[
"df_chn_AB_up = MK_AB.update_locations_df(df_chn_AB)\ndf_imp_AB_up = MK_AB.update_locations_df(df_imp_AB)",
"_____no_output_____"
]
],
[
[
"## 3D view",
"_____no_output_____"
],
[
"Open 3D viewer in the background. With the 3D viewer the subplot capabilities of PyVista can be used.",
"_____no_output_____"
]
],
[
[
"view3D = pyFBS.view3D(show_origin=False, show_axes=False, title=\"TPA\")",
"_____no_output_____"
]
],
[
[
"Add the STL file of structure AB to the plot and show the corresponding accelerometer, channels and impacts.",
"_____no_output_____"
]
],
[
[
"view3D.plot.add_text(\"AB\", position='upper_left', font_size=10, color=\"k\", font=\"times\", name=\"AB_structure\")\n\nview3D.add_stl(stl_AB, name=\"AB_structure\", color=\"#8FB1CC\", opacity=.1)\nview3D.plot.add_mesh(MK_AB.mesh, scalars=np.zeros(MK_AB.mesh.points.shape[0]), show_scalar_bar=False, name=\"mesh_AB\", cmap=\"coolwarm\", show_edges=True)\nview3D.show_chn(df_chn_AB_up, color=\"green\", overwrite=True)\nview3D.show_imp(df_imp_AB_up, color=\"red\", overwrite=True);\nview3D.show_acc(df_acc_AB, overwrite=True)\nview3D.show_vp(df_vp, color=\"blue\", overwrite=True)\n\nview3D.label_imp(df_imp_AB_up)\nview3D.label_acc(df_acc_AB)",
"_____no_output_____"
]
],
[
[
"## FRF sythetization",
"_____no_output_____"
],
[
" Perform the FRF sythetization for each component based on the updated locations:",
"_____no_output_____"
]
],
[
[
"MK_AB.FRF_synth(df_chn_AB_up, df_imp_AB_up, f_start=0, modal_damping=0.003, frf_type=\"accelerance\")",
"_____no_output_____"
]
],
[
[
"First, structural admittance $\\boldsymbol{\\text{Y}}_{31}^{\\text{AB}}$ is obtained.",
"_____no_output_____"
]
],
[
[
"imp_loc = 10\n\nY31_AB = MK_AB.FRF[:, 9:12, imp_loc:imp_loc+1]\nY31_AB.shape",
"_____no_output_____"
]
],
[
[
"Then, structural admittance $\\boldsymbol{\\text{Y}}_{41}^{\\text{AB}}$ is obtained.",
"_____no_output_____"
]
],
[
[
"Y41_AB = MK_AB.FRF[:, :9, imp_loc:imp_loc+1]\nY41_AB.shape",
"_____no_output_____"
]
],
[
[
"## Aplication of the FRF based TPA",
"_____no_output_____"
],
[
"Calculation of transmissibility matrix $\\boldsymbol{\\text{T}}_{34, f_1}^{\\text{AB}}$:",
"_____no_output_____"
]
],
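[
[
"Written out, the next cells compute\n\n$$\\boldsymbol{\\text{T}}_{34}^{\\text{AB}} = \\boldsymbol{\\text{Y}}_{31}^{\\text{AB}} \\left(\\boldsymbol{\\text{Y}}_{41}^{\\text{AB}}\\right)^{+}, \\qquad \\boldsymbol{\\text{u}}_{3}^{\\text{TPA}} = \\boldsymbol{\\text{T}}_{34}^{\\text{AB}} \\, \\boldsymbol{\\text{u}}_{4},$$\n\nwhere $(\\cdot)^{+}$ denotes the Moore-Penrose pseudo-inverse, evaluated frequency line by frequency line with `np.linalg.pinv`.",
"_____no_output_____"
]
],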
[
[
"T34 = Y31_AB @ np.linalg.pinv(Y41_AB)\nT34.shape",
"_____no_output_____"
]
],
[
[
"Define operational displacements $\\boldsymbol{\\text{u}}_4$:",
"_____no_output_____"
]
],
[
[
"u4 = MK_AB.FRF[:, :9, imp_loc:imp_loc+1]\nu4.shape",
"_____no_output_____"
]
],
[
[
"Calcualting response $\\boldsymbol{\\text{u}}_3^{\\text{TPA}}$.",
"_____no_output_____"
]
],
[
[
"u3 = T34 @ u4\nu3.shape",
"_____no_output_____"
]
],
[
[
"On board validation: comparison of predicted $\\boldsymbol{\\text{u}}_{3}^{\\text{TPA}}$ and operational $\\boldsymbol{\\text{u}}_{3}^{\\text{MK}}$:",
"_____no_output_____"
]
],
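[
[
"In addition to the visual comparison below, a simple frequency-wise relative error can quantify the match (a minimal sketch; `u3_MK` is taken from the same FRF slice used in the plotting cell):\n\n```python\nimport numpy as np\n\n# reference response taken directly from the MK model\nu3_MK = MK_AB.FRF[:, 9:12, imp_loc:imp_loc+1]\nrel_err = np.abs(u3 - u3_MK) / (np.abs(u3_MK) + 1e-12)   # small offset guards against zeros\nprint('median relative error per channel:', np.median(rel_err, axis=0).ravel())\n```",
"_____no_output_____"
]
],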
[
[
"plt.figure(figsize=(10, 5))\n\nu3_MK = MK_AB.FRF[:, 9:12, imp_loc:imp_loc+1]\nsel = 0\n\nplt.subplot(211)\nplt.semilogy(np.abs(u3_MK[:,sel,0]), label='MK');\nplt.semilogy(np.abs(u3[:,sel,0]), '--', label='TPA');\nplt.ylim(10**-8, 10**4);\nplt.xlim(0, 2000)\nplt.legend(loc=0);\n\nplt.subplot(413)\nplt.plot(np.angle(u3_MK[:,sel,0]));\nplt.plot(np.angle(u3[:,sel,0]), '--');\nplt.xlim(0, 2000);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0964b3576c0ec9db23d12ccb6338ae996399840 | 813,311 | ipynb | Jupyter Notebook | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines | 3362ef348c0e156a6082c357b95951cd4b293ade | [
"MIT"
] | null | null | null | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines | 3362ef348c0e156a6082c357b95951cd4b293ade | [
"MIT"
] | 1 | 2021-07-16T22:45:20.000Z | 2021-07-16T22:45:20.000Z | exact-match-centroid-pipeline/embedding_eval.ipynb | nicklein/table-linker-pipelines | 3362ef348c0e156a6082c357b95951cd4b293ade | [
"MIT"
] | 6 | 2021-04-05T10:59:55.000Z | 2021-08-17T20:17:02.000Z | 71.06256 | 51,180 | 0.648866 | [
[
[
"import warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\n\nimport pandas as pd\nimport numpy as np\nimport sklearn.metrics\n\npd.reset_option('all')",
"\n: boolean\n use_inf_as_null had been deprecated and will be removed in a future\n version. Use `use_inf_as_na` instead.\n\n"
],
[
"# 84575189_0_6365692015941409487 -> no matches at all\ndata = pd.read_csv('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/14067031_0_559833072073397908.csv')\ndata",
"_____no_output_____"
],
[
"data = data.fillna('')\ndata",
"_____no_output_____"
]
],
[
[
"### Which is better text embedding or graph embedding?",
"_____no_output_____"
]
],
[
[
"# define this question \ndata[(data['kg_id'] == data['GT_kg_id']) & (data['kg_id'] != '')]",
"_____no_output_____"
]
],
[
[
"### By cell linking task. Count/compute\n- Number of tasks\n- Number and fraction of tasks with known ground truth\n- Number and fraction of tasks with ground truth in the candidate set\n- Number and fraction of singleton candidate sets\n- Number and fraction of singleton candidate sets containing ground truth\n- Top-1 accuracy, Top-5 accuracy and NDCG using retrieval_score, text-embedding-score and graph-embedding-score. In our case with binary relevance I think NDCG is the same as DCG.\n- Average Top-1, Top-5 and NDCG metrics",
"_____no_output_____"
]
],
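[
[
"A quick sanity check of the NDCG behaviour noted above (a minimal toy sketch, independent of the evaluation files): with binary relevance and a single relevant candidate, `sklearn.metrics.ndcg_score` reduces to $1/\\log_2(r+1)$, where $r$ is the rank of the ground-truth candidate.\n\n```python\nimport numpy as np\nimport sklearn.metrics\n\n# one relevant candidate ranked 3rd out of 5 -> NDCG = 1/log2(3+1) = 0.5\nlabels = np.array([[0, 0, 1, 0, 0]])\nscores = np.array([[0.9, 0.8, 0.7, 0.6, 0.5]])\nprint(sklearn.metrics.ndcg_score(labels, scores))\n```",
"_____no_output_____"
]
],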
[
[
"row_idx, col_idx = 2, 0\n\nrelevant_df = data[(data['column'] == col_idx) & (data['row'] == row_idx) & (data['kg_id'] != '')]\n\nnum_tasks = len(relevant_df)\nnum_tasks",
"_____no_output_____"
],
[
"num_tasks_known_gt = len(relevant_df[relevant_df['GT_kg_id'] != ''])\nnum_tasks_known_gt",
"_____no_output_____"
],
[
"is_gt_in_candidate = len(relevant_df[relevant_df['GT_kg_id'] == relevant_df['kg_id']])\nis_gt_in_candidate",
"_____no_output_____"
],
[
"is_candidate_set_singleton = len(relevant_df) == 1\nis_candidate_set_singleton",
"_____no_output_____"
],
[
"is_top_one_accurate = False\ntop_one_row = relevant_df.iloc[0]\nif top_one_row['kg_id'] == top_one_row['GT_kg_id']:\n is_top_one_accurate = True\nis_top_one_accurate",
"_____no_output_____"
],
[
"is_top_five_accurate = False\ntop_five_rows = relevant_df.iloc[0:5]\nfor i, row in top_five_rows.iterrows():\n if row['kg_id'] == row['GT_kg_id']:\n is_top_five_accurate = True\nis_top_five_accurate",
"_____no_output_____"
],
[
"is_top_ten_accurate = False\ntop_ten_rows = relevant_df.iloc[0:10]\nfor i, row in top_ten_rows.iterrows():\n if row['kg_id'] == row['GT_kg_id']:\n is_top_ten_accurate = True\nis_top_ten_accurate",
"_____no_output_____"
],
[
"# parse eval file\ndef parse_eval_file_stats(file_name=None, eval_data=None):\n if file_name is not None and eval_data is None:\n eval_data = pd.read_csv(file_name)\n eval_data = eval_data.fillna('')\n parsed_eval_data = {}\n for ei, erow in eval_data.iterrows():\n if 'table_id' not in erow:\n table_id = file_name.split('/')[-1].split('.csv')[0]\n else:\n table_id = erow['table_id']\n \n row_idx, col_idx = erow['row'], erow['column']\n if (table_id, row_idx, col_idx) in parsed_eval_data:\n continue\n relevant_df = eval_data[(eval_data['column'] == col_idx) & (eval_data['row'] == row_idx) & (eval_data['kg_id'] != '')]\n \n if len(relevant_df) == 0:\n parsed_eval_data[(row_idx, col_idx)] = {\n 'table_id': table_id,\n 'GT_kg_id': erow['GT_kg_id'],\n 'row': row_idx,\n 'column': col_idx,\n 'num_candidate': 0,\n 'num_candidate_known_gt': 0,\n 'is_gt_in_candidate': False,\n 'is_candidate_set_singleton': False,\n 'is_top_one_accurate': False,\n 'is_top_five_accurate': False\n }\n continue\n \n row_col_stats = {}\n row_col_stats['table_id'] = table_id\n row_col_stats['GT_kg_id'] = erow['GT_kg_id']\n row_col_stats['row'] = erow['row']\n row_col_stats['column'] = erow['column']\n row_col_stats['num_candidate'] = len(relevant_df)\n row_col_stats['num_candidate_known_gt'] = len(relevant_df[relevant_df['GT_kg_id'] != ''])\n row_col_stats['is_gt_in_candidate'] = len(relevant_df[relevant_df['GT_kg_id'] == relevant_df['kg_id']]) > 0\n row_col_stats['is_candidate_set_singleton'] = len(relevant_df) == 1\n\n is_top_one_accurate = False\n top_one_row = relevant_df.iloc[0]\n if top_one_row['kg_id'] == top_one_row['GT_kg_id']:\n is_top_one_accurate = True\n row_col_stats['is_top_one_accurate'] = is_top_one_accurate\n \n is_top_five_accurate = False\n top_five_rows = relevant_df.iloc[0:5]\n for i, row in top_five_rows.iterrows():\n if row['kg_id'] == row['GT_kg_id']:\n is_top_five_accurate = True\n row_col_stats['is_top_five_accurate'] = is_top_five_accurate\n \n parsed_eval_data[(table_id, row_idx, col_idx)] = row_col_stats\n return parsed_eval_data",
"_____no_output_____"
],
[
"e_data = parse_eval_file_stats(file_name='/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/84575189_0_6365692015941409487.csv')\nlen(e_data), e_data[(\"84575189_0_6365692015941409487\", 0, 2)]",
"_____no_output_____"
],
[
"e_data = parse_eval_file_stats(eval_data=all_data)\nlen(e_data), e_data[(\"84575189_0_6365692015941409487\", 0, 2)]",
"_____no_output_____"
],
[
"import json\nwith open('./eval_all.json', 'w') as f:\n json.dump(list(e_data.values()), f, indent=4)",
"_____no_output_____"
],
[
"import json\nwith open('./eval_14067031_0_559833072073397908.json', 'w') as f:\n json.dump(list(e_data.values()), f, indent=4)",
"_____no_output_____"
],
[
"len([k for k in e_data if e_data[k]['is_gt_in_candidate']])",
"_____no_output_____"
],
[
"import os\n\neval_file_names = []\n\nfor (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):\n for fn in filenames:\n if \"csv\" not in fn:\n continue\n abs_fn = dirpath + fn\n assert os.path.isfile(abs_fn)\n if os.path.getsize(abs_fn) == 0:\n continue\n eval_file_names.append(abs_fn)\nlen(eval_file_names)",
"_____no_output_____"
],
[
"eval_file_names",
"_____no_output_____"
],
[
"# merge all eval files in one df\n\ndef merge_df(file_names: list):\n df_list = []\n for fn in file_names:\n fid = fn.split('/')[-1].split('.csv')[0]\n df = pd.read_csv(fn)\n df['table_id'] = fid\n # df = df.fillna('')\n df_list.append(df)\n \n return pd.concat(df_list)",
"_____no_output_____"
],
[
"all_data = merge_df(eval_file_names)\nall_data",
"_____no_output_____"
],
[
"all_data[all_data['table_id'] == '14067031_0_559833072073397908']",
"_____no_output_____"
],
[
"# filter out empty task: NaN in candidate\nno_nan_all_data = all_data[pd.notna(all_data['kg_id'])]\nno_nan_all_data",
"_____no_output_____"
],
[
"all_data[pd.isna(all_data['kg_id'])]",
"_____no_output_____"
],
[
"# parse eval file\nfrom pandas.core.common import SettingWithCopyError\nimport numpy as np\nimport sklearn.metrics\n\npd.options.mode.chained_assignment = 'raise'\n\ndef parse_eval_files_stats(eval_data):\n res = {}\n candidate_eval_data = eval_data.groupby(['table_id', 'row', 'column'])['table_id'].count().reset_index(name=\"count\")\n res['num_tasks'] = len(eval_data.groupby(['table_id', 'row', 'column']))\n res['num_tasks_with_gt'] = len(eval_data[pd.notna(eval_data['GT_kg_id'])].groupby(['table_id', 'row', 'column']))\n res['num_tasks_with_gt_in_candidate'] = len(eval_data[eval_data['evaluation_label'] == 1].groupby(['table_id', 'row', 'column']))\n res['num_tasks_with_singleton_candidate'] = len(candidate_eval_data[candidate_eval_data['count'] == 1].groupby(['table_id', 'row', 'column']))\n \n singleton_eval_data = candidate_eval_data[candidate_eval_data['count'] == 1]\n num_tasks_with_singleton_candidate_with_gt = 0\n for i, row in singleton_eval_data.iterrows():\n table_id, row_idx, col_idx = row['table_id'], row['row'], row['column']\n c_e_data = eval_data[(eval_data['table_id'] == table_id) & (eval_data['row'] == row_idx) & (eval_data['column'] == col_idx)]\n assert len(c_e_data) == 1\n if c_e_data.iloc[0]['evaluation_label'] == 1:\n num_tasks_with_singleton_candidate_with_gt += 1\n res['num_tasks_with_singleton_candidate_with_gt'] = num_tasks_with_singleton_candidate_with_gt\n \n num_tasks_with_retrieval_top_one_accurate = []\n num_tasks_with_retrieval_top_five_accurate = []\n num_tasks_with_text_top_one_accurate = []\n num_tasks_with_text_top_five_accurate = []\n num_tasks_with_graph_top_one_accurate = []\n num_tasks_with_graph_top_five_accurate = []\n ndcg_score_r_list = []\n ndcg_score_t_list = []\n ndcg_score_g_list = []\n has_gt_list = []\n has_gt_in_candidate = []\n # candidate_eval_data = candidate_eval_data[:1]\n for i, row in candidate_eval_data.iterrows():\n table_id, row_idx, col_idx = row['table_id'], row['row'], row['column']\n # print(f\"working on {table_id}: {row_idx}, {col_idx}\")\n c_e_data = eval_data[(eval_data['table_id'] == table_id) & (eval_data['row'] == row_idx) & (eval_data['column'] == col_idx)]\n assert len(c_e_data) > 0\n \n if np.nan not in set(c_e_data['GT_kg_id']):\n has_gt_list.append(1)\n else:\n has_gt_list.append(0)\n \n if 1 in set(c_e_data['evaluation_label']):\n has_gt_in_candidate.append(1)\n else:\n has_gt_in_candidate.append(0)\n \n # handle retrieval score\n s_data = c_e_data.sort_values(by=['retrieval_score'], ascending=False)\n if s_data.iloc[0]['evaluation_label'] == 1:\n num_tasks_with_retrieval_top_one_accurate.append(1)\n else:\n num_tasks_with_retrieval_top_one_accurate.append(0)\n if 1 in set(s_data.iloc[0:5]['evaluation_label']):\n num_tasks_with_retrieval_top_five_accurate.append(1)\n else:\n num_tasks_with_retrieval_top_five_accurate.append(0)\n \n # handle text-embedding-score\n s_data = c_e_data.sort_values(by=['text-embedding-score'], ascending=False)\n if s_data.iloc[0]['evaluation_label'] == 1:\n num_tasks_with_text_top_one_accurate.append(1)\n else:\n num_tasks_with_text_top_one_accurate.append(0)\n if 1 in set(s_data.iloc[0:5]['evaluation_label']):\n num_tasks_with_text_top_five_accurate.append(1)\n else:\n num_tasks_with_text_top_five_accurate.append(0)\n \n # handle graph-embedding-score\n s_data = c_e_data.sort_values(by=['graph-embedding-score'], ascending=False)\n if s_data.iloc[0]['evaluation_label'] == 1:\n num_tasks_with_graph_top_one_accurate.append(1)\n else:\n 
num_tasks_with_graph_top_one_accurate.append(0)\n if 1 in set(s_data.iloc[0:5]['evaluation_label']):\n num_tasks_with_graph_top_five_accurate.append(1)\n else:\n num_tasks_with_graph_top_five_accurate.append(0)\n \n cf_e_data = c_e_data.copy()\n cf_e_data['evaluation_label'] = cf_e_data['evaluation_label'].replace(-1, 0)\n cf_e_data['text-embedding-score'] = cf_e_data['text-embedding-score'].replace(np.nan, 0)\n cf_e_data['graph-embedding-score'] = cf_e_data['graph-embedding-score'].replace(np.nan, 0)\n try:\n ndcg_score_r_list.append(\n sklearn.metrics.ndcg_score(\n np.array([list(cf_e_data['evaluation_label'])]),\n np.array([list(cf_e_data['retrieval_score'])])\n )\n )\n except:\n if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:\n ndcg_score_r_list.append(1.0)\n elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:\n ndcg_score_r_list.append(0.0)\n else:\n print(\"why am i here\")\n try:\n ndcg_score_t_list.append(\n sklearn.metrics.ndcg_score(\n np.array([list(cf_e_data['evaluation_label'])]),\n np.array([list(cf_e_data['text-embedding-score'])])\n )\n )\n except:\n if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:\n ndcg_score_t_list.append(1.0)\n elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:\n ndcg_score_t_list.append(0.0)\n else:\n print(\"text\", cf_e_data['evaluation_label'], cf_e_data['text-embedding-score'] )\n print(\"why am i here\")\n try:\n ndcg_score_g_list.append(\n sklearn.metrics.ndcg_score(\n np.array([list(cf_e_data['evaluation_label'])]),\n np.array([list(cf_e_data['graph-embedding-score'])])\n )\n )\n except:\n if len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] == 1:\n ndcg_score_g_list.append(1.0)\n elif len(cf_e_data['evaluation_label']) == 1 and cf_e_data['evaluation_label'].iloc[0] != 1:\n ndcg_score_g_list.append(0.0)\n else:\n print(\"graph\", cf_e_data['evaluation_label'], cf_e_data['graph-embedding-score'])\n print(\"why am i here\")\n\n candidate_eval_data['r_ndcg'] = ndcg_score_r_list\n candidate_eval_data['t_ndcg'] = ndcg_score_t_list\n candidate_eval_data['g_ndcg'] = ndcg_score_g_list\n candidate_eval_data['retrieval_top_one_accurate'] = num_tasks_with_retrieval_top_one_accurate\n candidate_eval_data['retrieval_top_five_accurate'] = num_tasks_with_retrieval_top_five_accurate\n candidate_eval_data['text_top_one_accurate'] = num_tasks_with_text_top_one_accurate\n candidate_eval_data['text_top_five_accurate'] = num_tasks_with_text_top_five_accurate\n candidate_eval_data['graph_top_one_accurate'] = num_tasks_with_graph_top_one_accurate\n candidate_eval_data['graph_top_five_accurate'] = num_tasks_with_graph_top_five_accurate\n candidate_eval_data['has_gt'] = has_gt_list\n candidate_eval_data['has_gt_in_candidate'] = has_gt_in_candidate\n \n res['num_tasks_with_retrieval_top_one_accurate'] = sum(num_tasks_with_retrieval_top_one_accurate)\n res['num_tasks_with_retrieval_top_five_accurate'] = sum(num_tasks_with_retrieval_top_five_accurate)\n res['num_tasks_with_text_top_one_accurate'] = sum(num_tasks_with_text_top_one_accurate)\n res['num_tasks_with_text_top_five_accurate'] = sum(num_tasks_with_text_top_five_accurate)\n res['num_tasks_with_graph_top_one_accurate'] = sum(num_tasks_with_graph_top_one_accurate)\n res['num_tasks_with_graph_top_five_accurate'] = sum(num_tasks_with_graph_top_five_accurate)\n \n return res, candidate_eval_data",
"_____no_output_____"
],
[
"# no_nan_all_data[no_nan_all_data['table_id'] == \"84575189_0_6365692015941409487\"]",
"_____no_output_____"
],
[
"res, candidate_eval_data = parse_eval_files_stats(no_nan_all_data[no_nan_all_data['table_id'] == \"84575189_0_6365692015941409487\"])\nres",
"_____no_output_____"
],
[
"res, candidate_eval_data = parse_eval_files_stats(no_nan_all_data)\nprint(res)\ndisplay(candidate_eval_data)",
"{'num_tasks': 583, 'num_tasks_with_gt': 580, 'num_tasks_with_gt_in_candidate': 525, 'num_tasks_with_singleton_candidate': 245, 'num_tasks_with_singleton_candidate_with_gt': 231, 'num_tasks_with_retrieval_top_one_accurate': 297, 'num_tasks_with_retrieval_top_five_accurate': 463, 'num_tasks_with_text_top_one_accurate': 467, 'num_tasks_with_text_top_five_accurate': 517, 'num_tasks_with_graph_top_one_accurate': 502, 'num_tasks_with_graph_top_five_accurate': 522}\n"
],
[
"candidate_eval_data['has_gt'].sum(), candidate_eval_data['has_gt_in_candidate'].sum()",
"_____no_output_____"
],
[
"candidate_eval_data.to_csv('./candidate_eval_no_empty.csv', index=False)",
"_____no_output_____"
],
[
"# Conclusion of exact-match on all tasks with ground truth (no filtering)\nprint(f\"number of tasks: {res['num_tasks']}\")\nprint(f\"number of tasks with ground truth: {res['num_tasks_with_gt']}\")\nprint(f\"number of tasks with ground truth in candidate set: {res['num_tasks_with_gt_in_candidate']}, which is {res['num_tasks_with_gt_in_candidate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks has singleton candidate set: {res['num_tasks_with_singleton_candidate']}, which is {res['num_tasks_with_singleton_candidate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks has singleton candidate set which is ground truth: {res['num_tasks_with_singleton_candidate_with_gt']}, which is {res['num_tasks_with_singleton_candidate_with_gt']/res['num_tasks_with_gt'] * 100}%\")\nprint()\nprint(f\"number of tasks with top-1 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_one_accurate']}, which is {res['num_tasks_with_retrieval_top_one_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_five_accurate']}, which is {res['num_tasks_with_retrieval_top_five_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks with top-1 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_one_accurate']}, which is {res['num_tasks_with_text_top_one_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_five_accurate']}, which is {res['num_tasks_with_text_top_five_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks with top-1 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_one_accurate']}, which is {res['num_tasks_with_graph_top_one_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_five_accurate']}, which is {res['num_tasks_with_graph_top_five_accurate']/res['num_tasks_with_gt'] * 100}%\")\nprint()\ncandidate_eval_data_with_gt = candidate_eval_data[candidate_eval_data['has_gt'] == 1]\nprint(f\"average ndcg score ranked by retrieval score: {candidate_eval_data_with_gt['r_ndcg'].mean()}\")\nprint(f\"average ndcg score ranked by text-embedding-score: {candidate_eval_data_with_gt['t_ndcg'].mean()}\")\nprint(f\"average ndcg score ranked by graph-embedding-score: {candidate_eval_data_with_gt['g_ndcg'].mean()}\")",
"number of tasks: 583\nnumber of tasks with ground truth: 580\nnumber of tasks with ground truth in candidate set: 525, which is 90.51724137931035%\nnumber of tasks has singleton candidate set: 245, which is 42.241379310344826%\nnumber of tasks has singleton candidate set which is ground truth: 231, which is 39.827586206896555%\n\nnumber of tasks with top-1 accuracy in terms of retrieval score: 297, which is 51.206896551724135%\nnumber of tasks with top-5 accuracy in terms of retrieval score: 463, which is 79.82758620689656%\nnumber of tasks with top-1 accuracy in terms of text embedding score: 467, which is 80.51724137931035%\nnumber of tasks with top-5 accuracy in terms of text embedding score: 517, which is 89.13793103448275%\nnumber of tasks with top-1 accuracy in terms of graph embedding score: 502, which is 86.55172413793103%\nnumber of tasks with top-5 accuracy in terms of graph embedding score: 522, which is 90.0%\n\naverage ndcg score ranked by retrieval score: 0.693731616102479\naverage ndcg score ranked by text-embedding-score: 0.859656090416565\naverage ndcg score ranked by graph-embedding-score: 0.8868615205413546\n"
],
[
"# Conclusion of exact-match on filtered tasks: candidate set is non singleton and has ground truth\nf_candidate_eval_data = candidate_eval_data[(candidate_eval_data['has_gt'] == 1) & (candidate_eval_data['count'] > 1)]\nf_candidate_eval_data",
"_____no_output_____"
],
[
"num_tasks = len(f_candidate_eval_data)\ndf_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]\ndf_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]\ndf_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]\ndf_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]\ndf_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]\ndf_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]\ndf_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]\ndf_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]\ndf_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]\n\nprint(f\"number of tasks with ground truth: {num_tasks}\")\nprint(f\"number of tasks with ground truth in candidate set: {len(df_has_gt_in_candidate)}, which is {len(df_has_gt_in_candidate)/num_tasks * 100}%\")\nprint(f\"number of tasks has singleton candidate set: {len(df_singleton_candidate)}, which is {len(df_singleton_candidate)/num_tasks * 100}%\")\nprint(f\"number of tasks has singleton candidate set which is ground truth: {len(df_singleton_candidate_has_gt)}, which is {len(df_singleton_candidate_has_gt)/num_tasks * 100}%\")\nprint()\nprint(f\"number of tasks with top-1 accuracy in terms of retrieval score: {len(df_retrieval_top_one_accurate)}, which is {len(df_retrieval_top_one_accurate)/num_tasks * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of retrieval score: {len(df_retrieval_top_five_accurate)}, which is {len(df_retrieval_top_five_accurate)/num_tasks * 100}%\")\nprint(f\"number of tasks with top-1 accuracy in terms of text embedding score: {len(df_text_top_one_accurate)}, which is {len(df_text_top_one_accurate)/num_tasks * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of text embedding score: {len(df_text_top_five_accurate)}, which is {len(df_text_top_five_accurate)/num_tasks * 100}%\")\nprint(f\"number of tasks with top-1 accuracy in terms of graph embedding score: {len(df_graph_top_one_accurate)}, which is {len(df_graph_top_one_accurate)/num_tasks * 100}%\")\nprint(f\"number of tasks with top-5 accuracy in terms of graph embedding score: {len(df_graph_top_five_accurate)}, which is {len(df_graph_top_five_accurate)/num_tasks * 100}%\")\nprint()\nprint(f\"average ndcg score ranked by retrieval score: {df_has_gt_in_candidate['r_ndcg'].mean()}\")\nprint(f\"average ndcg score ranked by text-embedding-score: {df_has_gt_in_candidate['t_ndcg'].mean()}\")\nprint(f\"average ndcg score ranked by graph-embedding-score: {df_has_gt_in_candidate['g_ndcg'].mean()}\")",
"number of tasks with ground truth: 336\nnumber of tasks with ground truth in candidate set: 294, which is 87.5%\nnumber of tasks has singleton candidate set: 0, which is 0.0%\nnumber of tasks has singleton candidate set which is ground truth: 0, which is 0.0%\n\nnumber of tasks with top-1 accuracy in terms of retrieval score: 66, which is 19.642857142857142%\nnumber of tasks with top-5 accuracy in terms of retrieval score: 232, which is 69.04761904761905%\nnumber of tasks with top-1 accuracy in terms of text embedding score: 236, which is 70.23809523809523%\nnumber of tasks with top-5 accuracy in terms of text embedding score: 286, which is 85.11904761904762%\nnumber of tasks with top-1 accuracy in terms of graph embedding score: 271, which is 80.65476190476191%\nnumber of tasks with top-5 accuracy in terms of graph embedding score: 291, which is 86.60714285714286%\n\naverage ndcg score ranked by retrieval score: 0.5828718957123736\naverage ndcg score ranked by text-embedding-score: 0.9102058926585296\naverage ndcg score ranked by graph-embedding-score: 0.9638764690951898\n"
],
[
"test_data = all_data[(all_data['table_id'] == \"14067031_0_559833072073397908\") & (all_data['row'] == 3) & (all_data['column'] == 0)]\ntest_data",
"_____no_output_____"
],
[
"sklearn.metrics.ndcg_score(np.array([list(all_data[:5]['evaluation_label'])]), np.array([list(all_data[:5]['retrieval_score'])]))",
"_____no_output_____"
],
[
"# Some ground truth is empty??? why???\nall_data[all_data['GT_kg_id'] == '']",
"_____no_output_____"
]
],
[
[
"### Graphs",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport sys",
"_____no_output_____"
],
[
"candidate_eval_data = pd.read_csv('./candidate_eval_no_empty.csv', index_col=False)\ncandidate_eval_data",
"_____no_output_____"
],
[
"# Line plot of top-1, top-5 and NDCG versus size of candidate set\nx_candidate_set_size = list(pd.unique(candidate_eval_data['count']))\nx_candidate_set_size.sort()\n\ny_r_top_one = []\ny_r_top_five = []\ny_t_top_one = []\ny_t_top_five = []\ny_g_top_one = []\ny_g_top_five = []\ny_avg_r_ndcg = []\ny_avg_t_ndcg = []\ny_avg_g_ndcg = []\n\nfor c in x_candidate_set_size:\n dff = candidate_eval_data[candidate_eval_data['count'] == c]\n y_r_top_one.append(len(dff[dff['retrieval_top_one_accurate'] == 1])/len(dff) * 100)\n y_r_top_five.append(len(dff[dff['retrieval_top_five_accurate'] == 1])/len(dff) * 100)\n \n y_t_top_one.append(len(dff[dff['text_top_one_accurate'] == 1])/len(dff) * 100)\n y_t_top_five.append(len(dff[dff['text_top_five_accurate'] == 1])/len(dff) * 100)\n \n y_g_top_one.append(len(dff[dff['graph_top_one_accurate'] == 1])/len(dff) * 100)\n y_g_top_five.append(len(dff[dff['graph_top_five_accurate'] == 1])/len(dff) * 100)\n \n y_avg_r_ndcg.append(dff['r_ndcg'].mean())\n y_avg_t_ndcg.append(dff['t_ndcg'].mean())\n y_avg_g_ndcg.append(dff['g_ndcg'].mean())\nlen(y_r_top_one), len(y_g_top_one), len(y_t_top_one), len(y_avg_r_ndcg)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfig, ax = plt.subplots()\nax.set_ylabel('percent')\nax.set_xlabel('candidate set size')\nax.plot(x_candidate_set_size, y_r_top_one, 'ro', label='retrieval_top_one_accurate')\nax.plot(x_candidate_set_size, y_r_top_five, 'bo', label='retrieval_top_five_accurate')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:8: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n \n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('percent')\nax.set_xlabel('candidate set size')\nax.plot(x_candidate_set_size, y_t_top_one, 'ro', label='text_top_one_accurate')\nax.plot(x_candidate_set_size, y_t_top_five, 'bo', label='text_top_five_accurate')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:7: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n import sys\n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('percent')\nax.set_xlabel('candidate set size')\nax.plot(x_candidate_set_size, y_g_top_one, 'ro', label='graph_top_one_accurate')\nax.plot(x_candidate_set_size, y_g_top_five, 'bo', label='graph_top_five_accurate')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:7: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n import sys\n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('average ndcg')\nax.set_xlabel('candidate set size')\nax.plot(x_candidate_set_size, y_avg_r_ndcg, 'ro', label='average ndcg score ranked by retrieval score')\nax.plot(x_candidate_set_size, y_avg_t_ndcg, 'bo', label='average ndcg score ranked by text-embedding-score')\nax.plot(x_candidate_set_size, y_avg_g_ndcg, 'go', label='average ndcg score ranked by graph-embedding-score')\nfig.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:8: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n \n"
]
],
[
[
"### 02/16 stats on each eval file",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ncandidate_eval_data = pd.read_csv('./candidate_eval.csv', index_col=False)\ncandidate_eval_data",
"_____no_output_____"
],
[
"# candidate_eval_data = candidate_eval_data.drop(['Unnamed: 0'], axis=1)\n# candidate_eval_data",
"_____no_output_____"
],
[
"import os\n\neval_file_names = []\neval_file_ids = []\n\nfor (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):\n for fn in filenames:\n if \"csv\" not in fn:\n continue\n abs_fn = dirpath + fn\n assert os.path.isfile(abs_fn)\n if os.path.getsize(abs_fn) == 0:\n continue\n eval_file_names.append(abs_fn)\n eval_file_ids.append(fn.split('.csv')[0])\nlen(eval_file_names), len(eval_file_ids)",
"_____no_output_____"
],
[
"eval_file_ids",
"_____no_output_____"
],
[
"f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == '52299421_0_4473286348258170200']\nf_candidate_eval_data",
"_____no_output_____"
],
[
"def compute_eval_file_stats(f_candidate_eval_data):\n res = {}\n num_tasks = len(f_candidate_eval_data)\n df_has_gt = f_candidate_eval_data[f_candidate_eval_data['has_gt'] == 1]\n df_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]\n df_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]\n df_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]\n df_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]\n df_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]\n df_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]\n df_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]\n df_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]\n df_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]\n \n res['table_id'] = f_candidate_eval_data['table_id'].iloc[0]\n res['num_tasks'] = num_tasks\n res['num_tasks_with_gt'] = len(df_has_gt)\n res['num_tasks_with_gt_in_candidate'] = len(df_has_gt_in_candidate) / len(df_has_gt) * 100\n res['num_tasks_with_singleton_candidate'] = len(df_singleton_candidate) / len(df_has_gt) * 100\n res['num_tasks_with_singleton_candidate_with_gt'] = len(df_singleton_candidate_has_gt) / len(df_has_gt) * 100\n res['num_tasks_with_retrieval_top_one_accurate'] = len(df_retrieval_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_retrieval_top_five_accurate'] = len(df_retrieval_top_five_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_text_top_one_accurate'] = len(df_text_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_text_top_five_accurate'] = len(df_text_top_five_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_graph_top_one_accurate'] = len(df_graph_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_graph_top_five_accurate'] = len(df_graph_top_five_accurate) / len(df_has_gt) * 100\n \n res['average_ndcg_retrieval'] = df_has_gt['r_ndcg'].mean()\n res['average_ndcg_text'] = df_has_gt['t_ndcg'].mean()\n res['average_ndcg_graph'] = df_has_gt['g_ndcg'].mean()\n \n return res",
"_____no_output_____"
],
[
"def compute_eval_file_stats_count(f_candidate_eval_data):\n res = {}\n num_tasks = len(f_candidate_eval_data)\n df_has_gt = f_candidate_eval_data[f_candidate_eval_data['has_gt'] == 1]\n df_has_gt_in_candidate = f_candidate_eval_data[f_candidate_eval_data['has_gt_in_candidate'] == 1]\n df_singleton_candidate = f_candidate_eval_data[f_candidate_eval_data['count'] == 1]\n df_singleton_candidate_has_gt = f_candidate_eval_data[(f_candidate_eval_data['count'] == 1) & (f_candidate_eval_data['has_gt_in_candidate'] == 1)]\n df_retrieval_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_one_accurate'] == 1]\n df_retrieval_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['retrieval_top_five_accurate'] == 1]\n df_text_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_one_accurate'] == 1]\n df_text_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['text_top_five_accurate'] == 1]\n df_graph_top_one_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_one_accurate'] == 1]\n df_graph_top_five_accurate = f_candidate_eval_data[f_candidate_eval_data['graph_top_five_accurate'] == 1]\n \n res['table_id'] = f_candidate_eval_data['table_id'].iloc[0]\n res['num_tasks'] = num_tasks\n res['num_tasks_with_gt'] = len(df_has_gt)\n res['num_tasks_with_gt_in_candidate'] = len(df_has_gt_in_candidate)\n res['num_tasks_with_singleton_candidate'] = len(df_singleton_candidate)\n res['num_tasks_with_singleton_candidate_with_gt'] = len(df_singleton_candidate_has_gt)\n res['num_tasks_with_retrieval_top_one_accurate'] = len(df_retrieval_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_retrieval_top_five_accurate'] = len(df_retrieval_top_five_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_text_top_one_accurate'] = len(df_text_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_text_top_five_accurate'] = len(df_text_top_five_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_graph_top_one_accurate'] = len(df_graph_top_one_accurate) / len(df_has_gt) * 100\n res['num_tasks_with_graph_top_five_accurate'] = len(df_graph_top_five_accurate) / len(df_has_gt) * 100\n \n res['average_ndcg_retrieval'] = df_has_gt['r_ndcg'].mean()\n res['average_ndcg_text'] = df_has_gt['t_ndcg'].mean()\n res['average_ndcg_graph'] = df_has_gt['g_ndcg'].mean()\n \n return res",
"_____no_output_____"
],
[
"res = compute_eval_file_stats(f_candidate_eval_data)\nprint(f\"table id is {res['table_id']}\")\nprint(f\"number of tasks: {res['num_tasks']}\")\nprint(f\"number of tasks with ground truth: {res['num_tasks_with_gt']}\")\nprint(f\"number of tasks with ground truth in candidate set: {res['num_tasks_with_gt_in_candidate']}\")\nprint(f\"number of tasks has singleton candidate set: {res['num_tasks_with_singleton_candidate']}\")\nprint(f\"number of tasks has singleton candidate set which is ground truth: {res['num_tasks_with_singleton_candidate_with_gt']}\")\nprint()\nprint(f\"number of tasks with top-1 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_one_accurate']}\")\nprint(f\"number of tasks with top-5 accuracy in terms of retrieval score: {res['num_tasks_with_retrieval_top_five_accurate']}\")\nprint(f\"number of tasks with top-1 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_one_accurate']}\")\nprint(f\"number of tasks with top-5 accuracy in terms of text embedding score: {res['num_tasks_with_text_top_five_accurate']}\")\nprint(f\"number of tasks with top-1 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_one_accurate']}\")\nprint(f\"number of tasks with top-5 accuracy in terms of graph embedding score: {res['num_tasks_with_graph_top_five_accurate']}\")\nprint()\nprint(f\"average ndcg score ranked by retrieval score: {res['average_ndcg_retrieval']}\")\nprint(f\"average ndcg score ranked by text-embedding-score: {res['average_ndcg_text']}\")\nprint(f\"average ndcg score ranked by graph-embedding-score: {res['average_ndcg_graph']}\")",
"table id is 52299421_0_4473286348258170200\nnumber of tasks: 92\nnumber of tasks with ground truth: 91\nnumber of tasks with ground truth in candidate set: 91.20879120879121\nnumber of tasks has singleton candidate set: 9.89010989010989\nnumber of tasks has singleton candidate set which is ground truth: 1.098901098901099\n\nnumber of tasks with top-1 accuracy in terms of retrieval score: 15.384615384615385\nnumber of tasks with top-5 accuracy in terms of retrieval score: 60.43956043956044\nnumber of tasks with top-1 accuracy in terms of text embedding score: 69.23076923076923\nnumber of tasks with top-5 accuracy in terms of text embedding score: 84.61538461538461\nnumber of tasks with top-1 accuracy in terms of graph embedding score: 89.01098901098901\nnumber of tasks with top-5 accuracy in terms of graph embedding score: 91.20879120879121\n\naverage ndcg score ranked by retrieval score: 0.4778326176841276\naverage ndcg score ranked by text-embedding-score: 0.796349211696553\naverage ndcg score ranked by graph-embedding-score: 0.9039764781004715\n"
],
[
"all_tables = {}\nfor tid in eval_file_ids:\n f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == tid]\n all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)\nall_tables",
"_____no_output_____"
],
[
"all_tables = {}\nfor tid in eval_file_ids:\n f_candidate_eval_data = candidate_eval_data[candidate_eval_data['table_id'] == tid]\n all_tables[tid] = compute_eval_file_stats_count(f_candidate_eval_data)\nall_tables",
"_____no_output_____"
],
[
"eval_file_ids",
"_____no_output_____"
],
[
"# visualize ten dev eval file stats\n# Recompute all tables if needed\nx_eval_fid = [\n 'movies',\n 'players I',\n 'video games',\n 'magazines',\n 'companies',\n 'country I',\n 'players II',\n 'pope',\n 'country II'\n]\nx_eval_fidx = range(len(x_eval_fid))\ny_num_tasks_with_gt_in_candidate = []\ny_num_tasks_with_singleton_candidate = []\ny_num_tasks_with_singleton_candidate_with_gt = []\ny_num_tasks_with_retrieval_top_one_accurate = []\ny_num_tasks_with_retrieval_top_five_accurate = []\ny_num_tasks_with_text_top_one_accurate = []\ny_num_tasks_with_text_top_five_accurate = []\ny_num_tasks_with_graph_top_one_accurate = []\ny_num_tasks_with_graph_top_five_accurate = []\ny_average_ndcg_retrieval = []\ny_average_ndcg_text = []\ny_average_ndcg_graph = []\n\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n y_num_tasks_with_gt_in_candidate.append(all_tables[table_id]['num_tasks_with_gt_in_candidate'])\n y_num_tasks_with_singleton_candidate.append(all_tables[table_id]['num_tasks_with_singleton_candidate'])\n y_num_tasks_with_singleton_candidate_with_gt.append(all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])\n y_num_tasks_with_retrieval_top_one_accurate.append(all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])\n y_num_tasks_with_retrieval_top_five_accurate.append(all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])\n y_num_tasks_with_text_top_one_accurate.append(all_tables[table_id]['num_tasks_with_text_top_one_accurate'])\n y_num_tasks_with_text_top_five_accurate.append(all_tables[table_id]['num_tasks_with_text_top_five_accurate'])\n y_num_tasks_with_graph_top_one_accurate.append(all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])\n y_num_tasks_with_graph_top_five_accurate.append(all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])\n y_average_ndcg_retrieval.append(all_tables[table_id]['average_ndcg_retrieval'])\n y_average_ndcg_text.append(all_tables[table_id]['average_ndcg_text'])\n y_average_ndcg_graph.append(all_tables[table_id]['average_ndcg_graph'])\n \ny_num_tasks_with_text_top_five_accurate",
"_____no_output_____"
],
[
"import statistics\ndef compute_list_stats(l):\n return min(l), max(l), statistics.median(l), statistics.mean(l), statistics.stdev(l)",
"_____no_output_____"
],
[
"print('% tasks_with_gt_in_candidate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_gt_in_candidate)))\nprint('% tasks_with_singleton_candidate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_singleton_candidate)))\nprint('% tasks_with_singleton_candidate_with_gt : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_singleton_candidate_with_gt)))\nprint('% tasks_with_retrieval_top_one_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_retrieval_top_one_accurate)))\nprint('% tasks_with_retrieval_top_five_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_retrieval_top_five_accurate)))\nprint('% tasks_with_text_top_one_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_text_top_one_accurate)))\nprint('% tasks_with_text_top_five_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_text_top_five_accurate)))\nprint('% tasks_with_graph_top_one_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_graph_top_one_accurate)))\nprint('% tasks_with_graph_top_five_accurate : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_average_ndcg_retrieval)))\nprint('average_ndcg_retrieval : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_num_tasks_with_graph_top_five_accurate)))\nprint('average_ndcg_text : \\n min is {},\\n max is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_average_ndcg_text)))\nprint('average_ndcg_graph : \\n min is {}, \\nmax is {},\\n median is {},\\n mean is {},\\n std is {}'.format(*compute_list_stats(y_average_ndcg_graph)))",
"% tasks_with_gt_in_candidate : \n min is 0.0,\n max is 100.0,\n median is 84.90566037735849,\n mean is 72.84022472596897,\n std is 30.122460987028397\n% tasks_with_singleton_candidate : \n min is 9.89010989010989,\n max is 88.77551020408163,\n median is 44.44444444444444,\n mean is 50.245196984639925,\n std is 29.50233630776287\n% tasks_with_singleton_candidate_with_gt : \n min is 0.0,\n max is 50.90909090909091,\n median is 37.03703703703704,\n mean is 25.88109568822357,\n std is 20.926451392789865\n% tasks_with_retrieval_top_one_accurate : \n min is 0.0,\n max is 59.25925925925925,\n median is 48.80952380952381,\n mean is 36.92211383615996,\n std is 20.536912271188104\n% tasks_with_retrieval_top_five_accurate : \n min is 0.0,\n max is 88.88888888888889,\n median is 72.72727272727273,\n mean is 64.39506207745201,\n std is 26.325153159169574\n% tasks_with_text_top_one_accurate : \n min is 0.0,\n max is 95.0,\n median is 69.0,\n mean is 65.6266707901928,\n std is 27.782162920801042\n% tasks_with_text_top_five_accurate : \n min is 0.0,\n max is 100.0,\n median is 84.61538461538461,\n mean is 71.94600783175207,\n std is 29.650709169230907\n% tasks_with_graph_top_one_accurate : \n min is 0.0,\n max is 95.0,\n median is 81.0,\n mean is 68.42334530908954,\n std is 30.128412498933365\n% tasks_with_graph_top_five_accurate : \n min is 0.0,\n max is 0.7672928877020503,\n median is 0.5810693655613735,\n mean is 0.5411521260033642,\n std is 0.2250672409283447\naverage_ndcg_retrieval : \n min is 0.0,\n max is 100.0,\n median is 84.90566037735849,\n mean is 72.62810351384775,\n std is 30.093505016238296\naverage_ndcg_text : \n min is 0.0,\n max is 0.9715338279036697,\n median is 0.7887582334884491,\n mean is 0.6948592971142319,\n std is 0.2872919102282234\naverage_ndcg_graph : \n min is 0.0, \nmax is 0.975,\n median is 0.8387992620139386,\n mean is 0.7083701412627407,\n std is 0.29912461535241813\n"
],
[
"import matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(10, 10))\nax.set_ylabel('average ndcg')\nax.set_xlabel('table content')\nax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')\nax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')\nax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:10: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n # Remove the CWD from sys.path while we load stuff.\n"
],
[
"fig, ax = plt.subplots(figsize=(10, 10))\nax.set_ylabel('percent')\nax.set_xlabel('table content')\nax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')\nax.plot(x_eval_fid, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')\nax.plot(x_eval_fid, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')\nax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')\nax.plot(x_eval_fid, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')\nax.plot(x_eval_fid, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:11: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n # This is added back by InteractiveShellApp.init_path()\n"
],
[
"# fig, ax = plt.subplots(figsize=(10, 10))\n# ax.set_ylabel('percent')\n# ax.set_xlabel('table_id idx')\n# ax.plot(x_eval_fid, y_num_tasks_with_retrieval_top_five_accurate, 'rx', label='ranked by retrieval score top-5 accuracy')\n# ax.plot(x_eval_fid, y_num_tasks_with_text_top_five_accurate, 'bx', label='ranked by text embedding score top-5 accuracy')\n# ax.plot(x_eval_fid, y_num_tasks_with_graph_top_five_accurate, 'gx', label='ranked by graph embedding score top-5 accuracy')\n# ax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\n# fig.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(10, 10))\nax.set_ylabel('percent')\nax.set_xlabel('table content')\nax.plot(x_eval_fid, y_num_tasks_with_singleton_candidate, 'rx', label='percent of tasks with singleton candidate set')\nax.plot(x_eval_fid, y_num_tasks_with_singleton_candidate_with_gt, 'bx', label='percent of tasks with ground truth in singleton candidate set')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:7: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n import sys\n"
]
],
[
[
"### 02/17 More plots",
"_____no_output_____"
]
],
[
[
"candidate_eval_data[candidate_eval_data['count'] == 1]",
"_____no_output_____"
],
[
"[all_tables[tid]['num_tasks_with_singleton_candidate'] for tid in all_tables]",
"_____no_output_____"
],
[
"# x_axis percetage of singleton\n\nx_pos = [all_tables[tid]['num_tasks_with_singleton_candidate'] for tid in all_tables]\nx_posgt = [all_tables[tid]['num_tasks_with_singleton_candidate_with_gt'] for tid in all_tables]\n\nlen(x_pos), len(x_posgt)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('average ndcg')\n# ax.set_xlabel('percentage of singleton candidate set')\nax.set_xlabel('number of singleton candidate set')\nax.plot(x_pos, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')\nax.plot(x_pos, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')\nax.plot(x_pos, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n if __name__ == '__main__':\n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('percent')\n# ax.set_xlabel('percentage of singleton candidate set')\nax.set_xlabel('number of singleton candidate set')\n\nax.plot(x_pos, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')\nax.plot(x_pos, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')\nax.plot(x_pos, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')\nax.plot(x_pos, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')\nax.plot(x_pos, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')\nax.plot(x_pos, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:13: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n del sys.path[0]\n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('percent')\n# ax.set_xlabel('percentage of singleton candidate set with ground truth')\nax.set_xlabel('number of singleton candidate set with ground truth')\nax.plot(x_posgt, y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')\nax.plot(x_posgt, y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')\nax.plot(x_posgt, y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')\nax.plot(x_posgt, y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')\nax.plot(x_posgt, y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')\nax.plot(x_posgt, y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:12: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n if sys.path[0] == '':\n"
],
[
"fig, ax = plt.subplots()\nax.set_ylabel('average ndcg')\nax.set_xlabel('number of singleton candidate set with ground truth')\n# ax.set_xlabel('percentage of singleton candidate set with ground truth')\nax.plot(x_posgt, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')\nax.plot(x_posgt, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')\nax.plot(x_posgt, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')\nax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\nfig.show()",
"/Users/summ7t/dev/novartis/novartis_env/lib/python3.6/site-packages/ipykernel_launcher.py:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n if __name__ == '__main__':\n"
]
],
[
[
"### 02/19 More experiments: wrong singleton",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ncandidate_eval_data = pd.read_csv('./candidate_eval_no_empty.csv', index_col=False)\ncandidate_eval_data",
"_____no_output_____"
],
[
"# Sub all singleton candidate set to see how \"good\" the algorithm can be\nsubbed_candidate_eval_data = candidate_eval_data.copy()\nfor i, row in subbed_candidate_eval_data.iterrows():\n if row['count'] == 1:\n subbed_candidate_eval_data.loc[i, 'retrieval_top_one_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'retrieval_top_five_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'text_top_one_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'text_top_five_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'graph_top_one_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'graph_top_five_accurate'] = 1\n subbed_candidate_eval_data.loc[i, 'has_gt'] = 1\n subbed_candidate_eval_data.loc[i, 'has_gt_in_candidate'] = 1\n subbed_candidate_eval_data.loc[i, 'r_ndcg'] = 1\n subbed_candidate_eval_data.loc[i, 't_ndcg'] = 1\n subbed_candidate_eval_data.loc[i, 'g_ndcg'] = 1\nsubbed_candidate_eval_data",
"_____no_output_____"
],
[
"dropped_candidate_eval_data = candidate_eval_data.copy()[(candidate_eval_data['count'] != 1) | (candidate_eval_data['has_gt_in_candidate'] == 1)]\ndropped_candidate_eval_data",
"_____no_output_____"
],
[
"candidate_eval_data[candidate_eval_data['count'] == 1]",
"_____no_output_____"
],
[
"subbed_candidate_eval_data[subbed_candidate_eval_data['count'] == 1]",
"_____no_output_____"
],
[
"dropped_candidate_eval_data[dropped_candidate_eval_data['count'] == 1]",
"_____no_output_____"
],
[
"# compute the same metrics\nimport os\n\neval_file_names = []\neval_file_ids = []\n\nfor (dirpath, dirnames, filenames) in os.walk('/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/'):\n for fn in filenames:\n if \"csv\" not in fn:\n continue\n abs_fn = dirpath + fn\n assert os.path.isfile(abs_fn)\n if os.path.getsize(abs_fn) == 0:\n continue\n eval_file_names.append(abs_fn)\n eval_file_ids.append(fn.split('.csv')[0])\nlen(eval_file_names), len(eval_file_ids)",
"_____no_output_____"
],
[
"subbed_all_tables = {}\nfor tid in eval_file_ids:\n f_candidate_eval_data = subbed_candidate_eval_data[subbed_candidate_eval_data['table_id'] == tid]\n subbed_all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)\nsubbed_all_tables",
"_____no_output_____"
],
[
"dropped_all_tables = {}\nfor tid in eval_file_ids:\n f_candidate_eval_data = dropped_candidate_eval_data[dropped_candidate_eval_data['table_id'] == tid]\n dropped_all_tables[tid] = compute_eval_file_stats(f_candidate_eval_data)\ndropped_all_tables",
"_____no_output_____"
],
[
"# visualize ten dev eval file stats\n# Same process as before\n\nx_eval_fid = [\n 'movies',\n 'players I',\n 'video games',\n 'magazines',\n 'companies',\n 'country I',\n 'players II',\n 'pope',\n 'country II'\n]\nx_eval_fidx = range(len(x_eval_fid))\nr_y_num_tasks_with_gt_in_candidate = []\nr_y_num_tasks_with_singleton_candidate = []\nr_y_num_tasks_with_singleton_candidate_with_gt = []\nr_y_num_tasks_with_retrieval_top_one_accurate = []\nr_y_num_tasks_with_retrieval_top_five_accurate = []\nr_y_num_tasks_with_text_top_one_accurate = []\nr_y_num_tasks_with_text_top_five_accurate = []\nr_y_num_tasks_with_graph_top_one_accurate = []\nr_y_num_tasks_with_graph_top_five_accurate = []\nr_y_average_ndcg_retrieval = []\nr_y_average_ndcg_text = []\nr_y_average_ndcg_graph = []\n\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n r_y_num_tasks_with_gt_in_candidate.append(subbed_all_tables[table_id]['num_tasks_with_gt_in_candidate'])\n r_y_num_tasks_with_singleton_candidate.append(subbed_all_tables[table_id]['num_tasks_with_singleton_candidate'])\n r_y_num_tasks_with_singleton_candidate_with_gt.append(subbed_all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])\n r_y_num_tasks_with_retrieval_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])\n r_y_num_tasks_with_retrieval_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])\n r_y_num_tasks_with_text_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_text_top_one_accurate'])\n r_y_num_tasks_with_text_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_text_top_five_accurate'])\n r_y_num_tasks_with_graph_top_one_accurate.append(subbed_all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])\n r_y_num_tasks_with_graph_top_five_accurate.append(subbed_all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])\n r_y_average_ndcg_retrieval.append(subbed_all_tables[table_id]['average_ndcg_retrieval'])\n r_y_average_ndcg_text.append(subbed_all_tables[table_id]['average_ndcg_text'])\n r_y_average_ndcg_graph.append(subbed_all_tables[table_id]['average_ndcg_graph'])\n \nr_y_average_ndcg_retrieval, y_average_ndcg_retrieval",
"_____no_output_____"
],
[
"x_eval_fid = [\n 'movies',\n 'players I',\n 'video games',\n 'magazines',\n 'companies',\n 'country I',\n 'players II',\n 'pope',\n 'country II'\n]\nx_eval_fidx = range(len(x_eval_fid))\nd_y_num_tasks_with_gt_in_candidate = []\nd_y_num_tasks_with_singleton_candidate = []\nd_y_num_tasks_with_singleton_candidate_with_gt = []\nd_y_num_tasks_with_retrieval_top_one_accurate = []\nd_y_num_tasks_with_retrieval_top_five_accurate = []\nd_y_num_tasks_with_text_top_one_accurate = []\nd_y_num_tasks_with_text_top_five_accurate = []\nd_y_num_tasks_with_graph_top_one_accurate = []\nd_y_num_tasks_with_graph_top_five_accurate = []\nd_y_average_ndcg_retrieval = []\nd_y_average_ndcg_text = []\nd_y_average_ndcg_graph = []\n\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n d_y_num_tasks_with_gt_in_candidate.append(dropped_all_tables[table_id]['num_tasks_with_gt_in_candidate'])\n d_y_num_tasks_with_singleton_candidate.append(dropped_all_tables[table_id]['num_tasks_with_singleton_candidate'])\n d_y_num_tasks_with_singleton_candidate_with_gt.append(dropped_all_tables[table_id]['num_tasks_with_singleton_candidate_with_gt'])\n d_y_num_tasks_with_retrieval_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_retrieval_top_one_accurate'])\n d_y_num_tasks_with_retrieval_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_retrieval_top_five_accurate'])\n d_y_num_tasks_with_text_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_text_top_one_accurate'])\n d_y_num_tasks_with_text_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_text_top_five_accurate'])\n d_y_num_tasks_with_graph_top_one_accurate.append(dropped_all_tables[table_id]['num_tasks_with_graph_top_one_accurate'])\n d_y_num_tasks_with_graph_top_five_accurate.append(dropped_all_tables[table_id]['num_tasks_with_graph_top_five_accurate'])\n d_y_average_ndcg_retrieval.append(dropped_all_tables[table_id]['average_ndcg_retrieval'])\n d_y_average_ndcg_text.append(dropped_all_tables[table_id]['average_ndcg_text'])\n d_y_average_ndcg_graph.append(dropped_all_tables[table_id]['average_ndcg_graph'])\n \nd_y_average_ndcg_retrieval, y_average_ndcg_retrieval",
"_____no_output_____"
],
[
"# import matplotlib.pyplot as plt\n\n# fig, ax = plt.subplots(figsize=(10, 10))\n# ax.set_ylabel('average ndcg')\n# ax.set_xlabel('table content')\n# ax.plot(x_eval_fid, r_y_average_ndcg_retrieval, 'ro', label='R: average ndcg score ranked by retrieval score')\n# ax.plot(x_eval_fid, r_y_average_ndcg_text, 'bo', label='R: average ndcg score ranked by text embedding score')\n# ax.plot(x_eval_fid, r_y_average_ndcg_graph, 'go', label='R: average ndcg score ranked by graph embedding score')\n# ax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')\n# ax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')\n# ax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')\n# ax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\n# fig.show()",
"_____no_output_____"
],
[
"# fig, ax = plt.subplots(figsize=(10, 10))\n# ax.set_ylabel('R: percent')\n# ax.set_xlabel('table content')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')\n# ax.plot(x_eval_fid, r_y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')\n# ax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\n# fig.show()",
"_____no_output_____"
],
[
"# p_min, p_max, p_median, p_mean, p_std = compute_list_stats(y_num_tasks_with_text_top_five_accurate)\n# r_min, r_max, r_median, r_mean, r_std = compute_list_stats(r_y_num_tasks_with_text_top_five_accurate)\n# r_min - p_min, r_max - p_max, r_median - p_median, r_mean - p_mean, r_std - p_std",
"_____no_output_____"
],
[
"# Plot dropped wrong singleton\n# import matplotlib.pyplot as plt\n\n# fig, ax = plt.subplots(figsize=(10, 10))\n# ax.set_ylabel('average ndcg')\n# ax.set_xlabel('table content')\n# ax.plot(x_eval_fid, d_y_average_ndcg_retrieval, 'ro', label='D: average ndcg score ranked by retrieval score')\n# ax.plot(x_eval_fid, d_y_average_ndcg_text, 'bo', label='D: average ndcg score ranked by text embedding score')\n# ax.plot(x_eval_fid, d_y_average_ndcg_graph, 'go', label='D: average ndcg score ranked by graph embedding score')\n# ax.plot(x_eval_fid, r_y_average_ndcg_retrieval, 'r+', label='R: average ndcg score ranked by retrieval score')\n# ax.plot(x_eval_fid, r_y_average_ndcg_text, 'b+', label='R: average ndcg score ranked by text embedding score')\n# ax.plot(x_eval_fid, r_y_average_ndcg_graph, 'g+', label='R: average ndcg score ranked by graph embedding score')\n# ax.plot(x_eval_fid, y_average_ndcg_retrieval, 'rx', label='average ndcg score ranked by retrieval score')\n# ax.plot(x_eval_fid, y_average_ndcg_text, 'bx', label='average ndcg score ranked by text embedding score')\n# ax.plot(x_eval_fid, y_average_ndcg_graph, 'gx', label='average ndcg score ranked by graph embedding score')\n# ax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\n# fig.show()",
"_____no_output_____"
],
[
"# fig, ax = plt.subplots(figsize=(10, 10))\n# ax.set_ylabel('D: percent')\n# ax.set_xlabel('table content')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_retrieval_top_one_accurate, 'rx', label='ranked by retrieval score top-1 accuracy')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_text_top_one_accurate, 'bx', label='ranked by text embedding score top-1 accuracy')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_graph_top_one_accurate, 'gx', label='ranked by graph embedding score top-1 accuracy')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_retrieval_top_five_accurate, 'ro', label='ranked by retrieval score top-5 accuracy')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_text_top_five_accurate, 'bo', label='ranked by text embedding score top-5 accuracy')\n# ax.plot(x_eval_fid, d_y_num_tasks_with_graph_top_five_accurate, 'go', label='ranked by graph embedding score top-5 accuracy')\n# ax.legend(bbox_to_anchor=(1,1), loc=\"upper left\")\n# fig.show()",
"_____no_output_____"
],
[
"dropped_all_tables",
"_____no_output_____"
],
[
"# construct differene table\ndiff_ndcg_df = pd.DataFrame(columns=['table_content', 'r_ndcg', 'R: r_ndcg', 'D: r_ndcg', 't_ndcg', 'R: t_ndcg', 'D: t_ndcg', 'g_ndcg', 'R: g_ndcg', 'D: g_ndcg'])\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n diff_ndcg_df.loc[table_id] = [\n x_eval_fid[idx],\n y_average_ndcg_retrieval[idx],\n r_y_average_ndcg_retrieval[idx],\n d_y_average_ndcg_retrieval[idx],\n y_average_ndcg_text[idx],\n r_y_average_ndcg_text[idx],\n d_y_average_ndcg_text[idx],\n y_average_ndcg_graph[idx],\n r_y_average_ndcg_graph[idx],\n d_y_average_ndcg_graph[idx]\n ]\ndiff_ndcg_df",
"_____no_output_____"
],
[
"diff_accuracy_df = pd.DataFrame(columns=[\n 'table_content', 'top1-retr', 'R: top1-retr', 'D: top1-retr',\n 'top1-text', 'R: top1-text', 'D: top1-text',\n 'top1-graph', 'R: top1-graph', 'D: top1-graph'\n])\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n diff_accuracy_df.loc[table_id] = [\n x_eval_fid[idx],\n y_num_tasks_with_retrieval_top_one_accurate[idx],\n r_y_num_tasks_with_retrieval_top_one_accurate[idx],\n d_y_num_tasks_with_retrieval_top_one_accurate[idx],\n y_num_tasks_with_text_top_one_accurate[idx],\n r_y_num_tasks_with_text_top_one_accurate[idx],\n d_y_num_tasks_with_text_top_one_accurate[idx],\n y_num_tasks_with_graph_top_one_accurate[idx],\n r_y_num_tasks_with_graph_top_one_accurate[idx],\n d_y_num_tasks_with_graph_top_one_accurate[idx]\n ]\ndiff_accuracy_df",
"_____no_output_____"
],
[
"diff_accuracy_f_df = pd.DataFrame(columns=[\n 'table_content',\n 'top5-retr', 'R: top5-retr', 'D: top5-retr',\n 'top5-text', 'R: top5-text', 'D: top5-text',\n 'top5-graph', 'R: top5-graph', 'D: top5-graph'\n])\nfor idx in range(len(x_eval_fid)):\n table_id = eval_file_ids[idx]\n diff_accuracy_f_df.loc[table_id] = [\n x_eval_fid[idx],\n y_num_tasks_with_retrieval_top_five_accurate[idx],\n r_y_num_tasks_with_retrieval_top_five_accurate[idx],\n d_y_num_tasks_with_retrieval_top_five_accurate[idx],\n y_num_tasks_with_text_top_five_accurate[idx],\n r_y_num_tasks_with_text_top_five_accurate[idx],\n d_y_num_tasks_with_text_top_five_accurate[idx],\n y_num_tasks_with_graph_top_five_accurate[idx],\n r_y_num_tasks_with_graph_top_five_accurate[idx],\n d_y_num_tasks_with_graph_top_five_accurate[idx]\n ]\ndiff_accuracy_f_df",
"_____no_output_____"
],
[
"# distribution of wrong singleton\nwrong_singleton_df = candidate_eval_data[(candidate_eval_data['count'] == 1) & (candidate_eval_data['has_gt_in_candidate'] != 1)]\nwrong_singleton_df",
"_____no_output_____"
],
[
"# get candidate from eval file + get label from ground truth file\nwrong_files = list(pd.unique(wrong_singleton_df['table_id']))\nwrong_tasks_df = pd.DataFrame(columns=['table_id', 'row', 'column', 'GT_kg_label', 'GT_kg_id', 'candidates'])\nfor fid in wrong_files:\n f_data = pd.read_csv(f'/Users/summ7t/dev/novartis/table-linker/SemTab2019/embedding_evaluation_files/{fid}.csv')\n f_wrong_tasks = wrong_singleton_df[wrong_singleton_df['table_id'] == fid]\n for i, row in f_wrong_tasks.iterrows():\n candidates_df = f_data[(f_data['row'] == row['row']) & (f_data['column'] == row['column'])]\n candidates_df = candidates_df.fillna(\"\")\n# print(row)\n# display(candidates_df)\n assert row['count'] == len(candidates_df)\n c_list = list(pd.unique(candidates_df['kg_id']))\n GT_kg_id = candidates_df['GT_kg_id'].iloc[0]\n GT_kg_label = candidates_df['GT_kg_label'].iloc[0]\n \n# print(row['row'], row['column'], GT_kg_label, GT_kg_id)\n# print(c_list)\n wrong_tasks_df = wrong_tasks_df.append({\n 'table_id': fid,\n 'row': row['row'],\n 'column': row['column'],\n 'GT_kg_label': GT_kg_label,\n 'GT_kg_id': GT_kg_id,\n 'candidates': \" \".join(c_list)\n }, ignore_index=True)\nwrong_tasks_df",
"_____no_output_____"
],
[
"pd.unique(wrong_tasks_df['candidates'])",
"_____no_output_____"
],
[
"wrong_tasks_df[wrong_tasks_df['candidates'] > '']",
"_____no_output_____"
],
[
"data[242:245]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0965fe1033fc30eab44983072cdb3ae0310d706 | 12,458 | ipynb | Jupyter Notebook | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 | 0c01d63ca4b068068f24635185663b2564740aeb | [
"MIT"
] | 9 | 2020-08-18T04:34:51.000Z | 2021-12-26T03:41:02.000Z | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 | 0c01d63ca4b068068f24635185663b2564740aeb | [
"MIT"
] | null | null | null | homeworks/PHYS_440_540_F20_HW5.ipynb | KleinWang/PHYS_440_540 | 0c01d63ca4b068068f24635185663b2564740aeb | [
"MIT"
] | 5 | 2020-09-15T14:55:24.000Z | 2021-07-07T19:17:25.000Z | 29.8753 | 488 | 0.554904 | [
[
[
"# Homework 5: Problems\n## Due Wednesday 28 October, before class\n\n### PHYS 440/540, Fall 2020\nhttps://github.com/gtrichards/PHYS_440_540/\n\n\n## Problems 1&2\n\nComplete Chapters 1 and 2 in the *unsupervised learning* course in Data Camp. The last video (and the two following code examples) in Chapter 2 are off topic, but we'll discuss those next week, so this will be a good intro. The rest is highly relevant to this week's material. These are worth 1000 and 900 points, respectively. I'll be grading on the number of points earned instead of completion (as I have been), so try to avoid using the hints unless you really need them.\n\n## Problem 3\n\nFill in the blanks below. This exercise will take you though an example of everything that we did this week. Please copy the relevant import statements (below) to the cells where they are used (so that they can be run out of order). \n\nIf a question is calling for a word-based answer, I'm not looking for more than ~1 sentence.",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics.cluster import homogeneity_score\nfrom sklearn.datasets import make_blobs\nfrom sklearn.neighbors import KernelDensity\nfrom astroML.density_estimation import KNeighborsDensity\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import DBSCAN",
"_____no_output_____"
]
],
[
[
"Setup up the data set. We will do both density estimation and clustering on it.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import make_blobs\n#Make two blobs with 3 features and 1000 samples\nN=1000\nX,y = make_blobs(n_samples=N, centers=5, n_features=2, random_state=25)\nplt.figure(figsize=(10,10))\nplt.scatter(X[:, 0], X[:, 1], s=100, c=y)",
"_____no_output_____"
]
],
[
[
"Start with kernel density estimation, including a grid search to find the best bandwidth",
"_____no_output_____"
]
],
[
[
"bwrange = np.linspace(____,____,____) # Test 30 bandwidths from 0.1 to 1.0 ####\nK = ____ # 5-fold cross validation ####\ngrid = GridSearchCV(KernelDensity(), {'bandwidth': ____}, cv=K) ####\ngrid.fit(X) #Fit the histogram data that we started the lecture with.\nh_opt = ____.best_params_['bandwidth'] ####\nprint(h_opt)\n\nkde = KernelDensity(kernel='gaussian', bandwidth=h_opt)\nkde.fit(X) #fit the model to the data\n\nu = v = np.linspace(-15,15,100)\nXgrid = np.vstack(map(np.ravel, np.meshgrid(u, v))).T\ndens = np.exp(kde.score_samples(Xgrid)) #evaluate the model on the grid\n\nplt.scatter(____[:,0],____[:,1], c=dens, cmap=\"Purples\", edgecolor=\"None\") ####\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"Now try a nearest neighbors approach to estimating the density. ",
"_____no_output_____"
],
[
"#### What value of $k$ do you need to make the plot look similar to the one above?",
"_____no_output_____"
]
],
[
[
"# Compute density with Bayesian nearest neighbors\nk=____ ####\nnbrs = KNeighborsDensity('bayesian',n_neighbors=____) ####\nnbrs.____(X) ####\ndens_nbrs = nbrs.eval(Xgrid) / N\n\nplt.scatter(Xgrid[:,0],Xgrid[:,1], c=dens_nbrs, cmap=\"Purples\", edgecolor=\"None\")\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"Now do a Gaussian mixture model. Do a grid search for between 1 and 10 components.",
"_____no_output_____"
]
],
[
[
"#Kludge to fix the bug with draw_ellipse in astroML v1.0\nfrom matplotlib.patches import Ellipse\n\ndef draw_ellipse(mu, C, scales=[1, 2, 3], ax=None, **kwargs):\n if ax is None:\n ax = plt.gca()\n\n # find principal components and rotation angle of ellipse\n sigma_x2 = C[0, 0]\n sigma_y2 = C[1, 1]\n sigma_xy = C[0, 1]\n\n alpha = 0.5 * np.arctan2(2 * sigma_xy,\n (sigma_x2 - sigma_y2))\n tmp1 = 0.5 * (sigma_x2 + sigma_y2)\n tmp2 = np.sqrt(0.25 * (sigma_x2 - sigma_y2) ** 2 + sigma_xy ** 2)\n\n sigma1 = np.sqrt(tmp1 + tmp2)\n sigma2 = np.sqrt(tmp1 - tmp2)\n\n for scale in scales:\n ax.add_patch(Ellipse((mu[0], mu[1]),\n 2 * scale * sigma1, 2 * scale * sigma2,\n alpha * 180. / np.pi,\n **kwargs))",
"_____no_output_____"
],
[
"ncomps = np.arange(____,____,____) # Test 10 bandwidths from 1 to 10 ####\nK = 5 # 5-fold cross validation\ngrid = ____(GaussianMixture(), {'n_components': ncomps}, cv=____) ####\ngrid.fit(X) #Fit the histogram data that we started the lecture with.\nncomp_opt = grid.____['n_components'] ####\nprint(ncomp_opt)\n\ngmm = ____(n_components=ncomp_opt) ####\ngmm.fit(X)\n\nfig = plt.figure(figsize=(8, 8))\nax = fig.add_subplot(111)\nax.scatter(X[:,0],X[:,1])\n\nax.scatter(gmm.means_[:,0], gmm.means_[:,1], marker='s', c='red', s=80)\nfor mu, C, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):\n draw_ellipse(mu, 1*C, scales=[2], ax=ax, fc='none', ec='k') #2 sigma ellipses for each component",
"_____no_output_____"
]
],
[
[
"#### Do you get the same answer (the same number of components) each time you run it?",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"Now try Kmeans. Here we will scale the data.",
"_____no_output_____"
]
],
[
[
"kmeans = KMeans(n_clusters=5)\nscaler = StandardScaler()\nX_scaled = ____.____(X) ####\nkmeans.fit(X_scaled)\ncenters=kmeans.____ #location of the clusters ####\nlabels=kmeans.predict(____) #labels for each of the points ####\ncenters_unscaled = scaler.____(centers) ####",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(1,2,figsize=(16, 8))\nax[0].scatter(X[:,0],X[:,1],c=labels)\nax[0].scatter(centers_unscaled[:,0], centers_unscaled[:,1], marker='s', c='red', s=80)\nax[0].set_title(\"Predictions\")\n\nax[1].scatter(X[:, 0], X[:, 1], c=y)\nax[1].set_title(\"Truth\")",
"_____no_output_____"
]
],
[
[
"Let's evaluate how well we did in two other ways: a matrix and a score.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({'predictions': labels, 'truth': y})\nct = pd.crosstab(df['predictions'], df['truth'])\nprint(ct)",
"_____no_output_____"
],
[
"from sklearn.metrics.cluster import homogeneity_score\nscore = homogeneity_score(df['truth'], df['predictions'])\nprint(score)",
"_____no_output_____"
]
],
[
[
"#### What is the score for 3 clusters?",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"Finally, let's use DBSCAN. Note that outliers are flagged as `labels_=-1`, so there is one more class that you might think.\n\nFull credit if you can get a score of 0.6 or above. Extra credit (0.1 of 5 points) for a score of 0.85 or above.\n",
"_____no_output_____"
]
],
[
[
"def plot_dbscan(dbscan, X, size, show_xlabels=True, show_ylabels=True):\n core_mask = np.zeros_like(dbscan.labels_, dtype=bool)\n core_mask[dbscan.core_sample_indices_] = True\n anomalies_mask = dbscan.labels_ == -1\n non_core_mask = ~(core_mask | anomalies_mask)\n\n cores = dbscan.components_\n anomalies = X[anomalies_mask]\n non_cores = X[non_core_mask]\n \n plt.scatter(cores[:, 0], cores[:, 1],\n c=dbscan.labels_[core_mask], marker='o', s=size, cmap=\"Paired\")\n plt.scatter(cores[:, 0], cores[:, 1], marker='*', s=20, c=dbscan.labels_[core_mask])\n plt.scatter(anomalies[:, 0], anomalies[:, 1],\n c=\"r\", marker=\"x\", s=100)\n plt.scatter(non_cores[:, 0], non_cores[:, 1], c=dbscan.labels_[non_core_mask], marker=\".\")\n if show_xlabels:\n plt.xlabel(\"$x_1$\", fontsize=14)\n else:\n plt.tick_params(labelbottom=False)\n if show_ylabels:\n plt.ylabel(\"$x_2$\", fontsize=14, rotation=0)\n else:\n plt.tick_params(labelleft=False)\n plt.title(\"eps={:.2f}, min_samples={}\".format(dbscan.eps, dbscan.min_samples), fontsize=14)",
"_____no_output_____"
],
[
"dbscan = DBSCAN(eps=0.15, min_samples=7)\ndbscan.fit(X_scaled)\n\nplt.figure(figsize=(10, 10))\nplot_dbscan(dbscan, X_scaled, size=100)\nn_clusters=np.unique(dbscan.labels_)\nprint(len(n_clusters)) #Number of clusters found (+1)",
"_____no_output_____"
],
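[
"# A quick sanity check on the note above: DBSCAN marks outliers with labels_ == -1,\n# so the number of actual clusters is the number of distinct labels minus one\n# whenever any outliers are present.\nn_outliers = np.sum(dbscan.labels_ == -1)\nn_real_clusters = len(np.unique(dbscan.labels_)) - (1 if n_outliers > 0 else 0)\nprint(\"clusters: {}, points flagged as outliers: {}\".format(n_real_clusters, n_outliers))",
"_____no_output_____"
],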
[
"df2 = pd.DataFrame({'predictions': dbscan.labels_, 'truth': y})\nct2 = pd.crosstab(df2['predictions'], df2['truth'])\nprint(ct2)",
"_____no_output_____"
],
[
"from sklearn.metrics.cluster import homogeneity_score\nscore2 = homogeneity_score(df2['truth'], df2['predictions'])\nprint(score2)",
"_____no_output_____"
]
],
[
[
"#### Why do you think DBSCAN is having a hard time? Think about what the Gaussian Mixture Model result showed.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d096723368119973eb907adf0efe16fad1197842 | 136,565 | ipynb | Jupyter Notebook | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects | 79f295195c365beb1b11c1bf30140e3d663caf01 | [
"Apache-2.0"
] | 1 | 2021-02-08T21:00:18.000Z | 2021-02-08T21:00:18.000Z | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects | 79f295195c365beb1b11c1bf30140e3d663caf01 | [
"Apache-2.0"
] | null | null | null | boston_housing/house_price_prediction.ipynb | Mohamed-3ql/RegressionProjects | 79f295195c365beb1b11c1bf30140e3d663caf01 | [
"Apache-2.0"
] | null | null | null | 77.198982 | 9,804 | 0.812456 | [
[
[
"### Dataset Source:\nAbout this file\nBoston House Price dataset\n\n",
"_____no_output_____"
],
[
"\n### columns:\n* CRIM per capita crime rate by town\n* ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n* INDUS proportion of non-retail business acres per town\n* CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n* NOX nitric oxides concentration (parts per 10 million)\n* RM average number of rooms per dwelling\n* AGE proportion of owner-occupied units built prior to 1940\n* DIS weighted distances to five Boston employment centres\n* RAD index of accessibility to radial highways\n* TAX full-value property-tax rate per 10,000\n* PTRATIO pupil-teacher ratio by town\n* B where Bk is the proportion of blacks by town\n* LSTAT percentage lower status of the population\n* MEDV Median value of owner-occupied homes in 1000$",
"_____no_output_____"
],
[
"### Load Modules",
"_____no_output_____"
]
],
[
[
"import numpy as np # linear algebra python library\nimport pandas as pd # data structure for tabular data.\nimport matplotlib.pyplot as plt # visualization library\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"<br>\nLoading data",
"_____no_output_____"
]
],
[
[
"filename = \"housing.csv\"\nboston_data = pd.read_csv(filename, delim_whitespace=True, header=None)\nheader = [\"CRIM\",\"ZN\",\"INDUS\",\"CHAS\",\"NOX\",\"RM\",\n \"AGE\",\"DIS\",\"RAD\",\"TAX\",\"PTRATIO\",\"B\",\"LSTAT\",\"MEDV\"]\n\nboston_data.columns = header\n# display the first 10 rows of dataframe.\nboston_data.head(10)",
"_____no_output_____"
]
],
[
[
"<br>\nInspecting variable types",
"_____no_output_____"
]
],
[
[
"boston_data.dtypes",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-warning\">In many datasets, integer variables are cast as float. So, after inspecting\nthe data type of the variable, even if you get float as output, go ahead\nand check the unique values to make sure that those variables are discrete\nand not continuous.</p>",
"_____no_output_____"
],
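[
"# A minimal sketch of the advice above, assuming boston_data is the DataFrame\n# loaded earlier: list how many distinct values each column takes so that\n# float-typed columns that are really discrete stand out. The cutoff of 30 is\n# an arbitrary rule of thumb for this check.\nfor col in boston_data.columns:\n    n_unique = boston_data[col].nunique()\n    kind = \"likely discrete\" if n_unique < 30 else \"likely continuous\"\n    print(col, 'has', n_unique, 'distinct values ->', kind)",
"_____no_output_____"
],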
[
"### Inspecting all variables\n<br>",
"_____no_output_____"
],
[
"\ninspecting distinct values of `RAD`(index of accessibility to radial highways).",
"_____no_output_____"
]
],
[
[
"boston_data['RAD'].unique()",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"inspecting distinct values of `CHAS` Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).",
"_____no_output_____"
]
],
[
[
"boston_data['CHAS'].unique()",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"#### inspecting the first 20 distinct values of all continous variables as following:\n* CRIM per capita crime rate by town\n* ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n* INDUS proportion of non-retail business acres per town\n* NOX nitric oxides concentration (parts per 10 million)\n* RM average number of rooms per dwelling\n* AGE proportion of owner-occupied units built prior to 1940\n* DIS weighted distances to five Boston employment centres\n* TAX full-value property-tax rate per 10,000\n* PTRATIO pupil-teacher ratio by town\n* B where Bk is the proportion of blacks by town\n* LSTAT percentage lower status of the population\n* MEDV Median value of owner-occupied homes in 1000$",
"_____no_output_____"
],
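[
"# A compact alternative (sketch) to the cell-by-cell inspection below: loop over\n# the continuous columns listed above and print the first 20 distinct values of\n# each. The list simply restates the column names from the header.\ncontinuous_cols = [\"CRIM\", \"ZN\", \"INDUS\", \"NOX\", \"RM\", \"AGE\",\n                   \"DIS\", \"TAX\", \"PTRATIO\", \"B\", \"LSTAT\", \"MEDV\"]\nfor col in continuous_cols:\n    print(col, boston_data[col].unique()[0:20])",
"_____no_output_____"
],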
[
"<br>\nCRIM per capita crime rate by town.",
"_____no_output_____"
]
],
[
[
"boston_data['CRIM'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nZN proportion of residential land zoned for lots over 25,000 sq.ft.",
"_____no_output_____"
]
],
[
[
"boston_data['ZN'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nINDUS proportion of non-retail business acres per town",
"_____no_output_____"
]
],
[
[
"boston_data['INDUS'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nNOX nitric oxides concentration (parts per 10 million)",
"_____no_output_____"
]
],
[
[
"boston_data['NOX'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nRM average number of rooms per dwelling",
"_____no_output_____"
]
],
[
[
"boston_data['RM'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nAGE proportion of owner-occupied units built prior to 1940",
"_____no_output_____"
]
],
[
[
"boston_data['AGE'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nDIS weighted distances to five Boston employment centres",
"_____no_output_____"
]
],
[
[
"boston_data['DIS'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nTAX full-value property-tax rate per 10,000",
"_____no_output_____"
]
],
[
[
"boston_data['TAX'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nPTRATIO pupil-teacher ratio by town",
"_____no_output_____"
]
],
[
[
"boston_data['PTRATIO'].unique()",
"_____no_output_____"
]
],
[
[
"<br>\nB where Bk is the proportion of blacks by town",
"_____no_output_____"
]
],
[
[
"boston_data['B'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nLSTAT percentage lower status of the population",
"_____no_output_____"
]
],
[
[
"boston_data['LSTAT'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<br>\nMEDV Median value of owner-occupied homes in 1000$",
"_____no_output_____"
]
],
[
[
"boston_data['MEDV'].unique()[0:20]",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-info\" role=\"alert\">after we checked the dat type of each variable. we have 2 discrete numerical variable and 10 floating or continuous variales.</p>",
"_____no_output_____"
],
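[
"A compact way to make that split explicit (an illustrative sketch, not part of the original analysis; the cutoff of 25 distinct values is an arbitrary assumption):\n\n```python\n# Treat variables with few distinct values as discrete, the rest as continuous\ndiscrete_vars = [col for col in boston_data.columns if boston_data[col].nunique() < 25]\ncontinuous_vars = [col for col in boston_data.columns if col not in discrete_vars]\nprint('discrete:', discrete_vars)\nprint('continuous:', continuous_vars)\n```",
"_____no_output_____"
],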
[
"#### To understand wheather a variable is contious or discrete. we can also make a histogram for each:\n* CRIM per capita crime rate by town\n* ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n* INDUS proportion of non-retail business acres per town\n* CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n* NOX nitric oxides concentration (parts per 10 million)\n* RM average number of rooms per dwelling\n* AGE proportion of owner-occupied units built prior to 1940\n* DIS weighted distances to five Boston employment centres\n* RAD index of accessibility to radial highways\n* TAX full-value property-tax rate per 10,000\n* PTRATIO pupil-teacher ratio by town\n* B where Bk is the proportion of blacks by town\n* LSTAT percentage lower status of the population\n* MEDV Median value of owner-occupied homes in 1000$\n<br>",
"_____no_output_____"
],
[
"making histogram for crime rate by town `CRIM` vatiable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['CRIM'])\nbins = int(np.sqrt(n_data))\nboston_data['CRIM'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for proportion of residential land zoned for lots over 25,000 sq.ft `ZN`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['ZN'])\nbins = int(np.sqrt(n_data))\nboston_data['ZN'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for proportion of non-retail business acres per town `INDUS`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['INDUS'])\nbins = int(np.sqrt(n_data))\nboston_data['INDUS'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for nitric oxides concentration (parts per 10 million) `NOX`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['NOX'])\nbins = int(np.sqrt(n_data))\nboston_data['NOX'].hist(bins=bins)",
"22\n"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for average number of rooms per dwelling `RM`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['RM'])\nbins = int(np.sqrt(n_data))\nboston_data['RM'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for proportion of owner-occupied units built prior to 1940 `AGE`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['AGE'])\nbins = int(np.sqrt(n_data))\nboston_data['AGE'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for weighted distances to five Boston employment centres `DIS`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['DIS'])\nbins = int(np.sqrt(n_data))\nboston_data['DIS'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
" making histogram for full-value property-tax rate per 10,000 `TAX`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['TAX'])\nbins = int(np.sqrt(n_data))\nboston_data['TAX'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
" making histogram for pupil-teacher ratio by town `PTRATIO`, variable by dividing the variable range into intervals. ",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['PTRATIO'])\nbins = int(np.sqrt(n_data))\nboston_data['PTRATIO'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram where Bk is the proportion of blacks by town `B`, variable by dividing the variable range into intervals. ",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['B'])\nbins = int(np.sqrt(n_data))\nboston_data['B'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for percentage lower status of the population `LSTAT `, variable by dividing the variable range into intervals. ",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['LSTAT'])\nbins = int(np.sqrt(n_data))\nboston_data['LSTAT'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"making histogram for Median value of owner-occupied homes in 1000$ ` MEDV`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['MEDV'])\nbins = int(np.sqrt(n_data))\nboston_data['MEDV'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"<br>",
"_____no_output_____"
],
[
"making histogram for index of accessibility to radial highways`RAD`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['RAD'])\nbins = int(np.sqrt(n_data))\nboston_data['RAD'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<br>",
"_____no_output_____"
],
[
"<p class=\"alert alert-success\">by taking a look to histogram of features we noticing that the continuous variables values range is not discrete.</p>",
"_____no_output_____"
],
[
"making histogram for Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) ` CHAS`, variable by dividing the variable range into intervals.",
"_____no_output_____"
]
],
[
[
"n_data = len(boston_data['CHAS'])\nbins = int(np.sqrt(n_data))\nboston_data['CHAS'].hist(bins=bins)",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-info\">\nwe noticing here the values of this variable is discrete.\n</p>",
"_____no_output_____"
],
[
"#### Quantifying Missing Data\ncalculating the missing values in the dataset.",
"_____no_output_____"
]
],
[
[
"boston_data.isnull().sum()",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-info\">There is no Missing Values</p>",
"_____no_output_____"
],
[
"<br>",
"_____no_output_____"
],
[
"#### Determining the cardinality in cateogrical varaibles",
"_____no_output_____"
],
[
"<br>\nfind unique values in each categorical variable",
"_____no_output_____"
]
],
[
[
"boston_data.nunique()",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-info\">The <b>nunique()</b> method ignores missing values by default. If we want to\nconsider missing values as an additional category, we should set the\n dropna argument to <i>False</i>: <b>data.nunique(dropna=False).</b><p>\n<br>",
"_____no_output_____"
],
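[
"A small illustrative sketch of that difference (added here; injecting a missing value into a copy of the data is an assumption purely for demonstration):\n\n```python\n# Compare the two counts on a copy with one injected missing value\ntmp = boston_data.copy()\ntmp.loc[0, 'CHAS'] = None\nprint(tmp['CHAS'].nunique())              # missing value ignored (default dropna=True)\nprint(tmp['CHAS'].nunique(dropna=False))  # missing value counted as an extra category\n```",
"_____no_output_____"
],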
[
"let's print out the unique category in Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) ` CHAS`",
"_____no_output_____"
]
],
[
[
"boston_data['CHAS'].unique()",
"_____no_output_____"
]
],
[
[
"<p class=\"alert alert-info\">pandas <b>nunique()</b> can be used in the entire dataframe. pandas\n <b>unique()</b>, on the other hand, works only on a pandas Series. Thus, we\nneed to specify the column name that we want to return the unique values\nfor.</p>",
"_____no_output_____"
],
[
"<br>",
"_____no_output_____"
]
],
[
[
"boston_data[['CHAS','RAD']].nunique().plot.bar(figsize=(12,6))\nplt.xlabel(\"Variables\")\nplt.ylabel(\"Number Of Unique Values\")\nplt.title(\"Cardinality\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0967be76fb64d1e64756087f3ca16fc57c0cc2e | 51,383 | ipynb | Jupyter Notebook | pegbook_chap5.ipynb | kkuramitsu/pegbook2021 | c75ddd79631ac67d502b3705d76d168d4d5f842f | [
"MIT"
] | 2 | 2022-02-19T13:46:47.000Z | 2022-03-05T11:54:23.000Z | pegbook_chap5.ipynb | kkuramitsu/pegbook2022 | c75ddd79631ac67d502b3705d76d168d4d5f842f | [
"MIT"
] | null | null | null | pegbook_chap5.ipynb | kkuramitsu/pegbook2022 | c75ddd79631ac67d502b3705d76d168d4d5f842f | [
"MIT"
] | null | null | null | 72.472496 | 6,372 | 0.541697 | [
[
[
"# 第5章 計算機を作る\n\n",
"_____no_output_____"
],
[
"## 5.1.2 スタックマシン",
"_____no_output_____"
]
],
[
[
"def calc(expression: str):\n # 空白で分割して字句にする \n tokens = expression.split() \n stack = []\n for token in tokens:\n if token.isdigit():\n # 数値はスタックに push する \n stack.append(int(token)) \n continue\n # 数値でないなら,演算子として処理する \n x = stack.pop()\n y = stack.pop()\n if token == '+':\n stack.append(x+y) \n elif token == '*':\n stack.append(x*y)\n return stack.pop()\n\ncalc('1 2 + 2 3 + *')\n\n",
"_____no_output_____"
],
[
"# !pip install pegtree\nimport pegtree as pg\nfrom pegtree.colab import peg, pegtree, example",
"_____no_output_____"
]
],
[
[
"構文木を表示するためには、graphviz があらかじめインストールされている必要がある。",
"_____no_output_____"
]
],
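[
[
"A quick way to check that (an added sketch, not part of the original text; it only verifies the Python graphviz binding, not the system Graphviz binaries):\n\n```python\ntry:\n    import graphviz\n    print('graphviz binding available')\nexcept ImportError:\n    print('graphviz is not installed; try: pip install graphviz')\n```",
"_____no_output_____"
]
],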
[
[
"%%peg\nExpr = Prod (\"+\" Prod)*\nProd = Value (\"*\" Value)*\nValue = { [0-9]+ #Int } _\n\nexample Expr 1+2+3",
"_____no_output_____"
],
[
"%%peg\n\nExpr = { Prod (\"+\" Prod)* #Add }\nProd = { Value (\"*\" Value)* #Mul }\nValue = { [0-9]+ #Int } _\n\nexample Expr 1+2+3",
"_____no_output_____"
],
[
"%%peg\n\nExpr = Prod {^ \"+\" Prod #Add }*\nProd = Value {^ \"*\" Value #Mul }*\nValue = { [0-9]+ #Int } _\n\nexample Expr 1+2+3",
"_____no_output_____"
],
[
"%%peg\n\nExpr = Prod {^ \"+\" Prod #Add }* \nProd = Value {^ \"*\" Value #Mul }* \nValue = \"(\" Expr \")\" / Int\nInt = { [0-9]+ #Int} _\n\nexample Expr 1+(2+3)",
"_____no_output_____"
]
],
[
[
"## PegTree によるパーザ生成",
"_____no_output_____"
]
],
[
[
"%%peg calc.pegtree\n\nStart = Expr EOF // 未消費文字を構文エラーに\nExpr = Prod ({^ \"+\" Prod #Add } / {^ \"-\" Prod #Sub } )*\nProd = Value ({^ \"*\" Value #Mul } / {^ \"/\" Value #Div } )* \nValue = { [0-9]+ #Int} _ / \"(\" Expr \")\"\nexample Expr 1+2*3\nexample Expr (1+2)*3\nexample Expr 1*2+3",
"_____no_output_____"
]
],
[
[
"## PegTree 文法のロード",
"_____no_output_____"
]
],
[
[
"peg = pg.grammar('calc.pegtree')",
"_____no_output_____"
],
[
"GRAMMAR = '''\nStart = Expr EOF\nExpr = Prod ({^ \"+\" Prod #Add } / {^ \"-\" Prod #Sub } )*\nProd = Value ({^ \"*\" Value #Mul } / {^ \"/\" Value #Div } )* \nValue = { [0-9]+ #Int} _ / \"(\" Expr \")\"\n'''\npeg = pg.grammar(GRAMMAR)",
"_____no_output_____"
],
[
"peg['Expr']",
"_____no_output_____"
]
],
[
[
"## 5.3.2 パーザの生成",
"_____no_output_____"
]
],
[
[
"parser = pg.generate(peg)",
"_____no_output_____"
],
[
"tree = parser('1+2') \nprint(repr(tree))",
"[#Add [#Int '1'][#Int '2']]\n"
],
[
"tree = parser('3@14') \nprint(repr(tree))",
"Syntax Error ((unknown source):1:1+1)\n3@14\n ^ \n"
]
],
[
[
"## 構文木とVisitor パターン",
"_____no_output_____"
]
],
[
[
"peg = pg.grammar('calc.pegtree') \nparser = pg.generate(peg)\ntree = parser('1+2*3')",
"_____no_output_____"
],
[
"tree.getTag()",
"_____no_output_____"
],
[
"len(tree)",
"_____no_output_____"
],
[
"left = tree[0]\nleft.getTag()",
"_____no_output_____"
],
[
"left = tree[0]\nstr(left)",
"_____no_output_____"
],
[
"def calc(tree):\n tag = tree.getTag() \n if tag == 'Add':\n t0 = tree[0]\n t1 = tree[1]\n return calc(t0) + calc(t1)\n if tag == 'Mul': \n t0 = tree[0] \n t1 = tree[1]\n return calc(t0) * calc(t1) \n if tag == 'Int':\n token = tree.getToken()\n return int(token)\n print(f'TODO: {tag}') # 未実装のタグの報告 \n return 0\n\ntree = parser('1+2*3') \nprint(calc(tree))",
"7\n"
]
],
[
[
"## Visitor パターン",
"_____no_output_____"
]
],
[
[
"class Visitor(object):\n def visit(self, tree):\n tag = tree.getTag()\n name = f'accept{tag}'\n if hasattr(self, name): # accept メソッドがあるか調べる\n # メソッド名からメソッドを得る \n acceptMethod = getattr(self, name) \n return acceptMethod(tree)\n print(f'TODO: accept{tag} method') \n return None",
"_____no_output_____"
],
[
"class Calc(Visitor): # Visitor の継承 \n \n def __init__(self, parser):\n self.parser = parser\n \n def eval(self, source):\n tree = self.parser(source)\n return self.visit(tree)\n \n def acceptInt(self, tree):\n token = tree.getToken()\n return int(token)\n \n def acceptAdd(self, tree):\n t0 = tree.get(0)\n t1 = tree.get(1)\n v0 = self.visit(t0)\n v1 = self.visit(t1)\n return v0 + v1\n \n def acceptMul(self, tree):\n t0 = tree.get(0)\n t1 = tree.get(1)\n v0 = self.visit(t0)\n v1 = self.visit(t1)\n return v0 * v1\n \n def accepterr(self, tree):\n print(repr(tree))\n raise SyntaxError()",
"_____no_output_____"
],
[
"calc = Calc(parser)\nprint(calc.eval(\"1+2*3\"))\nprint(calc.eval(\"(1+2)*3\"))\nprint(calc.eval(\"1*2+3\"))",
"7\n9\n5\n"
],
[
"calc.eval('1@2')",
"Syntax Error ((unknown source):1:1+1)\n1@2\n ^ \n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0968d96155f974b36c7bc3319c626b1c6be8212 | 7,481 | ipynb | Jupyter Notebook | tutorial-notebook.ipynb | mtchem/python_for_colorado-API | 69e25be286ce8271d506d0178c28950c20c7b838 | [
"MIT"
] | 1 | 2017-04-07T13:57:59.000Z | 2017-04-07T13:57:59.000Z | tutorial-notebook.ipynb | mtchem/python_for_colorado-API | 69e25be286ce8271d506d0178c28950c20c7b838 | [
"MIT"
] | null | null | null | tutorial-notebook.ipynb | mtchem/python_for_colorado-API | 69e25be286ce8271d506d0178c28950c20c7b838 | [
"MIT"
] | null | null | null | 29.924 | 264 | 0.517043 | [
[
[
"# This notebook will walk through the steps of using the Colorado Information Marketplace (https://data.colorado.gov/) API with python",
"_____no_output_____"
],
[
"## The first step will be to aquire the API endpoint and user tokens. For this tutorial I will be using the Aquaculture Permittees in Colorado dataset (https://data.colorado.gov/Agriculture/Aquaculture-Permittees-in-Colorado/e6e8-qmi7)\n",
"_____no_output_____"
],
[
"### Once you have navigated to your database of interest click on the API button\n\n<img src=\"images/CIM_1.png\" alt=\"hi\" class=\"inline\"/>",
"_____no_output_____"
],
[
"### Click on the blue copy button to copy the entire API Endpoint url, you are going to need that later.",
"_____no_output_____"
],
[
"### To be granted access tokens you need to click on the API docs button and scroll down until you reach the section about tokens\n<img src=\"images/app_token_1.png\" alt=\"hi\" class=\"inline\"/>",
"_____no_output_____"
],
[
"### Click on the \"Sign up for an app token!\" button, which will take you to a sign in page. If you don't have an account, create one now, otherwise just sign in. \n<img src=\"images/sign_up_1.png\" alt=\"hi\" class=\"inline\"/>\n",
"_____no_output_____"
],
[
"### Once you sign in, you will be asked to create an application. Fill out the form then click create.\n<img src=\"images/app_token_3.png\" alt=\"hi\" class=\"inline\"/>\n\n\n\n### You should then have both an App Token and a Secret Token\n<img src=\"images/app_token_2.png\" alt=\"hi\" class=\"inline\"/>\n\n",
"_____no_output_____"
],
[
"## The second step is to retrieve some data using the api and putting it into a pandas dataframe",
"_____no_output_____"
]
],
[
[
"# import pandas\nimport pandas as pd",
"_____no_output_____"
],
[
"# create these variables using the information found in step one\napi_url = 'https://data.colorado.gov/xxxxx/xxxxxx.json'\nApp_Token = 'xxxxxxxxxxxxxxxx'",
"_____no_output_____"
]
],
[
[
"### Create urls to request data. The maximum number of returns for each request is 50,000 records per page. The *limit* parameter chooses how many records to return per page, and the *offset* parameter determines which record the request will start with. ",
"_____no_output_____"
]
],
[
[
"limit = 100 # limits the number of records to 100\noffset = 20 # starts collecting records at record 20\n\n# url_1 creates a url that will query the API and return all the fields (columns) of records 20-120\nurl_1 = api_url + '?' + '$$app_token=' + App_Token + '&$limit=' + str(limit) + '&$offset=' + str(offset)",
"_____no_output_____"
]
],
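[
[
"A sketch of paging through a larger result set (added for illustration; it reuses the `api_url`, `App_Token`, `$limit`, and `$offset` conventions from the cell above and assumes the endpoint returns an empty page once the records run out):\n\n```python\nimport pandas as pd\n\npages = []\nlimit = 1000\nfor offset in range(0, 5000, limit):  # fetch at most 5 pages of 1000 records each\n    url = api_url + '?' + '$$app_token=' + App_Token + '&$limit=' + str(limit) + '&$offset=' + str(offset)\n    page = pd.read_json(url)\n    if page.empty:\n        break\n    pages.append(page)\nall_data = pd.concat(pages, ignore_index=True)\n```",
"_____no_output_____"
]
],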
[
[
"### Pandas has a method that retrieves api data and creates a dataframe",
"_____no_output_____"
]
],
[
[
"raw_data = pd.read_json(url_1)\n",
"_____no_output_____"
]
],
[
[
"# That's it! You can now transform the raw data using python and pandas dataframes",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d096937888a310cf330c98f3c01c49d68211109e | 57,606 | ipynb | Jupyter Notebook | data_files/forc_diagram/.ipynb_checkpoints/forc_diagram-checkpoint.ipynb | apivarunas/PmagPy | 623423b0a5cb9f8ab16c5c2b389d1f31ad31a19f | [
"BSD-3-Clause"
] | 2 | 2020-07-05T01:11:33.000Z | 2020-07-05T01:11:39.000Z | data_files/forc_diagram/.ipynb_checkpoints/forc_diagram-checkpoint.ipynb | schwehr/PmagPy | 5e9edc5dc9a7a243b8e7f237fa156e0cd782076b | [
"BSD-3-Clause"
] | 1 | 2018-08-27T22:59:09.000Z | 2018-08-27T22:59:09.000Z | data_files/forc_diagram/.ipynb_checkpoints/forc_diagram-checkpoint.ipynb | PmagPy/PmagPy-notebooks | 490cb13e3a78e1323368125de9a503bd327c7174 | [
"BSD-3-Clause"
] | null | null | null | 538.373832 | 55,000 | 0.94721 | [
[
[
"This is for conventional and irregular forc diagrams.\n\ntwo example of irforc and convential forc data in PmagPy/data_files/forc_diagram/\n\nThe input data format could be very different due to different softwares and instruments,\n\nNevertheless, only if the line [' Field Moment '] included before the measured data, the software will work.",
"_____no_output_____"
],
[
"in command line:\n\npython3 forcdiagram /data_files/irforc_example.irforc 3\n\nwill plot the FORC diagram with SF=3",
"_____no_output_____"
]
],
[
[
"from forc_diagram import *\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"#forc = Forc(fileAdres='/data_files/irforc_example.irforc',SF=3)\nforc = Forc(fileAdres='../example/MSM33-60-1-d416_2.irforc',SF=3)\n\nfig = plt.figure(figsize=(6,5), facecolor='white')\n\nfig.subplots_adjust(left=0.18, right=0.97,\n bottom=0.18, top=0.9, wspace=0.5, hspace=0.5)\nplt.contour(forc.xi*1000,\n forc.yi*1000,\n forc.zi,9,\n colors='k',linewidths=0.5)#mt to T\n\nplt.pcolormesh(forc.xi*1000,\n forc.yi*1000,\n forc.zi,\n cmap=plt.get_cmap('rainbow'))#vmin=np.min(rho)-0.2)\nplt.colorbar()\nplt.xlabel('B$_{c}$ (mT)',fontsize=12)\nplt.ylabel('B$_{i}$ (mT)',fontsize=12)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d096967e85f3270be370eee2c66dd3fa88af6340 | 891,527 | ipynb | Jupyter Notebook | experiments/main_simulations/plot_environment_convergence.ipynb | rflperry/sparse_shift | 7c0d68be21d56f706d1251b914d305786a4c9726 | [
"MIT"
] | 2 | 2022-01-31T14:12:54.000Z | 2022-02-01T18:17:24.000Z | experiments/main_simulations/plot_environment_convergence.ipynb | rflperry/sparse_shift | 7c0d68be21d56f706d1251b914d305786a4c9726 | [
"MIT"
] | null | null | null | experiments/main_simulations/plot_environment_convergence.ipynb | rflperry/sparse_shift | 7c0d68be21d56f706d1251b914d305786a4c9726 | [
"MIT"
] | null | null | null | 2,539.962963 | 880,036 | 0.956778 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd",
"_____no_output_____"
],
[
"EXPERIMENT = 'environment_convergence'\n\ndf = pd.read_csv(f'./results/{EXPERIMENT}_results.csv', sep=', ', engine='python')",
"_____no_output_____"
],
[
"df.head(5)",
"_____no_output_____"
],
[
"plot_df = df.melt(\n id_vars=[\n 'params_index', 'n_variables', 'n_total_environments', 'sparsity',\n 'sample_size', 'dag_density', 'reps', 'data_simulator', 'dag_simulator',\n 'Method', 'Number of environments', 'Rep', 'MEC size', 'Soft'],\n # value_vars=['True orientation rate', 'False orientation rate', 'Average precision'], # 'Fraction of possible DAGs'], \n value_vars=['Precision', 'Recall', 'Average precision'],\n var_name='Metric',\n value_name='Average fraction',\n)",
"_____no_output_____"
],
[
"for ds in df['dag_simulator'].unique():\n g = sns.relplot(\n data=plot_df[\n (plot_df['sample_size'] == plot_df['sample_size'].max())\n & (plot_df['dag_simulator'] == ds)\n & (plot_df['Soft'] == False)\n # & (plot_df['sparsity'].isin([2, 4]))\n ],\n x='Number of environments',\n y='Average fraction',\n hue='Method',\n row='sparsity',\n col='Metric',\n # ci=None,\n kind='line',\n # height=3,\n # aspect=2, # 3,\n # legend='Full',\n facet_kws={'sharey': False, 'sharex': True},\n )\n \n col_vals = g.data[g._col_var].unique()\n for c, col_val in enumerate(col_vals):\n g.axes[0, c].set_ylabel(col_val, visible=True)\n \n# row_vals = g.data[g._row_var].unique()\n# for r, row_val in enumerate(row_vals):\n# for c, col_val in enumerate(col_vals):\n# g.axes[r, c].set_title(f'{g._row_var} = {row_val}')\n# g.axes[r, c].set_ylabel(col_val, visible=True)\n \n title_dict = dict({\"er\": \"Erdos-Renyi\", \"ba\": \"Hub\"})\n # g.fig.suptitle(f'DAG model: {title_dict[ds]}', fontsize=14, y=1.02, x=0.45)\n plt.subplots_adjust(wspace=0.06)\n plt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0969b5a2f993fe6a34422381238113892663f52 | 9,357 | ipynb | Jupyter Notebook | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python | 08fecb1d77f49a6bfede57145bb4e7d80b82f921 | [
"MIT"
] | 1 | 2021-12-12T02:35:07.000Z | 2021-12-12T02:35:07.000Z | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python | 08fecb1d77f49a6bfede57145bb4e7d80b82f921 | [
"MIT"
] | null | null | null | Chapter_1/The way of the Program.ipynb | Shrihith/Think-python | 08fecb1d77f49a6bfede57145bb4e7d80b82f921 | [
"MIT"
] | null | null | null | 22.766423 | 368 | 0.539062 | [
[
[
"# Chapter 1: The way of program",
"_____no_output_____"
],
[
"The First Program",
"_____no_output_____"
]
],
[
[
"print('Hello, World!')",
"Hello, World!\n"
]
],
[
[
"Arithmetic Operators ( +, -, *, /, **, ^) (addition, substraction, multiplication, division, exponentiation, XOR)",
"_____no_output_____"
]
],
[
[
"40 + 2 # add\n43 - 1 # Sub\n6 * 7 # multiply\n84/ 2 # Division\n6**2 + 6 # Exponent\n6 ^ 2 # XOR(Bitwise operator)",
"_____no_output_____"
]
],
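[
[
"A quick added example (not in the original notebook) to make the operator distinction concrete: `**` is exponentiation while `^` is bitwise XOR.\n\n```python\nprint(6 ** 2)  # 36, six squared\nprint(6 ^ 2)   # 4, because 0b110 XOR 0b010 == 0b100\n```",
"_____no_output_____"
]
],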
[
[
"Values and Types: A valus is one of the basic things a program works with,like a letter or a number.",
"_____no_output_____"
]
],
[
[
"type(2)",
"_____no_output_____"
],
[
"type(42.0)",
"_____no_output_____"
],
[
"type('Hello, World!')",
"_____no_output_____"
],
[
"1,000,000",
"_____no_output_____"
]
],
[
[
"Formal and Natural Languages:\n1).Programming languages are formal languages that have been designed to express computations.\n2).Syntax rules comes with two flavors pertaining to token(such as, words, numbers, and chemical elements) and Token combination( order, well structured).\n3).parsing:figure outh the structure in formal language sentence in english or a statement.\n4).Programs: The meaning of a computer program is a unambiguous and literal, and can be understood entirely by analysis of the tokens and structure.\n5).Programming errors are called bugs and the process of tracking them down is called debugging.",
"_____no_output_____"
],
[
"Glossory:\nproblem solving, high-level language, low-level language, portability, interpeter, prompt, program, print statement, operator, value, type, integer, floating-point, string, natural language, formal language, token, syntax, parse, bug, debugging\n - refer book",
"_____no_output_____"
],
[
"EXERCISES:\n",
"_____no_output_____"
]
],
[
[
"print('helloworld)",
"_____no_output_____"
],
[
"print 'helloworld'",
"_____no_output_____"
],
[
"2++2",
"_____no_output_____"
],
[
"02+02",
"_____no_output_____"
],
[
"2 2",
"_____no_output_____"
]
],
[
[
"1. How many seconds are ther in 42 minutes 42 seconds?",
"_____no_output_____"
]
],
[
[
"42 * 60 + 42",
"_____no_output_____"
]
],
[
[
"2. How many miles are there in 10 kilometers? hint: there are 1.61 kilometers in a mile",
"_____no_output_____"
]
],
[
[
"10/1.61",
"_____no_output_____"
]
],
[
[
"3. If you run a 10 kilometer race in 42 minutes 42 seconds, what is your average pace(time per mile in minutes and seconds)? what is your average speed in miles per hour?",
"_____no_output_____"
]
],
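[
[
"A worked sketch of the arithmetic (added for clarity; it is not part of the original answer): pace is total time divided by distance, and speed is distance divided by total time.\n\n```python\nmiles = 10 / 1.61                 # about 6.21 miles\nseconds = 42 * 60 + 42            # 2562 seconds\npace = seconds / miles / 60       # about 6.87 minutes per mile\nspeed = miles / (seconds / 3600)  # about 8.73 miles per hour\nprint(pace, speed)\n```",
"_____no_output_____"
]
],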
[
[
"10/1.61\n42*60+42\n6.214/2562\n",
"_____no_output_____"
],
[
"42+42/60\n6.214/42.7",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0969c61af2a032414a14812507547791413fec4 | 4,781 | ipynb | Jupyter Notebook | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 | 38064b19e202e60e639376fc81de2a39991219d8 | [
"Apache-2.0"
] | null | null | null | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 | 38064b19e202e60e639376fc81de2a39991219d8 | [
"Apache-2.0"
] | null | null | null | Final_Exam.ipynb | JuliusCaezarEugenio/CPEN21-A-CPE-1-1 | 38064b19e202e60e639376fc81de2a39991219d8 | [
"Apache-2.0"
] | null | null | null | 27.477011 | 243 | 0.455972 | [
[
[
"<a href=\"https://colab.research.google.com/github/JuliusCaezarEugenio/CPEN21-A-CPE-1-1/blob/main/Final_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##Final Exam",
"_____no_output_____"
],
[
"***Problem Statement #1***: Create a Python Program that will produce an output of sum of 10 numbers less than 5 using FOR LOOP statement.",
"_____no_output_____"
]
],
[
[
"sum = 0\nnum = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]\nfor x in (num):\n sum = sum + x\nprint (\"The sum of 10 numbers less than five is\",sum)",
"The sum of 10 numbers less than five is -5\n"
]
],
[
[
"***Problem Statement #2***: Create a Python program that will produce accept five numbers and determine the sum of\nfirst and last number among the five numbers entered using WHILE LOOP",
"_____no_output_____"
]
],
[
[
"num = int(input(\"1st number: \"))\nwhile (num !=0):\n l = int(input(\"2nd number: \"))\n o = int(input(\"3rd number: \"))\n v = int(input(\"4th number: \"))\n e = int(input(\"5th number: \"))\n break\n x = e\nwhile (x!=0):\n x = num + e \n print (\"The sum of first and last number is\", x)\n num -=1\n break",
"1st number: 1\n2nd number: 2\n3rd number: 3\n4th number: 4\n5th number: 5\nThe sum of first and last number is 6\n"
]
],
[
[
"***Problem Statement #3***: Create a Python program to calculate student grades. It accepts a numerical grade as input\nand it will display the character grade as output based on the given scale: (Use Nested-IF-Else\nstatement)",
"_____no_output_____"
]
],
[
[
"grade = float(input(\"Enter numerical grade: \"))\nif grade >=90:\n print (\"Character Grade: A\")\nelif grade <=89 and grade >80:\n print (\"Character Grade: B\")\nelif grade <=79 and grade >70:\n print (\"Character Grade: C\")\nelif grade <= 69 and grade >60:\n print (\"Character Grade: D\")\nelse:\n print (\"Character Grade: F\")\n\n\n ",
"Enter numerical grade: 98.5\nCharacter Grade: A\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0969dced5c82febb9f5af440a1809084ab22093 | 33,081 | ipynb | Jupyter Notebook | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public | f74aca1bd539d422199e5f1838abee7e93896f1d | [
"Apache-2.0"
] | null | null | null | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public | f74aca1bd539d422199e5f1838abee7e93896f1d | [
"Apache-2.0"
] | null | null | null | course1/week3-lab/C1W3_Data_Labeling_Ungraded_Lab.ipynb | sidmontu/MLEP-public | f74aca1bd539d422199e5f1838abee7e93896f1d | [
"Apache-2.0"
] | null | null | null | 35.570968 | 497 | 0.6037 | [
[
[
"# Week 3 - Ungraded Lab: Data Labeling\n\n\nWelcome to the ungraded lab for week 3 of Machine Learning Engineering for Production. In this lab, you will see how the data labeling process affects the performance of a classification model. Labeling data is usually a very labor intensive and costly task but it is of great importance.\n\nAs you saw in the lectures there are many ways to label data, this is dependant on the strategy used. Recall the example with the iguanas, all of the following are valid labeling alternatives but they clearly follow different criteria. \n\n<table><tr><td><img src='assets/iguanas1.png'></td><td><img src='assets/iguanas2.png'></td><td><img src='assets/iguanas3.png'></td></tr></table>\n\n**You can think of every labeling strategy as a result of different labelers following different labeling rules**. If your data is labeled by people using different criteria this will have a negative impact on your learning algorithm. It is desired to have consistent labeling across your dataset.\n\nThis lab will touch on the effect of labeling strategies from a slighlty different angle. You will explore how different strategies affect the performance of a machine learning model by simulating the process of having different labelers label the data. This, by defining a set of rules and performing automatic labeling based on those rules.\n\n**The main objective of this ungraded lab is to compare performance across labeling options to understand the role that good labeling plays on the performance of Machine Learning models**, these options are:\n1. Randomly generated labels (performance lower bound)\n2. Automatic generated labels based on three different label strategies\n3. True labels (performance upper bound)\n\nAlthough the example with the iguanas is a computer vision task, the same concepts regarding labeling can be applied to other types of data. In this lab you will be working with text data, concretely you will be using a dataset containing comments from the 2015 top 5 most popular Youtube videos. Each comment has been labeled as `spam` or `not_spam` depending on its contents.",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Loading the dataset\n\nThe dataset consists of 5 CSV files, one for each video. Pandas `DataFrame` are very powerful to handle data in CSV format. The following helper function will load the data using pandas:",
"_____no_output_____"
]
],
[
[
"def load_labeled_spam_dataset():\n \"\"\"Load labeled spam dataset.\"\"\"\n\n # Path where csv files are located\n base_path = \"./data/\"\n\n # List of csv files with full path\n csv_files = [os.path.join(base_path, csv) for csv in os.listdir(base_path)]\n\n # List of dataframes for each file\n dfs = [pd.read_csv(filename) for filename in csv_files]\n\n # Concatenate dataframes into a single one\n df = pd.concat(dfs)\n\n # Rename columns\n df = df.rename(columns={\"CONTENT\": \"text\", \"CLASS\": \"label\"})\n\n # Set a seed for the order of rows\n df = df.sample(frac=1, random_state=824)\n \n return df.reset_index()\n\n\n# Save the dataframe into the df_labeled variable\ndf_labeled = load_labeled_spam_dataset()",
"_____no_output_____"
]
],
[
[
"To have a feeling of how the data is organized, let's inspect the top 5 rows of the data:",
"_____no_output_____"
]
],
[
[
"# Take a look at the first 5 rows\ndf_labeled.head()",
"_____no_output_____"
]
],
[
[
"## Further inspection and preprocessing\n\n\n### Checking for data imbalance\n\nIt is fairly common to assume that the data you are working on is balanced. This means that the dataset contains a similar proportion of examples for all classes. Before moving forward let's actually test this assumption:",
"_____no_output_____"
]
],
[
[
"# Print actual value count\nprint(f\"Value counts for each class:\\n\\n{df_labeled.label.value_counts()}\\n\")\n\n# Display pie chart to visually check the proportion\ndf_labeled.label.value_counts().plot.pie(y='label', title='Proportion of each class')\nplt.show()",
"_____no_output_____"
]
],
[
[
"There is roughly the same number of data points for each class so class imbalance is not an issue for this particular dataset.\n\n\n### Cleaning the dataset\n\nIf you scroll back to the cell where you inspected the data, you will realize that the dataframe includes information that is not relevant for the task at hand. At the moment, you are only interested in the comments and the corresponding labels (the video that each comment belongs to will be used later). Let's drop the remaining columns.",
"_____no_output_____"
]
],
[
[
"# Drop unused columns\ndf_labeled = df_labeled.drop(['index', 'COMMENT_ID', 'AUTHOR', 'DATE'], axis=1)\n\n# Look at the cleaned dataset\ndf_labeled.head()",
"_____no_output_____"
]
],
[
[
"Now the dataset only includes the information you are going to use moving forward.\n\n### Splitting the dataset\n\nBefore jumping to the data labeling section let's split the data into training and test sets so you can use the latter to measure the performance of models that were trained using data labeled through different methods. As a safety measure when doing this split, remember to use stratification so the proportion of classes is maintained within each split.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\n# Save the text into the X variable\nX = df_labeled.drop(\"label\", axis=1)\n\n# Save the true labels into the y variable\ny = df_labeled[\"label\"]\n\n# Use 1/5 of the data for testing later\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)\n\n# Print number of comments for each set\nprint(f\"There are {X_train.shape[0]} comments for training.\")\nprint(f\"There are {X_test.shape[0]} comments for testing\")",
"_____no_output_____"
]
],
[
[
"Let's do a visual to check that the stratification actually worked:",
"_____no_output_____"
]
],
[
[
"plt.subplot(1, 3, 1)\ny_train.value_counts().plot.pie(y='label', title='Proportion of each class for train set', figsize=(10, 6))\n\nplt.subplot(1, 3, 3)\ny_test.value_counts().plot.pie(y='label', title='Proportion of each class for test set', figsize=(10, 6))\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Both, the training and test sets a balanced proportion of examples per class. So, the code successfully implemented stratification. \n\nLet's get going!",
"_____no_output_____"
],
[
"## Data Labeling \n\n### Establishing performance lower and upper bounds for reference\n\nTo properly compare different labeling strategies you need to establish a baseline for model accuracy, in this case you will establish both a lower and an upper bound to compare against. \n\n",
"_____no_output_____"
],
[
"### Calculate accuracy of a labeling strategy\n\n[CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) is a handy tool included in the sklearn ecosystem to encode text based data.\n\nFor more information on how to work with text data using sklearn check out this [resource](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html).",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import CountVectorizer\n\n# Allow unigrams and bigrams\nvectorizer = CountVectorizer(ngram_range=(1, 5))",
"_____no_output_____"
]
],
[
[
"Now that the text encoding is defined, you need to select a model to make predictions. For simplicity you will use a [Multinomial Naive Bayes](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html) classifier. This model is well suited for text classification and is fairly quick to train.\n\nLet's define a function which will handle the model fitting and print out the accuracy on the test data:",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import accuracy_score\nfrom sklearn.naive_bayes import MultinomialNB\n\n\ndef calculate_accuracy(X_tr, y_tr, X_te=X_test, y_te=y_test, \n clf=MultinomialNB(), vectorizer=vectorizer):\n \n # Encode train text\n X_train_vect = vectorizer.fit_transform(X_tr.text.tolist())\n \n # Fit model\n clf.fit(X=X_train_vect, y=y_tr)\n \n # Vectorize test text\n X_test_vect = vectorizer.transform(X_te.text.tolist())\n \n # Make predictions for the test set\n preds = clf.predict(X_test_vect)\n \n # Return accuracy score\n return accuracy_score(preds, y_te)\n",
"_____no_output_____"
]
],
[
[
"Now let's create a dictionary to store the accuracy of each labeling method:",
"_____no_output_____"
]
],
[
[
"# Empty dictionary\naccs = dict()",
"_____no_output_____"
]
],
[
[
"### Random Labeling\n\nGenerating random labels is a natural way to establish a lower bound. You will expect that any successful alternative labeling model to outperform randomly generated labels. \n\nNow let's calculate the accuracy for the random labeling method",
"_____no_output_____"
]
],
[
[
"# Calculate random labels\nrnd_labels = np.random.randint(0, 2, X_train.shape[0])\n\n# Feed them alongside X_train to calculate_accuracy function\nrnd_acc = calculate_accuracy(X_train, rnd_labels)\n\nrnd_acc",
"_____no_output_____"
]
],
[
[
"You will see a different accuracy everytime you run the previous cell. This is due to the fact that the labeling is done randomly. Remember, this is a binary classification problem and both classes are balanced, so you can expect to see accuracies that revolve around 50%.\n\nTo further gain intuition let's look at the average accuracy over 10 runs:",
"_____no_output_____"
]
],
[
[
"# Empty list to save accuracies\nrnd_accs = []\n\nfor _ in range(10):\n # Add every accuracy to the list\n rnd_accs.append(calculate_accuracy(X_train, np.random.randint(0, 2, X_train.shape[0])))\n\n# Save result in accs dictionary\naccs['random-labels'] = sum(rnd_accs)/len(rnd_accs)\n\n# Print result\nprint(f\"The random labelling method achieved and accuracy of {accs['random-labels']*100:.2f}%\")",
"_____no_output_____"
]
],
[
[
"Random labelling is completely disregarding the information from the solution space you are working on, and is just guessing the correct label. You can't probably do worse than this (or maybe you can). For this reason, this method serves as reference for comparing other labeling methods\n\n\n### Labeling with true values\n\nNow let's look at the other end of the spectrum, this is using the correct labels for your data points. Let's retrain the Multinomial Naive Bayes classifier with the actual labels ",
"_____no_output_____"
]
],
[
[
"# Calculate accuracy when using the true labels\ntrue_acc = calculate_accuracy(X_train, y_train)\n\n# Save the result\naccs['true-labels'] = true_acc\n\nprint(f\"The true labelling method achieved and accuracy of {accs['true-labels']*100:.2f}%\")",
"_____no_output_____"
]
],
[
[
"Training with the true labels produced a noticeable boost in accuracy. This is expected as the classifier is now able to properly identify patterns in the training data which were lacking with randomly generated labels. \n\nAchieving higher accuracy is possible by either fine-tunning the model or even selecting a different one. For the time being you will keep the model as it is and use this accuracy as what we should strive for with the automatic labeling algorithms you will see next.",
"_____no_output_____"
],
[
"## Automatic labeling - Trying out different labeling strategies",
"_____no_output_____"
],
[
"Let's suppose that for some reason you don't have access to the true labels associated with each data point in this dataset. It is a natural idea to think that there are patterns in the data that will provide clues of which are the correct labels. This is of course very dependant on the kind of data you are working with and to even hypothesize which patterns exist requires great domain knowledge.\n\nThe dataset used in this lab was used for this reason. It is reasonable for many people to come up with rules that might help identify a spam comment from a non-spam one for a Youtube video. In the following section you will be performing automatic labeling using such rules. **You can think of each iteration of this process as a labeler with different criteria for labeling** and your job is to hire the most promising one.\n\nNotice the word **rules**. In order to perform automatic labeling you will define some rules such as \"if the comment contains the word 'free' classify it as spam\".\n\nFirst things first. Let's define how we are going to encode the labeling:\n- `SPAM` is represented by 1\n\n\n- `NOT_SPAM` by 0 \n\n\n- `NO_LABEL` as -1\n\n\nYou might be wondering about the `NO_LABEL` keyword. Depending on the rules you come up with, these might not be applicable to some data points. For such cases it is better to refuse from giving a label rather than guessing, which you already saw yields poor results.",
"_____no_output_____"
],
[
"### First iteration - Define some rules\n\nFor this first iteration you will create three rules based on the intuition of common patterns that appear on spam comments. The rules are simple, classify as SPAM if any of the following patterns is present within the comment or NO_LABEL otherwise:\n- `free` - spam comments usually lure users by promoting free stuff\n- `subs` - spam comments tend to ask users to subscribe to some website or channel\n- `http` - spam comments include links very frequently",
"_____no_output_____"
]
],
[
[
"def labeling_rules_1(x):\n \n # Convert text to lowercase\n x = x.lower()\n \n # Define list of rules\n rules = [\n \"free\" in x,\n \"subs\" in x,\n \"http\" in x\n ]\n \n # If the comment falls under any of the rules classify as SPAM\n if any(rules):\n return 1\n \n # Otherwise, NO_LABEL\n return -1",
"_____no_output_____"
],
[
"# Apply the rules the comments in the train set\nlabels = [labeling_rules_1(label) for label in X_train.text]\n\n# Convert to a numpy array\nlabels = np.asarray(labels)\n\n# Take a look at the automatic labels\nlabels",
"_____no_output_____"
]
],
[
[
"For lots of points the automatic labeling algorithm decided to not settle for a label, this is expected given the nature of the rules that were defined. These points should be deleted since they don't provide information about the classification process and tend to hurt performance.",
"_____no_output_____"
]
],
[
[
"# Create the automatic labeled version of X_train by removing points with NO_LABEL label\nX_train_al = X_train[labels != -1]\n\n# Remove predictions with NO_LABEL label\nlabels_al = labels[labels != -1]\n\nprint(f\"Predictions with concrete label have shape: {labels_al.shape}\")\n\nprint(f\"Proportion of data points kept: {labels_al.shape[0]/labels.shape[0]*100:.2f}%\")",
"_____no_output_____"
]
],
[
[
"Notice that only 379 data points remained out of the original 1564. The rules defined didn't provide enough context for the labeling algorithm to settle on a label, so around 75% of the data has been trimmed.\n\nLet's test the accuracy of the model when using these automatic generated labels:",
"_____no_output_____"
]
],
[
[
"# Compute accuracy when using these labels\niter_1_acc = calculate_accuracy(X_train_al, labels_al)\n\n# Display accuracy\nprint(f\"First iteration of automatic labeling has an accuracy of {iter_1_acc*100:.2f}%\")\n\n# Save the result\naccs['first-iteration'] = iter_1_acc",
"_____no_output_____"
]
],
[
[
"Let's compare this accuracy to the baselines by plotting:",
"_____no_output_____"
]
],
[
[
"def plot_accuracies(accs=accs):\n colors = list(\"rgbcmy\")\n items_num = len(accs)\n cont = 1\n\n for x, y in accs.items():\n if x in ['true-labels', 'random-labels', 'true-labels-best-clf']:\n plt.hlines(y, 0, (items_num-2)*2, colors=colors.pop())\n else:\n plt.scatter(cont, y, s=100)\n cont+=2\n plt.legend(accs.keys(), loc=\"center left\",bbox_to_anchor=(1, 0.5))\n plt.show()\n \nplot_accuracies()",
"_____no_output_____"
]
],
[
[
"This first iteration had an accuracy very close to the random labeling, we should strive to do better than this. ",
"_____no_output_____"
],
[
"Before moving forward let's define the `label_given_rules` function that performs all of the steps you just saw, these are: \n- Apply the rules to a dataframe of comments\n- Cast the resulting labels to a numpy array\n- Delete all data points with NO_LABEL as label\n- Calculate the accuracy of the model using the automatic labels\n- Save the accuracy for plotting\n- Print some useful metrics of the process",
"_____no_output_____"
]
],
[
[
"def label_given_rules(df, rules_function, name, \n accs_dict=accs, verbose=True):\n \n # Apply labeling rules to the comments\n labels = [rules_function(label) for label in df.text]\n \n # Convert to a numpy array\n labels = np.asarray(labels)\n \n # Save initial number of data points\n initial_size = labels.shape[0]\n \n # Trim points with NO_LABEL label\n X_train_al = df[labels != -1]\n labels = labels[labels != -1]\n \n # Save number of data points after trimming\n final_size = labels.shape[0]\n \n # Compute accuracy\n acc = calculate_accuracy(X_train_al, labels)\n \n # Print useful information\n if verbose:\n print(f\"Proportion of data points kept: {final_size/initial_size*100:.2f}%\\n\")\n print(f\"{name} labeling has an accuracy of {acc*100:.2f}%\\n\")\n \n # Save accuracy to accuracies dictionary\n accs_dict[name] = acc\n \n return X_train_al, labels, acc",
"_____no_output_____"
]
],
[
[
"Going forward we should come up with rules that have a better coverage of the training data, thus making pattern discovery an easier task. Also notice how the rules were only able to label as either SPAM or NO_LABEL, we should also create some rules that help the identification of NOT_SPAM comments.",
"_____no_output_____"
],
[
"### Second iteration - Coming up with better rules\n\nIf you inspect the comments in the dataset you might be able to distinguish certain patterns at a glimpse. For example, not spam comments often make references to either the number of views since these were the most watched videos of 2015 or the song in the video and its contents . As for spam comments other common patterns are to promote gifts or ask to follow some channel or website.\n\nLet's create some new rules that include these patterns:",
"_____no_output_____"
]
],
[
[
"def labeling_rules_2(x):\n \n # Convert text to lowercase\n x = x.lower()\n \n # Define list of rules to classify as NOT_SPAM\n not_spam_rules = [\n \"view\" in x,\n \"song\" in x\n ]\n \n # Define list of rules to classify as SPAM\n spam_rules = [\n \"free\" in x,\n \"subs\" in x,\n \"gift\" in x,\n \"follow\" in x,\n \"http\" in x\n ]\n \n # Classify depending on the rules\n if any(not_spam_rules):\n return 0\n \n if any(spam_rules):\n return 1\n \n return -1",
"_____no_output_____"
]
],
[
[
"This new set of rules looks more promising as it includes more patterns to classify as SPAM as well as some patterns to classify as NOT_SPAM. This should result in more data points with a label different to NO_LABEL.\n\nLet's check if this is the case.",
"_____no_output_____"
]
],
[
[
"label_given_rules(X_train, labeling_rules_2, \"second-iteration\")\n\nplot_accuracies()",
"_____no_output_____"
]
],
[
[
"This time 44% of the original dataset was given a decisive label and there were data points for both labels, this helped the model reach a higher accuracy when compared to the first iteration. Now the accuracy is considerably higher than the random labeling but it is still very far away from the upper bound.\n\nLet's see if we can make it even better!",
"_____no_output_____"
],
[
"### Third Iteration - Even more rules\n\nThe rules we have defined so far are doing a fair job. Let's add two additional rules, one for classifying SPAM comments and the other for the opposite task.\n\nAt a glimpse it looks like NOT_SPAM comments are usually shorter. This may be due to them not including hyperlinks but also in general they tend to be more concrete such as \"I love this song!\".\n\nLet's take a look at the average number of characters for SPAM comments vs NOT_SPAM oned:",
"_____no_output_____"
]
],
[
[
"from statistics import mean\n\nprint(f\"NOT_SPAM comments have an average of {mean([len(t) for t in df_labeled[df_labeled.label==0].text]):.2f} characters.\")\nprint(f\"SPAM comments have an average of {mean([len(t) for t in df_labeled[df_labeled.label==1].text]):.2f} characters.\")",
"_____no_output_____"
]
],
[
[
"It sure looks like there is a big difference in the number of characters for both types of comments.\n\nTo decide on a threshold to classify as NOT_SPAM let's plot a histogram of the number of characters for NOT_SPAM comments:",
"_____no_output_____"
]
],
[
[
"plt.hist([len(t) for t in df_labeled[df_labeled.label==0].text], range=(0,100))\nplt.show()",
"_____no_output_____"
]
],
[
[
"The majority of NOT_SPAM comments have 30 or less characters so we'll use that as a threshold.\n\nAnother prevalent pattern in spam comments is to ask users to \"check out\" a channel, website or link.\n\nLet's add these two new rules:",
"_____no_output_____"
]
],
[
[
"def labeling_rules_3(x):\n \n # Convert text to lowercase\n x = x.lower()\n \n # Define list of rules to classify as NOT_SPAM\n not_spam_rules = [\n \"view\" in x,\n \"song\" in x,\n len(x) < 30\n ]\n \n\n # Define list of rules to classify as SPAM\n spam_rules = [\n \"free\" in x,\n \"subs\" in x,\n \"gift\" in x,\n \"follow\" in x,\n \"http\" in x,\n \"check out\" in x\n ]\n \n # Classify depending on the rules\n if any(not_spam_rules):\n return 0\n \n if any(spam_rules):\n return 1\n \n return -1",
"_____no_output_____"
],
[
"label_given_rules(X_train, labeling_rules_3, \"third-iteration\")\n\nplot_accuracies()",
"_____no_output_____"
]
],
[
[
"These new rules do a pretty good job at both, covering the dataset and having a good model accuracy. To be more concrete this labeling strategy reached an accuracy of ~86%! We are getting closer and closer to the upper bound defined by using the true labels.\n\nWe could keep going on adding more rules to improve accuracy and we do encourage you to try it out yourself!\n\n\n### Come up with your own rules\n\nThe following cells contain some code to help you inspect the dataset for patterns and to test out these patterns. The ones used before are commented out in case you want start from scratch or re-use them.",
"_____no_output_____"
]
],
[
[
"# Configure pandas to print out all rows to check the complete dataset\npd.set_option('display.max_rows', None)\n\n# Check NOT_SPAM comments\ndf_labeled[df_labeled.label==0]",
"_____no_output_____"
],
[
"# Check SPAM comments\ndf_labeled[df_labeled.label==1]",
"_____no_output_____"
],
[
"def your_labeling_rules(x):\n \n # Convert text to lowercase\n x = x.lower()\n \n # Define your rules for classifying as NOT_SPAM\n not_spam_rules = [\n# \"view\" in x,\n# \"song\" in x,\n# len(x) < 30\n ]\n \n\n # Define your rules for classifying as SPAM\n spam_rules = [\n# \"free\" in x,\n# \"subs\" in x,\n# \"gift\" in x,\n# \"follow\" in x,\n# \"http\" in x,\n# \"check out\" in x\n ]\n \n # Classify depending on your rules\n if any(not_spam_rules):\n return 0\n \n if any(spam_rules):\n return 1\n \n return -1\n\n\ntry:\n label_given_rules(X_train, your_labeling_rules, \"your-iteration\")\n plot_accuracies()\n \nexcept ValueError:\n print(\"You have not defined any rules.\")",
"_____no_output_____"
]
],
[
[
"**Congratulations on finishing this ungraded lab!**\n\nBy now you should have a better understanding of having good labelled data. In general, **the better your labels are, the better your models will be**. Also it is important to realize that the process of correctly labeling data is a very complex one. **Remember, you can think of each one of the iterations of the automatic labeling process to be a different labeler with different criteria for labeling**. If you assume you are hiring labelers you will want to hire the latter for sure! \n\nAnother important point to keep in mind is that establishing baselines to compare against is really important as they provide perspective on how well your data and models are performing.\n\n**Keep it up!**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d096a61e7d6a39a7d268916e247e12236d6dcd1b | 3,704 | ipynb | Jupyter Notebook | Index.ipynb | Ravirajadrangi/A-Whirlwind-Tour-of-Python | c4b4c571903f47a2c71d463daa130a139a791619 | [
"CC0-1.0"
] | 1 | 2019-01-24T21:33:07.000Z | 2019-01-24T21:33:07.000Z | Index.ipynb | Ravirajadrangi/A-Whirlwind-Tour-of-Python | c4b4c571903f47a2c71d463daa130a139a791619 | [
"CC0-1.0"
] | null | null | null | Index.ipynb | Ravirajadrangi/A-Whirlwind-Tour-of-Python | c4b4c571903f47a2c71d463daa130a139a791619 | [
"CC0-1.0"
] | 1 | 2018-02-26T18:43:36.000Z | 2018-02-26T18:43:36.000Z | 38.989474 | 125 | 0.649838 | [
[
[
"# A Whirlwind Tour of Python\n\n*Jake VanderPlas, Summer 2016*\n\nThese are the Jupyter Notebooks behind my O'Reilly report,\n[*A Whirlwind Tour of Python*](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp).\nThe full notebook listing is available [on Github](https://github.com/jakevdp/WhirlwindTourOfPython).\n\n*A Whirlwind Tour of Python* is a fast-paced introduction to essential\ncomponents of the Python language for researchers and developers who are\nalready familiar with programming in another language.\n\nThe material is particularly aimed at those who wish to use Python for data \nscience and/or scientific programming, and in this capacity serves as an\nintroduction to my upcoming book, *The Python Data Science Handbook*.\nThese notebooks are adapted from lectures and workshops I've given on these\ntopics at University of Washington and at various conferences, meetings, and\nworkshops around the world.",
"_____no_output_____"
],
[
"## Index\n\n1. [Introduction](00-Introduction.ipynb)\n2. [How to Run Python Code](01-How-to-Run-Python-Code.ipynb)\n3. [Basic Python Syntax](02-Basic-Python-Syntax.ipynb)\n4. [Python Semantics: Variables](03-Semantics-Variables.ipynb)\n5. [Python Semantics: Operators](04-Semantics-Operators.ipynb)\n6. [Built-In Scalar Types](05-Built-in-Scalar-Types.ipynb)\n7. [Built-In Data Structures](06-Built-in-Data-Structures.ipynb)\n8. [Control Flow Statements](07-Control-Flow-Statements.ipynb)\n9. [Defining Functions](08-Defining-Functions.ipynb)\n10. [Errors and Exceptions](09-Errors-and-Exceptions.ipynb)\n11. [Iterators](10-Iterators.ipynb)\n12. [List Comprehensions](11-List-Comprehensions.ipynb)\n13. [Generators and Generator Expressions](12-Generators.ipynb)\n14. [Modules and Packages](13-Modules-and-Packages.ipynb)\n15. [Strings and Regular Expressions](14-Strings-and-Regular-Expressions.ipynb)\n16. [Preview of Data Science Tools](15-Preview-of-Data-Science-Tools.ipynb)\n17. [Resources for Further Learning](16-Further-Resources.ipynb)\n18. [Appendix: Code To Reproduce Figures](17-Figures.ipynb)",
"_____no_output_____"
],
[
"## License\n\nThis material is released under the \"No Rights Reserved\" [CC0](LICENSE)\nlicense, and thus you are free to re-use, modify, build-on, and enhance\nthis material for any purpose.\n\nThat said, I request (but do not require) that if you use or adapt this material,\nyou include a proper attribution and/or citation; for example\n\n> *A Whirlwind Tour of Python* by Jake VanderPlas (O’Reilly). Copyright 2016 O’Reilly Media, Inc., 978-1-491-96465-1\n\nRead more about CC0 [here](https://creativecommons.org/share-your-work/public-domain/cc0/).",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d096c5c41c7285290f594b4a594bd24bb2838ca2 | 31,745 | ipynb | Jupyter Notebook | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical | a44511d5ae70b42e326987a2e11d5245e6bb1155 | [
"Apache-2.0"
] | 105 | 2020-06-07T20:18:42.000Z | 2022-03-31T08:35:45.000Z | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical | a44511d5ae70b42e326987a2e11d5245e6bb1155 | [
"Apache-2.0"
] | 69 | 2020-06-07T15:02:19.000Z | 2022-01-20T06:28:44.000Z | examples/PyVertical Example.ipynb | Koukyosyumei/PyVertical | a44511d5ae70b42e326987a2e11d5245e6bb1155 | [
"Apache-2.0"
] | 45 | 2020-06-08T15:22:52.000Z | 2022-03-08T07:59:56.000Z | 74.344262 | 10,140 | 0.79329 | [
[
[
"epochs = 5",
"_____no_output_____"
]
],
[
[
"# Example - Simple Vertically Partitioned Split Neural Network\n\n- <b>Alice</b>\n - Has model Segment 1\n - Has the handwritten Images\n- <b>Bob</b>\n - Has model Segment 2\n - Has the image Labels\n \nBased on [SplitNN - Tutorial 3](https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/advanced/split_neural_network/Tutorial%203%20-%20Folded%20Split%20Neural%20Network.ipynb) from Adam J Hall - Twitter: [@AJH4LL](https://twitter.com/AJH4LL) · GitHub: [@H4LL](https://github.com/H4LL)\n\nAuthors:\n- Pavlos Papadopoulos · GitHub: [@pavlos-p](https://github.com/pavlos-p)\n- Tom Titcombe · GitHub: [@TTitcombe](https://github.com/TTitcombe)\n- Robert Sandmann · GitHub: [@rsandmann](https://github.com/rsandmann)\n",
"_____no_output_____"
]
],
[
[
"class SplitNN:\n def __init__(self, models, optimizers):\n self.models = models\n self.optimizers = optimizers\n\n self.data = []\n self.remote_tensors = []\n\n def forward(self, x):\n data = []\n remote_tensors = []\n\n data.append(self.models[0](x))\n\n if data[-1].location == self.models[1].location:\n remote_tensors.append(data[-1].detach().requires_grad_())\n else:\n remote_tensors.append(\n data[-1].detach().move(self.models[1].location).requires_grad_()\n )\n\n i = 1\n while i < (len(models) - 1):\n data.append(self.models[i](remote_tensors[-1]))\n\n if data[-1].location == self.models[i + 1].location:\n remote_tensors.append(data[-1].detach().requires_grad_())\n else:\n remote_tensors.append(\n data[-1].detach().move(self.models[i + 1].location).requires_grad_()\n )\n\n i += 1\n\n data.append(self.models[i](remote_tensors[-1]))\n\n self.data = data\n self.remote_tensors = remote_tensors\n\n return data[-1]\n\n def backward(self):\n for i in range(len(models) - 2, -1, -1):\n if self.remote_tensors[i].location == self.data[i].location:\n grads = self.remote_tensors[i].grad.copy()\n else:\n grads = self.remote_tensors[i].grad.copy().move(self.data[i].location)\n \n self.data[i].backward(grads)\n\n def zero_grads(self):\n for opt in self.optimizers:\n opt.zero_grad()\n\n def step(self):\n for opt in self.optimizers:\n opt.step()",
"_____no_output_____"
],
[
"import sys\nsys.path.append('../')\n\nimport torch\nfrom torchvision import datasets, transforms\nfrom torch import nn, optim\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import ToTensor\n\nimport syft as sy\n\nfrom src.dataloader import VerticalDataLoader\nfrom src.psi.util import Client, Server\nfrom src.utils import add_ids\n\nhook = sy.TorchHook(torch)",
"_____no_output_____"
],
[
"# Create dataset\ndata = add_ids(MNIST)(\".\", download=True, transform=ToTensor()) # add_ids adds unique IDs to data points\n\n# Batch data\ndataloader = VerticalDataLoader(data, batch_size=128) # partition_dataset uses by default \"remove_data=True, keep_order=False\"",
"_____no_output_____"
]
],
[
[
"## Check if the datasets are unordered\nIn MNIST, we have 2 datasets (the images and the labels).",
"_____no_output_____"
]
],
[
[
"# We need matplotlib library to plot the dataset\nimport matplotlib.pyplot as plt\n\n# Plot the first 10 entries of the labels and the dataset\nfigure = plt.figure()\nnum_of_entries = 10\nfor index in range(1, num_of_entries + 1):\n plt.subplot(6, 10, index)\n plt.axis('off')\n plt.imshow(dataloader.dataloader1.dataset.data[index].numpy().squeeze(), cmap='gray_r')\n print(dataloader.dataloader2.dataset[index][0], end=\" \")",
"1 4 0 7 0 1 5 1 5 1 "
]
],
[
[
"## Implement PSI and order the datasets accordingly",
"_____no_output_____"
]
],
[
[
"# Compute private set intersection\nclient_items = dataloader.dataloader1.dataset.get_ids()\nserver_items = dataloader.dataloader2.dataset.get_ids()\n\nclient = Client(client_items)\nserver = Server(server_items)\n\nsetup, response = server.process_request(client.request, len(client_items))\nintersection = client.compute_intersection(setup, response)\n\n# Order data\ndataloader.drop_non_intersecting(intersection)\ndataloader.sort_by_ids()",
"_____no_output_____"
]
],
[
[
"## Check again if the datasets are ordered",
"_____no_output_____"
]
],
[
[
"# We need matplotlib library to plot the dataset\nimport matplotlib.pyplot as plt\n\n# Plot the first 10 entries of the labels and the dataset\nfigure = plt.figure()\nnum_of_entries = 10\nfor index in range(1, num_of_entries + 1):\n plt.subplot(6, 10, index)\n plt.axis('off')\n plt.imshow(dataloader.dataloader1.dataset.data[index].numpy().squeeze(), cmap='gray_r')\n print(dataloader.dataloader2.dataset[index][0], end=\" \")",
"1 6 7 3 9 1 8 1 9 8 "
],
[
"torch.manual_seed(0)\n\n# Define our model segments\n\ninput_size = 784\nhidden_sizes = [128, 640]\noutput_size = 10\n\nmodels = [\n nn.Sequential(\n nn.Linear(input_size, hidden_sizes[0]),\n nn.ReLU(),\n nn.Linear(hidden_sizes[0], hidden_sizes[1]),\n nn.ReLU(),\n ),\n nn.Sequential(nn.Linear(hidden_sizes[1], output_size), nn.LogSoftmax(dim=1)),\n]\n\n# Create optimisers for each segment and link to them\noptimizers = [\n optim.SGD(model.parameters(), lr=0.03,)\n for model in models\n]\n\n# create some workers\nalice = sy.VirtualWorker(hook, id=\"alice\")\nbob = sy.VirtualWorker(hook, id=\"bob\")\n\n# Send Model Segments to model locations\nmodel_locations = [alice, bob]\nfor model, location in zip(models, model_locations):\n model.send(location)\n\n#Instantiate a SpliNN class with our distributed segments and their respective optimizers\nsplitNN = SplitNN(models, optimizers)",
"_____no_output_____"
],
[
"def train(x, target, splitNN):\n \n #1) Zero our grads\n splitNN.zero_grads()\n \n #2) Make a prediction\n pred = splitNN.forward(x)\n \n #3) Figure out how much we missed by\n criterion = nn.NLLLoss()\n loss = criterion(pred, target)\n \n #4) Backprop the loss on the end layer\n loss.backward()\n \n #5) Feed Gradients backward through the nework\n splitNN.backward()\n \n #6) Change the weights\n splitNN.step()\n \n return loss, pred",
"_____no_output_____"
],
[
"for i in range(epochs):\n running_loss = 0\n correct_preds = 0\n total_preds = 0\n\n for (data, ids1), (labels, ids2) in dataloader:\n # Train a model\n data = data.send(models[0].location)\n data = data.view(data.shape[0], -1)\n labels = labels.send(models[-1].location)\n\n # Call model\n loss, preds = train(data, labels, splitNN)\n\n # Collect statistics\n running_loss += loss.get()\n correct_preds += preds.max(1)[1].eq(labels).sum().get().item()\n total_preds += preds.get().size(0)\n\n print(f\"Epoch {i} - Training loss: {running_loss/len(dataloader):.3f} - Accuracy: {100*correct_preds/total_preds:.3f}\")",
"C:\\Users\\Pavlos\\anaconda3\\envs\\pyvertical-dev\\lib\\site-packages\\syft\\frameworks\\torch\\tensors\\interpreters\\native.py:156: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.\n to_return = self.native_grad\n"
],
[
"print(\"Labels pointing to: \", labels)\nprint(\"Images pointing to: \", data)",
"Labels pointing to: (Wrapper)>[PointerTensor | me:88412365445 -> bob:61930132897]\nImages pointing to: (Wrapper)>[PointerTensor | me:17470208323 -> alice:25706803556]\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d096c84f7ddede0c7e6b258d8dd16370c94cbd53 | 75,714 | ipynb | Jupyter Notebook | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo | 9c9a73487fbcc441064a6e5e609c8af0ea9088d0 | [
"BSD-3-Clause"
] | 180 | 2019-02-02T13:00:19.000Z | 2022-03-31T07:06:22.000Z | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo | 9c9a73487fbcc441064a6e5e609c8af0ea9088d0 | [
"BSD-3-Clause"
] | 25 | 2019-01-20T05:12:26.000Z | 2021-05-17T04:26:08.000Z | notebooks/1.3_multiqubit_representation_and_operations.ipynb | speed1313/quantum-native-dojo | 9c9a73487fbcc441064a6e5e609c8af0ea9088d0 | [
"BSD-3-Clause"
] | 64 | 2019-02-04T05:10:15.000Z | 2022-03-07T01:52:17.000Z | 43.639193 | 4,584 | 0.563832 | [
[
[
"## 1-3. 複数量子ビットの記述",
"_____no_output_____"
],
[
"ここまでは1量子ビットの状態とその操作(演算)の記述について学んできた。この章の締めくくりとして、$n$個の量子ビットがある場合の状態の記述について学んでいこう。テンソル積がたくさん出てきてややこしいが、コードをいじりながら身につけていってほしい。\n\n$n$個の**古典**ビットの状態は$n$個の$0,1$の数字によって表現され、そのパターンの総数は$2^n$個ある。\n量子力学では、これらすべてのパターンの重ね合わせ状態が許されているので、$n$個の**量子**ビットの状態$|\\psi \\rangle$はどのビット列がどのような重みで重ね合わせになっているかという$2^n$個の複素確率振幅で記述される:\n\n$$\n\\begin{eqnarray}\n|\\psi \\rangle &= & \nc_{00...0} |00...0\\rangle +\nc_{00...1} |00...1\\rangle + \\cdots +\nc_{11...1} |11...1\\rangle =\n\\left(\n\\begin{array}{c}\nc_{00...0}\n\\\\\nc_{00...1}\n\\\\\n\\vdots\n\\\\\nc_{11...1}\n\\end{array}\n\\right).\n\\end{eqnarray}\n$$\n\nただし、\n複素確率振幅は規格化\n$\\sum _{i_1,..., i_n} |c_{i_1...i_n}|^2=1$\nされているものとする。 \nそして、この$n$量子ビットの量子状態を測定するとビット列$i_1 ... i_n$が確率\n\n$$\n\\begin{eqnarray}\np_{i_1 ... i_n} &=&|c_{i_1 ... i_n}|^2\n\\label{eq02}\n\\end{eqnarray}\n$$\n\nでランダムに得られ、測定後の状態は$|i_1 \\dotsc i_n\\rangle$となる。\n\n**このように**$n$**量子ビットの状態は、**$n$**に対して指数的に大きい**$2^n$**次元の複素ベクトルで記述する必要があり、ここに古典ビットと量子ビットの違いが顕著に現れる**。\nそして、$n$量子ビット系に対する操作は$2^n \\times 2^n$次元のユニタリ行列として表される。 \n言ってしまえば、量子コンピュータとは、量子ビット数に対して指数的なサイズの複素ベクトルを、物理法則に従ってユニタリ変換するコンピュータのことなのである。\n\n※ここで、複数量子ビットの順番と表記の関係について注意しておく。状態をケットで記述する際に、「1番目」の量子ビット、「2番目」の量子ビット、……の状態に対応する0と1を左から順番に並べて表記した。例えば$|011\\rangle$と書けば、1番目の量子ビットが0、2番目の量子ビットが1、3番目の量子ビットが1である状態を表す。一方、例えば011を2進数の表記と見た場合、上位ビットが左、下位ビットが右となることに注意しよう。すなわち、一番左の0は最上位ビットであって$2^2$の位に対応し、真ん中の1は$2^1$の位、一番右の1は最下位ビットであって$2^0=1$の位に対応する。つまり、「$i$番目」の量子ビットは、$n$桁の2進数表記の$n-i+1$桁目に対応している。このことは、SymPyなどのパッケージで複数量子ビットを扱う際に気を付ける必要がある(下記「SymPyを用いた演算子のテンソル積」も参照)。\n\n(詳細は Nielsen-Chuang の `1.2.1 Multiple qbits` を参照)",
"_____no_output_____"
],
[
"### 例:2量子ビットの場合\n2量子ビットの場合は、 00, 01, 10, 11 の4通りの状態の重ね合わせをとりうるので、その状態は一般的に\n\n$$\nc_{00} |00\\rangle + c_{01} |01\\rangle + c_{10}|10\\rangle + c_{11} |11\\rangle = \n\\left( \n\\begin{array}{c}\nc_{00}\n\\\\\nc_{01}\n\\\\\nc_{10}\n\\\\\nc_{11}\n\\end{array}\n\\right)\n$$\n\nとかける。",
"_____no_output_____"
],
[
"一方、2量子ビットに対する演算は$4 \\times 4$行列で書け、各列と各行はそれぞれ $\\langle00|,\\langle01|,\\langle10|, \\langle11|, |00\\rangle,|01\\rangle,|10\\rangle, |01\\rangle$ に対応する。 \nこのような2量子ビットに作用する演算としてもっとも重要なのが**制御NOT演算(CNOT演算)**であり、\n行列表示では\n\n$$\n\\begin{eqnarray}\n\\Lambda(X) =\n\\left(\n\\begin{array}{cccc}\n1 & 0 & 0& 0\n\\\\\n0 & 1 & 0& 0\n\\\\\n0 & 0 & 0 & 1\n\\\\\n0 & 0 & 1& 0\n\\end{array}\n\\right)\n\\end{eqnarray}\n$$\n\nとなる。 \nCNOT演算が2つの量子ビットにどのように作用するか見てみよう。まず、1つ目の量子ビットが$|0\\rangle$の場合、$c_{10} = c_{11} = 0$なので、\n\n$$\n\\Lambda(X)\n\\left(\n\\begin{array}{c}\nc_{00}\\\\\nc_{01}\\\\\n0\\\\\n0\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c}\nc_{00}\\\\\nc_{01}\\\\\n0\\\\\n0\n\\end{array}\n\\right)\n$$\n\nとなり、状態は変化しない。一方、1つ目の量子ビットが$|1\\rangle$の場合、$c_{00} = c_{01} = 0$なので、\n\n$$\n\\Lambda(X)\n\\left(\n\\begin{array}{c}\n0\\\\\n0\\\\\nc_{10}\\\\\nc_{11}\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c}\n0\\\\\n0\\\\\nc_{11}\\\\\nc_{10}\n\\end{array}\n\\right)\n$$\n\nとなり、$|10\\rangle$と$|11\\rangle$の確率振幅が入れ替わる。すなわち、2つ目の量子ビットが反転している。\n\nつまり、CNOT演算は1つ目の量子ビットをそのままに保ちつつ、\n\n- 1つ目の量子ビットが$|0\\rangle$の場合は、2つ目の量子ビットにも何もしない(恒等演算$I$が作用)\n- 1つ目の量子ビットが$|1\\rangle$の場合は、2つ目の量子ビットを反転させる($X$が作用)\n\nという効果を持つ。\nそこで、1つ目の量子ビットを**制御量子ビット**、2つ目の量子ビットを**ターゲット量子ビット**と呼ぶ。\n\nこのCNOT演算の作用は、$\\oplus$を mod 2の足し算、つまり古典計算における排他的論理和(XOR)とすると、\n\n$$\n\\begin{eqnarray}\n\\Lambda(X) |ij \\rangle = |i \\;\\; (i\\oplus j)\\rangle \\:\\:\\: (i,j=0,1)\n\\end{eqnarray}\n$$\n\nとも書ける。よって、CNOT演算は古典計算でのXORを可逆にしたものとみなせる\n(ユニタリー行列は定義$U^\\dagger U = U U^\\dagger = I$より可逆であることに注意)。\n例えば、1つ目の量子ビットを$|0\\rangle$と$|1\\rangle$の\n重ね合わせ状態にし、2つ目の量子ビットを$|0\\rangle$として\n\n$$\n\\begin{eqnarray}\n\\frac{1}{\\sqrt{2}}(|0\\rangle + |1\\rangle )\\otimes |0\\rangle =\n\\frac{1}{\\sqrt{2}}\n\\left(\n\\begin{array}{c}\n1\n\\\\\n0\n\\\\\n1\n\\\\\n0\n\\end{array}\n\\right)\n\\end{eqnarray}\n$$\n\nにCNOTを作用させると、\n\n$$\n\\begin{eqnarray}\n\\frac{1}{\\sqrt{2}}( |00\\rangle + |11\\rangle ) =\n\\frac{1}{\\sqrt{2}}\n\\left(\n\\begin{array}{c}\n1\n\\\\\n0\n\\\\\n0\n\\\\\n1\n\\end{array}\n\\right)\n\\end{eqnarray}\n$$\n\nが得られ、2つ目の量子ビットがそのままである状態$|00\\rangle$と反転された状態$|11\\rangle$の重ね合わせになる。(記号$\\otimes$については次節参照)\n\nさらに、CNOT ゲートを組み合わせることで重要な2量子ビットゲートである**SWAP ゲート**を作ることができる。\n\n$$\\Lambda(X)_{i,j}$$\n\nを$i$番目の量子ビットを制御、$j$番目の量子ビットをターゲットとするCNOT ゲートとして、\n\n$$\n\\begin{align}\n\\mathrm{SWAP} &= \\Lambda(X)_{1,2} \\Lambda(X)_{2,1} \\Lambda(X)_{1,2}\\\\\n&=\n\\left(\n\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0\n\\end{array}\n\\right)\\\\\n&=\n\\left(\n\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1\n\\end{array}\n\\right)\n\\end{align}\n$$\n\nのように書ける。これは1 番目の量子ビットと2 番目の量子ビットが交換するゲートであることが分かる。\n\nこのことは、上記のmod 2の足し算$\\oplus$を使った表記で簡単に確かめることができる。3つのCNOTゲート$\\Lambda(X)_{1,2} \\Lambda(X)_{2,1} \\Lambda(X)_{1,2}$の$|ij\\rangle$への作用を1ステップずつ書くと、$i \\oplus (i \\oplus j) = (i \\oplus i) \\oplus j = 0 \\oplus j = j$であることを使って、\n\n$$\n\\begin{align}\n|ij\\rangle &\\longrightarrow\n|i \\;\\; (i\\oplus j)\\rangle\\\\\n&\\longrightarrow\n|(i\\oplus (i\\oplus j)) \\;\\; (i\\oplus j)\\rangle =\n|j \\;\\; (i\\oplus j)\\rangle\\\\\n&\\longrightarrow\n|j \\;\\; (j\\oplus (i\\oplus j))\\rangle 
=\n|ji\\rangle\n\\end{align}\n$$\n\nとなり、2つの量子ビットが交換されていることが分かる。\n\n(詳細は Nielsen-Chuang の `1.3.2 Multiple qbit gates` を参照)",
"_____no_output_____"
],
[
"### テンソル積の計算\n手計算や解析計算で威力を発揮するのは、**テンソル積**($\\otimes$)である。\nこれは、複数の量子ビットがある場合に、それをどのようにして、上で見た大きな一つのベクトルへと変換するのか?という計算のルールを与えてくれる。\n\n量子力学の世界では、2つの量子系があってそれぞれの状態が$|\\psi \\rangle$と$|\\phi \\rangle$のとき、\n\n$$\n|\\psi \\rangle \\otimes |\\phi\\rangle\n$$\n\nとテンソル積 $\\otimes$ を用いて書く。このような複数の量子系からなる系のことを**複合系**と呼ぶ。例えば2量子ビット系は複合系である。\n\n基本的にはテンソル積は、**多項式と同じような計算ルール**で計算してよい。\n例えば、\n\n$$ \n(\\alpha |0\\rangle + \\beta |1\\rangle )\\otimes (\\gamma |0\\rangle + \\delta |1\\rangle )\n= \\alpha \\gamma |0\\rangle |0\\rangle + \\alpha \\delta |0\\rangle |1\\rangle + \\beta \\gamma |1 \\rangle | 0\\rangle + \\beta \\delta |1\\rangle |1\\rangle \n$$\n\nのように計算する。列ベクトル表示すると、$|00\\rangle$, $|01\\rangle$, $|10\\rangle$, $|11\\rangle$に対応する4次元ベクトル、\n\n$$\n\\left(\n\\begin{array}{c}\n\\alpha\n\\\\\n\\beta\n\\end{array}\n\\right)\n\\otimes \n\\left(\n\\begin{array}{c}\n\\gamma\n\\\\\n\\delta\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c}\n\\alpha \\gamma\n\\\\\n\\alpha \\delta\n\\\\\n\\beta \\gamma\n\\\\\n\\beta \\delta\n\\end{array}\n\\right)\n$$\n\nを得る計算になっている。",
"_____no_output_____"
],
[
"### SymPyを用いたテンソル積の計算\n",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image, display_png\nfrom sympy import *\nfrom sympy.physics.quantum import *\nfrom sympy.physics.quantum.qubit import Qubit,QubitBra\nfrom sympy.physics.quantum.gate import X,Y,Z,H,S,T,CNOT,SWAP, CPHASE\ninit_printing() # ベクトルや行列を綺麗に表示するため",
"_____no_output_____"
],
[
"# Google Colaboratory上でのみ実行してください\nfrom IPython.display import HTML\ndef setup_mathjax():\n display(HTML('''\n <script>\n if (!window.MathJax && window.google && window.google.colab) {\n window.MathJax = {\n 'tex2jax': {\n 'inlineMath': [['$', '$'], ['\\\\(', '\\\\)']],\n 'displayMath': [['$$', '$$'], ['\\\\[', '\\\\]']],\n 'processEscapes': true,\n 'processEnvironments': true,\n 'skipTags': ['script', 'noscript', 'style', 'textarea', 'code'],\n 'displayAlign': 'center',\n },\n 'HTML-CSS': {\n 'styles': {'.MathJax_Display': {'margin': 0}},\n 'linebreaks': {'automatic': true},\n // Disable to prevent OTF font loading, which aren't part of our\n // distribution.\n 'imageFont': null,\n },\n 'messageStyle': 'none'\n };\n var script = document.createElement(\"script\");\n script.src = \"https://colab.research.google.com/static/mathjax/MathJax.js?config=TeX-AMS_HTML-full,Safe\";\n document.head.appendChild(script);\n }\n </script>\n '''))\nget_ipython().events.register('pre_run_cell', setup_mathjax)",
"_____no_output_____"
],
[
"a,b,c,d = symbols('alpha,beta,gamma,delta')\npsi = a*Qubit('0')+b*Qubit('1')\nphi = c*Qubit('0')+d*Qubit('1')",
"_____no_output_____"
],
[
"TensorProduct(psi, phi) #テンソル積",
"_____no_output_____"
],
[
"represent(TensorProduct(psi, phi))",
"_____no_output_____"
]
],
[
[
"さらに$|\\psi\\rangle$とのテンソル積をとると8次元のベクトルになる:\n",
"_____no_output_____"
]
],
[
[
"represent(TensorProduct(psi,TensorProduct(psi, phi)))",
"_____no_output_____"
]
],
[
[
"### 演算子のテンソル積\n演算子についても何番目の量子ビットに作用するのか、というのをテンソル積をもちいて表現することができる。たとえば、1つめの量子ビットには$A$という演算子、2つめの量子ビットには$B$という演算子を作用させるという場合には、\n\n$$ A \\otimes B$$\n\nとしてテンソル積演算子が与えられる。\n$A$と$B$をそれぞれ、2×2の行列とすると、$A\\otimes B$は4×4の行列として\n\n$$\n\\left(\n\\begin{array}{cc}\na_{11} & a_{12}\n\\\\\na_{21} & a_{22}\n\\end{array}\n\\right)\n\\otimes \n\\left(\n\\begin{array}{cc}\nb_{11} & b_{12}\n\\\\\nb_{21} & b_{22}\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{cccc}\na_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12}\n\\\\\na_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22}\n\\\\\na_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12}\n\\\\\na_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22}\n\\end{array}\n\\right)\n$$\n\nのように計算される。\n\nテンソル積状態 \n\n$$|\\psi \\rangle \\otimes | \\phi \\rangle $$ \n\nに対する作用は、\n\n$$ (A|\\psi \\rangle ) \\otimes (B |\\phi \\rangle )$$\n\nとなり、それぞれの部分系$|\\psi \\rangle$と$|\\phi\\rangle$に$A$と$B$が作用する。\n足し算に対しては、多項式のように展開してそれぞれの項を作用させればよい。\n\n$$\n(A+C)\\otimes (B+D) |\\psi \\rangle \\otimes | \\phi \\rangle =\n(A \\otimes B +A \\otimes D + C \\otimes B + C \\otimes D) |\\psi \\rangle \\otimes | \\phi \\rangle\\\\ =\n(A|\\psi \\rangle) \\otimes (B| \\phi \\rangle)\n+(A|\\psi \\rangle) \\otimes (D| \\phi \\rangle)\n+(C|\\psi \\rangle) \\otimes (B| \\phi \\rangle)\n+(C|\\psi \\rangle) \\otimes (D| \\phi \\rangle)\n$$\n\nテンソル積やテンソル積演算子は左右横並びで書いているが、本当は\n\n$$\n\\left(\n\\begin{array}{c}\nA\n\\\\\n\\otimes \n\\\\\nB\n\\end{array}\n\\right)\n\\begin{array}{c}\n|\\psi \\rangle \n\\\\\n\\otimes \n\\\\\n|\\phi\\rangle\n\\end{array}\n$$\n\nのように縦に並べた方がその作用の仕方がわかりやすいのかもしれない。\n\n例えば、CNOT演算を用いて作られるエンタングル状態は、\n\n$$\n\\left(\n\\begin{array}{c}\n|0\\rangle \\langle 0|\n\\\\\n\\otimes \n\\\\\nI\n\\end{array}\n+\n\\begin{array}{c}\n|1\\rangle \\langle 1|\n\\\\\n\\otimes \n\\\\\nX\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\n\\frac{1}{\\sqrt{2}}(|0\\rangle + |1\\rangle)\n\\\\\n\\otimes \n\\\\\n|0\\rangle\n\\end{array}\n\\right) =\n\\frac{1}{\\sqrt{2}}\\left(\n\\begin{array}{c}\n|0 \\rangle \n\\\\\n\\otimes \n\\\\\n|0\\rangle\n\\end{array}\n+\n\\begin{array}{c}\n|1 \\rangle \n\\\\\n\\otimes \n\\\\\n|1\\rangle\n\\end{array}\n\\right)\n$$\n\nのようになる。",
"_____no_output_____"
],
[
"### SymPyを用いた演算子のテンソル積\nSymPyで演算子を使用する時は、何桁目の量子ビットに作用する演算子かを常に指定する。「何**番目**」ではなく2進数表記の「何**桁目**」であることに注意しよう。$n$量子ビットのうちの左から$i$番目の量子ビットを指定する場合、SymPyのコードでは`n-i`を指定する(0を基点とするインデックス)。",
"_____no_output_____"
],
[
"`H(0)` は、1量子ビット空間で表示すると",
"_____no_output_____"
]
],
[
[
"represent(H(0),nqubits=1)",
"_____no_output_____"
]
],
[
[
"2量子ビット空間では$H \\otimes I$に対応しており、その表示は",
"_____no_output_____"
]
],
[
[
"represent(H(1),nqubits=2)",
"_____no_output_____"
]
],
[
[
"CNOT演算は、",
"_____no_output_____"
]
],
[
[
"represent(CNOT(1,0),nqubits=2)",
"_____no_output_____"
]
],
[
[
"パウリ演算子のテンソル積$X\\otimes Y \\otimes Z$も、",
"_____no_output_____"
]
],
[
[
"represent(X(2)*Y(1)*Z(0),nqubits=3)",
"_____no_output_____"
]
],
[
[
"このようにして、上記のテンソル積のルールを実際にたしかめてみることができる。",
"_____no_output_____"
],
[
"### 複数の量子ビットの一部分だけを測定した場合\n\n複数の量子ビットを全て測定した場合の測定結果の確率については既に説明した。複数の量子ビットのうち、一部だけを測定することもできる。その場合、測定結果の確率は、測定結果に対応する(部分系の)基底で射影したベクトルの長さの2乗になり、測定後の状態は射影されたベクトルを規格化したものになる。\n\n具体的に見ていこう。以下の$n$量子ビットの状態を考える。\n\\begin{align}\n|\\psi\\rangle &=\nc_{00...0} |00...0\\rangle +\nc_{00...1} |00...1\\rangle + \\cdots +\nc_{11...1} |11...1\\rangle\\\\\n&= \\sum_{i_1 \\dotsc i_n} c_{i_1 \\dotsc i_n} |i_1 \\dotsc i_n\\rangle =\n\\sum_{i_1 \\dotsc i_n} c_{i_1 \\dotsc i_n} |i_1\\rangle \\otimes \\cdots \\otimes |i_n\\rangle\n\\end{align}\n1番目の量子ビットを測定するとしよう。1つ目の量子ビットの状態空間の正規直交基底$|0\\rangle$, $|1\\rangle$に対する射影演算子はそれぞれ$|0\\rangle\\langle0|$, $|1\\rangle\\langle1|$と書ける。1番目の量子ビットを$|0\\rangle$に射影し、他の量子ビットには何もしない演算子\n\n$$\n|0\\rangle\\langle0| \\otimes I \\otimes \\cdots \\otimes I\n$$\n\nを使って、測定値0が得られる確率は\n\n$$\n\\bigl\\Vert \\bigl(|0\\rangle\\langle0| \\otimes I \\otimes \\cdots \\otimes I\\bigr) |\\psi\\rangle \\bigr\\Vert^2 =\n\\langle \\psi | \\bigl(|0\\rangle\\langle0| \\otimes I \\otimes \\cdots \\otimes I\\bigr) | \\psi \\rangle\n$$\n\nである。ここで\n\n$$\n\\bigl(|0\\rangle\\langle0| \\otimes I \\otimes \\cdots \\otimes I\\bigr) | \\psi \\rangle =\n\\sum_{i_2 \\dotsc i_n} c_{0 i_2 \\dotsc i_n} |0\\rangle \\otimes |i_2\\rangle \\otimes \\cdots \\otimes |i_n\\rangle\n$$\n\nなので、求める確率は\n\n$$\np_0 = \\sum_{i_2 \\dotsc i_n} |c_{0 i_2 \\dotsc i_n}|^2\n$$\n\nとなり、測定後の状態は\n\n$$\n\\frac{1}{\\sqrt{p_0}}\\sum_{i_2 \\dotsc i_n} c_{0 i_2 \\dotsc i_n} |0\\rangle \\otimes |i_2\\rangle \\otimes \\cdots \\otimes |i_n\\rangle\n$$\n\nとなる。0と1を入れ替えれば、測定値1が得られる確率と測定後の状態が得られる。\n\nここで求めた$p_0$, $p_1$の表式は、測定値$i_1, \\dotsc, i_n$が得られる同時確率分布$p_{i_1, \\dotsc, i_n}$から計算される$i_1$の周辺確率分布と一致することに注意しよう。実際、\n\n$$\n\\sum_{i_2, \\dotsc, i_n} p_{i_1, \\dotsc, i_n} = \\sum_{i_2, \\dotsc, i_n} |c_{i_1, \\dotsc, i_n}|^2 = p_{i_1}\n$$\n\nである。\n\n測定される量子ビットを増やし、最初の$k$個の量子ビットを測定する場合も同様に計算できる。測定結果$i_1, \\dotsc, i_k$を得る確率は\n\n$$\np_{i_1, \\dotsc, i_k} = \\sum_{i_{k+1}, \\dotsc, i_n} |c_{i_1, \\dotsc, i_n}|^2\n$$\n\nであり、測定後の状態は\n\n$$\n\\frac{1}{\\sqrt{p_{i_1, \\dotsc, i_k}}}\\sum_{i_{k+1} \\dotsc i_n} c_{i_1 \\dotsc i_n} |i_1 \\rangle \\otimes \\cdots \\otimes |i_n\\rangle\n$$\n\nとなる。(和をとるのは$i_{k+1},\\cdots,i_n$だけであることに注意)",
"_____no_output_____"
],
[
"SymPyを使ってさらに具体的な例を見てみよう。H演算とCNOT演算を組み合わせて作られる次の状態を考える。\n$$\n|\\psi\\rangle = \\Lambda(X) (H \\otimes H) |0\\rangle \\otimes |0\\rangle = \\frac{|00\\rangle + |10\\rangle + |01\\rangle + |11\\rangle}{2}\n$$",
"_____no_output_____"
]
],
[
[
"psi = qapply(CNOT(1, 0)*H(1)*H(0)*Qubit('00'))\npsi",
"_____no_output_____"
]
],
[
[
"この状態の1つ目の量子ビットを測定して0になる確率は\n\n$$\np_0 = \\langle \\psi | \\bigl( |0\\rangle\\langle0| \\otimes I \\bigr) | \\psi \\rangle =\n\\left(\\frac{\\langle 00 | + \\langle 10 | + \\langle 01 | + \\langle 11 |}{2}\\right)\n\\left(\\frac{| 00 \\rangle + | 01 \\rangle}{2}\\right) =\n\\frac{1}{2}\n$$\n\nで、測定後の状態は\n\n$$\n\\frac{1}{\\sqrt{p_0}} \\bigl( |0\\rangle\\langle0| \\otimes I \\bigr) | \\psi \\rangle =\n\\frac{| 00 \\rangle + | 01 \\rangle}{\\sqrt{2}}\n$$\n\nである。",
"_____no_output_____"
],
[
"この結果をSymPyでも計算してみよう。SymPyには測定用の関数が数種類用意されていて、一部の量子ビットを測定した場合の確率と測定後の状態を計算するには、`measure_partial`を用いればよい。測定する状態と、測定を行う量子ビットのインデックスを引数として渡すと、測定後の状態と測定の確率の組がリストとして出力される。1つめの量子ビットが0だった場合の量子状態と確率は`[0]`要素を参照すればよい。",
"_____no_output_____"
]
],
[
[
"from sympy.physics.quantum.qubit import measure_all, measure_partial\nmeasured_state_and_probability = measure_partial(psi, (1,))",
"_____no_output_____"
],
[
"measured_state_and_probability[0]",
"_____no_output_____"
]
],
[
[
"上で手計算した結果と合っていることが分かる。測定結果が1だった場合も同様に計算できる。",
"_____no_output_____"
]
],
[
[
"measured_state_and_probability[1]",
"_____no_output_____"
]
],
[
[
"---\n## コラム:ユニバーサルゲートセットとは\n\n古典計算機では、NANDゲート(論理積ANDの出力を反転したもの)さえあれば、これをいくつか組み合わせることで、任意の論理演算が実行できることが知られている。 \nそれでは、量子計算における対応物、すなわち任意の量子計算を実行するために最低限必要な量子ゲートは何であろうか? \n実は、本節で学んだ\n$$\\{H, T, {\\rm CNOT} \\}$$ \n\nの3種類のゲートがその役割を果たしている、いわゆる**ユニバーサルゲートセット**であることが知られている。 \nこれらをうまく組み合わせることで、任意の量子計算を実行できる、すなわち「**万能量子計算**」が可能である。 \n\n### 【より詳しく知りたい人のための注】\n\n以下では$\\{H, T, {\\rm CNOT} \\}$の3種のゲートの組が如何にしてユニバーサルゲートセットを構成するかを、順を追って説明する。 \n流れとしては、一般の$n$量子ビットユニタリ演算からスタートし、これをより細かい部品にブレイクダウンしていくことで、最終的に上記3種のゲートに行き着くことを見る。\n\n#### ◆ $n$量子ビットユニタリ演算の分解\nまず、任意の$n$量子ビットユニタリ演算は、以下の手順を経て、いくつかの**1量子ビットユニタリ演算**と**CNOTゲート**に分解できる。 \n\n1. 任意の$n$量子ビットユニタリ演算は、いくつかの**2準位ユニタリ演算**の積に分解できる。ここで2準位ユニタリ演算とは、例として3量子ビットの場合、$2^3=8$次元空間のうち2つの基底(e.g., $\\{|000\\rangle, |111\\rangle \\}$)の張る2次元部分空間にのみ作用するユニタリ演算である\n2. 任意の2準位ユニタリ演算は、**制御**$U$**ゲート**(CNOTゲートのNOT部分を任意の1量子ビットユニタリ演算$U$に置き換えたもの)と**Toffoliゲート**(CNOTゲートの制御量子ビットが2つになったもの)から構成できる\n3. 制御$U$ゲートとToffoliゲートは、どちらも**1量子ビットユニタリ演算**と**CNOTゲート**から構成できる\n\n#### ◆ 1量子ビットユニタリ演算の構成\nさらに、任意の1量子ビットユニタリ演算は、$\\{H, T\\}$の2つで構成できる。\n\n1. 任意の1量子ビットユニタリ演算は、オイラーの回転角の法則から、回転ゲート$\\{R_X(\\theta), R_Z(\\theta)\\}$で(厳密に)実現可能である\n2. 実は、ブロッホ球上の任意の回転は、$\\{H, T\\}$のみを用いることで実現可能である(注1)。これはある軸に関する$\\pi$の無理数倍の回転が$\\{H, T\\}$のみから実現できること(**Solovay-Kitaevアルゴリズム**)に起因する\n \n(注1) ブロッホ球上の連続的な回転を、離散的な演算である$\\{H, T\\}$で実現できるか疑問に思われる読者もいるかもしれない。実際、厳密な意味で1量子ビットユニタリ演算を離散的なゲート操作で実現しようとすると、無限個のゲートが必要となる。しかし実際には厳密なユニタリ演算を実現する必要はなく、必要な計算精度$\\epsilon$で任意のユニタリ演算を近似できれば十分である。ここでは、多項式個の$\\{H, T\\}$を用いることで、任意の1量子ビットユニタリ演算を**十分良い精度で近似的に構成できる**ことが、**Solovay-Kitaevの定理** [3] により保証されている。\n\n\n<br>\n \n以上の議論により、3種のゲート$\\{H, T, {\\rm CNOT} \\}$があれば、任意の$n$量子ビットユニタリ演算が実現できることがわかる。\n\nユニバーサルゲートセットや万能量子計算について、より詳しくは以下を参照されたい: \n[1] Nielsen-Chuang の `4.5 Universal quantum gates` \n[2] 藤井 啓祐 「量子コンピュータの基礎と物理との接点」(第62回物性若手夏の学校 講義)DOI: 10.14989/229039 http://mercury.yukawa.kyoto-u.ac.jp/~bussei.kenkyu/archives/1274.html \n[3] レビューとして、C. M. Dawson, M. A. Nielsen, “The Solovay-Kitaev algorithm“, https://arxiv.org/abs/quant-ph/0505030",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d096e5309937c75370ef482dc85dc62e43640450 | 480,421 | ipynb | Jupyter Notebook | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack | ed5e0b3f756f3d311ad0de4229e42c9c0b725416 | [
"Apache-2.0"
] | null | null | null | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack | ed5e0b3f756f3d311ad0de4229e42c9c0b725416 | [
"Apache-2.0"
] | null | null | null | tutorials/Tutorial5_Evaluation.ipynb | carloslago/haystack | ed5e0b3f756f3d311ad0de4229e42c9c0b725416 | [
"Apache-2.0"
] | null | null | null | 31.677502 | 4,636 | 0.585355 | [
[
[
"# Evaluation of a Pipeline and its Components\n\n[](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial5_Evaluation.ipynb)\n\nTo be able to make a statement about the quality of results a question-answering pipeline or any other pipeline in haystack produces, it is important to evaluate it. Furthermore, evaluation allows determining which components of the pipeline can be improved.\nThe results of the evaluation can be saved as CSV files, which contain all the information to calculate additional metrics later on or inspect individual predictions.",
"_____no_output_____"
],
[
"### Prepare environment\n\n#### Colab: Enable the GPU runtime\nMake sure you enable the GPU runtime to experience decent speed in this tutorial.\n**Runtime -> Change Runtime type -> Hardware accelerator -> GPU**\n\n<img src=\"https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg\">",
"_____no_output_____"
]
],
[
[
"# Make sure you have a GPU running\n!nvidia-smi",
"_____no_output_____"
],
[
"# Install the latest release of Haystack in your own environment\n#! pip install farm-haystack\n\n# Install the latest master of Haystack\n!pip install --upgrade pip\n!pip install git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab]",
"_____no_output_____"
]
],
[
[
"## Start an Elasticsearch server\nYou can start Elasticsearch on your local machine instance using Docker. If Docker is not readily available in your environment (eg., in Colab notebooks), then you can manually download and execute Elasticsearch from source.",
"_____no_output_____"
]
],
[
[
"# If Docker is available: Start Elasticsearch as docker container\n# from haystack.utils import launch_es\n# launch_es()\n\n# Alternative in Colab / No Docker environments: Start Elasticsearch from source\n! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q\n! tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz\n! chown -R daemon:daemon elasticsearch-7.9.2\n\nimport os\nfrom subprocess import Popen, PIPE, STDOUT\n\nes_server = Popen(\n [\"elasticsearch-7.9.2/bin/elasticsearch\"], stdout=PIPE, stderr=STDOUT, preexec_fn=lambda: os.setuid(1) # as daemon\n)\n# wait until ES has started\n! sleep 30",
"_____no_output_____"
]
],
[
[
"## Fetch, Store And Preprocess the Evaluation Dataset",
"_____no_output_____"
]
],
[
[
"from haystack.utils import fetch_archive_from_http\n\n# Download evaluation data, which is a subset of Natural Questions development set containing 50 documents with one question per document and multiple annotated answers\ndoc_dir = \"data/tutorial5\"\ns3_url = \"https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset_v2.json.zip\"\nfetch_archive_from_http(url=s3_url, output_dir=doc_dir)",
"_____no_output_____"
],
[
"# make sure these indices do not collide with existing ones, the indices will be wiped clean before data is inserted\ndoc_index = \"tutorial5_docs\"\nlabel_index = \"tutorial5_labels\"",
"_____no_output_____"
],
[
"# Connect to Elasticsearch\nfrom haystack.document_stores import ElasticsearchDocumentStore\n\n# Connect to Elasticsearch\ndocument_store = ElasticsearchDocumentStore(\n host=\"localhost\",\n username=\"\",\n password=\"\",\n index=doc_index,\n label_index=label_index,\n embedding_field=\"emb\",\n embedding_dim=768,\n excluded_meta_data=[\"emb\"],\n)",
"_____no_output_____"
],
[
"from haystack.nodes import PreProcessor\n\n# Add evaluation data to Elasticsearch Document Store\n# We first delete the custom tutorial indices to not have duplicate elements\n# and also split our documents into shorter passages using the PreProcessor\npreprocessor = PreProcessor(\n split_length=200,\n split_overlap=0,\n split_respect_sentence_boundary=False,\n clean_empty_lines=False,\n clean_whitespace=False,\n)\ndocument_store.delete_documents(index=doc_index)\ndocument_store.delete_documents(index=label_index)\n\n# The add_eval_data() method converts the given dataset in json format into Haystack document and label objects. Those objects are then indexed in their respective document and label index in the document store. The method can be used with any dataset in SQuAD format.\ndocument_store.add_eval_data(\n filename=\"data/tutorial5/nq_dev_subset_v2.json\",\n doc_index=doc_index,\n label_index=label_index,\n preprocessor=preprocessor,\n)",
"_____no_output_____"
]
],
[
[
"## Initialize the Two Components of an ExtractiveQAPipeline: Retriever and Reader",
"_____no_output_____"
]
],
[
[
"# Initialize Retriever\nfrom haystack.nodes import ElasticsearchRetriever\n\nretriever = ElasticsearchRetriever(document_store=document_store)\n\n# Alternative: Evaluate dense retrievers (EmbeddingRetriever or DensePassageRetriever)\n# The EmbeddingRetriever uses a single transformer based encoder model for query and document.\n# In contrast, DensePassageRetriever uses two separate encoders for both.\n\n# Please make sure the \"embedding_dim\" parameter in the DocumentStore above matches the output dimension of your models!\n# Please also take care that the PreProcessor splits your files into chunks that can be completely converted with\n# the max_seq_len limitations of Transformers\n# The SentenceTransformer model \"sentence-transformers/multi-qa-mpnet-base-dot-v1\" generally works well with the EmbeddingRetriever on any kind of English text.\n# For more information and suggestions on different models check out the documentation at: https://www.sbert.net/docs/pretrained_models.html\n\n# from haystack.retriever import EmbeddingRetriever, DensePassageRetriever\n# retriever = EmbeddingRetriever(document_store=document_store, model_format=\"sentence_transformers\",\n# embedding_model=\"sentence-transformers/multi-qa-mpnet-base-dot-v1\")\n# retriever = DensePassageRetriever(document_store=document_store,\n# query_embedding_model=\"facebook/dpr-question_encoder-single-nq-base\",\n# passage_embedding_model=\"facebook/dpr-ctx_encoder-single-nq-base\",\n# use_gpu=True,\n# max_seq_len_passage=256,\n# embed_title=True)\n# document_store.update_embeddings(retriever, index=doc_index)",
"_____no_output_____"
],
[
"# Initialize Reader\nfrom haystack.nodes import FARMReader\n\nreader = FARMReader(\"deepset/roberta-base-squad2\", top_k=4, return_no_answer=True)\n\n# Define a pipeline consisting of the initialized retriever and reader\nfrom haystack.pipelines import ExtractiveQAPipeline\n\npipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)\n\n# The evaluation also works with any other pipeline.\n# For example you could use a DocumentSearchPipeline as an alternative:\n\n# from haystack.pipelines import DocumentSearchPipeline\n# pipeline = DocumentSearchPipeline(retriever=retriever)",
"_____no_output_____"
]
],
[
[
"## Evaluation of an ExtractiveQAPipeline\nHere we evaluate retriever and reader in open domain fashion on the full corpus of documents i.e. a document is considered\ncorrectly retrieved if it contains the gold answer string within it. The reader is evaluated based purely on the\npredicted answer string, regardless of which document this came from and the position of the extracted span.\n\nThe generation of predictions is seperated from the calculation of metrics. This allows you to run the computation-heavy model predictions only once and then iterate flexibly on the metrics or reports you want to generate.\n",
"_____no_output_____"
]
],
[
[
"from haystack.schema import EvaluationResult, MultiLabel\n\n# We can load evaluation labels from the document store\n# We are also opting to filter out no_answer samples\neval_labels = document_store.get_all_labels_aggregated(drop_negative_labels=True, drop_no_answers=False)\neval_labels = [label for label in eval_labels if not label.no_answer] # filter out no_answer cases\n\n## Alternative: Define queries and labels directly\n\n# eval_labels = [\n# MultiLabel(\n# labels=[\n# Label(\n# query=\"who is written in the book of life\",\n# answer=Answer(\n# answer=\"every person who is destined for Heaven or the World to Come\",\n# offsets_in_context=[Span(374, 434)]\n# ),\n# document=Document(\n# id='1b090aec7dbd1af6739c4c80f8995877-0',\n# content_type=\"text\",\n# content='Book of Life - wikipedia Book of Life Jump to: navigation, search This article is\n# about the book mentioned in Christian and Jewish religious teachings...'\n# ),\n# is_correct_answer=True,\n# is_correct_document=True,\n# origin=\"gold-label\"\n# )\n# ]\n# )\n# ]\n\n# Similar to pipeline.run() we can execute pipeline.eval()\neval_result = pipeline.eval(labels=eval_labels, params={\"Retriever\": {\"top_k\": 5}})",
"_____no_output_____"
],
[
"# The EvaluationResult contains a pandas dataframe for each pipeline node.\n# That's why there are two dataframes in the EvaluationResult of an ExtractiveQAPipeline.\n\nretriever_result = eval_result[\"Retriever\"]\nretriever_result.head()",
"_____no_output_____"
],
[
"reader_result = eval_result[\"Reader\"]\nreader_result.head()",
"_____no_output_____"
],
[
"# We can filter for all documents retrieved for a given query\nquery = \"who is written in the book of life\"\nretriever_book_of_life = retriever_result[retriever_result[\"query\"] == query]",
"_____no_output_____"
],
[
"# We can also filter for all answers predicted for a given query\nreader_book_of_life = reader_result[reader_result[\"query\"] == query]",
"_____no_output_____"
],
[
"# Save the evaluation result so that we can reload it later and calculate evaluation metrics without running the pipeline again.\neval_result.save(\"../\")",
"_____no_output_____"
]
],
[
[
"## Calculating Evaluation Metrics\nLoad an EvaluationResult to quickly calculate standard evaluation metrics for all predictions,\nsuch as F1-score of each individual prediction of the Reader node or recall of the retriever.\nTo learn more about the metrics, see [Evaluation Metrics](https://haystack.deepset.ai/guides/evaluation#metrics-retrieval)",
"_____no_output_____"
]
],
[
[
"saved_eval_result = EvaluationResult.load(\"../\")\nmetrics = saved_eval_result.calculate_metrics()\nprint(f'Retriever - Recall (single relevant document): {metrics[\"Retriever\"][\"recall_single_hit\"]}')\nprint(f'Retriever - Recall (multiple relevant documents): {metrics[\"Retriever\"][\"recall_multi_hit\"]}')\nprint(f'Retriever - Mean Reciprocal Rank: {metrics[\"Retriever\"][\"mrr\"]}')\nprint(f'Retriever - Precision: {metrics[\"Retriever\"][\"precision\"]}')\nprint(f'Retriever - Mean Average Precision: {metrics[\"Retriever\"][\"map\"]}')\n\nprint(f'Reader - F1-Score: {metrics[\"Reader\"][\"f1\"]}')\nprint(f'Reader - Exact Match: {metrics[\"Reader\"][\"exact_match\"]}')",
"_____no_output_____"
]
],
[
[
"## Generating an Evaluation Report\nA summary of the evaluation results can be printed to get a quick overview. It includes some aggregated metrics and also shows a few wrongly predicted examples.",
"_____no_output_____"
]
],
[
[
"pipeline.print_eval_report(saved_eval_result)",
"_____no_output_____"
]
],
[
[
"## Advanced Evaluation Metrics\nAs an advanced evaluation metric, semantic answer similarity (SAS) can be calculated. This metric takes into account whether the meaning of a predicted answer is similar to the annotated gold answer rather than just doing string comparison.\nTo this end SAS relies on pre-trained models. For English, we recommend \"cross-encoder/stsb-roberta-large\", whereas for German we recommend \"deepset/gbert-large-sts\". A good multilingual model is \"sentence-transformers/paraphrase-multilingual-mpnet-base-v2\".\nMore info on this metric can be found in our [paper](https://arxiv.org/abs/2108.06130) or in our [blog post](https://www.deepset.ai/blog/semantic-answer-similarity-to-evaluate-qa).",
"_____no_output_____"
]
],
[
[
"advanced_eval_result = pipeline.eval(\n labels=eval_labels, params={\"Retriever\": {\"top_k\": 1}}, sas_model_name_or_path=\"cross-encoder/stsb-roberta-large\"\n)\n\nmetrics = advanced_eval_result.calculate_metrics()\nprint(metrics[\"Reader\"][\"sas\"])",
"_____no_output_____"
]
],
[
[
"## Isolated Evaluation Mode\nThe isolated node evaluation uses labels as input to the Reader node instead of the output of the preceeding Retriever node.\nThereby, we can additionally calculate the upper bounds of the evaluation metrics of the Reader. Note that even with isolated evaluation enabled, integrated evaluation will still be running.\n",
"_____no_output_____"
]
],
[
[
"eval_result_with_upper_bounds = pipeline.eval(\n labels=eval_labels, params={\"Retriever\": {\"top_k\": 5}, \"Reader\": {\"top_k\": 5}}, add_isolated_node_eval=True\n)",
"_____no_output_____"
],
[
"pipeline.print_eval_report(eval_result_with_upper_bounds)",
"_____no_output_____"
]
],
[
[
"## Evaluation of Individual Components: Retriever\nSometimes you might want to evaluate individual components, for example, if you don't have a pipeline but only a retriever or a reader with a model that you trained yourself.\nHere we evaluate only the retriever, based on whether the gold_label document is retrieved.",
"_____no_output_____"
]
],
[
[
"## Evaluate Retriever on its own\n# Note that no_answer samples are omitted when evaluation is performed with this method\nretriever_eval_results = retriever.eval(top_k=5, label_index=label_index, doc_index=doc_index)\n# Retriever Recall is the proportion of questions for which the correct document containing the answer is\n# among the correct documents\nprint(\"Retriever Recall:\", retriever_eval_results[\"recall\"])\n# Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank\nprint(\"Retriever Mean Avg Precision:\", retriever_eval_results[\"map\"])",
"_____no_output_____"
]
],
[
[
"Just as a sanity check, we can compare the recall from `retriever.eval()` with the multi hit recall from `pipeline.eval(add_isolated_node_eval=True)`.\nThese two recall metrics are only comparable since we chose to filter out no_answer samples when generating eval_labels.\n",
"_____no_output_____"
]
],
[
[
"metrics = eval_result_with_upper_bounds.calculate_metrics()\nprint(metrics[\"Retriever\"][\"recall_multi_hit\"])",
"_____no_output_____"
]
],
[
[
"## Evaluation of Individual Components: Reader\nHere we evaluate only the reader in a closed domain fashion i.e. the reader is given one query\nand its corresponding relevant document and metrics are calculated on whether the right position in this text is selected by\nthe model as the answer span (i.e. SQuAD style)",
"_____no_output_____"
]
],
[
[
"# Evaluate Reader on its own\nreader_eval_results = reader.eval(document_store=document_store, label_index=label_index, doc_index=doc_index)\n# Evaluation of Reader can also be done directly on a SQuAD-formatted file without passing the data to Elasticsearch\n# reader_eval_results = reader.eval_on_file(\"../data/nq\", \"nq_dev_subset_v2.json\", device=device)\n\n# Reader Top-N-Accuracy is the proportion of predicted answers that match with their corresponding correct answer\nprint(\"Reader Top-N-Accuracy:\", reader_eval_results[\"top_n_accuracy\"])\n# Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer\nprint(\"Reader Exact Match:\", reader_eval_results[\"EM\"])\n# Reader F1-Score is the average overlap between the predicted answers and the correct answers\nprint(\"Reader F1-Score:\", reader_eval_results[\"f1\"])",
"_____no_output_____"
]
],
[
[
"## About us\n\nThis [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our other work: \n- [German BERT](https://deepset.ai/german-bert)\n- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)\n- [FARM](https://github.com/deepset-ai/FARM)\n\nGet in touch:\n[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)\n\nBy the way: [we're hiring!](https://www.deepset.ai/jobs)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d096f48aa5a13114cebeeee5654480468506cd0e | 4,962 | ipynb | Jupyter Notebook | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com | 07604de9481e5097646f7f19477869eaba46cb56 | [
"MIT"
] | null | null | null | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com | 07604de9481e5097646f7f19477869eaba46cb56 | [
"MIT"
] | null | null | null | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com | 07604de9481e5097646f7f19477869eaba46cb56 | [
"MIT"
] | null | null | null | 4,962 | 4,962 | 0.681378 | [
[
[
"# Calculate Rider-Bicycle Weight Distribution\nCompare weight distribution on front wheel (WDF) between force plate measurements and the CoM model prediction.\n\n## Calculate WDF from CoM model\nUse CoM model to predict WDF\n\n",
"_____no_output_____"
]
],
[
[
"# Set variables for gravity, mass, and frame geometry.\n\ng = -9.81 #gravity (m/s^2)\nMb = 4.94 #mass of bike (kg)\nMr = 87.2562 #mass of rider (kg)\nMt = Mb + Mr #mass of rider + bike\nWb = Mb * g #weight of bike\nWr = Mr * g #weight of rider\nWt = Wb + Wr #weight of bike + rider\nLfc = 0.579 #length bottom bracket to center of front wheel (e.g. Tarmac=0.579, Shiv=0.596, Epic=719)\nLrc = 0.410 #length bottom bracket to center of rear wheel (e.g. Tarmac=0.410, Shiv=0.415, Epic=433)\nLt = Lfc + Lrc #length wheelbase\nFfwb = 3.45 * g #use if measured for trainer with rear wheel off bike\n#Ffwb = Wb - ((Lfc / Lt) * Wb) #reaction force on front wheel - bike\nFrwb = Wb - ((Lrc / Lt) * Wb) #reaction force on rear wheel - bike",
"_____no_output_____"
]
],
[
[
"Mount your google drive folder",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
]
],
[
[
"*NOTE* You need to create a copy of the \"com2bb.json\" file in your main google drive folder for this to work.\n\nLoad CoM data from json file stored in Google Drive\n",
"_____no_output_____"
]
],
[
[
"import json\n\nwith open(\"/content/drive/My Drive/com2bb.json\", \"r\") as read_file:\n data = json.load(read_file)\n\ndata[:10]",
"_____no_output_____"
]
],
[
[
"Work on data",
"_____no_output_____"
]
],
[
[
"import statistics as st\nimport numpy as np\n\nLcmbb = np.array(data) / 1000 #convert mm to m\nLcmrc = Lrc - Lcmbb #length rider CoM to rear center\nFfwt = (Wr * Lcmrc) / Lt + Ffwb #reaction force on front wheel - total (N)\nWdft = Ffwt / Wt * 100 #distribution of weight on front wheel - total (%)\n\nprint(st.mean(Lcmbb))\nprint(st.mean(Wdft))",
"-0.04212060080728705\n47.00747507077737\n"
]
],
[
[
"## Calculate CoM from WDF\nUse force plate data to predict rider CoM position",
"_____no_output_____"
]
],
[
[
"Ffwt_fx = np.array(89.28) #measured force on plate under front wheel with rider\nFfwt_fx = Ffwt_fx / 2.205 * g #convert from lb to N\n\nLcmrc_fx = (Ffwt_fx - Ffwb) * Lt / Wr\nLcmfc_fx = Lt - Lcmrc_fx\nLcmbb_fx = Lcmfc_fx - Lfc\n\nprint(Lcmbb_fx)",
"-0.009825275032207537\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d096f8ca31d36bd98919f44fb448502fc5813952 | 10,786 | ipynb | Jupyter Notebook | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects | ef39a43cf529f1de67272cc5c7b2e449a8c82f2a | [
"MIT"
] | null | null | null | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects | ef39a43cf529f1de67272cc5c7b2e449a8c82f2a | [
"MIT"
] | null | null | null | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects | ef39a43cf529f1de67272cc5c7b2e449a8c82f2a | [
"MIT"
] | null | null | null | 29.469945 | 103 | 0.337845 | [
[
[
"## **Knowing our data**",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"data = 'https://s3-us-west-2.amazonaws.com/streamlit-demo-data/uber-raw-data-sep14.csv.gz'\ndf = pd.read_csv(data, nrows=500)\ndf.head()",
"_____no_output_____"
]
],
[
[
"## **Putting the column names in a lower case**\nTo avoid mistakes\n",
"_____no_output_____"
]
],
[
[
"lower_str = lambda x: str(x).lower() \ndf.rename(lower_str, axis='columns', inplace=True)\ndf.head()",
"_____no_output_____"
]
],
[
[
"As you can see the column names are in lower case\n",
"_____no_output_____"
],
[
"## **Checking if dates are on datetime**\nTo access only the hours of our column `date/time`.\nWe have to make sure that this column is in datetime",
"_____no_output_____"
]
],
[
[
"df.dtypes",
"_____no_output_____"
]
],
[
[
"As we can see the column is an object type, so we will convert it to datetime",
"_____no_output_____"
]
],
[
[
"df['date/time'] = pd.to_datetime(df['date/time'])",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
]
],
[
[
"Now our column is of the datetime type.\n\n**Note that using** `df['date/time']` **is the same as using** `df.date/time`.\n**However in this case, we cannot use** `df.date/time` **due to '/'**\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d09708e2b8c27692905886d4ddd9db6344a21eea | 314,034 | ipynb | Jupyter Notebook | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL | 4781a024604b9cc472ab74ab8844ba4c57320bab | [
"MIT"
] | 1 | 2020-03-18T18:43:10.000Z | 2020-03-18T18:43:10.000Z | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL | 4781a024604b9cc472ab74ab8844ba4c57320bab | [
"MIT"
] | null | null | null | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL | 4781a024604b9cc472ab74ab8844ba4c57320bab | [
"MIT"
] | null | null | null | 381.572296 | 79,296 | 0.925425 | [
[
[
"%matplotlib inline\nfrom __future__ import print_function, unicode_literals\nimport sys, os\nimport seaborn as sns\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"from pygaarst import raster",
"_____no_output_____"
],
[
"sys.path.append('../firedetection/')\nimport landsat8fire as lfire",
"_____no_output_____"
],
[
"sns.set(rc={'image.cmap': 'gist_heat'})\nsns.set(rc={'image.cmap': 'bone'})\n\nsns.set_context(\"poster\")\n\nmyfontsize = 20\nfont = {'family' : 'Calibri',\n 'weight': 'bold',\n 'size' : myfontsize}\nmatplotlib.rc('font', **font)\nmatplotlib.axes.rcParams['axes.labelsize']=myfontsize-4\nmatplotlib.axes.rcParams['axes.titlesize']=myfontsize\ncmap1 = matplotlib.colors.ListedColormap(sns.xkcd_palette(['white', 'red']))\ncmap2 = matplotlib.colors.ListedColormap(sns.xkcd_palette(['white', 'neon green']))\ncmap3 = matplotlib.colors.ListedColormap(sns.xkcd_palette(['white', 'orange']))",
"_____no_output_____"
],
[
"landsatpath = '/Volumes/SCIENCE_mobile_Mac/Fire/DATA_BY_PROJECT/2015VIIRSMODIS/Landsat/L8 OLI_TIRS Sockeye'\nlsscene = 'LC80700172015166LGN00'\nlandsat = raster.Landsatscene(os.path.join(landsatpath, lsscene))",
"_____no_output_____"
],
[
"landsat.infix = '_clip'\nrho7 = landsat.band7.reflectance\nrho6 = landsat.band6.reflectance\nrho5 = landsat.band5.reflectance\nrho4 = landsat.band4.reflectance\nrho3 = landsat.band3.reflectance\nrho2 = landsat.band2.reflectance\nrho1 = landsat.band1.reflectance\nR75 = rho7/rho5\nR76 = rho7/rho6",
"_____no_output_____"
],
[
"xmax = landsat.band7.ncol\nymax = landsat.band7.nrow",
"_____no_output_____"
]
],
[
[
"\"Unambiguous fire pixels\" test 1 (daytime, normal conditions).",
"_____no_output_____"
]
],
[
[
"firecond1 = np.logical_and(R75 > 2.5, rho7 > .5)\nfirecond1 = np.logical_and(firecond1, rho7 - rho5 > .3)\nfirecond1_masked = np.ma.masked_where(\n ~firecond1, np.ones((ymax, xmax)))",
"_____no_output_____"
]
],
[
[
"\"Unambiguous fire pixels\" test 2 (daytime, sensor anomalies)",
"_____no_output_____"
]
],
[
[
"firecond2 = np.logical_and(rho6 > .8, rho1 < .2)\nfirecond2 = np.logical_and(firecond2, \n np.logical_or(rho5 > .4, rho7 < .1)\n )\nfirecond2_masked = np.ma.masked_where(\n ~firecond2, np.ones((ymax, xmax)))",
"_____no_output_____"
]
],
[
[
"\"Relaxed conditions\"",
"_____no_output_____"
]
],
[
[
"firecond3 = np.logical_and(R75 > 1.8, rho7 - rho5 > .17)\nfirecond3_masked = np.ma.masked_where(\n ~firecond3, np.ones((ymax, xmax)))",
"_____no_output_____"
]
],
[
[
"\"Extra tests\" for relaxed conditions:\n\n1. R76 > 1.6\n2. R75 at least 3 sigma and 0.8 larger than avg of a 61x61 window of valid pixels\n3. rho7 at least 3 sigma and 0.08 larger than avg of a 61x61 window of valid pixels\n\nValid pixels are:\n\n1. Not \"unambiguous fire pixel\"\n2. rho7 > 0 \n3. Not water as per water test 1: rho4 > rho5 AND rho5 > rho6 AND rho6 > rho7 AND rho1 - rho7 < 0.2\n4. Not water as per test 2: rho3 > rho2 OR ( rho1 > rho2 AND rho2 > rho3 AND rho3 > rho4 )\n",
"_____no_output_____"
],
[
"So let's get started on the validation tests...",
"_____no_output_____"
]
],
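A minimal sketch of how the three "extra tests" above read for a single candidate pixel. Every name and number below is an illustrative placeholder, not a value from this scene; the notebook's actual windowed implementation follows further down.

```python
# Illustrative stand-ins for one candidate pixel and the statistics of its
# 61x61 window of valid pixels (all numbers are made up for the example).
R76_candidate, R75_candidate, rho7_candidate = 1.9, 2.1, 0.31
R75_bar, R75_std = 0.9, 0.2      # window mean / std of rho7/rho5
rho7_bar, rho7_std = 0.12, 0.03  # window mean / std of band-7 reflectance

test1 = R76_candidate > 1.6                                  # extra test 1
test2 = R75_candidate - R75_bar > max(3 * R75_std, 0.8)      # extra test 2
test3 = rho7_candidate - rho7_bar > max(3 * rho7_std, 0.08)  # extra test 3
print(test1 and test2 and test3)  # True: candidate passes all three tests
```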
[
[
"newfirecandidates = np.logical_and(~firecond1, ~firecond2)\nnewfirecandidates = np.logical_and(newfirecandidates, firecond3)\nnewfirecandidates = np.logical_and(newfirecandidates, R76 > 0)\nsum(sum(newfirecandidates))",
"_____no_output_____"
]
],
[
[
"We'll need a +-30 pixel window around a coordinate pair to carry out the averaging for the contextual tests",
"_____no_output_____"
]
],
[
[
"iidxmax, jidxmax = landsat.band1.data.shape \n\ndef get_window(ii, jj, N, iidxmax, jidxmax):\n \"\"\"Return 2D Boolean array that is True where a window of size N\n around a given point is masked out \"\"\"\n imin = max(0, ii-N)\n imax = min(iidxmax, ii+N)\n jmin = max(0, jj-N)\n jmax = min(jidxmax, jj+N)\n mask1 = np.zeros((iidxmax, jidxmax))\n mask1[imin:imax+1, jmin:jmax+1] = 1\n return mask1 == 1\n \nplt.imshow(get_window(100, 30, 30, iidxmax, jidxmax) , cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
]
],
[
[
"We can then get the union of those windows over all detected fire pixel candidates. ",
"_____no_output_____"
]
],
[
[
"windows = [get_window(ii, jj, 30, iidxmax, jidxmax) for ii, jj in np.argwhere(newfirecandidates)]\nwindow = np.any(windows, axis=0)\nplt.imshow(window , cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
]
],
[
[
"We also need a water mask... ",
"_____no_output_____"
]
],
[
[
"def get_l8watermask_frombands(\n rho1, rho2, rho3,\n rho4, rho5, rho6, rho7):\n \"\"\"\n Takes L8 bands, returns 2D Boolean numpy array of same shape\n \"\"\"\n turbidwater = get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)\n deepwater = get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)\n return np.logical_or(turbidwater, deepwater)\n\ndef get_l8commonwater(rho1, rho4, rho5, rho6, rho7):\n \"\"\"Returns Boolean numpy array common to turbid and deep water schemes\"\"\"\n water1cond = np.logical_and(rho4 > rho5, rho5 > rho6)\n water1cond = np.logical_and(water1cond, rho6 > rho7)\n water1cond = np.logical_and(water1cond, rho1 - rho7 < 0.2)\n return water1cond\n\ndef get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7):\n \"\"\"Returns Boolean numpy array that marks shallow, turbid water\"\"\"\n watercond2 = get_l8commonwater(rho1, rho4, rho5, rho6, rho7)\n watercond2 = np.logical_and(watercond2, rho3 > rho2)\n return watercond2\n\ndef get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7):\n \"\"\"Returns Boolean numpy array that marks deep, clear water\"\"\"\n watercond3 = get_l8commonwater(rho1, rho4, rho5, rho6, rho7)\n watercondextra = np.logical_and(rho1 > rho2, rho2 > rho3)\n watercondextra = np.logical_and(watercondextra, rho3 > rho4)\n return np.logical_and(watercond3, watercondextra)\n\nwater = get_l8watermask_frombands(rho1, rho2, rho3, rho4, rho5, rho6, rho7)\nplt.imshow(~water , cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
]
],
[
[
"Let's try out the two components, out of interest... apparently, only the \"deep water\" test catches the water bodies here. ",
"_____no_output_____"
]
],
[
[
"turbidwater = get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)\ndeepwater = get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)\nplt.imshow(~turbidwater , cmap=cmap3, vmin=0, vmax=1)\nplt.show()\nplt.imshow(~deepwater , cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
],
[
"def get_valid_pixels(otherfirecond, rho1, rho2, rho3,\n rho4, rho5, rho6, rho7, mask=None):\n \"\"\"returns masked array of 1 for valid, 0 for not\"\"\"\n if not np.any(mask):\n mask = np.zeros(otherfirecond.shape)\n rho = {}\n for rho in [rho1, rho2, rho3, rho4, rho5, rho6, rho7]:\n rho = np.ma.masked_array(rho, mask=mask)\n watercond = get_l8watermask_frombands(\n rho1, rho2, rho3,\n rho4, rho5, rho6, rho7)\n greater0cond = rho7 > 0\n finalcond = np.logical_and(greater0cond, ~watercond)\n finalcond = np.logical_and(finalcond, ~otherfirecond)\n return np.ma.masked_array(finalcond, mask=mask)\n\notherfirecond = np.logical_or(firecond1, firecond2)\nvalidpix = get_valid_pixels(otherfirecond, rho1, rho2, rho3,\n rho4, rho5, rho6, rho7, mask=~window)",
"_____no_output_____"
],
[
"fig1 = plt.figure(1, figsize=(15, 15))\nax1 = fig1.add_subplot(111)\nax1.set_aspect('equal')\n\nax1.pcolormesh(np.flipud(validpix), cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
],
[
"iidxmax, jidxmax = landsat.band1.data.shape \noutput = np.zeros((iidxmax, jidxmax))\n\nfor ii, jj in np.argwhere(firecond3):\n window = get_window(ii, jj, 30, iidxmax, jidxmax)\n newmask = np.logical_or(~window, ~validpix.data)\n rho7_win = np.ma.masked_array(rho7, mask=newmask)\n R75_win = np.ma.masked_array(rho7/rho5, mask=newmask)\n rho7_bar = np.mean(rho7_win.flatten())\n rho7_std = np.std(rho7_win.flatten())\n R75_bar = np.mean(R75_win.flatten())\n R75_std = np.std(R75_win.flatten())\n rho7_test = rho7_win[ii, jj] - rho7_bar > max(3*rho7_std, 0.08)\n R75_test = R75_win[ii, jj]- R75_bar > max(3*R75_std, 0.8)\n if rho7_test and R75_test:\n output[ii, jj] = 1\n\nlowfirecond = output == 1",
"_____no_output_____"
],
[
"sum(sum(lowfirecond))",
"_____no_output_____"
],
[
"fig1 = plt.figure(1, figsize=(15, 15))\nax1 = fig1.add_subplot(111)\nax1.set_aspect('equal')\n\nax1.pcolormesh(np.flipud(lowfirecond), cmap=cmap1, vmin=0, vmax=1)",
"_____no_output_____"
],
[
"fig1 = plt.figure(1, figsize=(15, 15))\nax1 = fig1.add_subplot(111)\nax1.set_aspect('equal')\n\nax1.pcolormesh(np.flipud(firecond1), cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
],
[
"allfirecond = np.logical_or(firecond1, firecond2)\nallfirecond = np.logical_or(allfirecond, lowfirecond)",
"_____no_output_____"
],
[
"fig1 = plt.figure(1, figsize=(15, 15))\nax1 = fig1.add_subplot(111)\nax1.set_aspect('equal')\nax1.pcolormesh(np.flipud(allfirecond), cmap=cmap1, vmin=0, vmax=1)",
"_____no_output_____"
]
],
[
[
"So this works! Now we can do the same using the module that incorporates the above code:",
"_____no_output_____"
]
],
[
[
"testfire, highfire, anomfire, lowfire = lfire.get_l8fire(landsat)",
"_____no_output_____"
],
[
"sum(sum(lowfire))",
"_____no_output_____"
],
[
"sum(sum(testfire))",
"_____no_output_____"
],
[
"firecond1_masked = np.ma.masked_where(\n ~testfire, np.ones((ymax, xmax)))\nfirecondlow_masked = np.ma.masked_where(\n ~lowfire, np.ones((ymax, xmax)))",
"_____no_output_____"
],
[
"fig1 = plt.figure(1, figsize=(15, 15))\nax1 = fig1.add_subplot(111)\nax1.set_aspect('equal')\n\nax1.pcolormesh(np.flipud(firecond1_masked), cmap=cmap1, vmin=0, vmax=1)\nax1.pcolormesh(np.flipud(firecondlow_masked), cmap=cmap3, vmin=0, vmax=1)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0970a21867b50492d1f73d34b5658b8de24a24c | 9,825 | ipynb | Jupyter Notebook | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | limkhashing/Credit-Card-Fraud-Detection | 97ac1b9068e4de4e84616535357d8d5e798e29d8 | [
"Apache-2.0"
] | 2 | 2019-01-14T07:15:40.000Z | 2019-03-08T18:02:52.000Z | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | kslim888/Credit-Card-Fraud-Detection | 97ac1b9068e4de4e84616535357d8d5e798e29d8 | [
"Apache-2.0"
] | null | null | null | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | kslim888/Credit-Card-Fraud-Detection | 97ac1b9068e4de4e84616535357d8d5e798e29d8 | [
"Apache-2.0"
] | 2 | 2021-08-24T13:53:01.000Z | 2022-01-06T01:13:54.000Z | 26.69837 | 315 | 0.43827 | [
[
[
"# Finding fraud patterns with FP-growth",
"_____no_output_____"
],
[
"# Data Collection and Investigation",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\n# Input data files are available in the \"../input/\" directory\ndf = pd.read_csv('D:/Python Project/Credit Card Fraud Detection/benchmark dataset/Test FP-Growth.csv')\n\n# printing the first 5 columns for data visualization \ndf.head()\n",
"_____no_output_____"
]
],
[
[
"## Execute FP-growth algorithm",
"_____no_output_____"
],
[
"## Spark",
"_____no_output_____"
]
],
[
[
"# import environment path to pyspark\nimport os\nimport sys\n\nspark_path = r\"D:\\apache-spark\" # spark installed folder\nos.environ['SPARK_HOME'] = spark_path\nsys.path.insert(0, spark_path + \"/bin\")\nsys.path.insert(0, spark_path + \"/python/pyspark/\")\nsys.path.insert(0, spark_path + \"/python/lib/pyspark.zip\")\nsys.path.insert(0, spark_path + \"/python/lib/py4j-0.10.7-src.zip\")\n",
"_____no_output_____"
],
[
"# Export csv to txt file\ndf.to_csv('processed_itemsets.txt', index=None, sep=' ', mode='w+')\n",
"_____no_output_____"
],
[
"import csv\n\n# creating necessary variable\nnew_itemsets_list = []\nskip_first_iteration = 1\n\n# find the duplicate item and add a counter at behind\nwith open(\"processed_itemsets.txt\", 'r') as fp:\n itemsets_list = csv.reader(fp, delimiter =' ', skipinitialspace=True) \n for itemsets in itemsets_list:\n unique_itemsets = []\n counter = 2\n for item in itemsets:\n if itemsets.count(item) > 1:\n \n if skip_first_iteration == 1:\n unique_itemsets.append(item)\n skip_first_iteration = skip_first_iteration + 1\n continue\n \n duplicate_item = item + \"__(\" + str(counter) + \")\"\n unique_itemsets.append(duplicate_item)\n counter = counter + 1\n else:\n unique_itemsets.append(item)\n print(itemsets)\n new_itemsets_list.append(unique_itemsets)\n\n ",
"['M', 'O', 'N', 'K', 'E', 'Y']\n['D', 'O', 'N', 'K', 'E', 'Y']\n['M', 'A', 'K', 'E', '']\n['M', 'U', 'C', 'K', 'Y', '']\n['C', 'O', 'O', 'K', 'I', 'E']\n"
],
[
"# write the new itemsets into file\nwith open('processed_itemsets.txt', 'w+') as f:\n for items in new_itemsets_list:\n for item in items:\n f.write(\"{} \".format(item))\n f.write(\"\\n\")\n",
"_____no_output_____"
],
[
"from pyspark import SparkContext\nfrom pyspark.mllib.fpm import FPGrowth\n\n# initialize spark\nsc = SparkContext.getOrCreate()",
"_____no_output_____"
],
[
"data = sc.textFile('processed_itemsets.txt').cache()\ntransactions = data.map(lambda line: line.strip().split(' '))\n",
"_____no_output_____"
]
],
[
[
"__minSupport__: The minimum support for an itemset to be identified as frequent. <br>\nFor example, if an item appears 3 out of 5 transactions, it has a support of 3/5=0.6.\n\n__minConfidence__: Minimum confidence for generating Association Rule. Confidence is an indication of how often an association rule has been found to be true. For example, if in the transactions itemset X appears 4 times, X and Y co-occur only 2 times, the confidence for the rule X => Y is then 2/4 = 0.5.\n\n__numPartitions__: The number of partitions used to distribute the work. By default the param is not set, and number of partitions of the input dataset is used",
"_____no_output_____"
]
],
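To make the support and confidence definitions above concrete, here is a small hand computation on the same five example baskets (M-O-N-K-E-Y and so on). It is a plain-Python sketch, independent of Spark; the `transactions` list and `support` helper are just illustrative names.

```python
# The five example transactions used in this notebook, as character sets.
transactions = [set("MONKEY"), set("DONKEY"), set("MAKE"),
                set("MUCKY"), set("COOKIE")]
n = len(transactions)

def support(itemset):
    # Fraction of transactions that contain every item in the itemset.
    return sum(itemset <= t for t in transactions) / n

print(support({"E", "K"}))                   # 0.8, above the 0.6 minSupport
print(support({"K", "E"}) / support({"K"}))  # confidence of {K} => {E}: 0.8
```

These numbers match the frequent-itemset and rule outputs printed further down (['E', 'K'] with support count 4, and {K} => {E} with confidence 0.8).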
[
[
"model = FPGrowth.train(transactions, minSupport=0.6, numPartitions=10)\nresult = model.freqItemsets().collect()\n",
"_____no_output_____"
],
[
"print(\"Frequent Itemsets : Item Support\")\nprint(\"====================================\")\nfor index, frequent_itemset in enumerate(result):\n print(str(frequent_itemset.items) + ' : ' + str(frequent_itemset.freq))\n",
"Frequent Itemsets : Item Support\n====================================\n['K'] : 5\n['E'] : 4\n['E', 'K'] : 4\n['M'] : 3\n['M', 'K'] : 3\n['O'] : 3\n['O', 'E'] : 3\n['O', 'E', 'K'] : 3\n['O', 'K'] : 3\n['Y'] : 3\n['Y', 'K'] : 3\n"
],
[
"rules = sorted(model._java_model.generateAssociationRules(0.8).collect(), key=lambda x: x.confidence(), reverse=True)\n",
"_____no_output_____"
],
[
"print(\"Antecedent => Consequent : Min Confidence\")\nprint(\"========================================\")\nfor rule in rules[:200]:\n print(rule)\n",
"Antecedent => Consequent : Min Confidence\n========================================\n{O} => {E}: 1.0\n{O} => {K}: 1.0\n{E} => {K}: 1.0\n{Y} => {K}: 1.0\n{M} => {K}: 1.0\n{O,E} => {K}: 1.0\n{O,K} => {E}: 1.0\n{K} => {E}: 0.8\n"
],
[
"# stop spark session\nsc.stop()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d097213fe133ae045d1fbf136233e5895b3fcf81 | 8,379 | ipynb | Jupyter Notebook | data_csv_preprocessing.ipynb | trishalabhasin/CSC_501 | 3ceca1ccf1b492bc57e6071449109b6af7801957 | [
"BSD-4-Clause-UC"
] | null | null | null | data_csv_preprocessing.ipynb | trishalabhasin/CSC_501 | 3ceca1ccf1b492bc57e6071449109b6af7801957 | [
"BSD-4-Clause-UC"
] | null | null | null | data_csv_preprocessing.ipynb | trishalabhasin/CSC_501 | 3ceca1ccf1b492bc57e6071449109b6af7801957 | [
"BSD-4-Clause-UC"
] | null | null | null | 33.38247 | 119 | 0.3979 | [
[
[
"import pandas as pd\nimport sqlite3\nimport datetime\n\n\n\ndef main():\n \n data_list = [(\"ml-latest-small/\", \"small_output/\", \"small\"), (\"ml-20m/\", \"20M_output/\",\"20M\")]\n \n for item in data_list:\n \n start_time = datetime.datetime.now()\n\n process_data(item[0], item[1])\n\n end_time = datetime.datetime.now()\n\n diff = end_time - start_time\n\n print(\"Pre-processing time for \", item[2], \" dataset ----> \", diff)\n \ndef process_data(input_path, output_path):\n \n # --------------------------------------------------------\n # read data to make a revised movies table and genre table\n # --------------------------------------------------------\n \n movies = input_path + \"movies.csv\" \n movies_data = pd.read_csv(movies)\n\n # print((movies_data.head(10)))\n\n data = {'movieId':[], \n 'title': [], \n 'genreID': []\n }\n\n genre_dict = {}\n gen_id = 1\n\n # genre_dict = dict([v,k] for k,v in genre_dict.items())\n\n # print(genre_dict)\n\n for item in movies_data.iterrows():\n mID = item[1][0]\n ttle = item[1][1]\n genre_list = item[1][2]\n\n\n for gen in genre_list.split('|'):\n\n if gen not in genre_dict:\n genre_dict[gen] = gen_id\n gen_id += 1\n\n data['movieId'].append(mID)\n data['title'].append(ttle)\n data['genreID'].append(genre_dict[gen])\n\n # print(genre_dict)\n # print(data) \n\n\n movies_df = pd.DataFrame(data)\n # print(df.head(20))\n\n print(\"There are \", len(movies_df), \"rows in the revised movies table\")\n\n # --------------------------------------------------------\n # create revised_movies.csv\n # --------------------------------------------------------\n\n revised_movies = output_path + \"revised_movies.csv\"\n movies_df.to_csv(revised_movies, index=False)\n\n # --------------------------------------------------------\n # get genre information\n # --------------------------------------------------------\n \n temp = {'genreID':[], 'genre':[]}\n\n for k,v in genre_dict.items():\n temp['genreID'].append(v)\n temp['genre'].append(k)\n\n # print(temp)\n\n genre_df = pd.DataFrame(temp)\n # print(genre_df.head(20))\n\n print(\"There are : \", len(genre_df), \"rows in the genre table\")\n\n # --------------------------------------------------------\n # create genres.csv\n # --------------------------------------------------------\n \n genres = output_path + \"genres.csv\"\n genre_df.to_csv(genres, index=False)\n\n\n # --------------------------------------------------------\n # timestamps in tags needs to be changed. \n # --------------------------------------------------------\n \n \n tags = input_path + \"tags.csv\"\n tags_data = pd.read_csv(tags)\n\n new_ts_list = []\n\n for item in tags_data['timestamp']:\n\n start_time = datetime.datetime(year=1970, month=1, day=1, hour=00, minute=00, second=00)\n\n t_delta=datetime.timedelta(seconds=item)\n\n dtime = start_time + t_delta\n\n new_ts_list.append(dtime)\n\n\n tags_data['tags_timestamp'] = new_ts_list\n\n del tags_data['timestamp']\n\n # print(tags_data.head(10))\n\n # # --------------------------------------------------------\n # # create revised_tags.csv\n # # --------------------------------------------------------\n \n revised_tags = output_path + \"revised_tags.csv\"\n tags_data.to_csv(revised_tags, index=False)\n\n\n print(\"There are \", len(tags_data), \"rows in the tags table\")\n\n # --------------------------------------------------------\n # timestamps in ratings needs to be changed. 
\n # --------------------------------------------------------\n\n \n ratings = input_path + \"ratings.csv\"\n ratings_data = pd.read_csv(ratings)\n\n new_ts_list = []\n\n for item in ratings_data['timestamp']:\n\n start_time = datetime.datetime(year=1970, month=1, day=1, hour=00, minute=00, second=00)\n\n t_delta=datetime.timedelta(seconds=item)\n\n dtime = start_time + t_delta\n\n new_ts_list.append(dtime)\n\n\n ratings_data['ratings_timestamp'] = new_ts_list\n\n del ratings_data['timestamp']\n\n # print(ratings_data.head(10))\n\n # --------------------------------------------------------\n # create revised_ratings.csv\n # --------------------------------------------------------\n \n revised_ratings = output_path + \"revised_ratings.csv\"\n ratings_data.to_csv(revised_ratings, index=False) \n\n print(\"There are \", len(ratings_data), \"rows in the ratings table\")\n\n # ---------------------------------------------------------------------------------\n # links.csv remains the same. \n # ---------------------------------------------------------------------------------\n \n # --------------------------------------------------------\n # How many rows are there in the links table\n # --------------------------------------------------------\n\n links = input_path + \"links.csv\"\n \n links_data = pd.read_csv(links)\n\n print(\"There are \", len(links_data), \"rows in the links table\")\n\n \nmain()",
"There are 22084 rows in the revised movies table\nThere are : 20 rows in the genre table\nThere are 3683 rows in the tags table\nThere are 100836 rows in the ratings table\nThere are 9742 rows in the links table\nPre-processing time for small dataset ----> 0:00:01.789247\nThere are 54406 rows in the revised movies table\nThere are : 20 rows in the genre table\nThere are 465564 rows in the tags table\nThere are 20000263 rows in the ratings table\nThere are 27278 rows in the links table\nPre-processing time for 20M dataset ----> 0:07:11.824364\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
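The per-row `timedelta` loops in the preprocessing notebook above give correct results but are slow on the 20M-row ratings file; pandas can do the same Unix-seconds conversion in a single vectorized call. A small sketch (the sample timestamps here are made up, not taken from the dataset):

```python
import pandas as pd

# Vectorized equivalent of the per-row timedelta loop: interpret the integer
# column as seconds since 1970-01-01 00:00:00 in one call.
ratings = pd.DataFrame({"timestamp": [964982703, 847434962]})  # sample values
ratings["ratings_timestamp"] = pd.to_datetime(ratings["timestamp"], unit="s")
print(ratings)
```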
d09729bc04b97d5fa1f296982f24fa772a2ac3b9 | 579,710 | ipynb | Jupyter Notebook | HW2/HW2 - Time Series Regression.ipynb | sychen6192/DataMining | d03ae1b3a07f3a7bab1edf878d4fca65183451a7 | [
"Apache-2.0"
] | null | null | null | HW2/HW2 - Time Series Regression.ipynb | sychen6192/DataMining | d03ae1b3a07f3a7bab1edf878d4fca65183451a7 | [
"Apache-2.0"
] | null | null | null | HW2/HW2 - Time Series Regression.ipynb | sychen6192/DataMining | d03ae1b3a07f3a7bab1edf878d4fca65183451a7 | [
"Apache-2.0"
] | null | null | null | 40.282816 | 182 | 0.55701 | [
[
[
"%cd ..",
"C:\\Users\\sychen\\Desktop\n"
],
[
"%ls",
" 磁碟區 C 中的磁碟沒有標籤。\n 磁碟區序號: 0A59-7A8C\n\n C:\\Users\\sychen\\Desktop 的目錄\n\n2019/10/22 下午 09:45 <DIR> .\n2019/10/22 下午 09:45 <DIR> ..\n2019/10/03 上午 11:32 14,881 0853426_陳紹雲.docx\n2019/09/09 下午 12:18 280,882 1567764411037.jpg\n2019/09/23 下午 10:49 9,623 70732501_425527064987094_2730806005695774720_n.jpg\n2019/10/16 下午 09:23 195,330 72525498_499542974229593_5107462906677559296_n.jpg\n2019/09/10 下午 06:40 196,230 ACFrOgCKl6YlT91neQPho5Cc75IU86ZUaRWQnf83uOpc4mXMp-kc-A4-_pdMJ21qFc07Sy52dOXMzaz_RDQULGi01zYv79YHT6UC_KRgeBP5TVzbEMSpcruDRdDIRiY=.pdf\n2019/09/25 下午 04:35 29,115,490 CH1_CaseStudy.pptx\n2019/10/17 下午 05:36 <DIR> cuda\n2019/09/30 下午 06:02 87,175 hw1.PNG\n2019/10/14 下午 08:33 68,349 hw2.PNG\n2019/10/24 下午 08:48 <DIR> ipython\n2019/10/07 下午 05:36 65 ipython.bat\n2019/10/13 下午 10:06 34,339 jupyter_notebook_config.py\n2019/10/17 下午 09:07 1,123 jupyter_notebook_config.py - 捷徑.lnk\n2019/10/02 下午 11:08 588,740 LEARNING_MAP.PNG\n2019/09/09 下午 12:12 1,210 LINE.lnk\n2019/09/23 下午 09:56 3,372 mis.csv\n2019/09/26 上午 11:24 761,071 ml_map.png\n2019/10/07 下午 06:13 860 my_ftp - 捷徑.lnk\n2018/12/11 下午 08:23 <DIR> Part01\n2019/09/12 上午 09:47 80,460 pw.jpg\n2019/09/23 下午 08:41 61,521 python-note\n2019/09/30 下午 02:31 14 python-note29\n2019/10/22 下午 08:02 20,364 result.PNG\n2019/10/22 下午 09:45 17,493 RSA就是一非對稱式加密一個典型的例子.docx\n2019/09/12 上午 10:26 1,855 Spotify.lnk\n2019/09/24 下午 05:49 330 tos.txt\n2019/10/22 下午 08:03 471,369 update3and8_紹.pptx\n2019/10/02 下午 06:03 273,062 千穗報價單.pdf\n2019/10/07 下午 03:16 270,289 迅杰.pdf\n2019/09/12 上午 10:04 705,605 明泰科技採購.pdf\n2019/09/11 下午 03:29 316,507 國立清華大學 -- 校務資訊系統.pdf\n2019/09/24 下午 07:29 10,380 通訊錄.xlsx\n2019/10/07 下午 03:15 65,335 報價單.docx\n2019/09/23 上午 10:52 56 新文字文件.txt\n2019/10/07 下午 07:51 1,999 爆爆王.lnk\n 32 個檔案 33,655,379 位元組\n 5 個目錄 363,827,421,184 位元組可用\n"
],
[
"import pandas as pd\nimport datetime\nimport numpy as np",
"_____no_output_____"
],
[
"# xls to csv\nxls = pd.read_excel(u'107年 竹苗空品區/107年新竹站_20190315.xls', index_col=0)\nxls.to_csv('107年 竹苗空品區/107年新竹站_20190315.csv', encoding='big5')",
"_____no_output_____"
],
[
"train = pd.read_csv('107年 竹苗空品區/107年新竹站_20190315.csv', encoding='big5', index_col = False)",
"_____no_output_____"
],
[
"train.iloc[26].ffill()",
"_____no_output_____"
],
[
"def str_2(x):\n x = str(x).rjust(2, '0')\n return x",
"_____no_output_____"
],
[
"def get_next(col, i): # 下一個時間\n col = int(col) + 1\n if col > 23:\n col = 0\n i += 18\n while(pd.isnull(train[str_2(col)][i]) \\\n or '*' in train[str_2(col)][i][-1] \\\n or '#' in train[str_2(col)][i][-1] \\\n or 'x' in train[str_2(col)][i][-1] \\\n or 'A' in train[str_2(col)][i][-1]):\n col += 1\n if col > 23:\n col = 0\n i += 18\n return float(train[str_2(col)][i])",
"_____no_output_____"
],
[
"def get_last(col, i): # 上一個時間\n col = int(col) - 1\n if col < 0:\n col = 23\n i -= 18\n while(pd.isnull(train[str_2(col)][i]) \\\n or '*' in train[str_2(col)][i][-1] \\\n or '#' in train[str_2(col)][i][-1] \\\n or 'x' in train[str_2(col)][i][-1] \\\n or 'A' in train[str_2(col)][i][-1]):\n col -= 1\n if col < 0:\n col = 23\n i -= 18\n return float(train[str_2(col)][i])",
"_____no_output_____"
],
[
"# 表示儀器檢核為無效值,* 表示程式檢核為無效值,x 表示人工檢核為無效值,NR 表示無降雨,空白 表示缺值。\n#,A 係指因儀器疑似故障警報所產生的無效值。\nfeat = train.columns\nprint(feat)\nfor col in feat:\n for i in range(len(train)):\n token = str(train[col][i])[-1]\n if train[col][i] == \"NR\":\n # c. NR表示無降雨,以0取代\n print(f' NR 表示無降雨 col: {col} index:{i}')\n train[col][i] = '0'\n elif pd.isnull(train[col][i]):\n print(f' 空白 表示缺值 col: {col} index:{i}')\n elif token =='*':\n print(f' * 表示程式檢核為無效值 col: {col} index:{i}')\n elif token =='#':\n print(f' # 表示儀器檢核為無效值 col: {col} index:{i}')\n elif token == 'A':\n print(f' A 係指因儀器疑似故障警報所產生的無效值 col: {col} index:{i}')\n elif token == 'x':\n if col != '測項':\n print(f' x 表示人工檢核為無效值 col: {col} index:{i}')",
"Index(['日期', '測站', '測項', '00', '01', '02', '03', '04', '05', '06', '07', '08',\n '09', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20',\n '21', '22', '23'],\n dtype='object')\n NR 表示無降雨 col: 00 index:10\n NR 表示無降雨 col: 00 index:28\n NR 表示無降雨 col: 00 index:46\n NR 表示無降雨 col: 00 index:64\n NR 表示無降雨 col: 00 index:82\n NR 表示無降雨 col: 00 index:100\n x 表示人工檢核為無效值 col: 00 index:138\n NR 表示無降雨 col: 00 index:190\n NR 表示無降雨 col: 00 index:208\n NR 表示無降雨 col: 00 index:226\n NR 表示無降雨 col: 00 index:244\n NR 表示無降雨 col: 00 index:262\n NR 表示無降雨 col: 00 index:280\n NR 表示無降雨 col: 00 index:298\n NR 表示無降雨 col: 00 index:316\n NR 表示無降雨 col: 00 index:352\n NR 表示無降雨 col: 00 index:370\n NR 表示無降雨 col: 00 index:388\n NR 表示無降雨 col: 00 index:406\n NR 表示無降雨 col: 00 index:424\n 空白 表示缺值 col: 00 index:426\n NR 表示無降雨 col: 00 index:442\n x 表示人工檢核為無效值 col: 00 index:444\n NR 表示無降雨 col: 00 index:460\n NR 表示無降雨 col: 00 index:478\n NR 表示無降雨 col: 00 index:496\n NR 表示無降雨 col: 00 index:514\n NR 表示無降雨 col: 00 index:550\n NR 表示無降雨 col: 00 index:604\n x 表示人工檢核為無效值 col: 00 index:624\n x 表示人工檢核為無效值 col: 00 index:642\n NR 表示無降雨 col: 00 index:658\n x 表示人工檢核為無效值 col: 00 index:660\n NR 表示無降雨 col: 00 index:676\n x 表示人工檢核為無效值 col: 00 index:678\n x 表示人工檢核為無效值 col: 00 index:696\n NR 表示無降雨 col: 00 index:712\n * 表示程式檢核為無效值 col: 00 index:714\n NR 表示無降雨 col: 00 index:730\n x 表示人工檢核為無效值 col: 00 index:732\n x 表示人工檢核為無效值 col: 00 index:750\n NR 表示無降雨 col: 00 index:766\n NR 表示無降雨 col: 00 index:784\n NR 表示無降雨 col: 00 index:802\n NR 表示無降雨 col: 00 index:820\n NR 表示無降雨 col: 00 index:838\n NR 表示無降雨 col: 00 index:856\n NR 表示無降雨 col: 00 index:874\n NR 表示無降雨 col: 00 index:892\n NR 表示無降雨 col: 00 index:910\n * 表示程式檢核為無效值 col: 00 index:922\n * 表示程式檢核為無效值 col: 00 index:923\n * 表示程式檢核為無效值 col: 00 index:924\n NR 表示無降雨 col: 00 index:928\n x 表示人工檢核為無效值 col: 00 index:948\n NR 表示無降雨 col: 00 index:964\n NR 表示無降雨 col: 00 index:982\n NR 表示無降雨 col: 00 index:1000\n NR 表示無降雨 col: 00 index:1018\n NR 表示無降雨 col: 00 index:1036\n 空白 表示缺值 col: 00 index:1038\n NR 表示無降雨 col: 00 index:1054\n NR 表示無降雨 col: 00 index:1072\n NR 表示無降雨 col: 00 index:1090\n NR 表示無降雨 col: 00 index:1108\n NR 表示無降雨 col: 00 index:1126\n NR 表示無降雨 col: 00 index:1144\n NR 表示無降雨 col: 00 index:1162\n * 表示程式檢核為無效值 col: 00 index:1174\n * 表示程式檢核為無效值 col: 00 index:1175\n * 表示程式檢核為無效值 col: 00 index:1176\n NR 表示無降雨 col: 00 index:1180\n NR 表示無降雨 col: 00 index:1216\n NR 表示無降雨 col: 00 index:1234\n NR 表示無降雨 col: 00 index:1252\n NR 表示無降雨 col: 00 index:1270\n NR 表示無降雨 col: 00 index:1288\n NR 表示無降雨 col: 00 index:1306\n NR 表示無降雨 col: 00 index:1324\n NR 表示無降雨 col: 00 index:1342\n NR 表示無降雨 col: 00 index:1360\n NR 表示無降雨 col: 00 index:1378\n NR 表示無降雨 col: 00 index:1396\n NR 表示無降雨 col: 00 index:1414\n NR 表示無降雨 col: 00 index:1432\n NR 表示無降雨 col: 00 index:1450\n NR 表示無降雨 col: 00 index:1468\n NR 表示無降雨 col: 00 index:1486\n NR 表示無降雨 col: 00 index:1504\n NR 表示無降雨 col: 00 index:1522\n NR 表示無降雨 col: 00 index:1540\n NR 表示無降雨 col: 00 index:1558\n NR 表示無降雨 col: 00 index:1576\n NR 表示無降雨 col: 00 index:1594\n NR 表示無降雨 col: 00 index:1612\n NR 表示無降雨 col: 00 index:1630\n NR 表示無降雨 col: 00 index:1648\n NR 表示無降雨 col: 00 index:1666\n NR 表示無降雨 col: 00 index:1684\n NR 表示無降雨 col: 00 index:1702\n NR 表示無降雨 col: 00 index:1720\n NR 表示無降雨 col: 00 index:1738\n NR 表示無降雨 col: 00 index:1756\n NR 表示無降雨 col: 00 index:1774\n NR 表示無降雨 col: 00 index:1792\n NR 表示無降雨 col: 00 index:1810\n NR 表示無降雨 col: 00 index:1828\n NR 表示無降雨 col: 00 index:1846\n NR 表示無降雨 col: 00 index:1864\n NR 表示無降雨 col: 00 index:1882\n NR 表示無降雨 col: 00 index:1900\n NR 表示無降雨 col: 00 
index:1936\n NR 表示無降雨 col: 00 index:1954\n NR 表示無降雨 col: 00 index:1972\n NR 表示無降雨 col: 00 index:1990\n NR 表示無降雨 col: 00 index:2008\n NR 表示無降雨 col: 00 index:2026\n NR 表示無降雨 col: 00 index:2044\n NR 表示無降雨 col: 00 index:2062\n NR 表示無降雨 col: 00 index:2080\n NR 表示無降雨 col: 00 index:2098\n NR 表示無降雨 col: 00 index:2116\n NR 表示無降雨 col: 00 index:2134\n NR 表示無降雨 col: 00 index:2152\n NR 表示無降雨 col: 00 index:2170\n NR 表示無降雨 col: 00 index:2188\n NR 表示無降雨 col: 00 index:2206\n NR 表示無降雨 col: 00 index:2224\n NR 表示無降雨 col: 00 index:2242\n NR 表示無降雨 col: 00 index:2260\n NR 表示無降雨 col: 00 index:2278\n NR 表示無降雨 col: 00 index:2296\n NR 表示無降雨 col: 00 index:2314\n NR 表示無降雨 col: 00 index:2332\n NR 表示無降雨 col: 00 index:2350\n x 表示人工檢核為無效值 col: 00 index:2359\n x 表示人工檢核為無效值 col: 00 index:2361\n NR 表示無降雨 col: 00 index:2368\n x 表示人工檢核為無效值 col: 00 index:2371\n NR 表示無降雨 col: 00 index:2386\n NR 表示無降雨 col: 00 index:2404\n NR 表示無降雨 col: 00 index:2422\n NR 表示無降雨 col: 00 index:2440\n NR 表示無降雨 col: 00 index:2458\n NR 表示無降雨 col: 00 index:2476\n NR 表示無降雨 col: 00 index:2494\n NR 表示無降雨 col: 00 index:2512\n NR 表示無降雨 col: 00 index:2530\n NR 表示無降雨 col: 00 index:2548\n NR 表示無降雨 col: 00 index:2566\n NR 表示無降雨 col: 00 index:2584\n NR 表示無降雨 col: 00 index:2602\n NR 表示無降雨 col: 00 index:2620\n 空白 表示缺值 col: 00 index:2628\n 空白 表示缺值 col: 00 index:2629\n 空白 表示缺值 col: 00 index:2630\n 空白 表示缺值 col: 00 index:2631\n 空白 表示缺值 col: 00 index:2632\n 空白 表示缺值 col: 00 index:2633\n 空白 表示缺值 col: 00 index:2634\n 空白 表示缺值 col: 00 index:2635\n 空白 表示缺值 col: 00 index:2636\n 空白 表示缺值 col: 00 index:2637\n 空白 表示缺值 col: 00 index:2638\n 空白 表示缺值 col: 00 index:2639\n 空白 表示缺值 col: 00 index:2640\n 空白 表示缺值 col: 00 index:2641\n 空白 表示缺值 col: 00 index:2642\n 空白 表示缺值 col: 00 index:2643\n 空白 表示缺值 col: 00 index:2644\n 空白 表示缺值 col: 00 index:2645\n NR 表示無降雨 col: 00 index:2656\n NR 表示無降雨 col: 00 index:2674\n NR 表示無降雨 col: 00 index:2692\n NR 表示無降雨 col: 00 index:2710\n NR 表示無降雨 col: 00 index:2728\n NR 表示無降雨 col: 00 index:2746\n NR 表示無降雨 col: 00 index:2764\n NR 表示無降雨 col: 00 index:2782\n NR 表示無降雨 col: 00 index:2800\n NR 表示無降雨 col: 00 index:2818\n NR 表示無降雨 col: 00 index:2836\n NR 表示無降雨 col: 00 index:2854\n NR 表示無降雨 col: 00 index:2872\n NR 表示無降雨 col: 00 index:2890\n NR 表示無降雨 col: 00 index:2908\n NR 表示無降雨 col: 00 index:2926\n NR 表示無降雨 col: 00 index:2944\n 空白 表示缺值 col: 00 index:2946\n NR 表示無降雨 col: 00 index:2962\n * 表示程式檢核為無效值 col: 00 index:2964\n # 表示儀器檢核為無效值 col: 00 index:2974\n # 表示儀器檢核為無效值 col: 00 index:2975\n # 表示儀器檢核為無效值 col: 00 index:2976\n NR 表示無降雨 col: 00 index:2980\n NR 表示無降雨 col: 00 index:2998\n NR 表示無降雨 col: 00 index:3016\n NR 表示無降雨 col: 00 index:3034\n * 表示程式檢核為無效值 col: 00 index:3036\n NR 表示無降雨 col: 00 index:3052\n NR 表示無降雨 col: 00 index:3070\n NR 表示無降雨 col: 00 index:3088\n NR 表示無降雨 col: 00 index:3106\n NR 表示無降雨 col: 00 index:3124\n NR 表示無降雨 col: 00 index:3142\n NR 表示無降雨 col: 00 index:3160\n NR 表示無降雨 col: 00 index:3178\n NR 表示無降雨 col: 00 index:3196\n NR 表示無降雨 col: 00 index:3214\n * 表示程式檢核為無效值 col: 00 index:3216\n NR 表示無降雨 col: 00 index:3232\n NR 表示無降雨 col: 00 index:3250\n NR 表示無降雨 col: 00 index:3268\n NR 表示無降雨 col: 00 index:3286\n NR 表示無降雨 col: 00 index:3304\n NR 表示無降雨 col: 00 index:3322\n NR 表示無降雨 col: 00 index:3340\n 空白 表示缺值 col: 00 index:3352\n 空白 表示缺值 col: 00 index:3353\n 空白 表示缺值 col: 00 index:3354\n NR 表示無降雨 col: 00 index:3358\n 空白 表示缺值 col: 00 index:3370\n 空白 表示缺值 col: 00 index:3371\n 空白 表示缺值 col: 00 index:3372\n NR 表示無降雨 col: 00 index:3376\n NR 表示無降雨 col: 00 index:3394\n NR 表示無降雨 col: 00 index:3412\n NR 表示無降雨 col: 00 index:3430\n * 表示程式檢核為無效值 col: 00 index:3442\n * 表示程式檢核為無效值 
col: 00 index:3443\n * 表示程式檢核為無效值 col: 00 index:3444\n # 表示儀器檢核為無效值 col: 00 index:3460\n # 表示儀器檢核為無效值 col: 00 index:3461\n # 表示儀器檢核為無效值 col: 00 index:3462\n NR 表示無降雨 col: 00 index:3466\n # 表示儀器檢核為無效值 col: 00 index:3478\n # 表示儀器檢核為無效值 col: 00 index:3479\n # 表示儀器檢核為無效值 col: 00 index:3480\n NR 表示無降雨 col: 00 index:3484\n NR 表示無降雨 col: 00 index:3502\n * 表示程式檢核為無效值 col: 00 index:3514\n # 表示儀器檢核為無效值 col: 00 index:3515\n # 表示儀器檢核為無效值 col: 00 index:3516\n NR 表示無降雨 col: 00 index:3520\n NR 表示無降雨 col: 00 index:3535\n NR 表示無降雨 col: 00 index:3550\n 空白 表示缺值 col: 00 index:3562\n 空白 表示缺值 col: 00 index:3563\n 空白 表示缺值 col: 00 index:3564\n NR 表示無降雨 col: 00 index:3568\n NR 表示無降雨 col: 00 index:3586\n NR 表示無降雨 col: 00 index:3604\n NR 表示無降雨 col: 00 index:3622\n NR 表示無降雨 col: 00 index:3640\n NR 表示無降雨 col: 00 index:3658\n NR 表示無降雨 col: 00 index:3676\n NR 表示無降雨 col: 00 index:3694\n NR 表示無降雨 col: 00 index:3712\n NR 表示無降雨 col: 00 index:3730\n NR 表示無降雨 col: 00 index:3748\n NR 表示無降雨 col: 00 index:3766\n NR 表示無降雨 col: 00 index:3784\n NR 表示無降雨 col: 00 index:3802\n NR 表示無降雨 col: 00 index:3820\n NR 表示無降雨 col: 00 index:3838\n NR 表示無降雨 col: 00 index:3856\n NR 表示無降雨 col: 00 index:3874\n NR 表示無降雨 col: 00 index:3892\n NR 表示無降雨 col: 00 index:3910\n NR 表示無降雨 col: 00 index:3928\n NR 表示無降雨 col: 00 index:3946\n NR 表示無降雨 col: 00 index:3964\n * 表示程式檢核為無效值 col: 00 index:3976\n * 表示程式檢核為無效值 col: 00 index:3977\n * 表示程式檢核為無效值 col: 00 index:3978\n NR 表示無降雨 col: 00 index:3982\n NR 表示無降雨 col: 00 index:4000\n NR 表示無降雨 col: 00 index:4018\n NR 表示無降雨 col: 00 index:4036\n NR 表示無降雨 col: 00 index:4054\n NR 表示無降雨 col: 00 index:4072\n NR 表示無降雨 col: 00 index:4090\n NR 表示無降雨 col: 00 index:4108\n NR 表示無降雨 col: 00 index:4126\n NR 表示無降雨 col: 00 index:4144\n * 表示程式檢核為無效值 col: 00 index:4146\n NR 表示無降雨 col: 00 index:4162\n NR 表示無降雨 col: 00 index:4180\n NR 表示無降雨 col: 00 index:4198\n NR 表示無降雨 col: 00 index:4216\n NR 表示無降雨 col: 00 index:4234\n NR 表示無降雨 col: 00 index:4252\n NR 表示無降雨 col: 00 index:4270\n NR 表示無降雨 col: 00 index:4288\n NR 表示無降雨 col: 00 index:4306\n NR 表示無降雨 col: 00 index:4324\n NR 表示無降雨 col: 00 index:4342\n NR 表示無降雨 col: 00 index:4360\n NR 表示無降雨 col: 00 index:4378\n NR 表示無降雨 col: 00 index:4396\n NR 表示無降雨 col: 00 index:4414\n NR 表示無降雨 col: 00 index:4432\n NR 表示無降雨 col: 00 index:4450\n NR 表示無降雨 col: 00 index:4468\n NR 表示無降雨 col: 00 index:4486\n"
],
[
"# 表示儀器檢核為無效值,* 表示程式檢核為無效值,x 表示人工檢核為無效值,NR 表示無降雨,空白 表示缺值\n#,A 係指因儀器疑似故障警報所產生的無效值。\nfeat = train.columns\nprint(feat)\nfor col in feat:\n for i in range(len(train)):\n token = str(train[col][i])[-1]\n if train[col][i] == \"NR\":\n train[col][i] = '0'\n elif pd.isnull(train[col][i]):\n train[col][i] = str((get_last(col, i) + get_next(col, i)) / 2)\n elif token =='*':\n train[col][i] = str((get_last(col, i) + get_next(col, i)) / 2)\n elif token =='A':\n train[col][i] = str((get_last(col, i) + get_next(col, i)) / 2)\n elif token =='#':\n train[col][i] = str((get_last(col, i) + get_next(col, i)) / 2)\n elif token == 'x':\n if col != '測項':\n train[col][i] = str((get_last(col, i) + get_next(col, i)) / 2)",
"Index(['日期', '測站', '測項', '00', '01', '02', '03', '04', '05', '06', '07', '08',\n '09', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20',\n '21', '22', '23'],\n dtype='object')\n"
],
[
"month_slice = train['日期'].apply(lambda x: int(x[5:7]))\n# Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()\ndf_train = train[(month_slice < 12) & (month_slice > 9)].reset_index(drop=True)\ndf_test = train[ month_slice == 12].reset_index(drop=True)",
"_____no_output_____"
],
[
"# kill 日期測站測項\ndf_train.drop(columns=['日期', '測站', '測項'], axis=1, inplace=True)\ndf_test.drop(columns=['日期', '測站', '測項'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"df_train = np.array(df_train).reshape(18, -1)\ndf_test = np.array(df_test).reshape(18, -1)",
"_____no_output_____"
],
[
"df_test.shape",
"_____no_output_____"
],
[
"# test_item = list(train['測項'][0:18])\n# time_list = list(np.arange('2018-10-01', '2018-12-01', dtype='datetime64[h]'))\n# df_train = pd.DataFrame(data=df_train, index=test_item, columns=time_list) ",
"_____no_output_____"
],
[
"x_train_list=[]\ny_train_list=[]\nfor i in range(0, df_train.shape[1]-6):\n x_train_list.append(df_train[:, i:i+6])\nfor i in range(0, df_test.shape[1]-6):\n y_train_list.append(df_test[:, i:i+6])\n",
"_____no_output_____"
],
[
"len(x_train_list)",
"_____no_output_____"
],
[
"len(y_train_list)",
"_____no_output_____"
],
[
"x_train_list_np = np.array(x_train_list).reshape(1458, -1)\ny_train_list_np = np.array(y_train_list).reshape(738, -1)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression",
"_____no_output_____"
],
[
"x_train = x_train_list_np\ny_train = df_train[9][6:]",
"_____no_output_____"
],
[
"lr = LinearRegression()\nlr.fit(x_train, y_train)",
"_____no_output_____"
],
[
"x_test = y_train_list_np",
"_____no_output_____"
],
[
"pred = lr.predict(x_test)",
"_____no_output_____"
],
[
"x_test.shape",
"_____no_output_____"
],
[
"from sklearn import metrics",
"_____no_output_____"
],
[
"pred.shape",
"_____no_output_____"
],
[
"df_test[9][6:].shape",
"_____no_output_____"
],
[
"metrics.mean_squared_error(pred, df_test[9][6:])",
"_____no_output_____"
],
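Because `mean_squared_error` reports the error in squared concentration units, taking its square root (and comparing against the mean absolute error) can make the score above easier to interpret. A short sketch, assuming `pred` and `df_test` from the cells above are still in scope:

```python
import numpy as np
from sklearn import metrics

mse = metrics.mean_squared_error(pred, df_test[9][6:])
rmse = np.sqrt(mse)  # back in the original measurement units
mae = metrics.mean_absolute_error(pred, df_test[9][6:])
print(rmse, mae)
```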
[
"a = np.array([[1, 2], [3, 4]])\nb = np.array([[5, 6], [7, 8]])\nc = np.concatenate((a,b), axis=1)\nc",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0972e37c88f4d4195f1285c11034b4323bad489 | 25,968 | ipynb | Jupyter Notebook | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses | 5033f4eb1812c739a13b3f01f890d10840225b12 | [
"MIT"
] | 79 | 2019-06-06T23:13:56.000Z | 2022-03-27T21:57:23.000Z | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses | 5033f4eb1812c739a13b3f01f890d10840225b12 | [
"MIT"
] | null | null | null | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses | 5033f4eb1812c739a13b3f01f890d10840225b12 | [
"MIT"
] | 82 | 2019-08-13T19:55:19.000Z | 2022-03-22T20:36:53.000Z | 27.625532 | 361 | 0.563501 | [
[
[
"### MODULE 1\n### Basic Modeling in scikit-learn",
"_____no_output_____"
]
],
[
[
"Before we can validate models, we need an understanding of how to create and work with them. This chapter provides an introduction to running regression and classification models in scikit-learn. We will use this model building foundation throughout the remaining chapters.",
"_____no_output_____"
]
],
[
[
"### Seen vs. unseen data\n\n# The model is fit using X_train and y_train\nmodel.fit(X_train, y_train)\n\n# Create vectors of predictions\ntrain_predictions = model.predict(X_train)\ntest_predictions = model.predict(X_test)\n\n# Train/Test Errors\ntrain_error = mae(y_true=y_train, y_pred=train_predictions)\ntest_error = mae(y_true=y_test, y_pred=test_predictions)\n\n# Print the accuracy for seen and unseen data\nprint(\"Model error on seen data: {0:.2f}.\".format(train_error))\nprint(\"Model error on unseen data: {0:.2f}.\".format(test_error))",
"_____no_output_____"
],
[
"# Set parameters and fit a model\n\n# Set the number of trees\nrfr.n_estimators = 1000\n\n# Add a maximum depth\nrfr.max_depth = 6\n\n# Set the random state\nrfr.random_state = 11\n\n# Fit the model\nrfr.fit(X_train, y_train)",
"_____no_output_____"
],
[
"## Feature importances\n\n# Fit the model using X and y\nrfr.fit(X_train, y_train)\n\n# Print how important each column is to the model\nfor i, item in enumerate(rfr.feature_importances_):\n # Use i and item to print out the feature importance of each column\n print(\"{0:s}: {1:.2f}\".format(X_train.columns[i], item))",
"_____no_output_____"
],
[
"### lassification predictions\n\n# Fit the rfc model. \nrfc.fit(X_train, y_train)\n\n# Create arrays of predictions\nclassification_predictions = rfc.predict(X_test)\nprobability_predictions = rfc.predict_proba(X_test)\n\n# Print out count of binary predictions\nprint(pd.Series(classification_predictions).value_counts())\n\n# Print the first value from probability_predictions\nprint('The first predicted probabilities are: {}'.format(probability_predictions[0]))",
"_____no_output_____"
],
[
"## Reusing model parameters\n\nrfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)\n\n# Print the classification model\nprint(rfc)\n\n# Print the classification model's random state parameter\nprint('The random state is: {}'.format(rfc.random_state))\n\n# Print all parameters\nprint('Printing the parameters dictionary: {}'.format(rfc.get_params()))",
"_____no_output_____"
],
[
"## Random forest classifier\n\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Create a random forest classifier\nrfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)\n\n# Fit rfc using X_train and y_train\nrfc.fit(X_train, y_train)\n\n# Create predictions on X_test\npredictions = rfc.predict(X_test)\nprint(predictions[0:5])\n\n# Print model accuracy using score() and the testing data\nprint(rfc.score(X_test, y_test))",
"_____no_output_____"
],
[
"## MODULE 2\n## Validation Basics",
"_____no_output_____"
]
],
[
[
"This chapter focuses on the basics of model validation. From splitting data into training, validation, and testing datasets, to creating an understanding of the bias-variance tradeoff, we build the foundation for the techniques of K-Fold and Leave-One-Out validation practiced in chapter three.",
"_____no_output_____"
]
],
[
[
"## Create one holdout set\n\n# Create dummy variables using pandas\nX = pd.get_dummies(tic_tac_toe.iloc[:,0:9])\ny = tic_tac_toe.iloc[:, 9]\n\n# Create training and testing datasets. Use 10% for the test set\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=1111)",
"_____no_output_____"
],
[
"## Create two holdout sets\n\n# Create temporary training and final testing datasets\nX_temp, X_test, y_temp, y_test =\\\n train_test_split(X, y, test_size=.2, random_state=1111)\n\n# Create the final training and validation datasets\nX_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=.25, random_state=1111)",
"_____no_output_____"
],
[
"### Mean absolute error\n\nfrom sklearn.metrics import mean_absolute_error\n\n# Manually calculate the MAE\nn = len(predictions)\nmae_one = sum(abs(y_test - predictions)) / n\nprint('With a manual calculation, the error is {}'.format(mae_one))\n\n# Use scikit-learn to calculate the MAE\nmae_two = mean_absolute_error(y_test, predictions)\nprint('Using scikit-lean, the error is {}'.format(mae_two))\n\n# <script.py> output:\n# With a manual calculation, the error is 5.9\n# Using scikit-lean, the error is 5.9\n",
"_____no_output_____"
],
[
"### Mean squared error\n\nfrom sklearn.metrics import mean_squared_error\n\nn = len(predictions)\n# Finish the manual calculation of the MSE\nmse_one = sum(abs(y_test - predictions)**2) / n\nprint('With a manual calculation, the error is {}'.format(mse_one))\n\n# Use the scikit-learn function to calculate MSE\nmse_two = mean_squared_error(y_test, predictions)\nprint('Using scikit-lean, the error is {}'.format(mse_two))\n",
"_____no_output_____"
],
[
"### Performance on data subsets\n\n# Find the East conference teams\neast_teams = labels == \"E\"\n\n# Create arrays for the true and predicted values\ntrue_east = y_test[east_teams]\npreds_east = predictions[east_teams]\n\n# Print the accuracy metrics\nprint('The MAE for East teams is {}'.format(\n mae(true_east, preds_east)))\n\n# Print the West accuracy\nprint('The MAE for West conference is {}'.format(west_error))",
"_____no_output_____"
],
[
"### Confusion matrices\n\n# Calculate and print the accuracy\naccuracy = (324 + 491) / (953)\nprint(\"The overall accuracy is {0: 0.2f}\".format(accuracy))\n\n# Calculate and print the precision\nprecision = (491) / (491 + 15)\nprint(\"The precision is {0: 0.2f}\".format(precision))\n\n# Calculate and print the recall\nrecall = (491) / (491 + 123)\nprint(\"The recall is {0: 0.2f}\".format(recall))",
"_____no_output_____"
],
[
"### Confusion matrices, again\n\nfrom sklearn.metrics import confusion_matrix\n\n# Create predictions\ntest_predictions = rfc.predict(X_test)\n\n# Create and print the confusion matrix\ncm = confusion_matrix(y_test, test_predictions)\nprint(cm)\n\n# Print the true positives (actual 1s that were predicted 1s)\nprint(\"The number of true positives is: {}\".format(cm[1, 1]))\n\n## <script.py> output:\n## [[177 123]\n## [ 92 471]]\n## The number of true positives is: 471\n\n## Row 1, column 1 represents the number of actual 1s that were predicted 1s (the true positives). \n## Always make sure you understand the orientation of the confusion matrix before you start using it!",
"_____no_output_____"
],
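To tie the matrix layout back to the manual arithmetic in the previous exercise, the same three metrics can be read straight off the printed matrix. The sketch below hard-codes the values shown above so it runs stand-alone:

```python
import numpy as np

cm = np.array([[177, 123],
               [ 92, 471]])  # rows = actual 0/1, columns = predicted 0/1

accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum()
precision = cm[1, 1] / cm[:, 1].sum()  # TP / (TP + FP)
recall = cm[1, 1] / cm[1, :].sum()     # TP / (TP + FN)
print(accuracy, precision, recall)
```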
[
"### Precision vs. recall\n\nfrom sklearn.metrics import precision_score\n\ntest_predictions = rfc.predict(X_test)\n\n# Create precision or recall score based on the metric you imported\nscore = precision_score(y_test, test_predictions)\n\n# Print the final result\nprint(\"The precision value is {0:.2f}\".format(score))\n\n",
"_____no_output_____"
],
[
"### Error due to under/over-fitting\n\n# Update the rfr model\nrfr = RandomForestRegressor(n_estimators=25,\n random_state=1111,\n max_features=2)\nrfr.fit(X_train, y_train)\n\n# Print the training and testing accuracies \nprint('The training error is {0:.2f}'.format(\n mae(y_train, rfr.predict(X_train))))\nprint('The testing error is {0:.2f}'.format(\n mae(y_test, rfr.predict(X_test))))\n\n## <script.py> output:\n## The training error is 3.88\n## The testing error is 9.15\n\n\n# Update the rfr model\nrfr = RandomForestRegressor(n_estimators=25,\n random_state=1111,\n max_features=11)\nrfr.fit(X_train, y_train)\n\n# Print the training and testing accuracies \nprint('The training error is {0:.2f}'.format(\n mae(y_train, rfr.predict(X_train))))\nprint('The testing error is {0:.2f}'.format(\n mae(y_test, rfr.predict(X_test))))\n\n## <script.py> output:\n## The training error is 3.57\n## The testing error is 10.05\n \n \n# Update the rfr model\nrfr = RandomForestRegressor(n_estimators=25,\n random_state=1111,\n max_features=4)\nrfr.fit(X_train, y_train)\n\n# Print the training and testing accuracies \nprint('The training error is {0:.2f}'.format(\n mae(y_train, rfr.predict(X_train))))\nprint('The testing error is {0:.2f}'.format(\n mae(y_test, rfr.predict(X_test))))\n\n## <script.py> output:\n## The training error is 3.60\n## The testing error is 8.79",
"_____no_output_____"
],
[
"### Am I underfitting?\n\nfrom sklearn.metrics import accuracy_score\n\ntest_scores, train_scores = [], []\n\nfor i in [1, 2, 3, 4, 5, 10, 20, 50]:\n rfc = RandomForestClassifier(n_estimators=i, random_state=1111)\n rfc.fit(X_train, y_train)\n # Create predictions for the X_train and X_test datasets.\n train_predictions = rfc.predict(X_train)\n test_predictions = rfc.predict(X_test)\n # Append the accuracy score for the test and train predictions.\n train_scores.append(round(accuracy_score(y_train, train_predictions), 2))\n test_scores.append(round(accuracy_score(y_test, test_predictions), 2))\n \n# Print the train and test scores.\nprint(\"The training scores were: {}\".format(train_scores))\nprint(\"The testing scores were: {}\".format(test_scores))",
"_____no_output_____"
],
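A quick way to see where the model stops underfitting is to plot the two score lists against the number of trees used in the loop above. A sketch, assuming `train_scores` and `test_scores` are still in scope:

```python
import matplotlib.pyplot as plt

n_trees = [1, 2, 3, 4, 5, 10, 20, 50]  # same values as the loop above
plt.plot(n_trees, train_scores, marker="o", label="train accuracy")
plt.plot(n_trees, test_scores, marker="o", label="test accuracy")
plt.xlabel("n_estimators")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```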
[
"### MODULE 3\n### Cross Validation",
"_____no_output_____"
]
],
[
[
"Holdout sets are a great start to model validation. However, using a single train and test set if often not enough. Cross-validation is considered the gold standard when it comes to validating model performance and is almost always used when tuning model hyper-parameters. This chapter focuses on performing cross-validation to validate model performance.",
"_____no_output_____"
]
],
[
[
"### Two samples\n\n# Create two different samples of 200 observations \nsample1 = tic_tac_toe.sample(200, random_state=1111)\nsample2 = tic_tac_toe.sample(200, random_state=1171)\n\n# Print the number of common observations \nprint(len([index for index in sample1.index if index in sample2.index]))\n\n# Print the number of observations in the Class column for both samples \nprint(sample1['Class'].value_counts())\nprint(sample2['Class'].value_counts())",
"_____no_output_____"
],
[
"### scikit-learn's KFold()\n\nfrom sklearn.model_selection import KFold\n\n# Use KFold\nkf = KFold(n_splits=5, shuffle=True, random_state=1111)\n\n# Create splits\nsplits = kf.split(X)\n\n# Print the number of indices\nfor train_index, val_index in splits:\n print(\"Number of training indices: %s\" % len(train_index))\n print(\"Number of validation indices: %s\" % len(val_index))",
"_____no_output_____"
],
[
"### Using KFold indices\n\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import mean_squared_error\n\nrfc = RandomForestRegressor(n_estimators=25, random_state=1111)\n\n# Access the training and validation indices of splits\nfor train_index, val_index in splits:\n # Setup the training and validation data\n X_train, y_train = X[train_index], y[train_index]\n X_val, y_val = X[val_index], y[val_index]\n # Fit the random forest model\n rfc.fit(X_train, y_train)\n # Make predictions, and print the accuracy\n predictions = rfc.predict(X_val)\n print(\"Split accuracy: \" + str(mean_squared_error(y_val, predictions)))",
"_____no_output_____"
],
[
"### scikit-learn's methods\n\n# Instruction 1: Load the cross-validation method\nfrom sklearn.model_selection import cross_val_score\n\n# Instruction 2: Load the random forest regression model\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Instruction 3: Load the mean squared error method\n# Instruction 4: Load the function for creating a scorer\nfrom sklearn.metrics import mean_squared_error, make_scorer\n\n## It is easy to see how all of the methods can get mixed up, but \n## it is important to know the names of the methods you need. \n## You can always review the scikit-learn documentation should you need any help",
"_____no_output_____"
],
[
"### Implement cross_val_score()\n\nrfc = RandomForestRegressor(n_estimators=25, random_state=1111)\nmse = make_scorer(mean_squared_error)\n\n# Set up cross_val_score\ncv = cross_val_score(estimator=rfc,\n X=X_train,\n y=y_train,\n cv=10,\n scoring=mse)\n\n# Print the mean error\nprint(cv.mean())",
"_____no_output_____"
],
[
"### Leave-one-out-cross-validation\n\nfrom sklearn.metrics import mean_absolute_error, make_scorer\n\n# Create scorer\nmae_scorer = make_scorer(mean_absolute_error)\n\nrfr = RandomForestRegressor(n_estimators=15, random_state=1111)\n\n# Implement LOOCV\nscores = cross_val_score(estimator=rfr, X=X, y=y, cv=85, scoring=mae_scorer)\n\n# Print the mean and standard deviation\nprint(\"The mean of the errors is: %s.\" % np.mean(scores))\nprint(\"The standard deviation of the errors is: %s.\" % np.std(scores))",
"_____no_output_____"
],
[
"### MODULE 4\n### Selecting the best model with Hyperparameter tuning.",
"_____no_output_____"
]
],
[
[
"The first three chapters focused on model validation techniques. In chapter 4 we apply these techniques, specifically cross-validation, while learning about hyperparameter tuning. After all, model validation makes tuning possible and helps us select the overall best model.",
"_____no_output_____"
]
],
[
[
"### Creating Hyperparameters\n\n# Review the parameters of rfr\nprint(rfr.get_params())\n\n# Maximum Depth\nmax_depth = [4, 8, 12]\n\n# Minimum samples for a split\nmin_samples_split = [2, 5, 10]\n\n# Max features \nmax_features = [4, 6, 8, 10]",
"_____no_output_____"
],
[
"### Running a model using ranges\n\nfrom sklearn.ensemble import RandomForestRegressor\n\n# Fill in rfr using your variables\nrfr = RandomForestRegressor(\n n_estimators=100,\n max_depth=random.choice(max_depth),\n min_samples_split=random.choice(min_samples_split),\n max_features=random.choice(max_features))\n\n# Print out the parameters\nprint(rfr.get_params())",
"_____no_output_____"
],
[
"### Preparing for RandomizedSearch\n\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import make_scorer, mean_squared_error\n\n# Finish the dictionary by adding the max_depth parameter\nparam_dist = {\"max_depth\": [2, 4, 6, 8],\n \"max_features\": [2, 4, 6, 8, 10],\n \"min_samples_split\": [2, 4, 8, 16]}\n\n# Create a random forest regression model\nrfr = RandomForestRegressor(n_estimators=10, random_state=1111)\n\n# Create a scorer to use (use the mean squared error)\nscorer = make_scorer(mean_squared_error)",
"_____no_output_____"
],
[
"\n\n# Import the method for random search\nfrom sklearn.model_selection import RandomizedSearchCV\n\n# Build a random search using param_dist, rfr, and scorer\nrandom_search =\\\n RandomizedSearchCV(\n estimator=rfr,\n param_distributions=param_dist,\n n_iter=10,\n cv=5,\n scoring=scorer)",
"_____no_output_____"
],
[
"### Selecting the best precision model\n\nfrom sklearn.metrics import precision_score, make_scorer\n\n# Create a precision scorer\nprecision = make_scorer(precision_score)\n# Finalize the random search\nrs = RandomizedSearchCV(\n estimator=rfc, param_distributions=param_dist,\n scoring = precision,\n cv=5, n_iter=10, random_state=1111)\nrs.fit(X, y)\n\n# print the mean test scores:\nprint('The accuracy for each run was: {}.'.format(rs.cv_results_['mean_test_score']))\n# print the best model score:\nprint('The best accuracy for a single model was: {}'.format(rs.best_score_))",
"_____no_output_____"
]
]
] | [
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code",
"raw",
"code"
] | [
[
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0973a5c1f212abb61d4eb07e5915283a30d729d | 15,089 | ipynb | Jupyter Notebook | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 | 9e2ccdd51f3c1335ed732cca5cc5fb7daea66139 | [
"Apache-2.0"
] | 2 | 2020-09-16T06:05:50.000Z | 2021-04-07T12:05:20.000Z | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | jayeshk7/monk_v1 | 9e2ccdd51f3c1335ed732cca5cc5fb7daea66139 | [
"Apache-2.0"
] | null | null | null | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | jayeshk7/monk_v1 | 9e2ccdd51f3c1335ed732cca5cc5fb7daea66139 | [
"Apache-2.0"
] | null | null | null | 23.430124 | 413 | 0.493737 | [
[
[
"# Goals\n\n\n### Learn how to change train validation splits",
"_____no_output_____"
],
[
"# Table of Contents\n\n\n## [0. Install](#0)\n\n\n## [1. Load experiment with defaut transforms](#1)\n\n\n## [2. Reset Transforms andapply new transforms](#2)",
"_____no_output_____"
],
[
"<a id='0'></a>\n# Install Monk\n \n - git clone https://github.com/Tessellate-Imaging/monk_v1.git\n \n - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt\n - (Select the requirements file as per OS and CUDA version)",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/Tessellate-Imaging/monk_v1.git",
"Cloning into 'monk_v1'...\nremote: Enumerating objects: 53, done.\u001b[K\nremote: Counting objects: 100% (53/53), done.\u001b[K\nremote: Compressing objects: 100% (53/53), done.\u001b[K\nremote: Total 2457 (delta 27), reused 0 (delta 0), pack-reused 2404\u001b[K\nReceiving objects: 100% (2457/2457), 78.20 MiB | 4.45 MiB/s, done.\nResolving deltas: 100% (1362/1362), done.\n"
],
[
"# Select the requirements file as per OS and CUDA version\n!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt",
"_____no_output_____"
]
],
[
[
"## Dataset - Broad Leaved Dock Image Classification\n - https://www.kaggle.com/gavinarmstrong/open-sprayer-images",
"_____no_output_____"
]
],
[
[
"! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1uL-VV4nV_u0kry3gLH1TATUTu8hWJ0_d' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=1uL-VV4nV_u0kry3gLH1TATUTu8hWJ0_d\" -O open_sprayer_images.zip && rm -rf /tmp/cookies.txt",
"_____no_output_____"
],
[
"! unzip -qq open_sprayer_images.zip",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"# Monk\nimport os\nimport sys\nsys.path.append(\"monk_v1/monk/\");",
"_____no_output_____"
],
[
"#Using mxnet-gluon backend \nfrom gluon_prototype import prototype",
"_____no_output_____"
]
],
[
[
"<a id='1'></a>\n# Load experiment with default transforms",
"_____no_output_____"
]
],
[
[
"gtf = prototype(verbose=1);\ngtf.Prototype(\"project\", \"understand_transforms\");",
"Mxnet Version: 1.5.0\n\nExperiment Details\n Project: project\n Experiment: understand_transforms\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.3_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/workspace/project/understand_transforms/\n\n"
],
[
"gtf.Default(dataset_path=\"open_sprayer_images/train\",\n model_name=\"resnet18_v1\", \n freeze_base_network=True,\n num_epochs=5);\n\n#Read the summary generated once you run this cell. ",
"Dataset Details\n Train path: open_sprayer_images/train\n Val path: None\n CSV train path: None\n CSV val path: None\n\nDataset Params\n Input Size: 224\n Batch Size: 4\n Data Shuffle: True\n Processors: 4\n Train-val split: 0.7\n\nPre-Composed Train Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num train images: 4218\n Num val images: 1809\n Num classes: 2\n\nModel Params\n Model name: resnet18_v1\n Use Gpu: True\n Use pretrained: True\n Freeze base network: True\n\nModel Details\n Loading pretrained model\n Model Loaded on device\n Model name: resnet18_v1\n Num of potentially trainable layers: 41\n Num of actual trainable layers: 1\n\nOptimizer\n Name: sgd\n Learning rate: 0.01\n Params: {'lr': 0.01, 'momentum': 0, 'weight_decay': 0, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}\n\n\n\nLearning rate scheduler\n Name: steplr\n Params: {'step_size': 1, 'gamma': 0.98, 'last_epoch': -1}\n\nLoss\n Name: softmaxcrossentropy\n Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}\n\nTraining params\n Num Epochs: 5\n\nDisplay params\n Display progress: True\n Display progress realtime: True\n Save Training logs: True\n Save Intermediate models: True\n Intermediate model prefix: intermediate_model_\n\n"
]
],
[
[
"## Default Transforms are\n \n Train Transforms\n \n {'RandomHorizontalFlip': {'p': 0.8}}, \n {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\n\n\n Val Transforms\n\n {'RandomHorizontalFlip': {'p': 0.8}}, \n {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}\n \n \n In that order \n",
"_____no_output_____"
],
[
"<a id='2'></a>\n# Reset transforms",
"_____no_output_____"
]
],
[
[
"# Reset train and validation transforms\ngtf.reset_transforms();\n\n\n# Reset test transforms\ngtf.reset_transforms(test=True);",
"_____no_output_____"
]
],
[
[
"## Apply new transforms",
"_____no_output_____"
]
],
[
[
"gtf.List_Transforms();",
"Transforms List: \n 1. apply_random_resized_crop\n 2. apply_center_crop\n 3. apply_color_jitter\n 4. apply_random_horizontal_flip\n 5. apply_random_vertical_flip\n 6. apply_random_lighting\n 7. apply_resize\n 8. apply_normalize\n\n"
],
[
"# Transform applied to only train and val\ngtf.apply_center_crop(224, \n train=True,\n val=True,\n test=False)",
"_____no_output_____"
],
[
"# Transform applied to all train, val and test\ngtf.apply_normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225],\n train=True,\n val=True,\n test=True\n )",
"_____no_output_____"
],
[
"# Very important to reload post update\ngtf.Reload();",
"Pre-Composed Train Transforms\n[{'CenterCrop': {'input_size': 224}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'CenterCrop': {'input_size': 224}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num train images: 4218\n Num val images: 1809\n Num classes: 2\n\nModel Details\n Loading pretrained model\n Model Loaded on device\n Model name: resnet18_v1\n Num of potentially trainable layers: 41\n Num of actual trainable layers: 1\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d09741e79ea815ec705433752bb309fbe4375dee | 110,604 | ipynb | Jupyter Notebook | Validation/Untitled1.ipynb | olawaleibrahim/2020_FORCE_Lithology_Prediction | b55438d16eec36db04aca67294fbdaff8d27523a | [
"Apache-2.0"
] | 22 | 2020-10-28T06:28:34.000Z | 2022-03-03T10:28:38.000Z | Untitled1.ipynb | Dayo-Olalere/2020_FORCE_Lithology_Prediction | 5e1d39f5279ad52a3f12cf7be0e7d38033d91b5e | [
"Apache-2.0"
] | null | null | null | Untitled1.ipynb | Dayo-Olalere/2020_FORCE_Lithology_Prediction | 5e1d39f5279ad52a3f12cf7be0e7d38033d91b5e | [
"Apache-2.0"
] | 12 | 2020-10-30T21:58:38.000Z | 2021-09-28T09:17:07.000Z | 41.300971 | 8,372 | 0.469169 | [
[
[
"import pandas as pd\nimport numpy as np\nimport numpy.random as nr\nimport matplotlib.pyplot as plt\nimport matplotlib.pyplot as plt\nimport sklearn\nfrom sklearn.ensemble import RandomForestClassifier\nimport catboost as cat\nfrom catboost import CatBoostClassifier\nfrom sklearn import preprocessing\nimport sklearn.model_selection as ms\nfrom sklearn.model_selection import GridSearchCV, KFold, StratifiedKFold\nfrom sklearn.metrics import log_loss, confusion_matrix, accuracy_score\nimport xgboost as xgb\nimport lightgbm as lgb",
"_____no_output_____"
],
[
"def fill_missing_values(data):\n \n '''\n Function to input missing values based on the column object type\n '''\n \n cols = list(data.columns)\n for col in cols:\n if data[col].dtype == 'int64' or data[col].dtype == 'float64':\n \n data[col] = data[col].fillna(data[col].mean())\n \n #elif data[col].dtype == 'O' or data[col].dtype == 'object':\n # data[col] = data[col].fillna(data[col].mode()[0])\n \n else:\n data[col] = data[col].fillna(data[col].mode()[0])\n \n return data\n \ndef one_hot_encoding(traindata, *args):\n \n for ii in args:\n traindata = pd.get_dummies(traindata, prefix=[ii], columns=[ii])\n \n return traindata\n \ndef drop_columns(traindata, *args):\n \n #labels = np.array(traindata[target])\n \n columns = []\n for _ in args:\n columns.append(_)\n \n traindata = traindata.drop(columns, axis=1)\n #traindata = traindata.drop(target, axis=1)\n #testdata = testdata.drop(columns, axis=1)\n \n return traindata\n \ndef process(traindata):\n \n cols = list(traindata.columns)\n for _ in cols:\n traindata[_] = np.where(traindata[_] == np.inf, -999, traindata[_])\n traindata[_] = np.where(traindata[_] == np.nan, -999, traindata[_])\n traindata[_] = np.where(traindata[_] == -np.inf, -999, traindata[_])\n \n return traindata\n \ndef show_evaluation(pred, true):\n print(f'Default score: {score(true.values, pred)}')\n print(f'Accuracy is: {accuracy_score(true, pred)}')\n print(f'F1 is: {f1_score(pred, true.values, average=\"weighted\")}')\n \ndef freq_encode(data, cols):\n for i in cols:\n encoding = data.groupby(i).size()\n encoding = encoding/len(data)\n data[i + '_enc'] = data[i].map(encoding)\n return data\n \n \ndef mean_target(data, cols):\n kf = KFold(5)\n a = pd.DataFrame()\n for tr_ind, val_ind in kf.split(data):\n X_tr, X_val= data.iloc[tr_ind].copy(), data.iloc[val_ind].copy()\n for col in cols:\n means = X_val[col].map(X_tr.groupby(col).FORCE_2020_LITHOFACIES_LITHOLOGY.mean())\n X_val[col + '_mean_target'] = means + 0.0001\n a = pd.concat((a, X_val))\n #prior = FORCE_2020_LITHOFACIES_LITHOLOGY.mean()\n #a.fillna(prior, inplace=True)\n return a\n \ndef make_submission(prediction, filename):\n \n path = './'\n \n test = pd.read_csv('./Test.csv', sep=';')\n #test_prediction = model.predict(testdata)\n \n #test_prediction\n category_to_lithology = {y:x for x,y in lithology_numbers.items()}\n test_prediction_for_submission = np.vectorize(category_to_lithology.get)(prediction)\n np.savetxt(path+filename+'.csv', test_prediction_for_submission, header='lithology', fmt='%i')",
"_____no_output_____"
],
[
"A = np.load('penalty_matrix.npy')\n\ndef score(y_true, y_pred):\n S = 0.0\n y_true = y_true.astype(int)\n y_pred = y_pred.astype(int)\n for i in range(0, y_true.shape[0]):\n S -= A[y_true[i], y_pred[i]]\n return S/y_true.shape[0]\n\ndef evaluate(model, prediction, true_label):\n feat_imp = pd.Series(model.feature_importances_).sort_values(ascending=False)\n plt.figure(figsize=(12,8))\n feat_imp.plot(kind='bar', title=f'Feature Importances {len(model.feature_importances_)}')\n plt.ylabel('Feature Importance Score')",
"_____no_output_____"
],
[
"#importing files\ntrain = pd.read_csv('Train.csv', sep=';')\ntest = pd.read_csv('Test.csv', sep=';')\n\nntrain = train.shape[0]\nntest = test.shape[0]\ntarget = train.FORCE_2020_LITHOFACIES_LITHOLOGY.copy()\ndf = pd.concat((train, test)).reset_index(drop=True)",
"_____no_output_____"
],
[
"plt.scatter(train.X_LOC, train.Y_LOC)",
"_____no_output_____"
],
[
"plt.scatter(test.X_LOC, test.Y_LOC)",
"_____no_output_____"
],
[
"test.describe()",
"_____no_output_____"
],
[
"train.describe()",
"_____no_output_____"
],
[
"train.WELL.value_counts()",
"_____no_output_____"
],
[
"test.WELL.value_counts()",
"_____no_output_____"
],
[
"#importing files\ntrain = pd.read_csv('Train.csv', sep=';')\ntest = pd.read_csv('Test.csv', sep=';')\n\nntrain = train.shape[0]\nntest = test.shape[0]\ntarget = train.FORCE_2020_LITHOFACIES_LITHOLOGY.copy()\ndf = pd.concat((train, test)).reset_index(drop=True)",
"_____no_output_____"
],
[
"lithology = train['FORCE_2020_LITHOFACIES_LITHOLOGY']\n\nlithology_numbers = {30000: 0,\n 65030: 1,\n 65000: 2,\n 80000: 3,\n 74000: 4,\n 70000: 5,\n 70032: 6,\n 88000: 7,\n 86000: 8,\n 99000: 9,\n 90000: 10,\n 93000: 11}\n\nlithology = lithology.map(lithology_numbers)",
"_____no_output_____"
],
[
"np.array(lithology)",
"_____no_output_____"
],
[
"test.describe()",
"_____no_output_____"
],
[
"train.describe()",
"_____no_output_____"
],
[
"(train.isna().sum()/train.shape[0]) * 100",
"_____no_output_____"
],
[
"(df.isna().sum()/df.shape[0]) * 100",
"_____no_output_____"
],
[
"(df.WELL.value_counts()/df.WELL.shape[0]) * 100",
"_____no_output_____"
],
[
"print(df.shape)\ncols = ['FORCE_2020_LITHOFACIES_CONFIDENCE', 'SGR', \n 'DTS', 'DCAL', 'MUDWEIGHT', 'RMIC', 'ROPA', 'RXO']\ndf = drop_columns(df, *cols)\nprint(df.shape)",
"(1307297, 29)\n(1307297, 21)\n"
],
[
"train.FORMATION.value_counts()",
"_____no_output_____"
],
[
"train.WELL.value_counts()",
"_____no_output_____"
],
[
"one_hot_cols = ['GROUP']\n\ndf = one_hot_encoding(df, *one_hot_cols)\nprint(df.shape)",
"(1307297, 34)\n"
],
[
"df = freq_encode(df, ['FORMATION','WELL'])\ndf = df.copy()",
"_____no_output_____"
],
[
"print(df.shape)\n#df.isna().sum()",
"(1307297, 36)\n"
],
[
"df = mean_target(df, ['FORMATION', 'WELL'])\ndf.shape",
"_____no_output_____"
],
[
"df = df.drop(['FORMATION', 'WELL'], axis=1)\ndf.shape",
"_____no_output_____"
],
[
"df = df.fillna(-999)\ndata = df.copy()\n\ntrain2 = data[:ntrain].copy()\ntarget = train2.FORCE_2020_LITHOFACIES_LITHOLOGY.copy()\ntrain2.drop(['FORCE_2020_LITHOFACIES_LITHOLOGY'], axis=1, inplace=True)\n\ntest2 = data[ntrain:].copy()\ntest2.drop(['FORCE_2020_LITHOFACIES_LITHOLOGY'], axis=1, inplace=True)\ntest2 = test2.reset_index(drop=True)",
"_____no_output_____"
],
[
"train2.shape, train.shape, test.shape, test2.shape",
"_____no_output_____"
],
[
"traindata = train2\ntestdata = test2",
"_____no_output_____"
],
[
"#using StandardScaler function to scale the numeric features \n\nscaler = preprocessing.StandardScaler().fit(traindata)\ntraindata = pd.DataFrame(scaler.transform(traindata))\ntraindata.head()",
"_____no_output_____"
],
[
"testdata = pd.DataFrame(scaler.transform(testdata))\ntestdata.head()",
"_____no_output_____"
],
[
"class Model():\n \n def __init__(self, train, test, label):\n \n \n self.train = train\n self.test = test\n self.label = label\n \n def __call__(self, plot = True):\n return self.fit(plot)\n \n def fit(self, plot):\n \n #SPLIT ONE\n\n self.x_train, self.x_test, self.y_train, self.y_test = ms.train_test_split(self.train, \n pd.DataFrame(np.array(self.label)), \n test_size=0.25,\n random_state=42)\n\n #SPLIT TWO\n\n self.x_test1, self.x_test2, self.y_test1, self.y_test2 = ms.train_test_split(self.x_test,\n self.y_test,\n test_size=0.5,\n random_state=42)\n\n lgbm = CatBoostClassifier(n_estimators=15, max_depth=6, \n random_state=42, learning_rate=0.033,\n use_best_model=True, task_type='CPU',\n eval_metric='MultiClass')\n\n def show_evaluation(pred, true):\n\n print(f'Default score: {score(true.values, pred)}')\n print(f'Accuracy is: {accuracy_score(true, pred)}')\n print(f'F1 is: {f1_score(pred, true.values, average=\"weighted\")}')\n\n split = 3\n kf = StratifiedKFold(n_splits=split, shuffle=False)\n\n #TEST DATA\n pred_test = np.zeros((len(self.x_test1), 12))\n pred_val = np.zeros((len(self.x_test2), 12))\n pred_val = np.zeros((len(self.test), 12))\n \n for (train_index, test_index) in kf.split(pd.DataFrame(self.x_train), pd.DataFrame(self.y_train)):\n X_train,X_test = pd.DataFrame(self.x_train).iloc[train_index], pd.DataFrame(self.x_train).iloc[test_index]\n y_train,y_test = pd.DataFrame(self.y_train).iloc[train_index],pd.DataFrame(self.y_train).iloc[test_index]\n lgbm.fit(X_train, y_train, early_stopping_rounds=2, eval_set=[(X_test,y_test)])\n #scores.append(metric(lgbm.predict_proba(X_test),y_test))\n pred_test+=lgbm.predict_proba(self.x_test1)\n pred_val+=lgbm.predict_proba(self.x_test2)\n open_test_pred+=lgbm.predict_proba(self.test)\n\n\n pred_test_avg = pred_test/split\n pred_val_avg = pred_test/split\n\n print('----------------TEST EVALUATION------------------')\n show_evaluation(pred_test_avg, self.y_test1)\n\n print('----------------HOLD OUT EVALUATION------------------')\n show_evaluation(pred_val_avg, self.y_test2)\n \n if plot: self.plot_feat_imp(model)\n return open_test_pred, lgbm\n \n \n def plot_feat_imp(self, model):\n feat_imp = pd.Series(model.get_fscore()).sort_values(ascending=False)\n plt.figure(figsize=(12,8))\n feat_imp.plot(kind='bar', title='Feature Importances')\n plt.ylabel('Feature Importance Score')",
"_____no_output_____"
],
[
"func_= Model(traindata, testdata, lithology)\nval_p2, test_p2, model2 = func_()",
"_____no_output_____"
],
[
"pd.DataFrame(lithology)",
"_____no_output_____"
],
[
"i, j = Model(df, test, lithology)",
"_____no_output_____"
],
[
"params = {'n_estimators': 3000,\n 'max_depth': 6,\n 'learning_rate': 0.033,\n 'verbose': 2}\na = Model(train, test, 'FORCE_2020_LITHOFACIES_LITHOLOGY', 0.3, params)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d09745cd35142f81a959979d4e5374c43663015d | 7,092 | ipynb | Jupyter Notebook | 1/Activities/04-Stu_FarFarAway/Solved/.ipynb_checkpoints/faraway-arm-checkpoint.ipynb | arinmuk/python_apis | 9cbf74d9c02f2437c0240b4dab0248d259d4a96f | [
"ADSL"
] | null | null | null | 1/Activities/04-Stu_FarFarAway/Solved/.ipynb_checkpoints/faraway-arm-checkpoint.ipynb | arinmuk/python_apis | 9cbf74d9c02f2437c0240b4dab0248d259d4a96f | [
"ADSL"
] | null | null | null | 1/Activities/04-Stu_FarFarAway/Solved/.ipynb_checkpoints/faraway-arm-checkpoint.ipynb | arinmuk/python_apis | 9cbf74d9c02f2437c0240b4dab0248d259d4a96f | [
"ADSL"
] | null | null | null | 22.803859 | 96 | 0.482798 | [
[
[
"# Dependencies\nimport requests\nimport json",
"_____no_output_____"
],
[
"# URL for GET requests to retrieve Star Wars character data\nbase_url = \"https://swapi.co/api/people/\"",
"_____no_output_____"
],
[
"# Create a url with a specific character id\ncharacter_id = '4'\nurl = base_url + character_id\nprint(url)",
"https://swapi.co/api/people/4\n"
],
[
"# Perform a get request for this character\nresponse = requests.get(url)\nprint(response.url)",
"https://swapi.co/api/people/4/\n"
],
[
"# Storing the JSON response within a variable\ndata = response.json()\nprint(json.dumps(data, indent=4, sort_keys=True))",
"{\n \"birth_year\": \"41.9BBY\",\n \"created\": \"2014-12-10T15:18:20.704000Z\",\n \"edited\": \"2014-12-20T21:17:50.313000Z\",\n \"eye_color\": \"yellow\",\n \"films\": [\n \"https://swapi.co/api/films/2/\",\n \"https://swapi.co/api/films/6/\",\n \"https://swapi.co/api/films/3/\",\n \"https://swapi.co/api/films/1/\"\n ],\n \"gender\": \"male\",\n \"hair_color\": \"none\",\n \"height\": \"202\",\n \"homeworld\": \"https://swapi.co/api/planets/1/\",\n \"mass\": \"136\",\n \"name\": \"Darth Vader\",\n \"skin_color\": \"white\",\n \"species\": [\n \"https://swapi.co/api/species/1/\"\n ],\n \"starships\": [\n \"https://swapi.co/api/starships/13/\"\n ],\n \"url\": \"https://swapi.co/api/people/4/\",\n \"vehicles\": []\n}\n"
],
[
"# Collecting the name of the character collected\ncharacter_name = data[\"name\"]",
"_____no_output_____"
],
[
"# Counting how many films the character was in\nfilm_number = len(data[\"films\"])",
"_____no_output_____"
],
[
"# Figure out what their first starship was\nfirst_ship_url = data[\"starships\"][0]\nship_response = requests.get(first_ship_url).json()\nship_response",
"_____no_output_____"
],
[
"first_ship = ship_response[\"name\"]",
"_____no_output_____"
],
[
"# Print character name and how many films they were in\nprint(f\"{character_name} was in {film_number} films\")",
"Darth Vader was in 4 films\n"
],
[
"# Print what their first ship was\nprint(f\"Their first ship: {first_ship}\")",
"Their first ship: TIE Advanced x1\n"
],
[
"# BONUS\nfilms = []\n\nfor film in data['films']:\n cur_film = requests.get(film).json()\n film_title = cur_film[\"title\"]\n films.append(film_title)\n \nprint(f\"{character_name} was in:\")\nprint(films)",
"Darth Vader was in:\n['The Empire Strikes Back', 'Revenge of the Sith', 'Return of the Jedi', 'A New Hope']\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0974da89babf5b8d1d959fb11c0896661163d7f | 10,273 | ipynb | Jupyter Notebook | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines | 2accf4180a10e5513bf3f070051644d0119d78ff | [
"Apache-2.0"
] | 9 | 2019-03-28T02:20:45.000Z | 2021-12-01T22:43:36.000Z | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines | 2accf4180a10e5513bf3f070051644d0119d78ff | [
"Apache-2.0"
] | 1 | 2019-04-11T12:13:57.000Z | 2019-04-11T12:13:57.000Z | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines | 2accf4180a10e5513bf3f070051644d0119d78ff | [
"Apache-2.0"
] | 4 | 2019-04-11T12:09:59.000Z | 2020-10-11T15:53:53.000Z | 30.48368 | 284 | 0.602842 | [
[
[
"# Dataproc - Submit Hadoop Job\n\n## Intended Use\nA Kubeflow Pipeline component to submit a Apache Hadoop MapReduce job on Apache Hadoop YARN in Google Cloud Dataproc service. \n\n## Run-Time Parameters:\nName | Description\n:--- | :----------\nproject_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to.\nregion | Required. The Cloud Dataproc region in which to handle the request.\ncluster_name | Required. The cluster to run the job.\nmain_jar_file_uri | The HCFS URI of the jar file containing the main class. Examples: `gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar` `hdfs:/tmp/test-samples/custom-wordcount.jar` `file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar`\nmain_class | The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris. \nargs | Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.\nhadoop_job | Optional. The full payload of a [HadoopJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/HadoopJob).\njob | Optional. The full payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs).\nwait_interval | Optional. The wait seconds between polling the operation. Defaults to 30s.\n\n## Output:\nName | Description\n:--- | :----------\njob_id | The ID of the created job.",
"_____no_output_____"
],
[
"## Sample\n\nNote: the sample code below works in both IPython notebook or python code directly.\n\n### Setup a Dataproc cluster\nFollow the [guide](https://cloud.google.com/dataproc/docs/guides/create-cluster) to create a new Dataproc cluster or reuse an existing one.\n\n### Prepare Hadoop job\nUpload your Hadoop jar file to a Google Cloud Storage (GCS) bucket. In the sample, we will use a jar file that is pre-installed in the main cluster, so there is no need to provide the `main_jar_file_uri`. We only set `main_class` to be `org.apache.hadoop.examples.WordCount`.\n\nHere is the [source code of example](https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordCount.java).\n\nTo package a self-contained Hadoop MapReduct application from source code, follow the [instructions](https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html).",
"_____no_output_____"
],
[
"### Set sample parameters",
"_____no_output_____"
]
],
[
[
"PROJECT_ID = '<Please put your project ID here>'\nCLUSTER_NAME = '<Please put your existing cluster name here>'\nOUTPUT_GCS_PATH = '<Please put your output GCS path here>'\nREGION = 'us-central1'\nMAIN_CLASS = 'org.apache.hadoop.examples.WordCount'\nINTPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'\nEXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'\nCOMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/7622e57666c17088c94282ccbe26d6a52768c226/components/gcp/dataproc/submit_hadoop_job/component.yaml'",
"_____no_output_____"
]
],
[
[
"### Insepct Input Data\nThe input file is a simple text file:",
"_____no_output_____"
]
],
[
[
"!gsutil cat $INTPUT_GCS_PATH",
"With which he yoketh your rebellious necks Razeth your cities and subverts your towns And in a moment makes them desolate\r\n"
]
],
[
[
"### Clean up existing output files (Optional)\nThis is needed because the sample code requires the output folder to be a clean folder.\nTo continue to run the sample, make sure that the service account of the notebook server has access to the `OUTPUT_GCS_PATH`.\n\n**CAUTION**: This will remove all blob files under `OUTPUT_GCS_PATH`.",
"_____no_output_____"
]
],
[
[
"!gsutil rm $OUTPUT_GCS_PATH/**",
"CommandException: No URLs matched: gs://hongyes-ml-tests/dataproc/hadoop/output/**\r\n"
]
],
[
[
"### Install KFP SDK\nInstall the SDK (Uncomment the code if the SDK is not installed before)",
"_____no_output_____"
]
],
[
[
"# KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz'\n# !pip3 install $KFP_PACKAGE --upgrade",
"_____no_output_____"
]
],
[
[
"### Load component definitions",
"_____no_output_____"
]
],
[
[
"import kfp.components as comp\n\ndataproc_submit_hadoop_job_op = comp.load_component_from_url(COMPONENT_SPEC_URI)\ndisplay(dataproc_submit_hadoop_job_op)",
"_____no_output_____"
]
],
[
[
"### Here is an illustrative pipeline that uses the component",
"_____no_output_____"
]
],
[
[
"import kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport json\[email protected](\n name='Dataproc submit Hadoop job pipeline',\n description='Dataproc submit Hadoop job pipeline'\n)\ndef dataproc_submit_hadoop_job_pipeline(\n project_id = PROJECT_ID, \n region = REGION,\n cluster_name = CLUSTER_NAME,\n main_jar_file_uri = '',\n main_class = MAIN_CLASS,\n args = json.dumps([\n INTPUT_GCS_PATH,\n OUTPUT_GCS_PATH\n ]), \n hadoop_job='', \n job='{}', \n wait_interval='30'\n):\n dataproc_submit_hadoop_job_op(project_id, region, cluster_name, main_jar_file_uri, main_class,\n args, hadoop_job, job, wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))\n ",
"_____no_output_____"
]
],
[
[
"### Compile the pipeline",
"_____no_output_____"
]
],
[
[
"pipeline_func = dataproc_submit_hadoop_job_pipeline\npipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'\nimport kfp.compiler as compiler\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)",
"_____no_output_____"
]
],
[
[
"### Submit the pipeline for execution",
"_____no_output_____"
]
],
[
[
"#Specify pipeline argument values\narguments = {}\n\n#Get or create an experiment and submit a pipeline run\nimport kfp\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)\n\n#Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)",
"_____no_output_____"
]
],
[
[
"### Inspect the outputs\n\nThe sample in the notebook will count the words in the input text and output them in sharded files. Here is the command to inspect them:",
"_____no_output_____"
]
],
[
[
"!gsutil cat $OUTPUT_GCS_PATH/*",
"AccessDeniedException: 403 \r\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0975ff3d1ab8299105c77f452da9a9adb0863b7 | 3,762 | ipynb | Jupyter Notebook | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 | 3b768baf8d5414907dbaa1792f95a3a13da85157 | [
"MIT"
] | 1 | 2018-03-29T13:30:35.000Z | 2018-03-29T13:30:35.000Z | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 | 3b768baf8d5414907dbaa1792f95a3a13da85157 | [
"MIT"
] | null | null | null | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 | 3b768baf8d5414907dbaa1792f95a3a13da85157 | [
"MIT"
] | 1 | 2018-03-29T14:08:24.000Z | 2018-03-29T14:08:24.000Z | 28.285714 | 420 | 0.606326 | [
[
[
"- The data used for the calculations is soil water balance data from CGIAR-CSI available at http://www.cgiar-csi.org/data/global-high-resolution-soil-water-balance.\n\n- Data on irrigation are from the FAO's AQUASTAT information system available at http://www.fao.org/nr/water/aquastat/irrigationmap/index10.stm.\n\n- Soil data is from soilgrids.org and can be downloaded from their ftp server: ftp://ftp.soilgrids.org/data/recent/.\nThe following code also does that (since the files are huge, the execution might take some time). The layers are used to calculate the topsoil average clay content, which is resampled from 250m to 1km resolution. Alternatively, you can download the averaged and resampled layer from the ETH research collection (https://doi.org/10.3929/ethz-b-000253177) and store it in the \"output/soilgrids_prepared\"-folder.\n\n- Data on crop-area and potato-area has been downloaded from http://www.earthstat.org/data-download/. The datasets “Cropland and Pasture Area in 2000” and “Harvested Area and Yield for 175 Crops” are used.\n\n- Data on potential potato-area has been downloaded from http://gaez.fao.org/Main.html. The dataset chosen was “Crop suitability index (class) for high input level rain-fed white potato”, Future period 2020s, MPI ECHAM4 B2, Without CO2 fertilization (res03ehb22020hsihr0wpo_package.zip)\n",
"_____no_output_____"
]
],
[
[
"import os\nfrom ftplib import FTP",
"_____no_output_____"
]
],
[
[
"Specify your data directory to where you want to download the files:",
"_____no_output_____"
]
],
[
[
"data_dir = os.path.join('..', 'data/soilgrids')",
"_____no_output_____"
]
],
[
[
"Download files:",
"_____no_output_____"
]
],
[
[
"# connect to data folder on ftp-server:\nftp = FTP('ftp.soilgrids.org')\nftp.login()\nftp.cwd('data/recent')",
"_____no_output_____"
],
[
"# select download directory:\nos.chdir(data_dir)",
"_____no_output_____"
],
[
"# download the given files:\nfiles = ['CLYPPT_M_sl1_250m.tif', 'CLYPPT_M_sl2_250m.tif', 'CLYPPT_M_sl3_250m.tif',\n 'CLYPPT_M_sl4_250m.tif']\nfor filename in files:\n file = open(filename, 'wb')\n ftp.retrbinary('RETR ' + filename, file.write)\n file.close()\nftp.quit()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d09762f0ca6d77adcc077205cb2c4d527b99a2bf | 711,699 | ipynb | Jupyter Notebook | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai | 7c10432f38c80e86978cd075d0024902b47842a0 | [
"MIT"
] | null | null | null | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai | 7c10432f38c80e86978cd075d0024902b47842a0 | [
"MIT"
] | null | null | null | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai | 7c10432f38c80e86978cd075d0024902b47842a0 | [
"MIT"
] | null | null | null | 65.323451 | 812 | 0.527151 | [
[
[
"# Random Variables\n:label:`sec_random_variables`\n\nIn :numref:`sec_prob` we saw the basics of how to work with discrete random variables, which in our case refer to those random variables which take either a finite set of possible values, or the integers. In this section, we develop the theory of *continuous random variables*, which are random variables which can take on any real value.\n\n## Continuous Random Variables\n\nContinuous random variables are a significantly more subtle topic than discrete random variables. A fair analogy to make is that the technical jump is comparable to the jump between adding lists of numbers and integrating functions. As such, we will need to take some time to develop the theory.\n\n### From Discrete to Continuous\n\nTo understand the additional technical challenges encountered when working with continuous random variables, let us perform a thought experiment. Suppose that we are throwing a dart at the dart board, and we want to know the probability that it hits exactly $2 \\text{cm}$ from the center of the board.\n\nTo start with, we imagine measuring a single digit of accuracy, that is to say with bins for $0 \\text{cm}$, $1 \\text{cm}$, $2 \\text{cm}$, and so on. We throw say $100$ darts at the dart board, and if $20$ of them fall into the bin for $2\\text{cm}$ we conclude that $20\\%$ of the darts we throw hit the board $2 \\text{cm}$ away from the center.\n\nHowever, when we look closer, this does not match our question! We wanted exact equality, whereas these bins hold all that fell between say $1.5\\text{cm}$ and $2.5\\text{cm}$.\n\nUndeterred, we continue further. We measure even more precisely, say $1.9\\text{cm}$, $2.0\\text{cm}$, $2.1\\text{cm}$, and now see that perhaps $3$ of the $100$ darts hit the board in the $2.0\\text{cm}$ bucket. Thus we conclude the probability is $3\\%$.\n\nHowever, this does not solve anything! We have just pushed the issue down one digit further. Let us abstract a bit. Imagine we know the probability that the first $k$ digits match with $2.00000\\ldots$ and we want to know the probability it matches for the first $k+1$ digits. It is fairly reasonable to assume that the ${k+1}^{\\mathrm{th}}$ digit is essentially a random choice from the set $\\{0, 1, 2, \\ldots, 9\\}$. At least, we cannot conceive of a physically meaningful process which would force the number of micrometers away form the center to prefer to end in a $7$ vs a $3$.\n\nWhat this means is that in essence each additional digit of accuracy we require should decrease probability of matching by a factor of $10$. Or put another way, we would expect that\n\n$$\nP(\\text{distance is}\\; 2.00\\ldots, \\;\\text{to}\\; k \\;\\text{digits} ) \\approx p\\cdot10^{-k}.\n$$\n\nThe value $p$ essentially encodes what happens with the first few digits, and the $10^{-k}$ handles the rest.\n\nNotice that if we know the position accurate to $k=4$ digits after the decimal. that means we know the value falls within the interval say $[(1.99995,2.00005]$ which is an interval of length $2.00005-1.99995 = 10^{-4}$. Thus, if we call the length of this interval $\\epsilon$, we can say\n\n$$\nP(\\text{distance is in an}\\; \\epsilon\\text{-sized interval around}\\; 2 ) \\approx \\epsilon \\cdot p.\n$$\n\nLet us take this one final step further. We have been thinking about the point $2$ the entire time, but never thinking about other points. Nothing is different there fundamentally, but it is the case that the value $p$ will likely be different. 
We would at least hope that a dart thrower was more likely to hit a point near the center, like $2\\text{cm}$ rather than $20\\text{cm}$. Thus, the value $p$ is not fixed, but rather should depend on the point $x$. This tells us that we should expect\n\n$$P(\\text{distance is in an}\\; \\epsilon \\text{-sized interval around}\\; x ) \\approx \\epsilon \\cdot p(x).$$\n:eqlabel:`eq_pdf_deriv`\n\nIndeed, :eqref:`eq_pdf_deriv` precisely defines the *probability density function*. It is a function $p(x)$ which encodes the relative probability of hitting near one point vs. another. Let us visualize what such a function might look like.\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom d2l import mxnet as d2l\nfrom IPython import display\nfrom mxnet import np, npx\nnpx.set_np()\n\n# Plot the probability density function for some random variable\nx = np.arange(-5, 5, 0.01)\np = 0.2*np.exp(-(x - 3)**2 / 2)/np.sqrt(2 * np.pi) + \\\n 0.8*np.exp(-(x + 1)**2 / 2)/np.sqrt(2 * np.pi)\n\nd2l.plot(x, p, 'x', 'Density')",
"_____no_output_____"
]
],
[
[
"The locations where the function value is large indicates regions where we are more likely to find the random value. The low portions are areas where we are unlikely to find the random value.\n\n### Probability Density Functions\n\nLet us now investigate this further. We have already seen what a probability density function is intuitively for a random variable $X$, namely the density function is a function $p(x)$ so that\n\n$$P(X \\; \\text{is in an}\\; \\epsilon \\text{-sized interval around}\\; x ) \\approx \\epsilon \\cdot p(x).$$\n:eqlabel:`eq_pdf_def`\n\nBut what does this imply for the properties of $p(x)$?\n\nFirst, probabilities are never negative, thus we should expect that $p(x) \\ge 0$ as well.\n\nSecond, let us imagine that we slice up the $\\mathbb{R}$ into an infinite number of slices which are $\\epsilon$ wide, say with slices $(\\epsilon\\cdot i, \\epsilon \\cdot (i+1)]$. For each of these, we know from :eqref:`eq_pdf_def` the probability is approximately\n\n$$\nP(X \\; \\text{is in an}\\; \\epsilon\\text{-sized interval around}\\; x ) \\approx \\epsilon \\cdot p(\\epsilon \\cdot i),\n$$\n\nso summed over all of them it should be\n\n$$\nP(X\\in\\mathbb{R}) \\approx \\sum_i \\epsilon \\cdot p(\\epsilon\\cdot i).\n$$\n\nThis is nothing more than the approximation of an integral discussed in :numref:`sec_integral_calculus`, thus we can say that\n\n$$\nP(X\\in\\mathbb{R}) = \\int_{-\\infty}^{\\infty} p(x) \\; dx.\n$$\n\nWe know that $P(X\\in\\mathbb{R}) = 1$, since the random variable must take on *some* number, we can conclude that for any density\n\n$$\n\\int_{-\\infty}^{\\infty} p(x) \\; dx = 1.\n$$\n\nIndeed, digging into this further shows that for any $a$, and $b$, we see that\n\n$$\nP(X\\in(a, b]) = \\int _ {a}^{b} p(x) \\; dx.\n$$\n\nWe may approximate this in code by using the same discrete approximation methods as before. In this case we can approximate the probability of falling in the blue region.\n",
"_____no_output_____"
]
],
[
[
"# Approximate probability using numerical integration\nepsilon = 0.01\nx = np.arange(-5, 5, 0.01)\np = 0.2*np.exp(-(x - 3)**2 / 2) / np.sqrt(2 * np.pi) + \\\n 0.8*np.exp(-(x + 1)**2 / 2) / np.sqrt(2 * np.pi)\n\nd2l.set_figsize()\nd2l.plt.plot(x, p, color='black')\nd2l.plt.fill_between(x.tolist()[300:800], p.tolist()[300:800])\nd2l.plt.show()\n\nf'approximate Probability: {np.sum(epsilon*p[300:800])}'",
"_____no_output_____"
]
],
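[
[
"As a quick cross-check on this number (a small illustrative addition, using only the `np` module already imported above), we can also estimate the same probability by sampling: draw many points from the same two-component Gaussian mixture (weight $0.8$ on a unit-variance normal centered at $-1$ and weight $0.2$ on one centered at $3$) and count the fraction that lands in the shaded interval, roughly $(-2, 3]$. The two estimates should agree to a couple of decimal places.",
"_____no_output_____"
]
],
[
[
"# Monte Carlo cross-check of the shaded probability: sample from the same\n# Gaussian mixture and count the fraction of samples landing in (-2, 3].\n# For simplicity we draw the two components in fixed 80%/20% proportions.\nn = 100000\nsamples = np.concatenate([np.random.normal(-1, 1, int(0.8 * n)),\n                          np.random.normal(3, 1, int(0.2 * n))])\ninside = (samples > -2).astype('float32') * (samples <= 3).astype('float32')\nf'Monte Carlo estimate: {inside.mean()}'",
"_____no_output_____"
]
],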
[
[
"It turns out that these two properties describe exactly the space of possible probability density functions (or *p.d.f.*'s for the commonly encountered abbreviation). They are non-negative functions $p(x) \\ge 0$ such that\n\n$$\\int_{-\\infty}^{\\infty} p(x) \\; dx = 1.$$\n:eqlabel:`eq_pdf_int_one`\n\nWe interpret this function by using integration to obtain the probability our random variable is in a specific interval:\n\n$$P(X\\in(a, b]) = \\int _ {a}^{b} p(x) \\; dx.$$\n:eqlabel:`eq_pdf_int_int`\n\nIn :numref:`sec_distributions` we will see a number of common distributions, but let us continue working in the abstract.\n\n### Cumulative Distribution Functions\n\nIn the previous section, we saw the notion of the p.d.f. In practice, this is a commonly encountered method to discuss continuous random variables, but it has one significant pitfall: that the values of the p.d.f. are not themselves probabilities, but rather a function that we must integrate to yield probabilities. There is nothing wrong with a density being larger than $10$, as long as it is not larger than $10$ for more than an interval of length $1/10$. This can be counter-intuitive, so people often also think in terms of the *cumulative distribution function*, or c.d.f., which *is* a probability.\n\nIn particular, by using :eqref:`eq_pdf_int_int`, we define the c.d.f. for a random variable $X$ with density $p(x)$ by\n\n$$\nF(x) = \\int _ {-\\infty}^{x} p(x) \\; dx = P(X \\le x).\n$$\n\nLet us observe a few properties.\n\n* $F(x) \\rightarrow 0$ as $x\\rightarrow -\\infty$.\n* $F(x) \\rightarrow 1$ as $x\\rightarrow \\infty$.\n* $F(x)$ is non-decreasing ($y > x \\implies F(y) \\ge F(x)$).\n* $F(x)$ is continuous (has no jumps) if $X$ is a continuous random variable.\n\nWith the fourth bullet point, note that this would not be true if $X$ were discrete, say taking the values $0$ and $1$ both with probability $1/2$. In that case\n\n$$\nF(x) = \\begin{cases}\n0 & x < 0, \\\\\n\\frac{1}{2} & x < 1, \\\\\n1 & x \\ge 1.\n\\end{cases}\n$$\n\nIn this example, we see one of the benefits of working with the c.d.f., the ability to deal with continuous or discrete random variables in the same framework, or indeed mixtures of the two (flip a coin: if heads return the roll of a die, if tails return the distance of a dart throw from the center of a dart board).\n\n### Means\n\nSuppose that we are dealing with a random variables $X$. The distribution itself can be hard to interpret. It is often useful to be able to summarize the behavior of a random variable concisely. Numbers that help us capture the behavior of a random variable are called *summary statistics*. The most commonly encountered ones are the *mean*, the *variance*, and the *standard deviation*.\n\nThe *mean* encodes the average value of a random variable. If we have a discrete random variable $X$, which takes the values $x_i$ with probabilities $p_i$, then the mean is given by the weighted average: sum the values times the probability that the random variable takes on that value:\n\n$$\\mu_X = E[X] = \\sum_i x_i p_i.$$\n:eqlabel:`eq_exp_def`\n\nThe way we should interpret the mean (albeit with caution) is that it tells us essentially where the random variable tends to be located.\n\nAs a minimalistic example that we will examine throughout this section, let us take $X$ to be the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. 
We can compute using :eqref:`eq_exp_def` that, for any possible choice of $a$ and $p$, the mean is\n\n$$\n\\mu_X = E[X] = \\sum_i x_i p_i = (a-2)p + a(1-2p) + (a+2)p = a.\n$$\n\nThus we see that the mean is $a$. This matches the intuition since $a$ is the location around which we centered our random variable.\n\nBecause they are helpful, let us summarize a few properties.\n\n* For any random variable $X$ and numbers $a$ and $b$, we have that $\\mu_{aX+b} = a\\mu_X + b$.\n* If we have two random variables $X$ and $Y$, we have $\\mu_{X+Y} = \\mu_X+\\mu_Y$.\n\nMeans are useful for understanding the average behavior of a random variable, however the mean is not sufficient to even have a full intuitive understanding. Making a profit of $\\$10 \\pm \\$1$ per sale is very different from making $\\$10 \\pm \\$15$ per sale despite having the same average value. The second one has a much larger degree of fluctuation, and thus represents a much larger risk. Thus, to understand the behavior of a random variable, we will need at minimum one more measure: some measure of how widely a random variable fluctuates.\n\n### Variances\n\nThis leads us to consider the *variance* of a random variable. This is a quantitative measure of how far a random variable deviates from the mean. Consider the expression $X - \\mu_X$. This is the deviation of the random variable from its mean. This value can be positive or negative, so we need to do something to make it positive so that we are measuring the magnitude of the deviation.\n\nA reasonable thing to try is to look at $\\left|X-\\mu_X\\right|$, and indeed this leads to a useful quantity called the *mean absolute deviation*, however due to connections with other areas of mathematics and statistics, people often use a different solution.\n\nIn particular, they look at $(X-\\mu_X)^2.$ If we look at the typical size of this quantity by taking the mean, we arrive at the variance\n\n$$\\sigma_X^2 = \\mathrm{Var}(X) = E\\left[(X-\\mu_X)^2\\right] = E[X^2] - \\mu_X^2.$$\n:eqlabel:`eq_var_def`\n\nThe last equality in :eqref:`eq_var_def` holds by expanding out the definition in the middle, and applying the properties of expectation.\n\nLet us look at our example where $X$ is the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. In this case $\\mu_X = a$, so all we need to compute is $E\\left[X^2\\right]$. This can readily be done:\n\n$$\nE\\left[X^2\\right] = (a-2)^2p + a^2(1-2p) + (a+2)^2p = a^2 + 8p.\n$$\n\nThus, we see that by :eqref:`eq_var_def` our variance is\n\n$$\n\\sigma_X^2 = \\mathrm{Var}(X) = E[X^2] - \\mu_X^2 = a^2 + 8p - a^2 = 8p.\n$$\n\nThis result again makes sense. The largest $p$ can be is $1/2$ which corresponds to picking $a-2$ or $a+2$ with a coin flip. The variance of this being $4$ corresponds to the fact that both $a-2$ and $a+2$ are $2$ units away from the mean, and $2^2 = 4$. On the other end of the spectrum, if $p=0$, this random variable always takes the value $0$ and so it has no variance at all.\n\nWe will list a few properties of variance below:\n\n* For any random variable $X$, $\\mathrm{Var}(X) \\ge 0$, with $\\mathrm{Var}(X) = 0$ if and only if $X$ is a constant.\n* For any random variable $X$ and numbers $a$ and $b$, we have that $\\mathrm{Var}(aX+b) = a^2\\mathrm{Var}(X)$.\n* If we have two *independent* random variables $X$ and $Y$, we have $\\mathrm{Var}(X+Y) = \\mathrm{Var}(X) + \\mathrm{Var}(Y)$.\n\nWhen interpreting these values, there can be a bit of a hiccup. 
In particular, let us try imagining what happens if we keep track of units through this computation. Suppose that we are working with the star rating assigned to a product on the web page. Then $a$, $a-2$, and $a+2$ are all measured in units of stars. Similarly, the mean $\\mu_X$ is then also measured in stars (being a weighted average). However, if we get to the variance, we immediately encounter an issue, which is we want to look at $(X-\\mu_X)^2$, which is in units of *squared stars*. This means that the variance itself is not comparable to the original measurements. To make it interpretable, we will need to return to our original units.\n\n### Standard Deviations\n\nThis summary statistics can always be deduced from the variance by taking the square root! Thus we define the *standard deviation* to be\n\n$$\n\\sigma_X = \\sqrt{\\mathrm{Var}(X)}.\n$$\n\nIn our example, this means we now have the standard deviation is $\\sigma_X = 2\\sqrt{2p}$. If we are dealing with units of stars for our review example, $\\sigma_X$ is again in units of stars.\n\nThe properties we had for the variance can be restated for the standard deviation.\n\n* For any random variable $X$, $\\sigma_{X} \\ge 0$.\n* For any random variable $X$ and numbers $a$ and $b$, we have that $\\sigma_{aX+b} = |a|\\sigma_{X}$\n* If we have two *independent* random variables $X$ and $Y$, we have $\\sigma_{X+Y} = \\sqrt{\\sigma_{X}^2 + \\sigma_{Y}^2}$.\n\nIt is natural at this moment to ask, \"If the standard deviation is in the units of our original random variable, does it represent something we can draw with regards to that random variable?\" The answer is a resounding yes! Indeed much like the mean told we the typical location of our random variable, the standard deviation gives the typical range of variation of that random variable. We can make this rigorous with what is known as Chebyshev's inequality:\n\n$$P\\left(X \\not\\in [\\mu_X - \\alpha\\sigma_X, \\mu_X + \\alpha\\sigma_X]\\right) \\le \\frac{1}{\\alpha^2}.$$\n:eqlabel:`eq_chebyshev`\n\nOr to state it verbally in the case of $\\alpha=10$, $99\\%$ of the samples from any random variable fall within $10$ standard deviations of the mean. This gives an immediate interpretation to our standard summary statistics.\n\nTo see how this statement is rather subtle, let us take a look at our running example again where $X$ is the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. We saw that the mean was $a$ and the standard deviation was $2\\sqrt{2p}$. This means, if we take Chebyshev's inequality :eqref:`eq_chebyshev` with $\\alpha = 2$, we see that the expression is\n\n$$\nP\\left(X \\not\\in [a - 4\\sqrt{2p}, a + 4\\sqrt{2p}]\\right) \\le \\frac{1}{4}.\n$$\n\nThis means that $75\\%$ of the time, this random variable will fall within this interval for any value of $p$. Now, notice that as $p \\rightarrow 0$, this interval also converges to the single point $a$. But we know that our random variable takes the values $a-2, a$, and $a+2$ only so eventually we can be certain $a-2$ and $a+2$ will fall outside the interval! The question is, at what $p$ does that happen. So we want to solve: for what $p$ does $a+4\\sqrt{2p} = a+2$, which is solved when $p=1/8$, which is *exactly* the first $p$ where it could possibly happen without violating our claim that no more than $1/4$ of samples from the distribution would fall outside the interval ($1/8$ to the left, and $1/8$ to the right).\n\nLet us visualize this. 
We will show the probability of getting the three values as three vertical bars with height proportional to the probability. The interval will be drawn as a horizontal line in the middle. The first plot shows what happens for $p > 1/8$ where the interval safely contains all points.\n",
"_____no_output_____"
]
],
[
[
"# Define a helper to plot these figures\ndef plot_chebyshev(a, p):\n d2l.set_figsize()\n d2l.plt.stem([a-2, a, a+2], [p, 1-2*p, p], use_line_collection=True)\n d2l.plt.xlim([-4, 4])\n d2l.plt.xlabel('x')\n d2l.plt.ylabel('p.m.f.')\n\n d2l.plt.hlines(0.5, a - 4 * np.sqrt(2 * p),\n a + 4 * np.sqrt(2 * p), 'black', lw=4)\n d2l.plt.vlines(a - 4 * np.sqrt(2 * p), 0.53, 0.47, 'black', lw=1)\n d2l.plt.vlines(a + 4 * np.sqrt(2 * p), 0.53, 0.47, 'black', lw=1)\n d2l.plt.title(f'p = {p:.3f}')\n\n d2l.plt.show()\n\n# Plot interval when p > 1/8\nplot_chebyshev(0.0, 0.2)",
"_____no_output_____"
]
],
[
[
"The second shows that at $p = 1/8$, the interval exactly touches the two points. This shows that the inequality is *sharp*, since no smaller interval could be taken while keeping the inequality true.\n",
"_____no_output_____"
]
],
[
[
"# Plot interval when p = 1/8\nplot_chebyshev(0.0, 0.125)",
"_____no_output_____"
]
],
[
[
"The third shows that for $p < 1/8$ the interval only contains the center. This does not invalidate the inequality since we only needed to ensure that no more than $1/4$ of the probability falls outside the interval, which means that once $p < 1/8$, the two points at $a-2$ and $a+2$ can be discarded.\n",
"_____no_output_____"
]
],
[
[
"# Plot interval when p < 1/8\nplot_chebyshev(0.0, 0.05)",
"_____no_output_____"
]
],
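[
[
"The formulas above are also easy to confirm numerically. The next cell is a small sanity check added for illustration: it samples the three-point distribution (values $a-2$, $a$, and $a+2$ with probabilities $p$, $1-2p$, and $p$) for $a=0$ and $p=0.1$, and compares the empirical mean and variance with $a$ and $8p$, as well as the fraction of samples falling outside $[\\mu_X - 2\\sigma_X, \\mu_X + 2\\sigma_X]$ with the Chebyshev bound of $1/4$.",
"_____no_output_____"
]
],
[
[
"# Sanity check of the mean, variance, and Chebyshev's inequality for the\n# three-point distribution: a-2 w.p. p, a w.p. 1-2p, a+2 w.p. p\na, p = 0.0, 0.1\nn = 100000\nu = np.random.uniform(0, 1, n)\n# u < p -> a - 2, u > 1 - p -> a + 2, otherwise a\nx = a - 2 * (u < p).astype('float32') + 2 * (u > 1 - p).astype('float32')\nmean_emp = x.mean()\nvar_emp = (x * x).mean() - mean_emp ** 2\nsigma_emp = np.sqrt(var_emp)\nfrac_out = (np.abs(x - mean_emp) > 2 * sigma_emp).astype('float32').mean()\nprint(f'empirical mean: {mean_emp} (theory: {a})')\nprint(f'empirical variance: {var_emp} (theory: {8 * p})')\nprint(f'fraction outside 2 standard deviations: {frac_out} (Chebyshev bound: 0.25)')",
"_____no_output_____"
]
],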
[
[
"### Means and Variances in the Continuum\n\nThis has all been in terms of discrete random variables, but the case of continuous random variables is similar. To intuitively understand how this works, imagine that we split the real number line into intervals of length $\\epsilon$ given by $(\\epsilon i, \\epsilon (i+1)]$. Once we do this, our continuous random variable has been made discrete and we can use :eqref:`eq_exp_def` say that\n\n$$\n\\begin{aligned}\n\\mu_X & \\approx \\sum_{i} (\\epsilon i)P(X \\in (\\epsilon i, \\epsilon (i+1)]) \\\\\n& \\approx \\sum_{i} (\\epsilon i)p_X(\\epsilon i)\\epsilon, \\\\\n\\end{aligned}\n$$\n\nwhere $p_X$ is the density of $X$. This is an approximation to the integral of $xp_X(x)$, so we can conclude that\n\n$$\n\\mu_X = \\int_{-\\infty}^\\infty xp_X(x) \\; dx.\n$$\n\nSimilarly, using :eqref:`eq_var_def` the variance can be written as\n\n$$\n\\sigma^2_X = E[X^2] - \\mu_X^2 = \\int_{-\\infty}^\\infty x^2p_X(x) \\; dx - \\left(\\int_{-\\infty}^\\infty xp_X(x) \\; dx\\right)^2.\n$$\n\nEverything stated above about the mean, the variance, and the standard deviation still applies in this case. For instance, if we consider the random variable with density\n\n$$\np(x) = \\begin{cases}\n1 & x \\in [0,1], \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n$$\n\nwe can compute\n\n$$\n\\mu_X = \\int_{-\\infty}^\\infty xp(x) \\; dx = \\int_0^1 x \\; dx = \\frac{1}{2}.\n$$\n\nand\n\n$$\n\\sigma_X^2 = \\int_{-\\infty}^\\infty x^2p(x) \\; dx - \\left(\\frac{1}{2}\\right)^2 = \\frac{1}{3} - \\frac{1}{4} = \\frac{1}{12}.\n$$\n\nAs a warning, let us examine one more example, known as the *Cauchy distribution*. This is the distribution with p.d.f. given by\n\n$$\np(x) = \\frac{1}{1+x^2}.\n$$\n",
"_____no_output_____"
]
],
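[
[
"Before we plot the Cauchy example in the next cell, here is a quick numerical confirmation (added for illustration) of the uniform-density computation above: approximating the two integrals by Riemann sums on a fine grid over $[0, 1]$ should reproduce $\\mu_X = 1/2$ and $\\sigma_X^2 = 1/12$.",
"_____no_output_____"
]
],
[
[
"# Riemann-sum approximation of the mean and variance of the density that is\n# 1 on [0, 1] and 0 elsewhere; the exact values are 1/2 and 1/12\nepsilon = 0.001\nx = np.arange(0, 1, epsilon)\np = np.ones_like(x)\nmu = np.sum(x * p * epsilon)\nsigma2 = np.sum(x**2 * p * epsilon) - mu**2\nf'mean: {mu} (exact 0.5), variance: {sigma2} (exact 1/12 = 0.0833...)'",
"_____no_output_____"
]
],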
[
[
"# Plot the Cauchy distribution p.d.f.\nx = np.arange(-5, 5, 0.01)\np = 1 / (1 + x**2)\n\nd2l.plot(x, p, 'x', 'p.d.f.')",
"_____no_output_____"
]
],
[
[
"This function looks innocent, and indeed consulting a table of integrals will show it has area one under it, and thus it defines a continuous random variable.\n\nTo see what goes astray, let us try to compute the variance of this. This would involve using :eqref:`eq_var_def` computing\n\n$$\n\\int_{-\\infty}^\\infty \\frac{x^2}{1+x^2}\\; dx.\n$$\n\nThe function on the inside looks like this:\n",
"_____no_output_____"
]
],
[
[
"# Plot the integrand needed to compute the variance\nx = np.arange(-20, 20, 0.01)\np = x**2 / (1 + x**2)\n\nd2l.plot(x, p, 'x', 'integrand')",
"_____no_output_____"
]
],
[
[
"This function clearly has infinite area under it since it is essentially the constant one with a small dip near zero, and indeed we could show that\n\n$$\n\\int_{-\\infty}^\\infty \\frac{x^2}{1+x^2}\\; dx = \\infty.\n$$\n\nThis means it does not have a well-defined finite variance.\n\nHowever, looking deeper shows an even more disturbing result. Let us try to compute the mean using :eqref:`eq_exp_def`. Using the change of variables formula, we see\n\n$$\n\\mu_X = \\int_{-\\infty}^{\\infty} \\frac{x}{1+x^2} \\; dx = \\frac{1}{2}\\int_1^\\infty \\frac{1}{u} \\; du.\n$$\n\nThe integral inside is the definition of the logarithm, so this is in essence $\\log(\\infty) = \\infty$, so there is no well-defined average value either!\n\nMachine learning scientists define their models so that we most often do not need to deal with these issues, and will in the vast majority of cases deal with random variables with well-defined means and variances. However, every so often random variables with *heavy tails* (that is those random variables where the probabilities of getting large values are large enough to make things like the mean or variance undefined) are helpful in modeling physical systems, thus it is worth knowing that they exist.\n\n### Joint Density Functions\n\nThe above work all assumes we are working with a single real valued random variable. But what if we are dealing with two or more potentially highly correlated random variables? This circumstance is the norm in machine learning: imagine random variables like $R_{i, j}$ which encode the red value of the pixel at the $(i, j)$ coordinate in an image, or $P_t$ which is a random variable given by a stock price at time $t$. Nearby pixels tend to have similar color, and nearby times tend to have similar prices. We cannot treat them as separate random variables, and expect to create a successful model (we will see in :numref:`sec_naive_bayes` a model that under-performs due to such an assumption). We need to develop the mathematical language to handle these correlated continuous random variables.\n\nThankfully, with the multiple integrals in :numref:`sec_integral_calculus` we can develop such a language. Suppose that we have, for simplicity, two random variables $X, Y$ which can be correlated. Then, similar to the case of a single variable, we can ask the question:\n\n$$\nP(X \\;\\text{is in an}\\; \\epsilon \\text{-sized interval around}\\; x \\; \\text{and} \\;Y \\;\\text{is in an}\\; \\epsilon \\text{-sized interval around}\\; y ).\n$$\n\nSimilar reasoning to the single variable case shows that this should be approximately\n\n$$\nP(X \\;\\text{is in an}\\; \\epsilon \\text{-sized interval around}\\; x \\; \\text{and} \\;Y \\;\\text{is in an}\\; \\epsilon \\text{-sized interval around}\\; y ) \\approx \\epsilon^{2}p(x, y),\n$$\n\nfor some function $p(x, y)$. This is referred to as the joint density of $X$ and $Y$. Similar properties are true for this as we saw in the single variable case. Namely:\n\n* $p(x, y) \\ge 0$;\n* $\\int _ {\\mathbb{R}^2} p(x, y) \\;dx \\;dy = 1$;\n* $P((X, Y) \\in \\mathcal{D}) = \\int _ {\\mathcal{D}} p(x, y) \\;dx \\;dy$.\n\nIn this way, we can deal with multiple, potentially correlated random variables. If we wish to work with more than two random variables, we can extend the multivariate density to as many coordinates as desired by considering $p(\\mathbf{x}) = p(x_1, \\ldots, x_n)$. 
The same properties of being non-negative, and having total integral of one still hold.\n\n### Marginal Distributions\nWhen dealing with multiple variables, we oftentimes want to be able to ignore the relationships and ask, \"how is this one variable distributed?\" Such a distribution is called a *marginal distribution*.\n\nTo be concrete, let us suppose that we have two random variables $X, Y$ with joint density given by $p _ {X, Y}(x, y)$. We will be using the subscript to indicate what random variables the density is for. The question of finding the marginal distribution is taking this function, and using it to find $p _ X(x)$.\n\nAs with most things, it is best to return to the intuitive picture to figure out what should be true. Recall that the density is the function $p _ X$ so that\n\n$$\nP(X \\in [x, x+\\epsilon]) \\approx \\epsilon \\cdot p _ X(x).\n$$\n\nThere is no mention of $Y$, but if all we are given is $p _{X, Y}$, we need to include $Y$ somehow. We can first observe that this is the same as\n\n$$\nP(X \\in [x, x+\\epsilon] \\text{, and } Y \\in \\mathbb{R}) \\approx \\epsilon \\cdot p _ X(x).\n$$\n\nOur density does not directly tell us about what happens in this case; we need to split into small intervals in $y$ as well, so we can write this as\n\n$$\n\\begin{aligned}\n\\epsilon \\cdot p _ X(x) & \\approx \\sum _ {i} P(X \\in [x, x+\\epsilon] \\text{, and } Y \\in [\\epsilon \\cdot i, \\epsilon \\cdot (i+1)]) \\\\\n& \\approx \\sum _ {i} \\epsilon^{2} p _ {X, Y}(x, \\epsilon\\cdot i).\n\\end{aligned}\n$$\n\n(Figure: the marginal density $p_X(x)$ is obtained by summing the joint density over a column of $\\epsilon$-sized cells in $y$ at a fixed $x$.)\n:label:`fig_marginal`\n\nThis tells us to add up the value of the density along a series of squares in a line as is shown in :numref:`fig_marginal`. Indeed, after canceling one factor of epsilon from both sides, and recognizing the sum on the right is the integral over $y$, we can conclude that\n\n$$\n\\begin{aligned}\n p _ X(x) & \\approx \\sum _ {i} \\epsilon p _ {X, Y}(x, \\epsilon\\cdot i) \\\\\n & \\approx \\int_{-\\infty}^\\infty p_{X, Y}(x, y) \\; dy.\n\\end{aligned}\n$$\n\nThus we see\n\n$$\np _ X(x) = \\int_{-\\infty}^\\infty p_{X, Y}(x, y) \\; dy.\n$$\n\nThis tells us that to get a marginal distribution, we integrate over the variables we do not care about. This process is often referred to as *integrating out* or *marginalizing out* the unneeded variables.\n\n### Covariance\n\nWhen dealing with multiple random variables, there is one additional summary statistic which is helpful to know: the *covariance*. This measures the degree to which two random variables fluctuate together.\n\nSuppose that we have two random variables $X$ and $Y$. To begin with, let us suppose that they are discrete, taking on values $(x_i, y_j)$ with probability $p_{ij}$. In this case, the covariance is defined as\n\n$$\\sigma_{XY} = \\mathrm{Cov}(X, Y) = \\sum_{i, j} (x_i - \\mu_X) (y_j-\\mu_Y) p_{ij} = E[XY] - E[X]E[Y].$$\n:eqlabel:`eq_cov_def`\n\nTo think about this intuitively: consider the following pair of random variables. Suppose that $X$ takes the values $1$ and $3$, and $Y$ takes the values $-1$ and $3$. Suppose that we have the following probabilities\n\n$$\n\\begin{aligned}\nP(X = 1 \\; \\text{and} \\; Y = -1) & = \\frac{p}{2}, \\\\\nP(X = 1 \\; \\text{and} \\; Y = 3) & = \\frac{1-p}{2}, \\\\\nP(X = 3 \\; \\text{and} \\; Y = -1) & = \\frac{1-p}{2}, \\\\\nP(X = 3 \\; \\text{and} \\; Y = 3) & = \\frac{p}{2},\n\\end{aligned}\n$$\n\nwhere $p$ is a parameter in $[0,1]$ that we get to pick. Notice that if $p=1$ then both are always at their minimum or maximum values simultaneously, and if $p=0$ they are guaranteed to take their flipped values simultaneously (one is large when the other is small and vice versa). If $p=1/2$, then the four possibilities are all equally likely, and neither should be related. Let us compute the covariance. First, note $\\mu_X = 2$ and $\\mu_Y = 1$, so we may compute using :eqref:`eq_cov_def`:\n\n$$\n\\begin{aligned}\n\\mathrm{Cov}(X, Y) & = \\sum_{i, j} (x_i - \\mu_X) (y_j-\\mu_Y) p_{ij} \\\\\n& = (1-2)(-1-1)\\frac{p}{2} + (1-2)(3-1)\\frac{1-p}{2} + (3-2)(-1-1)\\frac{1-p}{2} + (3-2)(3-1)\\frac{p}{2} \\\\\n& = 4p-2.\n\\end{aligned}\n$$\n\nWhen $p=1$ (the case where they are both maximally positive or negative at the same time), the covariance is $2$. When $p=0$ (the case where they are flipped) the covariance is $-2$. Finally, when $p=1/2$ (the case where they are unrelated), the covariance is $0$. Thus we see that the covariance measures how these two random variables are related.\n\nA quick note on the covariance is that it only measures these linear relationships. More complex relationships like $X = Y^2$ where $Y$ is randomly chosen from $\\{-2, -1, 0, 1, 2\\}$ with equal probability can be missed. Indeed a quick computation shows that these random variables have covariance zero, despite one being a deterministic function of the other.\n\nFor continuous random variables, much the same story holds. At this point, we are pretty comfortable with doing the transition between discrete and continuous, so we will provide the continuous analogue of :eqref:`eq_cov_def` without any derivation.\n\n$$\n\\sigma_{XY} = \\int_{\\mathbb{R}^2} (x-\\mu_X)(y-\\mu_Y)p(x, y) \\;dx \\;dy.\n$$\n\nFor visualization, let us take a look at a collection of random variables with tunable covariance.\n",
"_____no_output_____"
]
],
[
[
"# Plot a few random variables adjustable covariance\ncovs = [-0.9, 0.0, 1.2]\nd2l.plt.figure(figsize=(12, 3))\nfor i in range(3):\n X = np.random.normal(0, 1, 500)\n Y = covs[i]*X + np.random.normal(0, 1, (500))\n\n d2l.plt.subplot(1, 4, i+1)\n d2l.plt.scatter(X.asnumpy(), Y.asnumpy())\n d2l.plt.xlabel('X')\n d2l.plt.ylabel('Y')\n d2l.plt.title(f'cov = {covs[i]}')\nd2l.plt.show()",
"_____no_output_____"
]
],
[
[
"Let us see some properties of covariances:\n\n* For any random variable $X$, $\\mathrm{Cov}(X, X) = \\mathrm{Var}(X)$.\n* For any random variables $X, Y$ and numbers $a$ and $b$, $\\mathrm{Cov}(aX+b, Y) = \\mathrm{Cov}(X, aY+b) = a\\mathrm{Cov}(X, Y)$.\n* If $X$ and $Y$ are independent then $\\mathrm{Cov}(X, Y) = 0$.\n\nIn addition, we can use the covariance to expand a relationship we saw before. Recall that is $X$ and $Y$ are two independent random variables then\n\n$$\n\\mathrm{Var}(X+Y) = \\mathrm{Var}(X) + \\mathrm{Var}(Y).\n$$\n\nWith knowledge of covariances, we can expand this relationship. Indeed, some algebra can show that in general,\n\n$$\n\\mathrm{Var}(X+Y) = \\mathrm{Var}(X) + \\mathrm{Var}(Y) + 2\\mathrm{Cov}(X, Y).\n$$\n\nThis allows us to generalize the variance summation rule for correlated random variables.\n\n### Correlation\n\nAs we did in the case of means and variances, let us now consider units. If $X$ is measured in one unit (say inches), and $Y$ is measured in another (say dollars), the covariance is measured in the product of these two units $\\text{inches} \\times \\text{dollars}$. These units can be hard to interpret. What we will often want in this case is a unit-less measurement of relatedness. Indeed, often we do not care about exact quantitative correlation, but rather ask if the correlation is in the same direction, and how strong the relationship is.\n\nTo see what makes sense, let us perform a thought experiment. Suppose that we convert our random variables in inches and dollars to be in inches and cents. In this case the random variable $Y$ is multiplied by $100$. If we work through the definition, this means that $\\mathrm{Cov}(X, Y)$ will be multiplied by $100$. Thus we see that in this case a change of units change the covariance by a factor of $100$. Thus, to find our unit-invariant measure of correlation, we will need to divide by something else that also gets scaled by $100$. Indeed we have a clear candidate, the standard deviation! Indeed if we define the *correlation coefficient* to be\n\n$$\\rho(X, Y) = \\frac{\\mathrm{Cov}(X, Y)}{\\sigma_{X}\\sigma_{Y}},$$\n:eqlabel:`eq_cor_def`\n\nwe see that this is a unit-less value. A little mathematics can show that this number is between $-1$ and $1$ with $1$ meaning maximally positively correlated, whereas $-1$ means maximally negatively correlated.\n\nReturning to our explicit discrete example above, we can see that $\\sigma_X = 1$ and $\\sigma_Y = 2$, so we can compute the correlation between the two random variables using :eqref:`eq_cor_def` to see that\n\n$$\n\\rho(X, Y) = \\frac{4p-2}{1\\cdot 2} = 2p-1.\n$$\n\nThis now ranges between $-1$ and $1$ with the expected behavior of $1$ meaning most correlated, and $-1$ meaning minimally correlated.\n\nAs another example, consider $X$ as any random variable, and $Y=aX+b$ as any linear deterministic function of $X$. Then, one can compute that\n\n$$\\sigma_{Y} = \\sigma_{aX+b} = |a|\\sigma_{X},$$\n\n$$\\mathrm{Cov}(X, Y) = \\mathrm{Cov}(X, aX+b) = a\\mathrm{Cov}(X, X) = a\\mathrm{Var}(X),$$\n\nand thus by :eqref:`eq_cor_def` that\n\n$$\n\\rho(X, Y) = \\frac{a\\mathrm{Var}(X)}{|a|\\sigma_{X}^2} = \\frac{a}{|a|} = \\mathrm{sign}(a).\n$$\n\nThus we see that the correlation is $+1$ for any $a > 0$, and $-1$ for any $a < 0$ illustrating that correlation measures the degree and directionality the two random variables are related, not the scale that the variation takes.\n\nLet us again plot a collection of random variables with tunable correlation.\n",
"_____no_output_____"
]
],
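A minimal sketch of the $\rho(X, aX+b)=\mathrm{sign}(a)$ claim above, assuming only NumPy: it estimates the correlation coefficient from samples and should come out very close to $+1$ or $-1$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100_000)

for a, b in [(2.5, 1.0), (-0.3, 4.0)]:
    Y = a * X + b
    # np.corrcoef returns the 2x2 correlation matrix; entry [0, 1] is rho(X, Y)
    print(a, np.corrcoef(X, Y)[0, 1])  # approximately +1 and -1
```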
[
[
"# Plot a few random variables adjustable correlations\ncors = [-0.9, 0.0, 1.0]\nd2l.plt.figure(figsize=(12, 3))\nfor i in range(3):\n X = np.random.normal(0, 1, 500)\n Y = cors[i] * X + np.sqrt(1 - cors[i]**2) * np.random.normal(0, 1, 500)\n\n d2l.plt.subplot(1, 4, i + 1)\n d2l.plt.scatter(X.asnumpy(), Y.asnumpy())\n d2l.plt.xlabel('X')\n d2l.plt.ylabel('Y')\n d2l.plt.title(f'cor = {cors[i]}')\nd2l.plt.show()",
"_____no_output_____"
]
],
[
[
"Let us list a few properties of the correlation below.\n\n* For any random variable $X$, $\\rho(X, X) = 1$.\n* For any random variables $X, Y$ and numbers $a$ and $b$, $\\rho(aX+b, Y) = \\rho(X, aY+b) = \\rho(X, Y)$.\n* If $X$ and $Y$ are independent with non-zero variance then $\\rho(X, Y) = 0$.\n\nAs a final note, you may feel like some of these formulae are familiar. Indeed, if we expand everything out assuming that $\\mu_X = \\mu_Y = 0$, we see that this is\n\n$$\n\\rho(X, Y) = \\frac{\\sum_{i, j} x_iy_ip_{ij}}{\\sqrt{\\sum_{i, j}x_i^2 p_{ij}}\\sqrt{\\sum_{i, j}y_j^2 p_{ij}}}.\n$$\n\nThis looks like a sum of a product of terms divided by the square root of sums of terms. This is exactly the formula for the cosine of the angle between two vectors $\\mathbf{v}, \\mathbf{w}$ with the different coordinates weighted by $p_{ij}$:\n\n$$\n\\cos(\\theta) = \\frac{\\mathbf{v}\\cdot \\mathbf{w}}{\\|\\mathbf{v}\\|\\|\\mathbf{w}\\|} = \\frac{\\sum_{i} v_iw_i}{\\sqrt{\\sum_{i}v_i^2}\\sqrt{\\sum_{i}w_i^2}}.\n$$\n\nIndeed if we think of norms as being related to standard deviations, and correlations as being cosines of angles, much of the intuition we have from geometry can be applied to thinking about random variables.\n\n## Summary\n* Continuous random variables are random variables that can take on a continuum of values. They have some technical difficulties that make them more challenging to work with compared to discrete random variables.\n* The probability density function allows us to work with continuous random variables by giving a function where the area under the curve on some interval gives the probability of finding a sample point in that interval.\n* The cumulative distribution function is the probability of observing the random variable to be less than a given threshold. It can provide a useful alternate viewpoint which unifies discrete and continuous variables.\n* The mean is the average value of a random variable.\n* The variance is the expected square of the difference between the random variable and its mean.\n* The standard deviation is the square root of the variance. It can be thought of as measuring the range of values the random variable may take.\n* Chebyshev's inequality allows us to make this intuition rigorous by giving an explicit interval that contains the random variable most of the time.\n* Joint densities allow us to work with correlated random variables. We may marginalize joint densities by integrating over unwanted random variables to get the distribution of the desired random variable.\n* The covariance and correlation coefficient provide a way to measure any linear relationship between two correlated random variables.\n\n## Exercises\n1. Suppose that we have the random variable with density given by $p(x) = \\frac{1}{x^2}$ for $x \\ge 1$ and $p(x) = 0$ otherwise. What is $P(X > 2)$?\n2. The Laplace distribution is a random variable whose density is given by $p(x = \\frac{1}{2}e^{-|x|}$. What is the mean and the standard deviation of this function? As a hint, $\\int_0^\\infty xe^{-x} \\; dx = 1$ and $\\int_0^\\infty x^2e^{-x} \\; dx = 2$.\n3. I walk up to you on the street and say \"I have a random variable with mean $1$, standard deviation $2$, and I observed $25\\%$ of my samples taking a value larger than $9$.\" Do you believe me? Why or why not?\n4. Suppose that you have two random variables $X, Y$, with joint density given by $p_{XY}(x, y) = 4xy$ for $x, y \\in [0,1]$ and $p_{XY}(x, y) = 0$ otherwise. What is the covariance of $X$ and $Y$?\n",
"_____no_output_____"
],
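A small sketch of the cosine analogy above, assuming only NumPy: for the centered discrete example, the $p_{ij}$-weighted cosine should equal $2p-1$.

```python
import numpy as np

p = 0.8
# centered outcomes (x - mu_X, y - mu_Y) for the four cases, with mu_X = 2, mu_Y = 1
v = np.array([-1.0, -1.0, 1.0, 1.0])   # x - 2
w = np.array([-2.0, 2.0, -2.0, 2.0])   # y - 1
weights = np.array([p / 2, (1 - p) / 2, (1 - p) / 2, p / 2])

cos_theta = np.sum(weights * v * w) / (
    np.sqrt(np.sum(weights * v**2)) * np.sqrt(np.sum(weights * w**2)))
print(cos_theta, 2 * p - 1)  # both should be 0.6
```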
[
"[Discussions](https://discuss.d2l.ai/t/415)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d097729ffbf494f1bb2d660b9a46303e2366a537 | 3,518 | ipynb | Jupyter Notebook | tools/templates/subsite/g3doc/tutorials/notebook.ipynb | manivaradarajan/docs | 9854295d779cf129e538681ce6702402809e3515 | [
"Apache-2.0"
] | 3 | 2020-07-28T20:42:26.000Z | 2020-08-15T03:29:12.000Z | tools/templates/subsite/g3doc/tutorials/notebook.ipynb | manivaradarajan/docs | 9854295d779cf129e538681ce6702402809e3515 | [
"Apache-2.0"
] | 2 | 2020-10-14T20:44:22.000Z | 2020-10-14T21:03:36.000Z | tools/templates/subsite/g3doc/tutorials/notebook.ipynb | manivaradarajan/docs | 9854295d779cf129e538681ce6702402809e3515 | [
"Apache-2.0"
] | 1 | 2020-08-03T20:17:42.000Z | 2020-08-03T20:17:42.000Z | 33.504762 | 299 | 0.535247 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.\n",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Sample tutorial or guide",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/{PATH}\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/tools/templates/subsite/g3doc/tutorials/notebook.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/tools/templates/subsite/g3doc/tutorials/notebook.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"[Colab notebooks](https://colab.research.google.com/notebooks/welcome.ipynb) are a first-class documentation format on [tensorflow.org](https://www.tensorflow.org). When published, these notebooks are rendered as static HTML on the site, with a link to the executable notebook on Colab.\n\nSee the [notebook template](https://github.com/tensorflow/docs/blob/master/tools/templates/notebook.ipynb) for setup and style notes.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d09773ca8c7cf6155a078577c813df96f8a3dba3 | 3,973 | ipynb | Jupyter Notebook | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia | 2340d2fc773eb3f5fec6edd813a2f22eef08d180 | [
"MIT"
] | 3 | 2016-09-14T16:48:16.000Z | 2016-09-20T08:42:37.000Z | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia | 2340d2fc773eb3f5fec6edd813a2f22eef08d180 | [
"MIT"
] | null | null | null | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia | 2340d2fc773eb3f5fec6edd813a2f22eef08d180 | [
"MIT"
] | 2 | 2016-09-20T08:42:43.000Z | 2021-11-25T13:13:27.000Z | 19.100962 | 120 | 0.492575 | [
[
[
"# Gaia\n## Real data!\n\ngully \nSept 28, 2017",
"_____no_output_____"
],
[
"### Outline:\n\n1. Batch download GaiaSource",
"_____no_output_____"
],
[
"**Import these first-- I auto import them every time!:**",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%config InlineBackend.figure_format = 'retina'\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### 1. Batch download the data",
"_____no_output_____"
]
],
[
[
"import os",
"_____no_output_____"
],
[
"i_max = 256",
"_____no_output_____"
]
],
[
[
"```python\nfor j in range(21):\n if j == 20:\n i_max = 111\n for i in range(i_max):\n fn = 'http://cdn.gea.esac.esa.int/Gaia/gaia_source/csv/GaiaSource_000-{:03d}-{:03d}.csv.gz'.format(j,i)\n executable = 'wget --directory-prefix=../data/GaiaSource/ '+fn\n print(executable)\n os.system(executable) ## Uncomment to actually download\n \n```",
"_____no_output_____"
]
],
[
[
"! ls ../data/GaiaSource/ | tail",
"GaiaSource_000-000-069.csv.gz\r\nGaiaSource_000-000-070.csv.gz\r\nGaiaSource_000-000-071.csv.gz\r\nGaiaSource_000-000-072.csv.gz\r\nGaiaSource_000-000-073.csv.gz\r\nGaiaSource_000-000-074.csv.gz\r\nGaiaSource_000-000-075.csv.gz\r\nGaiaSource_000-000-076.csv.gz\r\nGaiaSource_000-000-077.csv.gz\r\nGaiaSource_000-000-078.csv.gz\r\n"
]
],
[
[
"How many files are there?",
"_____no_output_____"
]
],
[
[
"20*256+110",
"_____no_output_____"
]
],
[
[
"Each file is about 40 MB. How many GB total is the dataset?",
"_____no_output_____"
]
],
[
[
"5230*40/1000",
"_____no_output_____"
]
],
[
[
"Lots of data. I'm queuing it to download on an external drive connected to GOPC.",
"_____no_output_____"
],
[
"### The end.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d097782e724bb2332c84284e66495be43302701c | 23,681 | ipynb | Jupyter Notebook | vocabularies/mesh/kokoaminen/finmesh-2017.ipynb | NatLibFi/Finto-data | 119ee836dbe843d73143e7804ace409083d84b32 | [
"CC0-1.0"
] | 14 | 2015-10-14T17:31:25.000Z | 2022-02-25T08:32:20.000Z | vocabularies/mesh/kokoaminen/finmesh-2017.ipynb | NatLibFi/Finto-data | 119ee836dbe843d73143e7804ace409083d84b32 | [
"CC0-1.0"
] | 659 | 2015-01-05T13:40:36.000Z | 2022-03-24T13:40:24.000Z | vocabularies/mesh/kokoaminen/finmesh-2017.ipynb | NatLibFi/Finto-data | 119ee836dbe843d73143e7804ace409083d84b32 | [
"CC0-1.0"
] | 8 | 2015-04-17T11:04:30.000Z | 2020-10-03T10:27:08.000Z | 24.017241 | 116 | 0.416917 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0979cddf4847839a82571c215c901036fea5dd7 | 63,843 | ipynb | Jupyter Notebook | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe | 27d44fe073df0f4c0e8a935f4193e1840c9bc70b | [
"MIT"
] | null | null | null | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe | 27d44fe073df0f4c0e8a935f4193e1840c9bc70b | [
"MIT"
] | null | null | null | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe | 27d44fe073df0f4c0e8a935f4193e1840c9bc70b | [
"MIT"
] | null | null | null | 209.321311 | 35,338 | 0.81525 | [
[
[
"# Capstone Project - Madiun Cafe Location\n\n## Introduction / business problem\n\ni am looking to open a cafe in Madiun City, **the question is**, where is the best location for open new cafe? **The background of the problem** it is not worth setting up a cafe in the close promixity of existing ones. because the location of the new cafe has a significant impact on the expected returns.\n\n\n## Data\n\n**A description of the data**: the data used to solve this problem is geolocation data collected from [FourSquare](https://foursquare.com/). Data is a single tabel, containing location of the existing cafe. **Explanation** of the location data are column `(lat, lng)`, where `lat` stands for latitude and `lng` for longitude. **Example** of the data:\n\n | Name | Shortname | Latitude | Londitude |\n | ------------------------ | ------------ | --------- | ---------- |\n | Markas Kopi | Coffee Shop | -7.648215 | 111.530610 |\n | Cafe Latté | Coffee Shop | -7.635934 | 111.519315 |\n | Coffee Toffee | Coffee Shop | -7.622158 | 111.536357 |\n\n\n**Data will be used**: by knowing the locations of already existing cafes, i will be using Kernel Density Estimation to determine the area of influence of the existing cafes, and recommend a new location which is not in the area of influence from existing cafe.\n\n",
"_____no_output_____"
],
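Since the plan above mentions Kernel Density Estimation, here is a minimal sketch of that step. The three coordinates are taken from the example table; the Gaussian kernel and the bandwidth value are illustrative assumptions that would need tuning on the full FourSquare data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# coordinates of existing cafes, taken from the example table above
coords = np.array([[-7.648215, 111.530610],
                   [-7.635934, 111.519315],
                   [-7.622158, 111.536357]])

# fit a Gaussian KDE over (lat, lng); the bandwidth is in degrees here
kde = KernelDensity(kernel="gaussian", bandwidth=0.005).fit(coords)

# a lower log-density means a candidate point is farther from existing cafes' influence
candidates = np.array([[-7.6393, 111.5285], [-7.6480, 111.5300]])
print(kde.score_samples(candidates))
```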
[
"## Prep",
"_____no_output_____"
]
],
[
[
"!conda install -c conda-forge folium=0.5.0 --yes\nimport pandas as pd\nimport folium\nimport requests",
"Solving environment: done\n\n## Package Plan ##\n\n environment location: /opt/conda/envs/Python36\n\n added / updated specs: \n - folium=0.5.0\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n python_abi-3.6 | 1_cp36m 4 KB conda-forge\n branca-0.4.1 | py_0 26 KB conda-forge\n folium-0.5.0 | py_0 45 KB conda-forge\n openssl-1.1.1g | h516909a_1 2.1 MB conda-forge\n ca-certificates-2020.6.20 | hecda079_0 145 KB conda-forge\n altair-4.1.0 | py_1 614 KB conda-forge\n vincent-0.4.4 | py_1 28 KB conda-forge\n certifi-2020.6.20 | py36h9f0ad1d_0 151 KB conda-forge\n ------------------------------------------------------------\n Total: 3.1 MB\n\nThe following NEW packages will be INSTALLED:\n\n altair: 4.1.0-py_1 conda-forge\n branca: 0.4.1-py_0 conda-forge\n folium: 0.5.0-py_0 conda-forge\n python_abi: 3.6-1_cp36m conda-forge\n vincent: 0.4.4-py_1 conda-forge\n\nThe following packages will be UPDATED:\n\n certifi: 2020.6.20-py36_0 --> 2020.6.20-py36h9f0ad1d_0 conda-forge\n openssl: 1.1.1g-h7b6447c_0 --> 1.1.1g-h516909a_1 conda-forge\n\nThe following packages will be DOWNGRADED:\n\n ca-certificates: 2020.6.24-0 --> 2020.6.20-hecda079_0 conda-forge\n\n\nDownloading and Extracting Packages\npython_abi-3.6 | 4 KB | ##################################### | 100% \nbranca-0.4.1 | 26 KB | ##################################### | 100% \nfolium-0.5.0 | 45 KB | ##################################### | 100% \nopenssl-1.1.1g | 2.1 MB | ##################################### | 100% \nca-certificates-2020 | 145 KB | ##################################### | 100% \naltair-4.1.0 | 614 KB | ##################################### | 100% \nvincent-0.4.4 | 28 KB | ##################################### | 100% \ncertifi-2020.6.20 | 151 KB | ##################################### | 100% \nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n"
],
[
"# The code was removed by Watson Studio for sharing.",
"_____no_output_____"
],
[
"request_parameters = {\n \"client_id\": CLIENT_ID,\n \"client_secret\": CLIENT_SECRET,\n \"v\": VERSION,\n \"section\": \"coffee\",\n \"near\": \"Madiun\",\n \"radius\": 1000,\n \"limit\": 50}\n\ndata = requests.get(\"https://api.foursquare.com/v2/venues/explore\", params=request_parameters)",
"_____no_output_____"
],
[
"d = data.json()[\"response\"]\nd.keys()",
"_____no_output_____"
],
[
"d[\"headerLocationGranularity\"], d[\"headerLocation\"], d[\"headerFullLocation\"]",
"_____no_output_____"
],
[
"d[\"suggestedBounds\"], d[\"totalResults\"]",
"_____no_output_____"
],
[
"d[\"geocode\"]",
"_____no_output_____"
],
[
"d[\"groups\"][0].keys()",
"_____no_output_____"
],
[
"d[\"groups\"][0][\"type\"], d[\"groups\"][0][\"name\"]",
"_____no_output_____"
],
[
"items = d[\"groups\"][0][\"items\"]\nprint(\"items: %i\" % len(items))\nitems[0]",
"items: 20\n"
],
[
"items[1]",
"_____no_output_____"
],
[
"df_raw = []\nfor item in items:\n venue = item[\"venue\"]\n categories, uid, name, location = venue[\"categories\"], venue[\"id\"], venue[\"name\"], venue[\"location\"]\n assert len(categories) == 1\n shortname = categories[0][\"shortName\"]\n if not \"address\" in location:\n address = ''\n else:\n address = location[\"address\"]\n if not \"postalCode\" in location:\n postalcode = ''\n else:\n postalcode = location[\"postalCode\"]\n lat = location[\"lat\"]\n lng = location[\"lng\"]\n datarow = (uid, name, shortname, address, postalcode, lat, lng)\n df_raw.append(datarow)\ndf = pd.DataFrame(df_raw, columns=[\"uid\", \"name\", \"shortname\", \"address\", \"postalcode\", \"lat\", \"lng\"])\nprint(\"total %i cafes\" % len(df))\ndf.head()",
"total 20 cafes\n"
],
[
"madiun_center = d[\"geocode\"][\"center\"]\nmadiun_center",
"_____no_output_____"
]
],
[
[
"## Applying Heatmap to Map",
"_____no_output_____"
],
[
"Some density based estimator is a good to be used to determine where to start a new coffee business. Using HeatMap plugin in Folium, to visualize all the existing Cafes to same map:",
"_____no_output_____"
]
],
[
[
"\n\nfrom folium import plugins\n\n# create map of Helsinki using latitude and longitude values\nmap_madiun = folium.Map(location=[madiun_center[\"lat\"], madiun_center[\"lng\"]], zoom_start=14)\nfolium.LatLngPopup().add_to(map_madiun)\ndef add_markers(df):\n for (j, row) in df.iterrows():\n label = folium.Popup(row[\"name\"], parse_html=True)\n folium.CircleMarker(\n [row[\"lat\"], row[\"lng\"]],\n radius=10,\n popup=label,\n color='blue',\n fill=True,\n fill_color='#3186cc',\n fill_opacity=0.7,\n parse_html=False).add_to(map_madiun)\n\nadd_markers(df)\nhm_data = df[[\"lat\", \"lng\"]].as_matrix().tolist()\nmap_madiun.add_child(plugins.HeatMap(hm_data))\n\nmap_madiun\n\n",
"/opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:22: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n"
]
],
[
[
"## Result",
"_____no_output_____"
],
[
"After further analysis, the best location for a new cafe is on Tulus Bakti Street, because it is not in close proximity with other cafe, and located near school and on densest population region in madiun. [BPS DATA](https://madiunkota.bps.go.id/statictable/2015/06/08/141/jumlah-penduduk-menurut-kecamatan-dan-agama-yang-dianut-di-kota-madiun-2013-.html)",
"_____no_output_____"
]
],
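One way to sanity-check the "not in close proximity" claim is to compute great-circle distances from the proposed point (approximately the red marker used below) to the known cafes; a minimal sketch assuming only NumPy:

```python
import numpy as np

def haversine_km(lat1, lng1, lat2, lng2):
    # great-circle distance between two (lat, lng) points in kilometres
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lng2 - lng1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

proposed = (-7.6393, 111.5285)  # approximate proposed location on Tulus Bakti Street
cafes = [(-7.648215, 111.530610), (-7.635934, 111.519315), (-7.622158, 111.536357)]
print([round(haversine_km(*proposed, *c), 2) for c in cafes])
```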
[
[
"lat = -7.6393\nlng = 111.5285\n\nschool_1_lat = -7.6403\nschool_1_lng = 111.5316\n\nmap_best = folium.Map(location=[lat, lng], zoom_start=17)\nadd_markers(df)\nfolium.CircleMarker(\n [school_1_lat, school_1_lng],\n radius=15,\n popup=\"School\",\n color='Yellow',\n fill=True,\n fill_color='#3186cc',\n fill_opacity=0.7,\n parse_html=False).add_to(map_best)\nfolium.CircleMarker(\n [lat, lng],\n radius=15,\n popup=\"Best Location!\",\n color='red',\n fill=True,\n fill_color='#3186cc',\n fill_opacity=0.7,\n parse_html=False).add_to(map_best)\nmap_best",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d097aa1f673ef3ca0edc244fb60c25422cb73afc | 244,215 | ipynb | Jupyter Notebook | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 | 07b13b51b368e8dc22ddf3028203e5675ee7f96a | [
"BSD-3-Clause"
] | 1 | 2021-09-06T17:47:57.000Z | 2021-09-06T17:47:57.000Z | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 | 07b13b51b368e8dc22ddf3028203e5675ee7f96a | [
"BSD-3-Clause"
] | null | null | null | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 | 07b13b51b368e8dc22ddf3028203e5675ee7f96a | [
"BSD-3-Clause"
] | null | null | null | 331.813859 | 90,690 | 0.898696 | [
[
[
"<div>\n<img src=\"https://drive.google.com/uc?export=view&id=1vK33e_EqaHgBHcbRV_m38hx6IkG0blK_\" width=\"350\"/>\n</div> \n\n#**Artificial Intelligence - MSc**\nThis notebook is designed specially for the module\n\nET5003 - MACHINE LEARNING APPLICATIONS \n\nInstructor: Enrique Naredo\n###ET5003_BayesianNN\n\n© All rights reserved to the author, do not share outside this module.\n",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"A [Bayesian network](https://en.wikipedia.org/wiki/Bayesian_network) (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). \n\n* Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. \n* For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. \n* Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.",
"_____no_output_____"
],
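As a minimal illustration of that disease/symptom use case (the numbers below are made up purely for illustration, not taken from any real network), Bayes' rule gives the probability of the disease given an observed symptom:

```python
# hypothetical probabilities, for illustration only
p_disease = 0.01                # prior P(disease)
p_symptom_given_disease = 0.90  # P(symptom | disease)
p_symptom_given_healthy = 0.05  # P(symptom | no disease)

p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))  # about 0.154
```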
[
"**Acknowledgement**\n\nThis notebook is refurbished taking source code from Alessio Benavoli's webpage and from the libraries numpy, GPy, pylab, and pymc3.",
"_____no_output_____"
],
[
"## Libraries",
"_____no_output_____"
]
],
[
[
"# Suppressing Warnings:\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
],
[
"# https://pypi.org/project/GPy/\n!pip install gpy",
"Collecting gpy\n Downloading GPy-1.10.0.tar.gz (959 kB)\n\u001b[?25l\r\u001b[K |▍ | 10 kB 20.3 MB/s eta 0:00:01\r\u001b[K |▊ | 20 kB 14.9 MB/s eta 0:00:01\r\u001b[K |█ | 30 kB 10.5 MB/s eta 0:00:01\r\u001b[K |█▍ | 40 kB 8.9 MB/s eta 0:00:01\r\u001b[K |█▊ | 51 kB 5.5 MB/s eta 0:00:01\r\u001b[K |██ | 61 kB 5.6 MB/s eta 0:00:01\r\u001b[K |██▍ | 71 kB 5.4 MB/s eta 0:00:01\r\u001b[K |██▊ | 81 kB 6.0 MB/s eta 0:00:01\r\u001b[K |███ | 92 kB 6.2 MB/s eta 0:00:01\r\u001b[K |███▍ | 102 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███▊ | 112 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████ | 122 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████▍ | 133 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████▉ | 143 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████▏ | 153 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████▌ | 163 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████▉ | 174 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████▏ | 184 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████▌ | 194 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████▉ | 204 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████▏ | 215 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████▌ | 225 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████▉ | 235 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████▏ | 245 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████▌ | 256 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████▉ | 266 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████▏ | 276 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████▋ | 286 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████ | 296 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████▎ | 307 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████▋ | 317 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████ | 327 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████▎ | 337 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████▋ | 348 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████ | 358 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████▎ | 368 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████▋ | 378 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████ | 389 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████▎ | 399 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████▋ | 409 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████ | 419 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████▍ | 430 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████▊ | 440 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████ | 450 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████▍ | 460 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████▊ | 471 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████ | 481 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████▍ | 491 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 501 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████ | 512 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████▍ | 522 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████▊ | 532 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████ | 542 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████▍ | 552 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████▉ | 563 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████▏ | 573 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████▌ | 583 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████▉ | 593 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████▏ | 604 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████▌ | 614 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████▉ | 624 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████▏ | 634 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████▌ | 645 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████▉ | 655 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 665 kB 5.2 MB/s eta 
0:00:01\r\u001b[K |██████████████████████▌ | 675 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████▉ | 686 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████▎ | 696 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████▋ | 706 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 716 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████▎ | 727 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████▋ | 737 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████ | 747 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▎ | 757 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 768 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████ | 778 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▎ | 788 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▋ | 798 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 808 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████▎ | 819 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████▋ | 829 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 839 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████████▍ | 849 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████████▊ | 860 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████ | 870 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 880 kB 5.2 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▊ | 890 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 901 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▍ | 911 kB 5.2 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▊ | 921 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 931 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▍| 942 kB 5.2 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 952 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 959 kB 5.2 MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.7/dist-packages (from gpy) (1.19.5)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gpy) (1.15.0)\nCollecting paramz>=0.9.0\n Downloading paramz-0.9.5.tar.gz (71 kB)\n\u001b[K |████████████████████████████████| 71 kB 7.3 MB/s \n\u001b[?25hRequirement already satisfied: cython>=0.29 in /usr/local/lib/python3.7/dist-packages (from gpy) (0.29.24)\nRequirement already satisfied: scipy>=1.3.0 in /usr/local/lib/python3.7/dist-packages (from gpy) (1.4.1)\nRequirement already satisfied: decorator>=4.0.10 in /usr/local/lib/python3.7/dist-packages (from paramz>=0.9.0->gpy) (4.4.2)\nBuilding wheels for collected packages: gpy, paramz\n Building wheel for gpy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for gpy: filename=GPy-1.10.0-cp37-cp37m-linux_x86_64.whl size=2565100 sha256=1f810943b9e1419d3ec07c7fae1a04688b6c931395173c08cce6010fcbae5fea\n Stored in directory: /root/.cache/pip/wheels/f7/18/28/dd1ce0192a81b71a3b086fd952511d088b21e8359ea496860a\n Building wheel for paramz (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for paramz: filename=paramz-0.9.5-py3-none-any.whl size=102565 sha256=d7b2ee4f7a54dce4cda6d28f4abfa860e1a6520f4e9ac92fe35982042a8c5bdf\n Stored in directory: /root/.cache/pip/wheels/c8/95/f5/ce28482da28162e6028c4b3a32c41d147395825b3cd62bc810\nSuccessfully built gpy paramz\nInstalling collected packages: paramz, gpy\nSuccessfully installed gpy-1.10.0 paramz-0.9.5\n"
],
[
"import GPy as GPy\nimport numpy as np\nimport pylab as pb\nimport pymc3 as pm\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Data generation\nGenerate data from a nonlinear function and use a Gaussian Process to sample it.",
"_____no_output_____"
]
],
[
[
"# seed the legacy random number generator\n# to replicate experiments\nseed = None\n#seed = 7\nnp.random.seed(seed)",
"_____no_output_____"
],
[
"# Gaussian Processes\n# https://gpy.readthedocs.io/en/deploy/GPy.kern.html\n# Radial Basis Functions\n# https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html\n# kernel is a function that specifies the degree of similarity \n# between variables given their relative positions in parameter space\nkernel = GPy.kern.RBF(input_dim=1,lengthscale=0.15,variance=0.2)\nprint(kernel)",
" \u001b[1mrbf. \u001b[0;0m | value | constraints | priors\n \u001b[1mvariance \u001b[0;0m | 0.2 | +ve | \n \u001b[1mlengthscale\u001b[0;0m | 0.15 | +ve | \n"
],
[
"# number of samples\nnum_samples_train = 250 \nnum_samples_test = 200\n\n# intervals to sample\na, b, c = 0.2, 0.6, 0.8\n# points evenly spaced over [0,1]\ninterval_1 = np.random.rand(int(num_samples_train/2))*b - c\ninterval_2 = np.random.rand(int(num_samples_train/2))*b + c\n\nX_new_train = np.sort(np.hstack([interval_1,interval_2])) \nX_new_test = np.linspace(-1,1,num_samples_test)\n\nX_new_all = np.hstack([X_new_train,X_new_test]).reshape(-1,1)\n\n# vector of the means\nμ_new = np.zeros((len(X_new_all)))\n# covariance matrix\nC_new = kernel.K(X_new_all,X_new_all)\n\n# noise factor\nnoise_new = 0.1\n\n# generate samples path with mean μ and covariance C\nTF_new = np.random.multivariate_normal(μ_new,C_new,1)[0,:]\ny_new_train = TF_new[0:len(X_new_train)] + np.random.randn(len(X_new_train))*noise_new\ny_new_test = TF_new[len(X_new_train):] + np.random.randn(len(X_new_test))*noise_new\nTF_new = TF_new[len(X_new_train):]",
"_____no_output_____"
]
],
[
[
"In this example, first generate a nonlinear functions and then generate noisy training data from that function.\n\nThe constrains are:\n* Training samples $x$ belong to either interval $[-0.8,-0.2]$ or $[0.2,0.8]$.\n* There is not data training samples from the interval $[-0.2,0.2]$. \n* The goal is to evaluate the extrapolation error outside in the interval $[-0.2,0.2]$.",
"_____no_output_____"
]
],
[
[
"# plot \npb.figure()\npb.plot(X_new_test,TF_new,c='b',label='True Function',zorder=100)\n# training data\npb.scatter(X_new_train,y_new_train,c='g',label='Train Samples',alpha=0.5)\npb.xlabel(\"x\",fontsize=16);\npb.ylabel(\"y\",fontsize=16,rotation=0)\npb.legend()\npb.savefig(\"New_data.pdf\")",
"_____no_output_____"
]
],
[
[
"## Bayesian NN\nWe address the previous nonlinear regression problem by using a Bayesian NN.\n\n**The model is basically very similar to polynomial regression**. We first define the nonlinear function (NN)\nand the place a prior over the unknown parameters. We then compute the posterior.",
"_____no_output_____"
]
],
[
[
"# https://theano-pymc.readthedocs.io/en/latest/\nimport theano\n\n# add a column of ones to include an intercept in the model\nx1 = np.vstack([np.ones(len(X_new_train)), X_new_train]).T\n\n\nfloatX = theano.config.floatX\n\nl = 15\n# Initialize random weights between each layer\n# we do that to help the numerical algorithm that computes the posterior\ninit_1 = np.random.randn(x1.shape[1], l).astype(floatX)\ninit_out = np.random.randn(l).astype(floatX)\n\n# pymc3 model as neural_network\nwith pm.Model() as neural_network:\n # we convert the data in theano type so we can do dot products with the correct type.\n ann_input = pm.Data('ann_input', x1)\n ann_output = pm.Data('ann_output', y_new_train)\n # Priors \n # Weights from input to hidden layer\n weights_in_1 = pm.Normal('w_1', 0, sigma=10,\n shape=(x1.shape[1], l), testval=init_1)\n # Weights from hidden layer to output\n weights_2_out = pm.Normal('w_0', 0, sigma=10,\n shape=(l,),testval=init_out)\n\n # Build neural-network using tanh activation function\n # Inner layer\n act_1 = pm.math.tanh(pm.math.dot(ann_input,weights_in_1))\n # Linear layer, like in Linear regression\n act_out = pm.Deterministic('act_out',pm.math.dot(act_1, weights_2_out))\n\n # standard deviation of noise\n sigma = pm.HalfCauchy('sigma',5)\n\n # Normal likelihood\n out = pm.Normal('out',\n act_out,\n sigma=sigma,\n observed=ann_output)",
"_____no_output_____"
],
[
"# this can be slow because there are many parameters\n\n# some parameters\npar1 = 100 # start with 100, then use 1000+\npar2 = 1000 # start with 1000, then use 10000+\n\n# neural network\nwith neural_network:\n posterior = pm.sample(par1,tune=par2,chains=1)",
"WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.\nWARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.\nOnly 100 samples in chain.\nAuto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nSequential sampling (1 chains in 1 job)\nNUTS: [sigma, w_0, w_1]\n"
]
],
[
[
"Specifically, PyMC3 supports the following Variational Inference (VI) methods:\n\n * Automatic Differentiation Variational Inference (ADVI): 'advi'\n * ADVI full rank: 'fullrank_advi'\n * Stein Variational Gradient Descent (SVGD): 'svgd'\n * Amortized Stein Variational Gradient Descent (ASVGD): 'asvgd'\n * Normalizing Flow with default scale-loc flow (NFVI): 'nfvi'\n",
"_____no_output_____"
]
],
[
[
"# we can do instead an approximated inference\nparam3 = 1000 # start with 1000, then use 50000+\nVI = 'advi' # 'advi', 'fullrank_advi', 'svgd', 'asvgd', 'nfvi'\nOP = pm.adam # pm.adam, pm.sgd, pm.adagrad, pm.adagrad_window, pm.adadelta\nLR = 0.01 \n\nwith neural_network:\n approx = pm.fit(param3, method=VI, obj_optimizer=pm.adam(learning_rate=LR))",
"_____no_output_____"
],
[
"# plot \npb.plot(approx.hist, label='Variational Inference: '+ VI.upper(), alpha=.3)\npb.legend(loc='upper right')\n# Evidence Lower Bound (ELBO)\n# https://en.wikipedia.org/wiki/Evidence_lower_bound\npb.ylabel('ELBO')\npb.xlabel('iteration');",
"_____no_output_____"
],
[
"# draw samples from variational posterior\nD = 500\nposterior = approx.sample(draws=D)",
"_____no_output_____"
]
],
[
[
"Now, we compute the prediction for each sample. \n* Note that we use `np.tanh` instead of `pm.math.tanh`\nfor speed reason. \n* `pm.math.tanh` is slower outside a Pymc3 model because it converts all data in theano format.\n* It is convenient to do GPU-based training, but it is slow when we only need to compute predictions.",
"_____no_output_____"
]
],
[
[
"# add a column of ones to include an intercept in the model\nx2 = np.vstack([np.ones(len(X_new_test)), X_new_test]).T\n\ny_pred = []\nfor i in range(posterior['w_1'].shape[0]):\n #inner layer\n t1 = np.tanh(np.dot(posterior['w_1'][i,:,:].T,x2.T))\n #outer layer\n y_pred.append(np.dot(posterior['w_0'][i,:],t1))\n\n# predictions \ny_pred = np.array(y_pred)",
"_____no_output_____"
]
],
[
[
"We first plot the mean of `y_pred`, this is very similar to the prediction that Keras returns",
"_____no_output_____"
]
],
[
[
"# plot\npb.plot(X_new_test,TF_new,label='true')\npb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean')\npb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)\npb.legend()\npb.ylim([-1,1])\npb.xlabel(\"x\",fontsize=16);\npb.ylabel(\"y\",fontsize=16,rotation=0)\npb.savefig(\"BayesNN_mean.pdf\")",
"_____no_output_____"
]
],
[
[
"Now, we plot the uncertainty, by plotting N nonlinear regression lines from the posterior",
"_____no_output_____"
]
],
[
[
"# plot\npb.plot(X_new_test,TF_new,label='true',Zorder=100)\npb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean',Zorder=100)\n\n\nN = 500\n# nonlinear regression lines\nfor i in range(N):\n pb.plot(X_new_test,y_pred[i,:],c='gray',alpha=0.05)\n\npb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)\npb.xlabel(\"x\",fontsize=16);\npb.ylabel(\"y\",fontsize=16,rotation=0)\npb.ylim([-1,1.5])\npb.legend()\npb.savefig(\"BayesNN_samples.pdf\")",
"_____no_output_____"
],
[
"# plot\npb.plot(X_new_test,TF_new,label='true',Zorder=100)\npb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean',Zorder=100)\npb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)\npb.xlabel(\"x\",fontsize=16);\npb.ylabel(\"y\",fontsize=16,rotation=0)\npb.ylim([-1,1.5])\npb.legend()\npb.savefig(\"BayesNN_mean.pdf\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d097aa6f83550a80bb3b462d8183b4350781e576 | 23,660 | ipynb | Jupyter Notebook | 7.20.ipynb | hzdhhh/Python | e20797ef7cfbd3f1ac1561c57b92f06527663bc6 | [
"Apache-2.0"
] | null | null | null | 7.20.ipynb | hzdhhh/Python | e20797ef7cfbd3f1ac1561c57b92f06527663bc6 | [
"Apache-2.0"
] | null | null | null | 7.20.ipynb | hzdhhh/Python | e20797ef7cfbd3f1ac1561c57b92f06527663bc6 | [
"Apache-2.0"
] | null | null | null | 21.686526 | 1,394 | 0.474768 | [
[
[
"# 函数\n\n- 函数可以用来定义可重复代码,组织和简化\n- 一般来说一个函数在实际开发中为一个小功能\n- 一个类为一个大功能\n- 同样函数的长度不要超过一屏",
"_____no_output_____"
],
[
"Python中的所有函数实际上都是有返回值(return None),\n\n如果你没有设置return,那么Python将不显示None.\n\n如果你设置return,那么将返回出return这个值.",
"_____no_output_____"
]
],
[
[
"def HJN():\n print('Hello')\n return 1000",
"_____no_output_____"
],
[
"b=HJN()\nprint(b)",
"Hello\n1000\n"
],
[
"HJN",
"_____no_output_____"
],
[
"def panduan(number):\n if number % 2 == 0:\n print('O')\n else:\n print('J')",
"_____no_output_____"
],
[
"panduan(number=1)",
"J\n"
],
[
"panduan(2)",
"O\n"
]
],
[
[
"## 定义一个函数\n\ndef function_name(list of parameters):\n \n do something\n\n- 以前使用的random 或者range 或者print.. 其实都是函数或者类",
"_____no_output_____"
],
[
"函数的参数如果有默认值的情况,当你调用该函数的时候:\n可以不给予参数值,那么就会走该参数的默认值\n否则的话,就走你给予的参数值.",
"_____no_output_____"
]
],
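A minimal example of the default-value behaviour described above:

```python
def greet(name, greeting='Hello'):
    # greeting falls back to 'Hello' when no value is passed
    print(greeting, name)

greet('Ann')        # Hello Ann
greet('Ann', 'Hi')  # Hi Ann
```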
[
[
"import random",
"_____no_output_____"
],
[
"def hahah():\n n = random.randint(0,5)\n while 1:\n N = eval(input('>>'))\n if n == N:\n print('smart')\n break\n elif n < N:\n print('太小了')\n elif n > N:\n print('太大了')\n",
"_____no_output_____"
]
],
[
[
"## 调用一个函数\n- functionName()\n- \"()\" 就代表调用",
"_____no_output_____"
]
],
[
[
"def H():\n print('hahaha')",
"_____no_output_____"
],
[
"def B():\n H()",
"_____no_output_____"
],
[
"B()",
"hahaha\n"
],
[
"def A(f):\n f()",
"_____no_output_____"
],
[
"A(B)",
"hahaha\n"
]
],
[
[
"",
"_____no_output_____"
],
[
"## 带返回值和不带返回值的函数\n- return 返回的内容\n- return 返回多个值\n- 一般情况下,在多个函数协同完成一个功能的时候,那么将会有返回值",
"_____no_output_____"
],
[
"\n\n- 当然也可以自定义返回None",
"_____no_output_____"
],
[
"## EP:\n",
"_____no_output_____"
]
],
[
[
"def main():\n print(min(min(5,6),(51,6)))\ndef min(n1,n2):\n a = n1\n if n2 < a:\n a = n2",
"_____no_output_____"
],
[
"main()",
"_____no_output_____"
]
],
[
[
"## 类型和关键字参数\n- 普通参数\n- 多个参数\n- 默认值参数\n- 不定长参数",
"_____no_output_____"
],
[
"## 普通参数",
"_____no_output_____"
],
[
"## 多个参数",
"_____no_output_____"
],
[
"## 默认值参数",
"_____no_output_____"
],
[
"## 强制命名",
"_____no_output_____"
]
],
[
[
"def U(str_):\n xiaoxie = 0\n for i in str_:\n ASCII = ord(i)\n if 97<=ASCII<=122:\n xiaoxie +=1\n elif xxxx:\n daxie += 1\n elif xxxx:\n shuzi += 1\n return xiaoxie,daxie,shuzi",
"_____no_output_____"
],
[
"U('HJi12')",
"H\nJ\ni\n1\n2\n"
]
],
[
[
"## 不定长参数\n- \\*args\n> - 不定长,来多少装多少,不装也是可以的\n - 返回的数据类型是元组\n - args 名字是可以修改的,只是我们约定俗成的是args\n- \\**kwargs \n> - 返回的字典\n - 输入的一定要是表达式(键值对)\n- name,\\*args,name2,\\**kwargs 使用参数名",
"_____no_output_____"
]
],
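A minimal sketch combining the pieces above (a positional parameter, *args, a keyword-only name, and **kwargs):

```python
def demo(name, *args, name2, **kwargs):
    # name is positional, args collects extra positionals into a tuple,
    # name2 must be passed by keyword, kwargs collects key=value pairs into a dict
    print(name, args, name2, kwargs)

demo('a', 1, 2, 3, name2='b', extra=42)  # a (1, 2, 3) b {'extra': 42}
```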
[
[
"def TT(a,b)",
"_____no_output_____"
],
[
"def TT(*args,**kwargs):\n print(kwargs)\n print(args)\nTT(1,2,3,4,6,a=100,b=1000)",
"{'a': 100, 'b': 1000}\n(1, 2, 3, 4, 6)\n"
],
[
"{'key':'value'}",
"()\n"
],
[
"TT(1,2,4,5,7,8,9,)",
"(1, 2, 4, 5, 7, 8, 9)\n"
],
[
"def B(name1,nam3):\n pass",
"_____no_output_____"
],
[
"B(name1=100,2)",
"_____no_output_____"
],
[
"def sum_(*args,A='sum'):\n \n res = 0\n count = 0\n for i in args:\n res +=i\n count += 1\n if A == \"sum\":\n return res\n elif A == \"mean\":\n mean = res / count\n return res,mean\n else:\n print(A,'还未开放')\n \n ",
"_____no_output_____"
],
[
"sum_(-1,0,1,4,A='var')",
"var 还未开放\n"
],
[
"'aHbK134'.__iter__",
"_____no_output_____"
],
[
"b = 'asdkjfh'\nfor i in b :\n print(i)",
"a\ns\nd\nk\nj\nf\nh\n"
],
[
"2,5\n2 + 22 + 222 + 2222 + 22222",
"_____no_output_____"
]
],
[
[
"## 变量的作用域\n- 局部变量 local\n- 全局变量 global\n- globals 函数返回一个全局变量的字典,包括所有导入的变量\n- locals() 函数会以字典类型返回当前位置的全部局部变量。",
"_____no_output_____"
]
],
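A minimal example of the globals()/locals() point above:

```python
g_var = 'I am global'

def scope_demo():
    l_var = 'I am local'
    print('g_var' in globals())   # True: globals() holds module-level names
    print(list(locals().keys()))  # ['l_var']: locals() holds names defined here

scope_demo()
```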
[
[
"a = 1000\nb = 10\ndef Y():\n global a,b\n a += 100\n print(a)\nY()",
"1100\n"
],
[
"def YY(a1):\n a1 += 100\n print(a1)\nYY(a)\nprint(a)",
"1200\n1100\n"
]
],
[
[
"## 注意:\n- global :在进行赋值操作的时候需要声明\n- 官方解释:This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.\n- ",
"_____no_output_____"
],
[
"# Homework\n- 1\n",
"_____no_output_____"
]
],
[
[
"def getPentagonalNumber(n):\n count = 0\n for n in range(100):\n y=int(n*(3*n-1)/2)\n print(y,end=' ')\n count += 1\n if count %10 == 0:\n print()\n \ngetPentagonalNumber(100)",
"0 1 5 12 22 35 51 70 92 117 \n145 176 210 247 287 330 376 425 477 532 \n590 651 715 782 852 925 1001 1080 1162 1247 \n1335 1426 1520 1617 1717 1820 1926 2035 2147 2262 \n2380 2501 2625 2752 2882 3015 3151 3290 3432 3577 \n3725 3876 4030 4187 4347 4510 4676 4845 5017 5192 \n5370 5551 5735 5922 6112 6305 6501 6700 6902 7107 \n7315 7526 7740 7957 8177 8400 8626 8855 9087 9322 \n9560 9801 10045 10292 10542 10795 11051 11310 11572 11837 \n12105 12376 12650 12927 13207 13490 13776 14065 14357 14652 \n"
]
],
[
[
"- 2 \n",
"_____no_output_____"
]
],
[
[
"def sumDigits(n):\n bai=n//100\n shi=n//10%10\n ge=n%10\n y=bai+shi+ge\n print('%d(%d+%d+%d)'%(y,bai,shi,ge))\nsumDigits(234)",
"9(2+3+4)\n"
]
],
[
[
"- 3\n",
"_____no_output_____"
]
],
[
[
"def displaySortedNumber():\n num1,num2,num3=map(float,input('Enter three number:').split(','))\n a=[num1,num2,num3]\n a.sort()\n print(a)\ndisplaySortedNumber()",
"Enter three number:2,1.0,3\n[1.0, 2.0, 3.0]\n"
]
],
[
[
"- 4\n",
"_____no_output_____"
]
],
[
[
"def futureInvestmentValue(principal,rate,years):\n for i in range(years):\n principal = principal * (1+rate)\n print(\"{}年内总额{}: \".format(i+1,principal))\nprincipal = eval(input(\"输入存款金额: \"))\nrate = eval(input(\"输入利率: \"))\nyears = eval(input(\"输入年份:\" ))\nfutureInvestmentValue(principal,rate,years)",
"输入存款金额: 1000\n输入利率: 9\n输入年份:30\n1年内总额10000: \n2年内总额100000: \n3年内总额1000000: \n4年内总额10000000: \n5年内总额100000000: \n6年内总额1000000000: \n7年内总额10000000000: \n8年内总额100000000000: \n9年内总额1000000000000: \n10年内总额10000000000000: \n11年内总额100000000000000: \n12年内总额1000000000000000: \n13年内总额10000000000000000: \n14年内总额100000000000000000: \n15年内总额1000000000000000000: \n16年内总额10000000000000000000: \n17年内总额100000000000000000000: \n18年内总额1000000000000000000000: \n19年内总额10000000000000000000000: \n20年内总额100000000000000000000000: \n21年内总额1000000000000000000000000: \n22年内总额10000000000000000000000000: \n23年内总额100000000000000000000000000: \n24年内总额1000000000000000000000000000: \n25年内总额10000000000000000000000000000: \n26年内总额100000000000000000000000000000: \n27年内总额1000000000000000000000000000000: \n28年内总额10000000000000000000000000000000: \n29年内总额100000000000000000000000000000000: \n30年内总额1000000000000000000000000000000000: \n"
]
],
[
[
"- 5\n",
"_____no_output_____"
]
],
[
[
"def printChars():\n count=0\n for i in range(49,91):\n print(chr(i),end=' ')\n count=count+1\n if count%10==0:\n print()\nprintChars()",
"1 2 3 4 5 6 7 8 9 : \n; < = > ? @ A B C D \nE F G H I J K L M N \nO P Q R S T U V W X \nY Z "
]
],
[
[
"- 6\n",
"_____no_output_____"
]
],
[
[
"def numberofDaysInAYear():\n for year in range(2010,2021):\n if (year%4==0 and year%100!=0) or (year%400==0):\n print('%d年366天'%year)\n else:\n print('%d年365天'%year) \nnumberofDaysInAYear()",
"2010年365天\n2011年365天\n2012年366天\n2013年365天\n2014年365天\n2015年365天\n2016年366天\n2017年365天\n2018年365天\n2019年365天\n2020年366天\n"
]
],
[
[
"- 7\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport math\ndef xsj(x1,y1,x2,y2):\n p1=np.array([x1,y1])\n p2=np.array([x2,y2])\n p3=p2-p1\n p4=math.hypot(p3[0],p3[1])\n print(p4)\nx1,y1,x2,y2=map(int,input().split(','))\nxsj(x1,y1,x2,y2)",
"1,2,3,4\n2.8284271247461903\n"
]
],
[
[
"- 8\n",
"_____no_output_____"
],
[
"- 9\n\n",
"_____no_output_____"
]
],
[
[
"import time\n\nlocaltime = time.asctime(time.localtime(time.time()))\nprint(\"本地时间为 :\", localtime)\n",
"本地时间为 : Sat May 11 08:37:21 2019\n"
],
[
"2019 - 1970",
"_____no_output_____"
]
],
[
[
"- 10\n",
"_____no_output_____"
]
],
[
[
"import random\n\nnum1=random.randrange(1,7)\nnum2=random.randrange(1,7)\nsum_=num1+num2\n\nif sum_==2 or sum_==3 or sum_==12:\n print('You rolled %d+%d=%d'%(num1,num2,sum_))\n print('you lose')\nelif sum_==7 or sum_==11:\n print('You rolled %d+%d=%d'%(num1,num2,sum_))\n print('you win')\nelse:\n print('You rolled %d+%d=%d'%(num1,num2,sum_))\n print('point is %d'%sum_)\n num1=random.randrange(1,7)\n num2=random.randrange(1,7)\n sum_1=num1+num2\n if sum_1==sum_:\n print('You rolled %d+%d=%d'%(num1,num2,sum_1))\n print('you win')\n else:\n print('You rolled %d+%d=%d'%(num1,num2,sum_1))\n print('you lose')",
"You rolled 5+2=7\nyou win\n"
]
],
[
[
"- 11 \n### 去网上寻找如何用Python代码发送邮件",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d097aa79a610f0989d460f3755559f850ac76b83 | 6,237 | ipynb | Jupyter Notebook | coding-259-master/coding-259/py3/03-conv-basic/neural-network.ipynb | huangjeake/OpenCV | f0253f7b6adf135d256f24e7298a8b8d11755af9 | [
"MIT"
] | null | null | null | coding-259-master/coding-259/py3/03-conv-basic/neural-network.ipynb | huangjeake/OpenCV | f0253f7b6adf135d256f24e7298a8b8d11755af9 | [
"MIT"
] | null | null | null | coding-259-master/coding-259/py3/03-conv-basic/neural-network.ipynb | huangjeake/OpenCV | f0253f7b6adf135d256f24e7298a8b8d11755af9 | [
"MIT"
] | null | null | null | 34.269231 | 94 | 0.499759 | [
[
[
"import tensorflow as tf\nimport os\nimport pickle\nimport numpy as np\n\nCIFAR_DIR = \"./../../cifar-10-batches-py\"\nprint(os.listdir(CIFAR_DIR))",
"_____no_output_____"
],
[
"def load_data(filename):\n \"\"\"read data from data file.\"\"\"\n with open(filename, 'rb') as f:\n data = pickle.load(f, encoding='bytes')\n return data[b'data'], data[b'labels']\n\n# tensorflow.Dataset.\nclass CifarData:\n def __init__(self, filenames, need_shuffle):\n all_data = []\n all_labels = []\n for filename in filenames:\n data, labels = load_data(filename)\n all_data.append(data)\n all_labels.append(labels)\n self._data = np.vstack(all_data)\n self._data = self._data / 127.5 - 1\n self._labels = np.hstack(all_labels)\n print(self._data.shape)\n print(self._labels.shape)\n \n self._num_examples = self._data.shape[0]\n self._need_shuffle = need_shuffle\n self._indicator = 0\n if self._need_shuffle:\n self._shuffle_data()\n \n def _shuffle_data(self):\n # [0,1,2,3,4,5] -> [5,3,2,4,0,1]\n p = np.random.permutation(self._num_examples)\n self._data = self._data[p]\n self._labels = self._labels[p]\n \n def next_batch(self, batch_size):\n \"\"\"return batch_size examples as a batch.\"\"\"\n end_indicator = self._indicator + batch_size\n if end_indicator > self._num_examples:\n if self._need_shuffle:\n self._shuffle_data()\n self._indicator = 0\n end_indicator = batch_size\n else:\n raise Exception(\"have no more examples\")\n if end_indicator > self._num_examples:\n raise Exception(\"batch size is larger than all examples\")\n batch_data = self._data[self._indicator: end_indicator]\n batch_labels = self._labels[self._indicator: end_indicator]\n self._indicator = end_indicator\n return batch_data, batch_labels\n\ntrain_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]\ntest_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]\n\ntrain_data = CifarData(train_filenames, True)\ntest_data = CifarData(test_filenames, False)",
"_____no_output_____"
],
[
"x = tf.placeholder(tf.float32, [None, 3072])\n# [None], eg: [0,5,6,3]\ny = tf.placeholder(tf.int64, [None])\nhidden1 = tf.layers.dense(x, 100, activation=tf.nn.relu)\nhidden2 = tf.layers.dense(hidden1, 100, activation=tf.nn.relu)\nhidden3 = tf.layers.dense(hidden2, 50, activation=tf.nn.relu)\ny_ = tf.layers.dense(hidden3, 10)\n\nloss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)\n# y_ -> sofmax\n# y -> one_hot\n# loss = ylogy_\n\n# indices\npredict = tf.argmax(y_, 1)\n# [1,0,1,1,1,0,0,0]\ncorrect_prediction = tf.equal(predict, y)\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))\n\nwith tf.name_scope('train_op'):\n train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)",
"_____no_output_____"
],
[
"init = tf.global_variables_initializer()\nbatch_size = 20\ntrain_steps = 100000\ntest_steps = 100\n\n# train: 100k: 51.%\nwith tf.Session() as sess:\n sess.run(init)\n for i in range(train_steps):\n batch_data, batch_labels = train_data.next_batch(batch_size)\n loss_val, acc_val, _ = sess.run(\n [loss, accuracy, train_op],\n feed_dict={\n x: batch_data,\n y: batch_labels})\n if (i+1) % 500 == 0:\n print('[Train] Step: %d, loss: %4.5f, acc: %4.5f' \n % (i+1, loss_val, acc_val))\n if (i+1) % 5000 == 0:\n test_data = CifarData(test_filenames, False)\n all_test_acc_val = []\n for j in range(test_steps):\n test_batch_data, test_batch_labels \\\n = test_data.next_batch(batch_size)\n test_acc_val = sess.run(\n [accuracy],\n feed_dict = {\n x: test_batch_data, \n y: test_batch_labels\n })\n all_test_acc_val.append(test_acc_val)\n test_acc = np.mean(all_test_acc_val)\n print('[Test ] Step: %d, acc: %4.5f'\n % (i+1, test_acc))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d097abbb0f1adb301f80a26cd5ebae7fd384c03a | 24,218 | ipynb | Jupyter Notebook | Note_books/Explore_Models/Feature_Partitions.ipynb | joeyamosjohns/final_project_nhl_prediction_first_draft | 8bffe1c82c76ec4aa8482d38d9eb5efad1644496 | [
"MIT"
] | null | null | null | Note_books/Explore_Models/Feature_Partitions.ipynb | joeyamosjohns/final_project_nhl_prediction_first_draft | 8bffe1c82c76ec4aa8482d38d9eb5efad1644496 | [
"MIT"
] | null | null | null | Note_books/Explore_Models/Feature_Partitions.ipynb | joeyamosjohns/final_project_nhl_prediction_first_draft | 8bffe1c82c76ec4aa8482d38d9eb5efad1644496 | [
"MIT"
] | null | null | null | 32.377005 | 254 | 0.46069 | [
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"X_raw = pd.read_csv(\"/Users/joejohns/data_bootcamp/GitHub/final_project_nhl_prediction/Data/Shaped_Data/data_bet_stats_mp.csv\")",
"_____no_output_____"
],
[
"##note it is against, for .. then sometimes *by itself* stat% = for/(for+against); stat%_against would be 1-stat%\n\n##basic features to add:\n##pp %\n##pk stats, \n##pk%\n\n##sh%\n##sv%\n\n##goal_diff = gf -ga\n##goal% = gf/(gf+ga)\n\n##total points (prolly adjust for stupid OT, SO shit)\n##pts %\n##league rank (based on pts%)\n\n\n\n\n\nfeat_drop = [ \n'startRinkSide'\n'HoA', #this is mp ###many of these are repeated from mp_data\n'HoA_bet',\n'VH',\n'home_or_away',\n'team',\n'name',\n'Team',\n'Unnamed: 0',\n'playerTeam',\n'position',\n'blocked', ## Same as bSAAgainst\n'pim', ## same as penaltyminFor\n'goals', ##goalsFor\n'shots',\n'giveaways',\n 'hits',\n] \n\n\n#################################first round\nfeat_goals = [\n'goalsAgainst',\n 'goalsFor',]\n\n##do ga - gf and maybe gf/(gf+ga) ... can I get rid of OT and SO ? Not so easy ... would need to use situation stuff.\n\n##\nfeat_SOG = [\n'shotsOnGoalAgainst',\n 'shotsOnGoalFor',\n]\n##sh%, sv%\n\nfeat_saves = [\n'savedShotsOnGoalAgainst',\n 'savedShotsOnGoalFor',\n#pair with shots for sv%, sh%\n]\n\n##pp, pk, penalties\n\nfeat_pen_pp_pk = [\n'penalityMinutesAgainst', #Penalties \n 'penalityMinutesFor',\n# 'penaltiesAgainst',\n# 'penaltiesFor', not sure so useful compared to minutes \n'powerPlayGoals',\n 'powerPlayOpportunities', #Powerplay\n]\n##! Need to create pk stat and pp%, pk%\n\n\n##xgoals\nfeat_xgoals =[\n'xGoalsAgainst', #(measure of quality of chances for and against)\n'xGoalsFor',\n 'xGoalsPercentage', #derived from above two\n]\n\n##possession \n\nfeat_SA = [\n'unblockedShotAttemptsAgainst',\n 'unblockedShotAttemptsFor',\n 'shotAttemptsAgainst',\n 'shotAttemptsFor',\n'corsiPercentage', ##derived from 4 above\n'fenwickPercentage',\n ]\n\n##a way to get possession \n\nfeat_FO = [\n 'faceOffsWonAgainst',\n 'faceOffsWonFor',\n'faceOffWinPercentage',] #has missing nan ... re-do it using last 2.\n\n##measures of possession loss/gain \n\nfeat_give_aways = [\n 'giveawaysAgainst',\n 'giveawaysFor',\n ] \n\nfeat_dzone_give_aways = [ \n'dZoneGiveawaysAgainst',\n 'dZoneGiveawaysFor',]\n\n##should cause more give aways and recoveries \n\nfeat_hits = [\n'hitsAgainst',\n 'hitsFor',\n] \n\n\n\n#measures defensive stat ... also ability to get shots thru\n\nfeat_blocked = [ \n'blockedShotAttemptsAgainst',\n'blockedShotAttemptsFor',]\n\n##measures shooting skill to hit the net or ability to make guys shoot wide if you are in lane (kind of like block)\n\nfeat_missed = [\n'missedShotsAgainst',\n'missedShotsFor',]\n\n##measures how many rebounds you give up (degense)... and how many you generate (offense)\n\n##g/rb \n##sht/rb \n##hml sht/rb\n## xg/rb \n\nfeat_rebounds = [\n'reboundGoalsAgainst', #could put with goals ... prolly want g/rb; pair with high rebounds for\n 'reboundGoalsFor',\n 'reboundsAgainst',\n 'reboundsFor',\n ] \n\n##ability to maintain pressure ... 
\n\nfeat_pressure = [\n 'playContinuedInZoneAgainst', #after a shot is next shot in zone (no events outside+ same players on ice)\n 'playContinuedInZoneFor',\n \n 'playContinuedOutsideZoneAgainst',\n 'playContinuedOutsideZoneFor',\n]\n\n \nfeat_pressure_stoppage = [\n'freezeAgainst', # \"freeze after shot attempt For/Against\"\n'freezeFor',\n\n'playStoppedAgainst',\n 'playStoppedFor', #non-freeze reason\n\n]\n\n################################second round\n\nfeat_goals_hml_danger = [\n 'highDangerGoalsAgainst',\n 'highDangerGoalsFor',\n 'mediumDangerGoalsAgainst',\n 'mediumDangerGoalsFor',\n 'lowDangerGoalsAgainst',\n 'lowDangerGoalsFor',\n]\n\nfeat_saves_fen = [\n'savedUnblockedShotAttemptsAgainst', ##mised shots plus saved SOG\n'savedUnblockedShotAttemptsFor', #pair with unblocked shots for Fsv%\n]\n\nfeat_xgoals_adj = [ \n'scoreVenueAdjustedxGoalsAgainst', ##probably select one of these 3 versions? \n 'scoreVenueAdjustedxGoalsFor',\n \n'flurryAdjustedxGoalsAgainst',\n'flurryAdjustedxGoalsFor',\n\n'flurryScoreVenueAdjustedxGoalsAgainst',\n'flurryScoreVenueAdjustedxGoalsFor',\n]\n\n\nfeat_xgoals_hml_danger = [\n 'highDangerxGoalsAgainst',\n 'highDangerxGoalsFor',\n \n 'mediumDangerxGoalsAgainst',\n 'mediumDangerxGoalsFor',\n \n 'lowDangerxGoalsAgainst',\n 'lowDangerxGoalsFor',\n] \n\nfeat_xgoals_rebounds = [\n'xGoalsFromActualReboundsOfShotsAgainst',\n'xGoalsFromActualReboundsOfShotsFor',\n'xGoalsFromxReboundsOfShotsAgainst',\n'xGoalsFromxReboundsOfShotsFor',\n'totalShotCreditAgainst', ##xgoals + xgoalsfromxreb -reboundxgoals ?\n'totalShotCreditFor',\n]\n\nfeat_SA_adj = [\n'scoreAdjustedShotsAttemptsAgainst',\n'scoreAdjustedShotsAttemptsFor',\n'scoreAdjustedUnblockedShotAttemptsAgainst', \n'scoreAdjustedUnblockedShotAttemptsFor',\n\n]\n\n\nfeat_SOG_hml_danger = [\n'highDangerShotsAgainst',\n 'highDangerShotsFor',\n\n'mediumDangerShotsAgainst',\n 'mediumDangerShotsFor',\n\n'lowDangerShotsAgainst',\n 'lowDangerShotsFor',\n]\n\nfeat_xrebounds = [\n'reboundxGoalsAgainst',\n 'reboundxGoalsFor',\n 'xReboundsAgainst',\n 'xReboundsFor']\n\nfeat_xpressure = [\n'xPlayStoppedAgainst',\n 'xPlayStoppedFor',\n\n 'xPlayContinuedInZoneAgainst', ##maybe do PCIZA and PCIZA - xPCIZA (measures lucky/unlucky)\n 'xPlayContinuedInZoneFor',\n \n'xPlayStoppedAgainst',\n 'xPlayStoppedFor',\n]\n",
"_____no_output_____"
],
[
"#I_F means individual for player; from player stats dictionary ... team level version there should be no difference if\n#cacluated from shot level data; but I guess he just averaged them ... I think the xgoalsonsshots is calculated from shot level data ... so these\n##avges only sometimes match\n\n#I_F_xGoalsFromxReboundsOfShots,\"Expected Goals from Expected Rebounds of player's shots. Even if a shot does not actually generate a rebound, if it's a shot that is likely to generate a rebound the player is credited with xGoalsFromxRebounds\"\n#I_F_xGoalsFromActualReboundsOfShots,Expected Goals from actual rebounds shots of player's shots. \n#I_F_reboundxGoals,Expected Goal on rebound shots",
"_____no_output_____"
],
[
"X_raw.head()",
"_____no_output_____"
]
],
[
[
"Goals for Total number of goals scored so far this season\n\nGoals against\n\nTotal number of goals conceded so far thisseason\n\nGoals Differential Goals for – Goals against\n\nPower Play Success Rate Ratio – scoring a goal when 5 on 4\n\nPower Kill Success Rate Ratio – not conceding a goal when 4 on 5\n\nShot % Goal scored/shots taken\n\nSave % Goals conceded/shots saved\n\nWinning Streak Number of consecutive games won\n\nConference Standing Latest ranking on conference table\n\nFenwick Close % Possession ratio\n\nPDO Luck parameter\n\n5/5 Goal For/Against Ratio – 5 on 5 Goals For/Against\n\n",
"_____no_output_____"
]
],
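The list above leans on several derived rates (power-play %, shot %, save %, goal differential, PDO) that the partition cell earlier only flags as features "to add". Below is a minimal pandas sketch of those derivations. The column names (`goalsFor`, `goalsAgainst`, `shotsOnGoalFor`, `shotsOnGoalAgainst`, `savedShotsOnGoalAgainst`, `powerPlayGoals`, `powerPlayOpportunities`) are taken from the feature lists above, but the exact formulas are conventional definitions assumed here, not the notebook's own.

```python
import pandas as pd

def add_derived_team_stats(df: pd.DataFrame) -> pd.DataFrame:
    # Work on a copy so the raw feature columns stay untouched.
    out = df.copy()

    # Goal differential and goal share.
    out['goal_diff'] = out['goalsFor'] - out['goalsAgainst']
    out['goal_pct'] = out['goalsFor'] / (out['goalsFor'] + out['goalsAgainst'])

    # Shooting and save percentages from shots on goal.
    out['sh_pct'] = out['goalsFor'] / out['shotsOnGoalFor']
    out['sv_pct'] = out['savedShotsOnGoalAgainst'] / out['shotsOnGoalAgainst']

    # PDO is conventionally shooting % plus save %.
    out['pdo'] = out['sh_pct'] + out['sv_pct']

    # Power-play conversion rate.
    out['pp_pct'] = out['powerPlayGoals'] / out['powerPlayOpportunities']

    return out

# Example use on the dataframe loaded earlier in this notebook:
# X_raw = add_derived_team_stats(X_raw)
```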
[
[
"feat_Pisch = [ 'faceOffWinPercentage',",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d097b22da7ea83db0034904eb85031ba59de2708 | 3,036 | ipynb | Jupyter Notebook | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 25.3 | 143 | 0.556324 | [
[
[
"# Системы координат\n\nТвердое тело обладает шестью степенями свободы, которые можно представить в виде трех перемещений и трех вращений.\n\n\n\nЧтобы описать положение тела оказывается удобным описать положение _локальной системы координат_ (ЛСК), жестко связанной с этим телом.\nТогда описание положения ЛСК становится унифицированным способом описания положения любого тела.\n\n\n\nПоложение ЛСК задается относительно некоторобо базового положения - _базовой системы координат_ (БСК).",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Зададим положение и ориентацию ЛСК в двумерном пространстве:",
"_____no_output_____"
]
],
[
[
"x = 10\ny = 7\nalpha = np.deg2rad(15)",
"_____no_output_____"
]
],
[
[
"И отобразим ее:",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(6, 6))\nax = fig.add_subplot()\nax.axis(\"equal\")\nax.set_xlim((-3, 12)); ax.set_ylim((-3, 12))\n\nax.arrow(0, 0, 5, 0, color=\"#ff0000\", linewidth=6)\nax.arrow(0, 0, 0, 5, color=\"#00ff00\", linewidth=6)\n\nax.arrow(x, y, np.cos(alpha), np.sin(alpha), color=\"#ff0000\", linewidth=2)\nax.arrow(x, y, -np.sin(alpha), np.cos(alpha), color=\"#00ff00\", linewidth=2)\n\nax.arrow(0, 0, x, y, color=\"#000000\", linewidth=1, head_width=0.1)\n\nfig.show()",
"_____no_output_____"
]
],
[
[
"Как мы узнали где находятся как смещены кончики ЛСК (направления осей) относительно ее положения?\n\nМы знаем угол поворота, следовательно мы можем написать систему линейных уравнений для любой точки $(a, b)$:\n\n$$\nX = a \\cos(\\alpha) - b \\sin(\\alpha) + x \\\\\nY = a \\sin(\\alpha) + b \\cos(\\alpha) + y \\\\\n$$",
"_____no_output_____"
]
]
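As a quick numerical check of the two equations above (a small sketch that reuses the `x`, `y`, `alpha` values defined earlier in this notebook), the code below maps the local axis tips $(1, 0)$ and $(0, 1)$ into the base frame; the results should match the endpoints of the small red and green arrows drawn above.

```python
import numpy as np

x, y, alpha = 10, 7, np.deg2rad(15)

def to_base_frame(a, b):
    # Rotate the local point (a, b) by alpha, then translate by (x, y).
    X = a * np.cos(alpha) - b * np.sin(alpha) + x
    Y = a * np.sin(alpha) + b * np.cos(alpha) + y
    return X, Y

print(to_base_frame(1, 0))  # tip of the local X axis (red arrow)
print(to_base_frame(0, 1))  # tip of the local Y axis (green arrow)
```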
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d097ba54892857c0d52317cf8e66f646414e7ae7 | 2,356 | ipynb | Jupyter Notebook | canarie-presentation/presentation.ipynb | Ouranosinc/jenkins-config | d41c96dadf62db603456ba70205015d04f9610c6 | [
"MIT"
] | null | null | null | canarie-presentation/presentation.ipynb | Ouranosinc/jenkins-config | d41c96dadf62db603456ba70205015d04f9610c6 | [
"MIT"
] | 9 | 2019-05-14T16:46:20.000Z | 2021-12-14T04:16:08.000Z | canarie-presentation/presentation.ipynb | Ouranosinc/jenkins-config | d41c96dadf62db603456ba70205015d04f9610c6 | [
"MIT"
] | 1 | 2021-05-27T23:02:07.000Z | 2021-05-27T23:02:07.000Z | 29.45 | 183 | 0.633701 | [
[
[
"# Jenkins Configuration as Code\n\nWe will look at how Jenkins is deployed at Ouranos.\n\nOuranos is using Jenkins to run automated nightly integration tests on our PAVICS platform. The tests are actually regular Jupyter notebooks so we achieve two goals at once:\n* Tutorials notebooks\n* Integration tests\n\nThis presentation and all the code used can be found at https://github.com/Ouranosinc/jenkins-config/tree/master/canarie-presentation.\n\nBy Long Vu, software developer at Ouranos.\n\nPresentation at the Canadian Research Software Conference, May 28, 2019.",
"_____no_output_____"
],
[
"# Jenkins without Configuration as Code plugin\n\n\n* No automated configuration sync between Production and Staging instances\n* Very time consuming to manually reconfigure a fresh instance to replicate exactly the same configurations\n* No tracking who changed which config, when, why\n",
"_____no_output_____"
],
[
"# Jenkins using Configuration as Code plugin\n\n* Fixes all the previous problems\n* Fits the DevOPS way of \"Infrastructure as Code\"\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d097d17b7917587552007e11292ef368be307b73 | 6,853 | ipynb | Jupyter Notebook | Chap1/2 CheckPermutation.ipynb | jenivial/cracking-the-coding-interview | d26fcd93ec18b5cf9d4ffd59e0eb65353d503f05 | [
"Unlicense"
] | null | null | null | Chap1/2 CheckPermutation.ipynb | jenivial/cracking-the-coding-interview | d26fcd93ec18b5cf9d4ffd59e0eb65353d503f05 | [
"Unlicense"
] | null | null | null | Chap1/2 CheckPermutation.ipynb | jenivial/cracking-the-coding-interview | d26fcd93ec18b5cf9d4ffd59e0eb65353d503f05 | [
"Unlicense"
] | null | null | null | 55.266129 | 4,452 | 0.492193 | [
[
[
"#Casos de prueba",
"_____no_output_____"
]
],
[
[
"var ex11 = \"abcd\";\nvar ex12 = \"abdc\";\nvar ex21 = \"abcde\";\nvar ex22 = \"abc\";",
"_____no_output_____"
]
],
[
[
"#Solución",
"_____no_output_____"
]
],
[
[
"bool IsPermutation(string firstString,string secondString)\n{\n var dic = new Dictionary<char,int>();\n foreach(var c in firstString)\n {\n if(!dic.ContainsKey(c))\n {\n dic.Add(c,0);\n }\n\n dic[c] = dic[c] + 1;\n }\n\n foreach(var c in secondString)\n {\n if(!dic.ContainsKey(c))\n {\n return false;\n }\n\n dic[c] = dic[c] - 1;\n if(dic[c] < 0)\n {\n return false;\n }\n }\n\n foreach(var keyValue in dic)\n {\n if(keyValue.Value != 0)\n {\n return false;\n } \n }\n\n return true;\n}",
"_____no_output_____"
],
[
"Console.WriteLine(IsPermutation(ex11,ex12));\nConsole.WriteLine(IsPermutation(ex21,ex22));\n",
"True\nFalse\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d097d5ce8bc27e7614b7bef0e6088734f93bda19 | 74,733 | ipynb | Jupyter Notebook | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics | da8a74c4ae34104cafab6f168b7d968e3c414cd9 | [
"MIT"
] | null | null | null | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics | da8a74c4ae34104cafab6f168b7d968e3c414cd9 | [
"MIT"
] | null | null | null | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics | da8a74c4ae34104cafab6f168b7d968e3c414cd9 | [
"MIT"
] | null | null | null | 65.960282 | 24,952 | 0.782519 | [
[
[
"# Linear Regression\n\nWe will implement a linear regression model by using the Keras library. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Data set: Weight and height\n\nActive Drive and read the csv file with the weight and height data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/DeepLearning-Intro-part2/weight-height.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.plot(kind='scatter',\n x='Height',\n y='Weight',\n title='Weight and Height in adults')",
"_____no_output_____"
]
],
[
[
"## Model building\n",
"_____no_output_____"
]
],
[
[
"# Import the type of model: Sequential, because we will add elements to this model in a sequence\nfrom keras.models import Sequential\n# To build a linear model we will need only dense layers\nfrom keras.layers import Dense\n# Import the optimizers, they change the weights and biases looking for the minimum cost\nfrom keras.optimizers import Adam, SGD",
"Using TensorFlow backend.\n"
]
],
[
[
"### Define the model",
"_____no_output_____"
]
],
[
[
"# define the model to be sequential\nmodel = Sequential()",
"_____no_output_____"
]
],
[
[
"```\nDense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)\n```\n\nJust your regular densely-connected NN layer.\n\nDense implements the operation: $output = activation(dot(input, kernel) + bias)$ where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).",
"_____no_output_____"
]
],
[
[
"# we add to the model a dense layer\n# the first parammeter is the number of units that is how many outputs this layer will have \n# Since this is a linear regression we will require a model with one output and one input\nmodel.add(Dense(1, input_shape=(1,))) #this code implements a model x*w+b",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 1) 2 \n=================================================================\nTotal params: 2\nTrainable params: 2\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"We have a single layer called 'dense_1' the Output Shape is 1 number and it has 2 parameters. \nThe reason that the Output Shape is (None, 1) is because the model can accept multiple points at once, instead of passing a single value we can ask for many values of x in one single call.\n",
"_____no_output_____"
],
[
"When we compile the model, Keras will construct the model based on the backend software that we define (here we are using TensorFlow model).\n",
"_____no_output_____"
],
[
"\n```\nmodel.compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None, **kwargs)\n```\n",
"_____no_output_____"
]
],
[
[
"# we will compile using the cost function (loss) 'mean_squared_error'\nmodel.compile(Adam(lr=0.8), 'mean_squared_error')",
"_____no_output_____"
]
],
[
[
"### Fit the model",
"_____no_output_____"
]
],
[
[
"X = df[['Height']].values #input data\ny_true = df['Weight'].values #output data",
"_____no_output_____"
]
],
[
[
"Fit the model by using the input data, X, and the output data, y_true. In each iteration the loss is decreasing by looking for the W and B values. In this example it will search 40 times (40 epochs).",
"_____no_output_____"
]
],
[
[
"model.fit(X, y_true, epochs=40)",
"Epoch 1/40\n10000/10000 [==============================] - 1s 54us/step - loss: 641.6255\nEpoch 2/40\n10000/10000 [==============================] - 0s 26us/step - loss: 531.2374\nEpoch 3/40\n10000/10000 [==============================] - 0s 27us/step - loss: 475.7274\nEpoch 4/40\n10000/10000 [==============================] - 0s 26us/step - loss: 437.8883\nEpoch 5/40\n10000/10000 [==============================] - 0s 27us/step - loss: 378.0077\nEpoch 6/40\n10000/10000 [==============================] - 0s 29us/step - loss: 348.8181\nEpoch 7/40\n10000/10000 [==============================] - 0s 25us/step - loss: 298.8625\nEpoch 8/40\n10000/10000 [==============================] - 0s 25us/step - loss: 285.9285\nEpoch 9/40\n10000/10000 [==============================] - 0s 25us/step - loss: 267.1154\nEpoch 10/40\n10000/10000 [==============================] - 0s 27us/step - loss: 257.0898\nEpoch 11/40\n10000/10000 [==============================] - 0s 26us/step - loss: 229.6472\nEpoch 12/40\n10000/10000 [==============================] - 0s 24us/step - loss: 215.3994\nEpoch 13/40\n10000/10000 [==============================] - 0s 24us/step - loss: 212.3130\nEpoch 14/40\n10000/10000 [==============================] - 0s 26us/step - loss: 197.3882\nEpoch 15/40\n10000/10000 [==============================] - 0s 27us/step - loss: 201.8168\nEpoch 16/40\n10000/10000 [==============================] - 0s 26us/step - loss: 194.6146\nEpoch 17/40\n10000/10000 [==============================] - 0s 24us/step - loss: 197.7766\nEpoch 18/40\n10000/10000 [==============================] - 0s 26us/step - loss: 176.6824\nEpoch 19/40\n10000/10000 [==============================] - 0s 25us/step - loss: 179.1187\nEpoch 20/40\n10000/10000 [==============================] - 0s 26us/step - loss: 202.4124\nEpoch 21/40\n10000/10000 [==============================] - 0s 26us/step - loss: 177.8580\nEpoch 22/40\n10000/10000 [==============================] - 0s 27us/step - loss: 178.8894\nEpoch 23/40\n10000/10000 [==============================] - 0s 26us/step - loss: 181.3036\nEpoch 24/40\n10000/10000 [==============================] - 0s 25us/step - loss: 179.6010\nEpoch 25/40\n10000/10000 [==============================] - 0s 25us/step - loss: 183.0226\nEpoch 26/40\n10000/10000 [==============================] - 0s 27us/step - loss: 174.9212\nEpoch 27/40\n10000/10000 [==============================] - 0s 27us/step - loss: 173.5374\nEpoch 28/40\n10000/10000 [==============================] - 0s 27us/step - loss: 167.8445\nEpoch 29/40\n10000/10000 [==============================] - 0s 26us/step - loss: 179.6013\nEpoch 30/40\n10000/10000 [==============================] - 0s 27us/step - loss: 184.6290\nEpoch 31/40\n10000/10000 [==============================] - 0s 27us/step - loss: 178.1232\nEpoch 32/40\n10000/10000 [==============================] - 0s 25us/step - loss: 178.5953\nEpoch 33/40\n10000/10000 [==============================] - 0s 26us/step - loss: 186.7905\nEpoch 34/40\n10000/10000 [==============================] - 0s 27us/step - loss: 183.4978\nEpoch 35/40\n10000/10000 [==============================] - 0s 27us/step - loss: 176.8871\nEpoch 36/40\n10000/10000 [==============================] - 0s 27us/step - loss: 170.2855\nEpoch 37/40\n10000/10000 [==============================] - 0s 27us/step - loss: 185.9228\nEpoch 38/40\n10000/10000 [==============================] - 0s 25us/step - loss: 189.4159\nEpoch 39/40\n10000/10000 [==============================] - 0s 28us/step - loss: 179.4457\nEpoch 
40/40\n10000/10000 [==============================] - 0s 26us/step - loss: 181.2774\n"
],
[
"y_pred = model.predict(X)",
"_____no_output_____"
],
[
"df.plot(kind='scatter',\n x='Height',\n y='Weight',\n title='Weight and Height in adults')\nplt.plot(X, y_pred, color='red')",
"_____no_output_____"
]
],
[
[
"Extract the values of W (slope) and B (bias). \n",
"_____no_output_____"
]
],
[
[
"W, B = model.get_weights()",
"_____no_output_____"
],
[
"W",
"_____no_output_____"
],
[
"B",
"_____no_output_____"
]
],
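To make the `x*w + b` formula concrete, here is a small check (a sketch assuming `model`, `W`, `B`, and `X` from the cells above are still in scope): recompute the predictions directly from the extracted slope and intercept and compare them with the Keras forward pass.

```python
import numpy as np

# Manual linear model using the learned slope W[0, 0] and intercept B[0].
y_manual = X * W[0, 0] + B[0]

# Should agree with model.predict(X) up to floating-point noise.
y_keras = model.predict(X)
print(np.allclose(y_manual, y_keras, atol=1e-3))
```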
[
[
"## Performance of the model \n",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import r2_score",
"_____no_output_____"
],
[
"print(\"The R2 score is {:0.3f}\".format(r2_score(y_true, y_pred)))",
"The R2 score is 0.829\n"
]
],
[
[
"### Train/test split\n",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y_true,\n test_size=0.2)",
"_____no_output_____"
],
[
"len(X_train)",
"_____no_output_____"
],
[
"len(X_test)",
"_____no_output_____"
],
[
"#reset the parameters of the model\nW[0, 0] = 0.0\nB[0] = 0.0\nmodel.set_weights((W, B))",
"_____no_output_____"
],
[
"#retrain the model in the selected sample\nmodel.fit(X_train, y_train, epochs=50, verbose=0) #verbose=0 doesn't show each iteration",
"_____no_output_____"
],
[
"y_train_pred = model.predict(X_train).ravel() \ny_test_pred = model.predict(X_test).ravel()",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_squared_error as mse",
"_____no_output_____"
],
[
"print(\"The Mean Squared Error on the Train set is:\\t{:0.1f}\".format(mse(y_train, y_train_pred)))\nprint(\"The Mean Squared Error on the Test set is:\\t{:0.1f}\".format(mse(y_test, y_test_pred)))",
"The Mean Squared Error on the Train set is:\t169.0\nThe Mean Squared Error on the Test set is:\t176.4\n"
],
[
"print(\"The R2 score on the Train set is:\\t{:0.3f}\".format(r2_score(y_train, y_train_pred)))\nprint(\"The R2 score on the Test set is:\\t{:0.3f}\".format(r2_score(y_test, y_test_pred)))",
"The R2 score on the Train set is:\t0.837\nThe R2 score on the Test set is:\t0.827\n"
]
],
[
[
"The score for the training set is close to the one in the test set, therefore this model is good in generalization. \n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d097e7a0ab5795d35e65b04298db630b81a22ee3 | 59,275 | ipynb | Jupyter Notebook | Tutorials_PCA_KMeans_DBSCAN_Autoencoder_with_MNIST/PCA_mnist_kmeans_blog.ipynb | ustundag/2D-3D-Semantics | 6f79be0082e2bfd6b7940c2314972a603e55f201 | [
"Apache-2.0"
] | null | null | null | Tutorials_PCA_KMeans_DBSCAN_Autoencoder_with_MNIST/PCA_mnist_kmeans_blog.ipynb | ustundag/2D-3D-Semantics | 6f79be0082e2bfd6b7940c2314972a603e55f201 | [
"Apache-2.0"
] | null | null | null | Tutorials_PCA_KMeans_DBSCAN_Autoencoder_with_MNIST/PCA_mnist_kmeans_blog.ipynb | ustundag/2D-3D-Semantics | 6f79be0082e2bfd6b7940c2314972a603e55f201 | [
"Apache-2.0"
] | null | null | null | 212.455197 | 46,440 | 0.90016 | [
[
[
"print(__doc__)\n\nfrom time import time\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn import metrics\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets import load_digits\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import scale\n\nnp.random.seed(42)\n\ndigits = load_digits()\ndata = scale(digits.data)\n\nn_samples, n_features = data.shape\nn_digits = len(np.unique(digits.target))\nlabels = digits.target\n\nsample_size = 300",
"Automatically created module for IPython interactive environment\n"
],
[
"print(digits.data.shape)\nprint(digits.data[0].shape)\n#plt.imshow(digits.data[0].reshape([8,8]), cmap='gray')\nprint(data.shape)\nprint(data[0].shape)\nplt.imshow(data[8].reshape([8,8]), cmap='gray')\n\nprint(labels[8])",
"(1797, 64)\n(64,)\n(1797, 64)\n(64,)\n8\n"
]
],
[
[
"# # A demo of K-Means clustering on the handwritten digits data\n\n\nIn this example we compare the various initialization strategies for\nK-means in terms of runtime and quality of the results.\n\nAs the ground truth is known here, we also apply different cluster\nquality metrics to judge the goodness of fit of the cluster labels to the\nground truth.\n\nCluster quality metrics evaluated (see `clustering_evaluation` for\ndefinitions and discussions of the metrics):\n\n| Shorthand | Full name |\n|------------|----------------------------:|\n| homo | homogeneity score |\n| compl | completeness score |\n| v-meas | V measure |\n| ARI | adjusted Rand index |\n| AMI | adjusted mutual information |\n| silhouette | silhouette coefficient |\n",
"_____no_output_____"
]
],
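Before the benchmark below, a tiny hand-made example may help make the table's metrics concrete (the label vectors here are invented purely for illustration): a clustering that merges two of the three true classes is complete but not homogeneous.

```python
from sklearn import metrics

truth = [0, 0, 1, 1, 2, 2]   # ground-truth classes
pred  = [0, 0, 1, 1, 1, 1]   # clustering that merges classes 1 and 2

print('homogeneity :', metrics.homogeneity_score(truth, pred))
print('completeness:', metrics.completeness_score(truth, pred))
print('v-measure   :', metrics.v_measure_score(truth, pred))
print('ARI         :', metrics.adjusted_rand_score(truth, pred))
```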
[
[
"def bench_k_means(estimator, name, data):\n t0 = time()\n estimator.fit(data)\n print('%-9s\\t%.2fs\\t%i\\t%.3f\\t%.3f\\t%.3f\\t%.3f\\t%.3f\\t%.3f'\n % (name, (time() - t0), estimator.inertia_,\n metrics.homogeneity_score(labels, estimator.labels_),\n metrics.completeness_score(labels, estimator.labels_),\n metrics.v_measure_score(labels, estimator.labels_),\n metrics.adjusted_rand_score(labels, estimator.labels_),\n metrics.adjusted_mutual_info_score(labels, estimator.labels_,\n average_method='arithmetic'),\n metrics.silhouette_score(data, estimator.labels_,\n metric='euclidean',\n sample_size=sample_size)))",
"_____no_output_____"
],
[
"print(\"n_digits: %d, \\t n_samples %d, \\t n_features %d\"\n % (n_digits, n_samples, n_features))\n\nprint(82 * '_')\nprint('init\\t\\ttime\\tinertia\\thomo\\tcompl\\tv-meas\\tARI\\tAMI\\tsilhouette')\n\nbench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),\n name=\"k-means++\", data=data)\n\nbench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),\n name=\"random\", data=data)\n\n# in this case the seeding of the centers is deterministic, hence we run the\n# kmeans algorithm only once with n_init=1\npca = PCA(n_components=n_digits).fit(data)\nbench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),\n name=\"PCA-based\", data=data)\nprint(82 * '_')",
"n_digits: 10, \t n_samples 1797, \t n_features 64\n__________________________________________________________________________________\ninit\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette\nk-means++\t0.47s\t69432\t0.602\t0.650\t0.625\t0.465\t0.621\t0.146\nrandom \t0.41s\t69694\t0.669\t0.710\t0.689\t0.553\t0.686\t0.147\nPCA-based\t0.10s\t70804\t0.671\t0.698\t0.684\t0.561\t0.681\t0.118\n__________________________________________________________________________________\n"
],
[
"# Visualize the results on PCA-reduced data\n\nreduced_data = PCA(n_components=2).fit_transform(data)\nkmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)\nkmeans.fit(reduced_data)\n\n# Step size of the mesh. Decrease to increase the quality of the VQ.\nh = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].\n\n# Plot the decision boundary. For that, we will assign a color to each\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\n# Obtain labels for each point in mesh. Use last trained model.\nZ = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\n# Plot the centroids as a white X\ncentroids = kmeans.cluster_centers_\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('K-means clustering on the digits dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.metrics import normalized_mutual_info_score\nnormalized_mutual_info_score(kmeans.labels_, labels)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d097e9273cd4d60d95b2209c34747c5d3a33524c | 160,861 | ipynb | Jupyter Notebook | notebooks/deep-learning/3.train-test-diabetes.ipynb | edithlee972/msds621 | f02bc7212f0a7beaa252a5d2b308bad33c762be7 | [
"MIT"
] | null | null | null | notebooks/deep-learning/3.train-test-diabetes.ipynb | edithlee972/msds621 | f02bc7212f0a7beaa252a5d2b308bad33c762be7 | [
"MIT"
] | null | null | null | notebooks/deep-learning/3.train-test-diabetes.ipynb | edithlee972/msds621 | f02bc7212f0a7beaa252a5d2b308bad33c762be7 | [
"MIT"
] | null | null | null | 163.31066 | 66,356 | 0.874457 | [
[
[
"# Training vs validation loss\n\n[](https://colab.research.google.com/github/parrt/fundamentals-of-deep-learning/blob/main/notebooks/3.train-test-diabetes.ipynb)\n\nBy [Terence Parr](https://explained.ai).\n\nThis notebook explores how to use a validation set to estimate how well a model generalizes from its training data to unknown test vectors. We will see that deep learning models often have so many parameters that we can drive training loss to zero, but unfortunately the validation loss usually grows as the model overfits. We will also compare how deep learning performs compared to a random forest model as a baseline. Instead of the cars data set, we will use the [diabetes data set](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) loaded via sklearn.",
"_____no_output_____"
],
[
"## Support code",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport torch\nimport copy\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import load_diabetes\nfrom sklearn.metrics import r2_score\nfrom sklearn.ensemble import RandomForestRegressor\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\n\n! pip install -q -U colour\nimport colour\n\n%config InlineBackend.figure_format = 'retina'\n\nimport tsensor",
"_____no_output_____"
],
[
"def plot_history(history, ax=None, maxy=None, file=None):\n if ax is None:\n fig, ax = plt.subplots(1,1, figsize=(3.5,3))\n ax.set_ylabel(\"Loss\")\n ax.set_xlabel(\"Epochs\")\n loss = history[:,0]\n val_loss = history[:,1]\n if maxy:\n ax.set_ylim(0,maxy)\n else:\n ax.set_ylim(0,torch.max(val_loss))\n ax.spines['top'].set_visible(False) # turns off the top \"spine\" completely\n ax.spines['right'].set_visible(False)\n ax.spines['left'].set_linewidth(.5)\n ax.spines['bottom'].set_linewidth(.5)\n ax.plot(loss, label='train_loss')\n ax.plot(val_loss, label='val_loss')\n ax.legend(loc='upper right')\n plt.tight_layout()\n if file:\n# plt.savefig(f\"/Users/{os.environ['USER']}/Desktop/{file}.pdf\")\n plt.savefig(f\"{os.environ['HOME']}/{file}.pdf\")\n ",
"_____no_output_____"
]
],
[
[
"## Load diabetes data set\n\nFrom [sklearn diabetes data set](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset):\n\"<i>Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.</i>\"\n\nSo, the goal is to predict disease progression based upon all of these features.",
"_____no_output_____"
]
],
[
[
"d = load_diabetes()\nlen(d.data)",
"_____no_output_____"
],
[
"df = pd.DataFrame(d.data, columns=d.feature_names)\ndf['disease'] = d.target # \"quantitative measure of disease progression one year after baseline\"\ndf.head(3)",
"_____no_output_____"
]
],
[
[
"## Split data into train, validation sets\n\nAny sufficiently powerful model is able to effectively drive down the training loss (error). What we really care about, though, is how well the model generalizes. That means we have to look at the validation or test error, computed from records the model was not trained on. (We'll use \"test\" as shorthand for \"validation\" often, but technically they are not the same.) For non-time-sensitive data sets, we can simply randomize and hold out 20% of our data as our validation set:",
"_____no_output_____"
]
],
[
[
"np.random.seed(1) # set a random seed for consistency across runs\nn = len(df)\nX = df.drop('disease',axis=1).values\ny = df['disease'].values\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20) # hold out 20%",
"_____no_output_____"
],
[
"len(X), len(X_train), len(X_test)",
"_____no_output_____"
]
],
[
[
"Let's also make sure to normalize the data to make training easier:",
"_____no_output_____"
]
],
[
[
"m = np.mean(X_train,axis=0)\nstd = np.std(X_train,axis=0)\nX_train = (X_train-m)/std\nX_test = (X_test-m)/std # use training data only when prepping test sets",
"_____no_output_____"
]
],
[
[
"## Baseline with random forest\n\nWhen building machine learning models, it's always important to ask how good your model is. One of the best ways is to choose a baseline model, such as a random forest or a linear regression model, and compare your new model to make sure it can beat the old model. Random forests are easy to use, understand, and train so they are a good baseline. Training the model is as simple as calling `fit()` (`min_samples_leaf=20` gives a bit more generality):",
"_____no_output_____"
]
],
[
[
"rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, min_samples_leaf=20)\nrf.fit(X_train, y_train.reshape(-1))",
"_____no_output_____"
]
],
[
[
"To evaluate our models, let's compute the mean squared error (MSE) for both training and validation sets:",
"_____no_output_____"
]
],
[
[
"y_pred = rf.predict(X_train)\nmse = np.mean((y_pred - y_train.reshape(-1))**2)\n\ny_pred_test = rf.predict(X_test)\nmse_test = np.mean((y_pred_test - y_test.reshape(-1))**2)\n\nprint(f\"Training MSE {mse:.2f} validation MSE {mse_test:.2f}\")",
"Training MSE 2533.91 validation MSE 3473.22\n"
]
],
[
[
"Let's check $R^2$ as well.",
"_____no_output_____"
]
],
[
[
"rf.score(X_train, y_train), rf.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"#### Exercise\n\nWhy is the validation error much larger than the training error?\n\n<details>\n<summary>Solution</summary>\n Because the model was trained on the training set, one would expect it to generally perform better on it than any other data set. The more the validation error diverges from the training error, the less general you should assume your model is.\n</details>",
"_____no_output_____"
],
[
"## Train neural network model\n\nOk, so now we have a baseline and an understanding of how well a decent model performs on this data set. Let's see if we can beat that baseline with a neural network. First we will see how easy it is to drive the training error down and then show how the validation error is not usually very good in that case. We will finish by considering ways to get better validation errors, which means more general models.",
"_____no_output_____"
],
[
"### Most basic network training\n\nA basic training loop for a neural network model simply measures and tracks the training loss or error/metric. (In this case, our loss and metric are the same.) The following function embodies such a training loop:",
"_____no_output_____"
]
],
[
[
"def train0(model, X_train, y_train,\n learning_rate = .5, nepochs=2000):\n optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n for epoch in range(nepochs+1):\n y_pred = model(X_train)\n loss = torch.mean((y_pred - y_train)**2)\n if epoch % (nepochs//10) == 0:\n print(f\"Epoch {epoch:4d} MSE train loss {loss:12.3f}\")\n \n optimizer.zero_grad()\n loss.backward() # autograd computes w1.grad, b1.grad, ...\n optimizer.step()",
"_____no_output_____"
]
],
[
[
"To use this method, we have to convert the training and validation data sets to pytorch tensors from numpy (they are already normalized):",
"_____no_output_____"
]
],
[
[
"X_train = torch.tensor(X_train).float()\nX_test = torch.tensor(X_test).float()\ny_train = torch.tensor(y_train).float().reshape(-1,1) # column vector\ny_test = torch.tensor(y_test).float().reshape(-1,1)",
"_____no_output_____"
]
],
[
[
"Let's create a model with one hidden layer and an output layer, glued together with a ReLU nonlinearity. The network looks something like the following except of course we have many more input features and neurons than shown here:\n\n<img src=\"images/diabetes-relu.png\" width=\"300\">\n\nThere is an implied input layer which is really just the input vector of features. The output layer takes the output of the hidden layer and generates a single output, our $\\hat{y}$:",
"_____no_output_____"
]
],
[
[
"ncols = X.shape[1]\nn_neurons = 150\n\nmodel = nn.Sequential(\n nn.Linear(ncols, n_neurons), # hidden layer\n nn.ReLU(), # nonlinearity\n nn.Linear(n_neurons, 1) # output layer\n)\n\ntrain0(model, X_train, y_train, learning_rate=.08, nepochs=5000)",
"Epoch 0 MSE train loss 29693.615\nEpoch 500 MSE train loss 552.236\nEpoch 1000 MSE train loss 85.343\nEpoch 1500 MSE train loss 25.167\nEpoch 2000 MSE train loss 120.147\nEpoch 2500 MSE train loss 9.741\nEpoch 3000 MSE train loss 14.483\nEpoch 3500 MSE train loss 3.674\nEpoch 4000 MSE train loss 13.071\nEpoch 4500 MSE train loss 1.539\nEpoch 5000 MSE train loss 1.087\n"
]
],
[
[
"Run this a few times and you'll see that we can drive the training error very close to zero with 150 neurons and many iterations (epochs). Compare this to the RF training MSE which is orders of magnitude bigger (partly due to the `min_samples_leaf` hyperparameter).",
"_____no_output_____"
],
[
"#### Exercise\n\nWhy does the training loss sometimes pop up and then go back down? Why is it not monotonically decreasing?\n\n<details>\n<summary>Solution</summary>\n The only source of randomness is the initialization of the model parameters, but that does not explain the lack of monotonicity. In this situation, it is likely that the learning rate is too high and therefore, as we approach the minimum of the lost function, our steps are too big. We are jumping back and forth across the location of the minimum in parameter space.\n</details>",
"_____no_output_____"
],
[
"#### Exercise\n\nChange the learning rate from 0.08 to 0.001 and rerun the example. What happens to the training loss? Is it better or worse than the baseline random forest and the model trained with learning rate 0.08?\n\n<details>\n<summary>Solution</summary>\n The training loss continues to decrease but much lower than before and stops long before reaching a loss near zero. On the other hand, it is better than the training error from the baseline random forest.\n</details>",
"_____no_output_____"
],
[
"## Reducing the learning rate to zero in on the minimum\n\nIn one of the above exercises we discussed that the learning rate was probably too high in the vicinity of the lost function minimum. There are ways to throttle the learning rate down as we approach the minimum, but we are using a fixed learning rate here. In order to get a smooth, monotonic reduction in loss function let's start with a smaller learning rate, but that means increasing the number of epochs:",
"_____no_output_____"
]
],
[
[
"ncols = X.shape[1]\nn_neurons = 150\n\nmodel = nn.Sequential(\n nn.Linear(ncols, n_neurons), # hidden layer\n nn.ReLU(), # nonlinearity\n nn.Linear(n_neurons, 1) # output layer\n)\n\ntrain0(model, X_train, y_train, learning_rate=.017, nepochs=15000)",
"Epoch 0 MSE train loss 29626.404\nEpoch 1500 MSE train loss 1588.800\nEpoch 3000 MSE train loss 264.717\nEpoch 4500 MSE train loss 64.575\nEpoch 6000 MSE train loss 17.353\nEpoch 7500 MSE train loss 6.367\nEpoch 9000 MSE train loss 2.682\nEpoch 10500 MSE train loss 0.691\nEpoch 12000 MSE train loss 0.700\nEpoch 13500 MSE train loss 0.190\nEpoch 15000 MSE train loss 0.067\n"
]
],
[
[
"Notice now that we can reliably drive that training error down to zero without bouncing around, although it takes longer with the smaller learning rate.",
"_____no_output_____"
],
[
"#### Exercise\n\nPlay around with the learning rate and nepochs to see how fast you can reliably get MSE down to 0.",
"_____no_output_____"
],
[
"### Tracking validation loss\n\nA low training error doesn't really tell us that much, other than the model is able to capture the relationship between the features and the target variable. What we really want is a general model, which means evaluating the model's performance on a validation set. We have both sets, so let's now track the training and validation error in the loop. We will see that our model performs much worse on the records in the validation set (on which the model was not trained).",
"_____no_output_____"
]
],
[
[
"def train1(model, X_train, X_test, y_train, y_test,\n learning_rate = .5, nepochs=2000):\n optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n history = [] # track training and validation loss\n for epoch in range(nepochs+1):\n y_pred = model(X_train)\n loss = torch.mean((y_pred - y_train)**2)\n y_pred_test = model(X_test)\n loss_test = torch.mean((y_pred_test - y_test)**2)\n history.append((loss, loss_test))\n if epoch % (nepochs//10) == 0:\n print(f\"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}\")\n \n optimizer.zero_grad()\n loss.backward() # autograd computes w1.grad, b1.grad, ...\n optimizer.step()\n return torch.tensor(history)",
"_____no_output_____"
]
],
[
[
"Let's create the exact same model that we had before but plot train/validation errors against the number of epochs:",
"_____no_output_____"
]
],
[
[
"ncols = X.shape[1]\nn_neurons = 150\nmodel = nn.Sequential(\n nn.Linear(ncols, n_neurons),\n nn.ReLU(),\n nn.Linear(n_neurons, 1)\n)\n\nhistory = train1(model, X_train, X_test, y_train, y_test,\n learning_rate=.02, nepochs=8000)\n\nplot_history(torch.clamp(history, 0, 12000), file=\"train-test\")",
"Epoch 0 MSE train loss 29654.730 test loss 27063.545\nEpoch 800 MSE train loss 1841.183 test loss 3583.445\nEpoch 1600 MSE train loss 806.315 test loss 5304.467\nEpoch 2400 MSE train loss 286.301 test loss 7847.571\nEpoch 3200 MSE train loss 98.467 test loss 9669.961\nEpoch 4000 MSE train loss 29.252 test loss 11196.380\nEpoch 4800 MSE train loss 9.511 test loss 11814.456\nEpoch 5600 MSE train loss 4.533 test loss 12238.104\nEpoch 6400 MSE train loss 1.763 test loss 12479.471\nEpoch 7200 MSE train loss 16.615 test loss 12586.157\nEpoch 8000 MSE train loss 0.282 test loss 12712.996\n"
]
],
[
[
"Wow. The validation error is much much worse than the training error, which is almost 0. That tells us that the model is severely overfit to the training data and is not general at all. Well, the validation error actually makes a lot of progress initially but then after a few thousand epochs immediately starts to grow (we'll use this fact later). Unless we do something fancier, the best solution can be obtained by selecting the model parameters that gives us the lowest validation loss.",
"_____no_output_____"
],
[
"### Track best loss and choose best model\n\nWe saw in the previous section that the most general model appears fairly soon in the training cycle. So, despite being able to drive the training error to zero if we keep going long enough, the most general model actually is known very early in the training process. This is not always the case, but it certainly is here for this data. Let's exploit this by tracking the best model, the one with the lowest validation error. There is [some indication](https://moultano.wordpress.com/2020/10/18/why-deep-learning-works-even-though-it-shouldnt/) that a good approach is to (sometimes crank up the power of the model and then) just stop early, or at least pick the model with the lowest validation error. The following function embodies that by making a copy of our neural net model when it finds an improved version.",
"_____no_output_____"
]
],
[
[
"def train2(model, X_train, X_test, y_train, y_test,\n learning_rate = .5, nepochs=2000, weight_decay=0):\n optimizer = torch.optim.Adam(model.parameters(),\n lr=learning_rate, weight_decay=weight_decay)\n history = [] # track training and validation loss\n best_loss = 1e10\n best_model = None\n for epoch in range(nepochs+1):\n y_pred = model(X_train)\n loss = torch.mean((y_pred - y_train)**2)\n\n y_pred_test = model(X_test)\n loss_test = torch.mean((y_pred_test - y_test)**2)\n history.append((loss, loss_test))\n if loss_test < best_loss:\n best_loss = loss_test\n best_model = copy.deepcopy(model)\n best_epoch = epoch\n if epoch % (nepochs//10) == 0:\n print(f\"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}\")\n \n optimizer.zero_grad()\n loss.backward() # autograd computes w1.grad, b1.grad, ...\n optimizer.step()\n print(f\"BEST MSE test loss {best_loss:.3f} at epoch {best_epoch}\")\n return torch.tensor(history), best_model",
"_____no_output_____"
]
],
[
[
"Let's use the exact same model and learning rate with no weight decay and see what happens.",
"_____no_output_____"
]
],
[
[
"ncols = X.shape[1]\nn_neurons = 150\nmodel = nn.Sequential(\n nn.Linear(ncols, n_neurons),\n nn.ReLU(),\n nn.Linear(n_neurons, 1)\n)\n\nhistory, best_model = train2(model, X_train, X_test, y_train, y_test,\n learning_rate=.02, nepochs=1000,\n weight_decay=0)\n\n# verify we got the best model out\ny_pred = best_model(X_test)\nloss_test = torch.mean((y_pred - y_test)**2)\n\nplot_history(torch.clamp(history, 0, 12000))",
"Epoch 0 MSE train loss 29607.461 test loss 27006.693\nEpoch 100 MSE train loss 2629.585 test loss 3087.739\nEpoch 200 MSE train loss 2384.102 test loss 3186.564\nEpoch 300 MSE train loss 2285.357 test loss 3230.039\nEpoch 400 MSE train loss 2208.642 test loss 3257.470\nEpoch 500 MSE train loss 2133.422 test loss 3275.208\nEpoch 600 MSE train loss 2077.234 test loss 3265.872\nEpoch 700 MSE train loss 2031.759 test loss 3267.592\nEpoch 800 MSE train loss 1977.212 test loss 3271.320\nEpoch 900 MSE train loss 1920.043 test loss 3249.735\nEpoch 1000 MSE train loss 1870.694 test loss 3281.241\nBEST MSE test loss 3071.237 at epoch 90\n"
]
],
[
[
"Let's also look at $R^2$:",
"_____no_output_____"
]
],
[
[
"y_pred = best_model(X_train).detach().numpy()\ny_pred_test = best_model(X_test).detach().numpy()\n\nr2_score(y_train, y_pred), r2_score(y_test, y_pred_test)",
"_____no_output_____"
]
],
[
[
"The best MSE bounces around a loss value of 3000 from run to run, a bit above it or a bit below, depending on the run. And this decent result occurs without having to understand or use weight decay (more on this next). Compare the validation R^2 to that of the RF; the network does much better!",
"_____no_output_____"
],
[
"### Weight decay to reduce overfitting\n\nOther than stopping early, one of the most common ways to reduce model overfitting is to use weight decay, otherwise known as L2 (Ridge) regression, to constrain the model parameters. Without constraints, model parameters can get very large, which typically leads to a lack of generality. Using the `Adam` optimizer, we turn on weight decay with parameter `weight_decay`, but otherwise the training loop is the same:",
"_____no_output_____"
]
],
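As a reminder of what the `weight_decay` argument corresponds to (writing $\lambda$ for the decay strength and $w$ for the parameter vector, symbols introduced here for exposition), the optimizer is effectively minimizing the L2-penalized objective

$$
\mathcal{L}(w) \;=\; \frac{1}{n}\sum_{i=1}^{n}\big(\hat{y}_i(w) - y_i\big)^2 \;+\; \lambda\,\| w \|_2^2 ,
$$

up to the exact scaling of $\lambda$ used by the implementation. Note that PyTorch's `Adam` applies the decay as an L2 term added to the gradient rather than as decoupled weight decay (that variant is `AdamW`).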
[
[
"def train3(model, X_train, X_test, y_train, y_test,\n learning_rate = .5, nepochs=2000, weight_decay=0, trace=True):\n optimizer = torch.optim.Adam(model.parameters(),\n lr=learning_rate, weight_decay=weight_decay)\n history = [] # track training and validation loss\n for epoch in range(nepochs+1):\n y_pred = model(X_train)\n loss = torch.mean((y_pred - y_train)**2)\n\n y_pred_test = model(X_test)\n loss_test = torch.mean((y_pred_test - y_test)**2)\n history.append((loss, loss_test))\n if trace and epoch % (nepochs//10) == 0:\n print(f\"Epoch {epoch:4d} MSE train loss {loss:12.3f} test loss {loss_test:12.3f}\")\n \n optimizer.zero_grad()\n loss.backward() # autograd computes w1.grad, b1.grad, ...\n optimizer.step()\n return torch.tensor(history)",
"_____no_output_____"
]
],
[
[
"How do we know what the right value of the weight decay is? Typically we try a variety of weight decay values and then see which one gives us the best validation error, so let's do that using a grid of images. The following loop uses the same network and learning rate for each run but varies the weight decay:",
"_____no_output_____"
]
],
[
[
"ncols = X.shape[1]\nn_neurons = 150\n\nfig, axes = plt.subplots(1, 4, figsize=(12.5,2.5))\n\nfor wd,ax in zip([0,.3,.6,1.5],axes):\n model = nn.Sequential(\n nn.Linear(ncols, n_neurons),\n nn.ReLU(),\n nn.Linear(n_neurons, 1)\n )\n history = train3(model, X_train, X_test, y_train, y_test,\n learning_rate=.05, nepochs=1000, weight_decay=wd,\n trace=False)\n mse_valid = history[-1][1]\n ax.set_title(f\"wd={wd:.1f}, valid MSE {mse_valid:.0f}\")\n plot_history(torch.clamp(history, 0, 10000), ax=ax, maxy=10_000)\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"From this experiment, we can conclude that a weight decay of 1.5 gives the best final mean squared error. But, the experiment is reporting the final MSE all the way on the right side of the graph. \n\nThe minimum MSE in the above experiment (of four side-by-side graphs), however, appears before the right edge and the validation error simply gets worse after that. That tells us that we should not pick the parameters simply as the parameters where the training leaves off. We should pick the model parameters that give the minimum loss, as we did before.",
"_____no_output_____"
],
[
"#### Exercise\n\nSet the weight decay to something huge like 100. What do you observe about the training and validation curves?\n\n<details>\n<summary>Solution</summary>\n\nThe two curves are flat, and about the same level. The minimum validation error is about 6000 so much worse than with more reasonable weight decay. We have seriously biased the model because we cannot even drive the training error downwards. The bias comes from the extreme constraint we've placed on the model parameters.\n \n<pre>\nmodel = nn.Sequential(\n nn.Linear(ncols, n_neurons),\n nn.ReLU(),\n nn.Linear(n_neurons, 1)\n)\nhistory = train2(model, X_train, X_test, y_train, y_test,\n learning_rate=.05, nepochs=1000, weight_decay=100,\n trace=False)\nmse_valid = history[-1][1]\nax.set_title(f\"wd={wd:.1f}, valid MSE {mse_valid:.0f}\")\nplot_history(torch.clamp(history, 0, 10000), ax=ax, maxy=10_000)\n</pre>\n</details>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d097eaf59230f5ba3f013348e7b1988c62e7e918 | 21,155 | ipynb | Jupyter Notebook | python/example/attack_simple.ipynb | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 95 | 2021-02-17T19:50:28.000Z | 2022-03-31T16:50:59.000Z | python/example/attack_simple.ipynb | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 299 | 2021-02-10T00:14:41.000Z | 2022-03-31T16:17:33.000Z | python/example/attack_simple.ipynb | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 13 | 2021-04-01T14:40:56.000Z | 2022-03-27T08:52:46.000Z | 50.01182 | 9,507 | 0.750414 | [
[
[
"# Simple Attack\n\nIn this notebook, we will examine perhaps the simplest possible attack on an individual's private data and what the OpenDP library can do to mitigate it.\n\n## Loading the data\n\nThe vetting process is currently underway for the code in the OpenDP Library.\nAny constructors that have not been vetted may still be accessed if you opt-in to \"contrib\".",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom opendp.mod import enable_features\nenable_features('contrib')",
"_____no_output_____"
]
],
[
[
"We begin with loading up the data.",
"_____no_output_____"
]
],
[
[
"import os\ndata_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')\n\nwith open(data_path) as input_file:\n data = input_file.read()\n\ncol_names = [\"age\", \"sex\", \"educ\", \"race\", \"income\", \"married\"]\nprint(col_names)\nprint('\\n'.join(data.split('\\n')[:6]))",
"['age', 'sex', 'educ', 'race', 'income', 'married']\n59,1,9,1,0,1\n31,0,1,3,17000,0\n36,1,11,1,0,1\n54,1,11,1,9100,1\n39,0,5,3,37000,0\n34,0,9,1,0,1\n"
]
],
[
[
"The following code parses the data into a vector of incomes.\nMore details on preprocessing can be found [here](https://github.com/opendp/opendp/blob/main/python/example/basic_data_analysis.ipynb).",
"_____no_output_____"
]
],
[
[
"from opendp.trans import make_split_dataframe, make_select_column, make_cast, make_impute_constant\n\nincome_preprocessor = (\n # Convert data into a dataframe where columns are of type Vec<str>\n make_split_dataframe(separator=\",\", col_names=col_names) >>\n # Selects a column of df, Vec<str>\n make_select_column(key=\"income\", TOA=str)\n)\n\n# make a transformation that casts from a vector of strings to a vector of floats\ncast_str_float = (\n # Cast Vec<str> to Vec<Option<floats>>\n make_cast(TIA=str, TOA=float) >>\n # Replace any elements that failed to parse with 0., emitting a Vec<float>\n make_impute_constant(0.)\n)\n\n# replace the previous preprocessor: extend it with the caster\nincome_preprocessor = income_preprocessor >> cast_str_float\nincomes = income_preprocessor(data)\n\nprint(incomes[:7])",
"[0.0, 17000.0, 0.0, 9100.0, 37000.0, 0.0, 6000.0]\n"
]
],
[
[
"## A simple attack\n\nSay there's an attacker who's target is the income of the first person in our data (i.e. the first income in the csv). In our case, its simply `0` (but any number is fine, i.e. 5000).",
"_____no_output_____"
]
],
[
[
"person_of_interest = incomes[0]\nprint('person of interest:\\n\\n{0}'.format(person_of_interest))",
"person of interest:\n\n0.0\n"
]
],
[
[
"Now consider the case an attacker that doesn't know the POI income, but do know the following: (1) the average income without the POI income, and (2) the number of persons in the database.\nAs we show next, if he would also get the average income (including the POI's one), by simple manipulation he can easily back out the individual's income.",
"_____no_output_____"
]
],
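Written out as a formula (with $n$ the number of records, $\bar{x}$ the overall mean the attacker obtains, and $\bar{x}_{\text{rest}}$ the known mean over everyone except the person of interest), the next cell computes

$$
x_{\text{poi}} \;=\; n\,\bar{x} \;-\; (n-1)\,\bar{x}_{\text{rest}} .
$$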
[
[
"# attacker information: everyone's else mean, and their count.\nknown_mean = np.mean(incomes[1:])\nknown_obs = len(incomes) - 1\n\n# assume the attackers know legitimately get the overall mean (and hence can infer the total count)\noverall_mean = np.mean(incomes)\nn_obs = len(incomes)\n\n# back out POI's income\npoi_income = overall_mean * n_obs - known_obs * known_mean\nprint('poi_income: {0}'.format(poi_income))",
"poi_income: 0.0\n"
]
],
[
[
"The attacker now knows with certainty that the POI has an income of $0.\n\n\n## Using OpenDP\nLet's see what happens if the attacker were made to interact with the data through OpenDP and was given a privacy budget of $\\epsilon = 1$.\nWe will assume that the attacker is reasonably familiar with differential privacy and believes that they should use tighter data bounds than they would anticipate being in the data in order to get a less noisy estimate.\nThey will need to update their `known_mean` accordingly.",
"_____no_output_____"
]
],
[
[
"from opendp.trans import make_clamp, make_sized_bounded_mean, make_bounded_resize\nfrom opendp.meas import make_base_laplace\n\nenable_features(\"floating-point\")\n\nmax_influence = 1\ncount_release = 100\n\nincome_bounds = (0.0, 100_000.0)\n\nclamp_and_resize_data = (\n make_clamp(bounds=income_bounds) >>\n make_bounded_resize(size=count_release, bounds=income_bounds, constant=10_000.0)\n)\n\nknown_mean = np.mean(clamp_and_resize_data(incomes)[1:])\n\nmean_measurement = (\n clamp_and_resize_data >>\n make_sized_bounded_mean(size=count_release, bounds=income_bounds) >>\n make_base_laplace(scale=1.0)\n)\n\ndp_mean = mean_measurement(incomes)\n\nprint(\"DP mean:\", dp_mean)\nprint(\"Known mean:\", known_mean)",
"DP mean: 28203.570278867388\nKnown mean: 28488.08080808081\n"
]
],
[
[
"We will be using `n_sims` to simulate the process a number of times to get a sense for various possible outcomes for the attacker.\nIn practice, they would see the result of only one simulation.",
"_____no_output_____"
]
],
[
[
"# initialize vector to store estimated overall means\nn_sims = 10_000\nn_queries = 1\npoi_income_ests = []\nestimated_means = []\n\n# get estimates of overall means\nfor i in range(n_sims):\n query_means = [mean_measurement(incomes) for j in range(n_queries)]\n\n # get estimates of POI income\n estimated_means.append(np.mean(query_means))\n poi_income_ests.append(estimated_means[i] * count_release - (count_release - 1) * known_mean)\n\n\n# get mean of estimates\nprint('Known Mean Income (after truncation): {0}'.format(known_mean))\nprint('Observed Mean Income: {0}'.format(np.mean(estimated_means)))\nprint('Estimated POI Income: {0}'.format(np.mean(poi_income_ests)))\nprint('True POI Income: {0}'.format(person_of_interest))",
"Known Mean Income (after truncation): 28488.08080808081\nObserved Mean Income: 28203.193459994138\nEstimated POI Income: -0.6540005867157132\nTrue POI Income: 0.0\n"
]
],
[
[
"We see empirically that, in expectation, the attacker can get a reasonably good estimate of POI's income. However, they will rarely (if ever) get it exactly and would have no way of knowing if they did.\n\nIn our case, indeed the mean estimated POI income approaches the true income, as the number of simulations `n_sims` increases.\nBelow is a plot showing the empirical distribution of estimates of POI income. Notice about its concentration around `0`, and the Laplacian curve of the graph.",
"_____no_output_____"
]
],
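One way to check the Laplace shape mentioned above is to overlay the theoretical density on the histogram. A minimal sketch, assuming the estimator's noise is Laplace with scale `count_release * 1.0` (which follows from multiplying the noisy mean, carrying Laplace noise of scale 1.0, by `count_release`) and is centered near the true POI income of 0:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import laplace

# noise on the POI estimate = count_release * (Laplace noise on the mean)
noise_scale = count_release * 1.0

xs = np.linspace(-4 * noise_scale, 4 * noise_scale, 500)
plt.hist(poi_income_ests, bins=50, density=True, alpha=0.5, label='simulated estimates')
plt.plot(xs, laplace.pdf(xs, loc=0.0, scale=noise_scale), label='Laplace density (centered at 0)')
plt.xlabel('Estimated POI income')
plt.legend();
```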
[
[
"import warnings\nimport seaborn as sns\n\n# hide warning created by outstanding scipy.stats issue\nwarnings.simplefilter(action='ignore', category=FutureWarning)\n\n# distribution of POI income\nax = sns.distplot(poi_income_ests, kde = False, hist_kws = dict(edgecolor = 'black', linewidth = 1))\nax.set(xlabel = 'Estimated POI income')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d097eea8d981d202eeb9ebcc51c51ffb06f9562a | 6,086 | ipynb | Jupyter Notebook | kneeoa/Preprocessing/Model/Model.ipynb | Tommy-Ngx/oa_rerun2 | f10f1baac4457d98ecb78c7e5d7d191fa9dd434a | [
"MIT"
] | 10 | 2020-08-28T05:35:05.000Z | 2021-12-27T07:48:19.000Z | kneeoa/Preprocessing/Model/Model.ipynb | Tommy-Ngx/oa_rerun2 | f10f1baac4457d98ecb78c7e5d7d191fa9dd434a | [
"MIT"
] | null | null | null | kneeoa/Preprocessing/Model/Model.ipynb | Tommy-Ngx/oa_rerun2 | f10f1baac4457d98ecb78c7e5d7d191fa9dd434a | [
"MIT"
] | 1 | 2020-10-25T13:38:27.000Z | 2020-10-25T13:38:27.000Z | 35.590643 | 119 | 0.544857 | [
[
[
"import tensorflow as tf\nimport h5py\nimport shutil\nimport numpy as np\nfrom torch.utils.data import DataLoader\nimport keras\nfrom tqdm.notebook import tqdm\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Flatten, Conv3D, Dropout, MaxPooling3D,MaxPooling2D\nfrom keras.utils import to_categorical\nfrom tensorflow.keras.callbacks import ModelCheckpoint\nfrom keras.utils.vis_utils import plot_model\nfrom sklearn.model_selection import train_test_split\nfrom keras.layers import Conv2D,Dropout\nfrom keras.layers import Activation,Average\nfrom keras.layers import GlobalAveragePooling2D,BatchNormalization\nfrom keras.optimizers import Adam\nimport time\nimport collections\nfrom keras.losses import categorical_crossentropy",
"_____no_output_____"
]
],
[
[
"ConvPool_CNN Model ",
"_____no_output_____"
]
],
[
[
"def ConvPool_CNN_C():\n model = Sequential()\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(MaxPooling2D(pool_size=(3,3),strides=2))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(MaxPooling2D(pool_size=(3,3),strides=2))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(1,1),activation='relu'))\n model.add(Conv2D(5,(1,1)))\n model.add(GlobalAveragePooling2D())\n model.add(Flatten())\n model.add(Dense(5, activation='softmax'))\n model.build(input_shape)\n model.compile(loss=categorical_crossentropy,optimizer=keras.optimizers.Adam(0.001),metrics=['accuracy'])\n return model",
"_____no_output_____"
]
],
[
[
"ALL_CNN_MODEL",
"_____no_output_____"
]
],
[
[
"def all_cnn_c(X,y,learningRate=0.001,lossFunction='categorical_crossentropy'):\n model = Sequential()\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(Conv2D(96,kernel_size=(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(192,(1,1),activation='relu'))\n model.add(GlobalAveragePooling2D())\n model.add(Dense(5, activation='softmax'))\n model.build(input_shape)\n model.compile(loss=categorical_crossentropy,optimizer=Adam(0.001),metrics=['accuracy'])\n\n return model",
"_____no_output_____"
]
],
[
[
"NIN_CNN_MODEL",
"_____no_output_____"
]
],
[
[
"def nin_cnn_c():\n model = Sequential()\n model.add(Conv2D(32,kernel_size=(5,5),activation='relu',padding='valid'))\n model.add(Conv2D(32,kernel_size=(5,5),activation='relu'))\n model.add(Conv2D(32,kernel_size=(5,5),activation='relu'))\n model.add(MaxPooling2D(pool_size=(3,3),strides=2))\n model.add(Dropout(0.5))\n model.add(Conv2D(64,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(64,(1,1),activation='relu',padding='same'))\n model.add(Conv2D(64,(1,1),activation='relu',padding='same'))\n model.add(MaxPooling2D(pool_size=(3,3),strides=2))\n model.add(Dropout(0.5))\n model.add(Conv2D(128,(3,3),activation='relu',padding='same'))\n model.add(Conv2D(32,(1,1),activation='relu'))\n model.add(Conv2D(5,(1,1)))\n model.add(GlobalAveragePooling2D())\n model.add(Flatten())\n model.add(Dense(5, activation='softmax'))\n model.build(input_shape)\n \n model.compile(loss=categorical_crossentropy,optimizer=Adam(0.001),metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
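None of the three builders above is actually instantiated in this notebook. A minimal usage sketch follows; the input shape (and therefore the image size and channel count) is an assumption for illustration only:

```python
# Hypothetical usage of the builders defined above.
# The shape below (batch, height, width, channels) is assumed, not taken from the notebook.
input_shape = (None, 224, 224, 3)

model = ConvPool_CNN_C(input_shape)
model.summary()

# Training would then look like this (X_train, y_train assumed to exist):
# model.fit(X_train, to_categorical(y_train, num_classes=5), epochs=10, batch_size=32)
```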
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d097f4d11b4233d51590392cb2e6257606f69618 | 282,047 | ipynb | Jupyter Notebook | code/diffusion_via_coin_flips.ipynb | RPGroup-PBoC/gist_pboc_2019 | 883f3e8212932bcc276048f8557133195d6f3519 | [
"MIT"
] | null | null | null | code/diffusion_via_coin_flips.ipynb | RPGroup-PBoC/gist_pboc_2019 | 883f3e8212932bcc276048f8557133195d6f3519 | [
"MIT"
] | null | null | null | code/diffusion_via_coin_flips.ipynb | RPGroup-PBoC/gist_pboc_2019 | 883f3e8212932bcc276048f8557133195d6f3519 | [
"MIT"
] | null | null | null | 859.89939 | 140,344 | 0.953798 | [
[
[
"© 2018 Suzy Beeler and Vahe Galstyan. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT) \n\nThis exercise was generated from a Jupyter notebook. You can download the notebook [here](diffusion_via_coin_flips.ipynb).\n___\n\n# Objective \n\nIn this tutorial, we will computationally simulate the process of diffusion with \"coin flips,\" where at each time step, the particle can either move to the left or the right, each with probability $0.5$. From here, we can see how the distance a diffusing particle travels scale with time. \n\n# Modeling 1-D diffusion with coin flips \n\nDiffusion can be understood as random motion in space caused by thermal fluctuations in the environment. In the cytoplasm of the cell different molecules undergo a 3-dimensional diffusive motion. On the other hand, diffusion on the cell membrane is chiefly 2-dimensional. Here we will consider a 1-dimensional diffusion motion to make the treatment simpler, but the ideas can be extended into higher dimensions.",
"_____no_output_____"
]
],
[
[
"# Import modules\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Show figures in the notebook\n%matplotlib inline\n\n# For pretty plots\nimport seaborn as sns\nrc={'lines.linewidth': 2, 'axes.labelsize': 14, 'axes.titlesize': 14, \\\n 'xtick.labelsize' : 14, 'ytick.labelsize' : 14}\nsns.set(rc=rc)",
"_____no_output_____"
]
],
[
[
"To simulate the flipping of a coin, we will make use of `numpy`'s `random.uniform()` function that produces a random number between $0$ and $1$. Let's see it in action by printing a few random numbers:",
"_____no_output_____"
]
],
[
[
"for i in range(10):\n print(np.random.uniform())",
"0.8342160184177014\n0.9430314233421397\n0.007367989009862352\n0.09402902047181094\n0.4641922790100985\n0.9895482225120592\n0.1652578614128407\n0.6038851928675221\n0.675556259583081\n0.2558372886656155\n"
]
],
[
[
"We can now use these randomly generated numbers to simulate the process of a diffusing particle moving in one dimension, where any value below $0.5$ corresponds to step to the left and any value above $0.5$ corresponds to a step to the right. Below, we keep track of the position of a particle for $1000$ steps, where each position is $+1$ or $-1$ from the previous position, as determined by the result of a coin flip.",
"_____no_output_____"
]
],
[
[
"# Number of steps\nn_steps = 1000\n\n# Array to store walker positions\npositions = np.zeros(n_steps)\n\n# simulate the particle moving and store the new position\nfor i in range(1, n_steps):\n \n # generate random number \n rand = np.random.uniform()\n \n # step in the positive direction \n if rand > 0.5:\n positions[i] = positions[i-1] + 1\n \n # step in the negative direction \n else:\n positions[i] = positions[i-1] - 1\n \n# Show the trajectory\nplt.plot(positions)\nplt.xlabel('steps')\nplt.ylabel('position');",
"_____no_output_____"
]
],
[
[
"As we can see, the position of the particle moves about the origin in an undirected fashion as a result of the randomness of the steps taken. However, it's hard to conclude anything from this single trace. Only by simulating many of these trajectories can we begin to conclude some of the scaling properties of diffusing particles. \n\n# Average behavior of diffusing particles \n\nNow let's generate multiple random trajectories and see their collective behavior. To do that, we will create a 2-dimensional `numpy` array where each row will be a different trajectory. 2D arrays can be sliced such that `[i,:]` refers to all the values in the `i`th row, and `[:,j]` refers to all the values in `j`th column. ",
"_____no_output_____"
]
],
[
[
"# Number of trajectories\nn_traj = 1000\n\n# 2d array for storing the trajectories\npositions_2D = np.zeros([n_traj, n_steps])\n\n# first iterate through the trajectories\nfor i in range(n_traj):\n \n # then iterate through the steps\n for j in range(1, n_steps):\n \n # generate random number \n rand = np.random.uniform()\n\n # step in the positive direction \n if rand > 0.5:\n positions_2D[i, j] = positions_2D[i, j-1] + 1\n\n # step in the negative direction \n else:\n positions_2D[i, j] = positions_2D[i, j-1] - 1",
"_____no_output_____"
]
],
[
[
"Now let's plot the results, once again by looping. ",
"_____no_output_____"
]
],
[
[
"# iterate through each trajectory and plot\nfor i in range(n_traj):\n plt.plot(positions_2D[i,:])\n \n# label\nplt.xlabel('steps')\nplt.ylabel('position');",
"_____no_output_____"
]
],
[
[
"The overall tendency is that the average displacement from the origin increases with the number of time steps. Because each trajectory is assigned a solid color and all trajectories are overlaid on top of each other, it's hard to see the distribution of the walker position at a given number of times steps. To get a better intuition about the distribution of the walker's position at different steps, we will assign the same color to each trajectory and add transparency to each of them so that the more densely populated regions have a darker color.",
"_____no_output_____"
]
],
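The statement that the average displacement grows with the number of steps can be made quantitative: for this unbiased walk of ±1 steps, the mean squared displacement equals the number of steps taken. A quick check on the trajectories computed above:

```python
# mean squared displacement, averaged over the 1000 trajectories, at every step
msd = np.mean(positions_2D**2, axis=0)

plt.plot(msd, label='simulated MSD')
plt.plot(np.arange(n_steps), '--', label='theory: MSD = number of steps')
plt.xlabel('steps')
plt.ylabel('mean squared displacement')
plt.legend();
```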
[
[
"# iterate through each trajectory and plot\nfor i in range(n_traj):\n # lower alpha corresponds to lighter lines \n plt.plot(positions_2D[i,:], alpha=0.01, color='k')\n\n# label\nplt.xlabel('steps')\nplt.ylabel('position');",
"_____no_output_____"
]
],
[
[
"As we can see, over the course of diffusion the distribution of the walker's position becomes wider but remains centered around the origin, indicative of the unbiased nature of the random walk. To see how the walkers are distributed at this last time point, let's make a histogram of the walker's final positions.",
"_____no_output_____"
]
],
[
[
"# Make a histogram of final positions\n_ = plt.hist(positions_2D[:,-1], bins=20)\nplt.xlabel('final position')\nplt.ylabel('frequency');",
"_____no_output_____"
]
],
[
[
"As expected, the distribution is centered around the origin and has a Gaussian-like shape. The more trajectories we sample, the \"more Gaussian\" the distribution will become. However, we may notice that the distribution appears to change depending on the number of bins we choose. This is known as *bin bias* and doesn't reflect anything about our data itself, just how we choose to represent it. An alternative (and arguably better) way to present the data is as a *empirical cumulative distribution function* (or ECDF), where we don't specify a number of bins, but instead plot each data point. For our cumulative frequency distribution, the $x$-axis corresponds to the final position of a particle and the $y$-axis corresponds to the proportion of particles that ended at this position or a more negative position.",
"_____no_output_____"
]
],
[
[
"# sort the final positions \nsorted_positons = np.sort(positions_2D[:,-1])\n# make the corresponding y_values (i.e. percentiles)\ny_values = np.linspace(start=0, stop=1, num=len(sorted_positons))\n\n# plot the cumulative histogram\nplt.plot(sorted_positons, y_values, '.')\nplt.xlabel(\"final position\")\nplt.ylabel(\"cumulative frequency\");",
"_____no_output_____"
]
],
[
[
"This way of visualizing the data makes it easier to tell that distribution of walkers is in fact symmetric around 0. That is, 50% of the walkers ended on a negative position, while 50% of the walkers ended on a positive position. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d097f8345f7313046836af07427fba5ddf4885ff | 88,746 | ipynb | Jupyter Notebook | 01-Python_essentials.ipynb | guiwitz/Python_image_processing_beginner | ef6637bb11bcb857905ffdacbe1e2ab7d72aaaa0 | [
"BSD-3-Clause"
] | 1 | 2019-12-04T08:50:07.000Z | 2019-12-04T08:50:07.000Z | 01-Python_essentials.ipynb | guiwitz/Python_image_processing_beginner | ef6637bb11bcb857905ffdacbe1e2ab7d72aaaa0 | [
"BSD-3-Clause"
] | null | null | null | 01-Python_essentials.ipynb | guiwitz/Python_image_processing_beginner | ef6637bb11bcb857905ffdacbe1e2ab7d72aaaa0 | [
"BSD-3-Clause"
] | null | null | null | 87.348425 | 57,144 | 0.831823 | [
[
[
"# 1. Python and notebook basics\n\nIn this first chapter, we will cover the very essentials of Python and notebooks such as creating a variable, importing packages, using functions, seeing how variables behave in the notebook etc. We will see more details on some of these topics, but this very short introduction will then allow us to quickly dive into more applied and image processing specific topics without having to go through a full Python introduction.",
"_____no_output_____"
],
[
"## Variables\n\nLike we would do in mathematics when we define variables in equations such as $x=3$, we can do the same in all programming languages. Python has one of the simplest syntax for this, i.e. exactly as we would do it naturally. Let's define a variable in the next cell:",
"_____no_output_____"
]
],
[
[
"a = 3",
"_____no_output_____"
]
],
[
[
"As long as we **don't execute the cell** using Shift+Enter or the play button in the menu, the above cell is **purely text**. We can close our Jupyter session and then re-start it and this line of text will still be there. However other parts of the notebook are not \"aware\" that this variable has been defined and so we can't re-use anywhere else. For example if we type ```a``` again and execute the cell, we get an error:",
"_____no_output_____"
]
],
[
[
"a",
"_____no_output_____"
]
],
[
[
"So we actually need to **execute** the cell so that Python reads that line and executes the command. Here it's a very simple command that just says that the value of the variable ```a``` is three. So let's go back to the cell that defined ```a``` and now execute it (click in the cell and hit Shift+Enter). Now this variable is **stored in the computing memory** of the computer and we can re-use it anywhere in the notebook (but only in **this** notebook)!\n\nWe can again just type ```a```",
"_____no_output_____"
]
],
[
[
"a",
"_____no_output_____"
]
],
[
[
"We see that now we get an *output* with the value three. Most variables display an output when they are not involved in an operation. For example the line ```a=3``` didn't have an output.\n\nNow we can define other variables in a new cell. Note that we can put as **many lines** of commands as we want in a single cell. Each command just need to be on a new line.",
"_____no_output_____"
]
],
[
[
"b = 5\nc = 2",
"_____no_output_____"
]
],
[
[
"As variables are defined for the entire notebook we can combine information that comes from multiple cells. Here we do some basic mathematics:",
"_____no_output_____"
]
],
[
[
"a + b",
"_____no_output_____"
]
],
[
[
"Here we only see the output. We can't re-use that ouput for further calculations as we didn't define a new variable to contain it. Here we do it:",
"_____no_output_____"
]
],
[
[
"d = a + b",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
]
],
[
[
"```d``` is now a new variable. It is purely numerical and not a mathematical formula as the above cell could make you believe. For example if we change the value of ```a```:",
"_____no_output_____"
]
],
[
[
"a = 100",
"_____no_output_____"
]
],
[
[
"and check the value of ```d```:",
"_____no_output_____"
]
],
[
[
"d",
"_____no_output_____"
]
],
[
[
"it has not change. We would have to rerun the operation and assign it again to ```d``` for it to update:",
"_____no_output_____"
]
],
[
[
"d = a + b\nd",
"_____no_output_____"
]
],
[
[
"We will see many other types of variables during the course. Some are just other types of data, for example we can define a **text** variable by using quotes ```' '``` around a given text:",
"_____no_output_____"
]
],
[
[
"my_text = 'This is my text'\nmy_text",
"_____no_output_____"
]
],
[
[
"Others can contain multiple elements like lists:",
"_____no_output_____"
]
],
[
[
"my_list = [3, 8, 5, 9]\nmy_list",
"_____no_output_____"
]
],
[
[
"but more on these data structures later...",
"_____no_output_____"
],
[
"## Functions\n\nWe have seen that we could define variables and do some basic operations with them. If we want to go beyond simple arithmetic we need more **complex functions** that can operate on variables. Imagine for example that we need a function $f(x, a, b) = a * x + b$. For this we can use and **define functions**. Here's how we can define the previous function:",
"_____no_output_____"
]
],
[
[
"def my_fun(x, a, b):\n out = a * x + b\n return out",
"_____no_output_____"
]
],
[
[
"We see a series of Python rules to define a function:\n\n- we use the word **```def```** to signal that we are creating a function\n- we pick a **function name**, here ```my_fun```\n- we open the **parenthesis** and put all our **variables ```x```, ```a```, ```b```** in there, just like when we do mathematics\n- we do some operation inside the function. **Inside** the function is signal with the **indentation**: everything that belong inside the function (there could be many more lines) is shifted by a *single tab* or *three space* to the right\n- we use the word **```return```** to tell what is the output of the function, here the variable ```out```\n\nWe can now use this function as if we were doing mathematics: we pick a a value for the three parameters e.g. $f(3, 2, 5)$",
"_____no_output_____"
]
],
[
[
"my_fun(3, 2, 5)",
"_____no_output_____"
]
],
[
[
"Note that **some functions are defined by default** in Python. For example if I define a variable which is a string:",
"_____no_output_____"
]
],
[
[
"my_text = 'This is my text'",
"_____no_output_____"
]
],
[
[
"I can count the number of characters in this text using the ```len()``` function which comes from base Python:",
"_____no_output_____"
]
],
[
[
"len(my_text)",
"_____no_output_____"
]
],
[
[
"The ```len``` function has not been manually defined within a ```def``` statement, it simply exist by default in the Python language.",
"_____no_output_____"
],
[
"## Variables as objects\n\nIn the Python world, variables are not \"just\" variables, they are actually more complex objects. So for example our variable ```my_text``` does indeed contain the text ```This is my text``` but it contains also additional features. The way to access those features is to use the dot notation ```my_text.some_feature```. There are two types of featues:\n- functions, called here methods, that do some computation or modify the variable itself\n- properties, that contain information about the variable\n\nFor example the object ```my_text``` has a function attached to it that allows us to put all letters to lower case:",
"_____no_output_____"
]
],
[
[
"my_text",
"_____no_output_____"
],
[
"my_text.lower()",
"_____no_output_____"
]
],
[
[
"If we define a complex number:",
"_____no_output_____"
]
],
[
[
"a = 3 + 5j",
"_____no_output_____"
]
],
[
[
"then we can access the property ```real``` that gives us only the real part of the number:",
"_____no_output_____"
]
],
[
[
"a.real",
"_____no_output_____"
]
],
[
[
"Note that when we use a method (function) we need to use the parenthesis, just like for regular functions, while for properties we don't.",
"_____no_output_____"
],
[
"## Packages\n\nIn the examples above, we either defined a function ourselves or used one generally accessible in base Python but there is a third solution: **external packages**. These packages are collections of functions used in a specific domain that are made available to everyone via specialized online repositories. For example we will be using in this course a package called [scikit-image](https://scikit-image.org/) that implements a large number of functions for image processing. For example if we want to filter an image stored in a variable ```im_in``` with a median filter, we can then just use the ```median()``` function of scikit-image and apply it to an image ```im_out = median(im_in)```. The question is now: how do we access these functions?\n\n### Importing functions\n\nThe answer is that we have to **import** the functions we want to use in a *given notebook* from a package to be able to use them. First the package needs to be **installed**. One of the most popular place where to find such packages is the PyPi repository. We can install packages from there using the following command either in a **terminal or directly in the notebook**. For example for [scikit-image](https://pypi.org/project/scikit-image/):\n",
"_____no_output_____"
]
],
[
[
"pip install scikit-image",
"Requirement already satisfied: scikit-image in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (0.19.2)\nRequirement already satisfied: networkx>=2.2 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2.7.1)\nRequirement already satisfied: tifffile>=2019.7.26 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2022.2.9)\nRequirement already satisfied: PyWavelets>=1.1.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.2.0)\nRequirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (9.0.1)\nRequirement already satisfied: scipy>=1.4.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.8.0)\nRequirement already satisfied: packaging>=20.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (21.3)\nRequirement already satisfied: imageio>=2.4.1 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (2.16.1)\nRequirement already satisfied: numpy>=1.17.0 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from scikit-image) (1.22.2)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /Users/gw18g940/mambaforge/envs/improc_beginner/lib/python3.9/site-packages (from packaging>=20.0->scikit-image) (3.0.7)\nNote: you may need to restart the kernel to use updated packages.\n"
]
],
[
[
"Once installed we can **import** the packakge in a notebook in the following way (note that the name of the package is scikit-image, but in code we use an abbreviated name ```skimage```):",
"_____no_output_____"
]
],
[
[
"import skimage",
"_____no_output_____"
]
],
[
[
"The import is valid for the **entire notebook**, we don't need that line in each cell. \n\nNow that we have imported the package we can access all function that we define in it using a *dot notation* ```skimage.myfun```. Most packages are organized into submodules and in that case to access functions of a submodule we use ```skimage.my_submodule.myfun```.\n\nTo come back to the previous example: the ```median``` filtering function is in the ```filters``` submodule that we could now use as:\n\n```python\nim_out = skimage.filters.median(im_in)\n```",
"_____no_output_____"
],
[
"We cannot execute this command as the variables ```im_in``` and ```im_out``` are not yet defined.\n\nNote that there are multiple ways to import packages. For example we could give another name to the package, using the ```as``` statement:",
"_____no_output_____"
]
],
[
[
"import skimage as sk",
"_____no_output_____"
]
],
[
[
"Nowe if we want to use the ```median``` function in the filters sumodule we would write:\n\n```python\nim_out = sk.filters.median(im_in)\n```",
"_____no_output_____"
],
[
"We can also import only a certain submodule using:",
"_____no_output_____"
]
],
[
[
"from skimage import filters",
"_____no_output_____"
]
],
[
[
"Now we have to write:\n\n```python\nim_out = filters.median(im_in)\n```",
"_____no_output_____"
],
[
"Finally, we can import a **single** function like this:",
"_____no_output_____"
]
],
[
[
"from skimage.filters import median",
"_____no_output_____"
]
],
[
[
"and now we have to write:\n\n```python\nim_out = median(im_in)\n```",
"_____no_output_____"
],
[
"## Structures\n\nAs mentioned above we cannot execute those various lines like ```im_out = median(im_in)``` because the image variable ```im_in``` is not yet defined. This variable should be an image, i.e. it cannot be a single number like in ```a=3``` but an entire grid of values, each value being one pixel. We therefore need a specific variable type that can contain such a structure.\n\nWe have already seen that we can define different types of variables. Single numbers:",
"_____no_output_____"
]
],
[
[
"a = 3",
"_____no_output_____"
]
],
[
[
"Text:",
"_____no_output_____"
]
],
[
[
"b = 'my text'",
"_____no_output_____"
]
],
[
[
"or even lists of numbers:",
"_____no_output_____"
]
],
[
[
"c = [6,2,8,9]",
"_____no_output_____"
]
],
[
[
"This last type of variable is called a ```list``` in Python and is one of the **structures** that is available in Python. If we think of an image that has multiple lines and columns of pixels, we could now imagine that we can represent it as a list of lists, each single list being e.g. one row pf pixels. For example a 3 x 3 image could be:",
"_____no_output_____"
]
],
[
[
"my_image = [[4,8,7], [6,4,3], [5,3,7]]\nmy_image",
"_____no_output_____"
]
],
[
[
"While in principle we could use a ```list``` for this, computations on such objects would be very slow. For example if we wanted to do background correction and subtract a given value from our image, effectively we would have to go through each element of our list (each pixel) one by one and sequentially remove the background from each pixel. If the background is 3 we would have therefore to compute:\n- 4-3\n- 8-3\n- 7-3\n- 6-3\n\netc. Since operations are done sequentially this would be very slow as we couldn't exploit the fact that most computers have multiple processors. Also it would be tedious to write such an operation.\n\nTo fix this, most scientific areas that use lists of numbers of some kind (time-series, images, measurements etc.) resort to an **external package** called ```Numpy``` which offers a **computationally efficient list** called an **array**.\n\nTo make this clearer we now import an image in our notebook to see such a structure. We will use a **function** from the scikit-image package to do this import. That function called ```imread``` is located in the submodule called ```io```. Remember that we can then access this function with ```skimage.io.imread()```. Just like we previously defined a function $f(x, a, b)$ that took inputs $x, a, b$, this ```imread()``` function also needs an input. Here it is just the **location of the image**, and that location can either be the **path** to the file on our computer or a **url** of an online place where the image is stored. Here we use an image that can be found at https://github.com/guiwitz/PyImageCourse_beginner/raw/master/images/19838_1252_F8_1.tif. As you can see it is a tif file. This address that we are using as an input should be formatted as text:",
"_____no_output_____"
]
],
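To make the point above concrete, here is a small sketch contrasting the element-by-element list approach with the vectorized Numpy one, using the tiny 3 x 3 example image defined earlier and the background value of 3 mentioned in the text:

```python
import numpy as np

background = 3

# list of lists: the background has to be removed pixel by pixel
corrected_list = [[pixel - background for pixel in row] for row in my_image]

# Numpy array: a single vectorized operation handles all pixels at once
corrected_array = np.array(my_image) - background

print(corrected_list)
print(corrected_array)
```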
[
[
"my_address = 'https://github.com/guiwitz/PyImageCourse_beginner/raw/master/images/19838_1252_F8_1.tif'",
"_____no_output_____"
]
],
[
[
"Now we can call our function:",
"_____no_output_____"
]
],
[
[
"skimage.io.imread(my_address)",
"_____no_output_____"
]
],
[
[
"We see here an output which is what is returned by our function. It is as expected a list of numbers, and not all numbers are shown because the list is too long. We see that we also have ```[]``` to specify rows, columns etc. The main difference compared to our list of lists that we defined previously is the ```array``` indication at the very beginning of the list of numbers. This ```array``` indication tells us that we are dealing with a ```Numpy``` array, this alternative type of list of lists that will allow us to do efficient computations.",
"_____no_output_____"
],
[
"## Plotting\n\nWe will see a few ways to represent data during the course. Here we just want to have a quick look at the image we just imported. For plotting we will use yet another **external library** called Matplotlib. That library is extensively used in the Python world and offers extensive choices of plots. We will mainly use one **function** from the library to display images: ```imshow```. Again, to access that function, we first need to import the package. Here we need a specific submodule:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"Now we can use the ```plt.imshow()``` function. There are many options for plot, but we can use that function already by just passing an ```array``` as an input. First we need to assign the imported array to a variable:",
"_____no_output_____"
]
],
[
[
"import skimage.io\n\nimage = skimage.io.imread(my_address)",
"_____no_output_____"
],
[
"plt.imshow(image);",
"_____no_output_____"
]
],
[
[
"We see that we are dealing with a multi-channel image and can already distinguish cell nuclei (blue) and cytoplasm (red).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d098039eac7ac6a59a05ed0907f557dadf8d4fa7 | 14,432 | ipynb | Jupyter Notebook | 4.1.multiple_linear_regression_prediction_v2.ipynb | indervirbanipal/deep-neural-networks-with-pytorch | 40b99a431a49d41238ad5c89bb4b0888c48d74bc | [
"MIT"
] | null | null | null | 4.1.multiple_linear_regression_prediction_v2.ipynb | indervirbanipal/deep-neural-networks-with-pytorch | 40b99a431a49d41238ad5c89bb4b0888c48d74bc | [
"MIT"
] | null | null | null | 4.1.multiple_linear_regression_prediction_v2.ipynb | indervirbanipal/deep-neural-networks-with-pytorch | 40b99a431a49d41238ad5c89bb4b0888c48d74bc | [
"MIT"
] | null | null | null | 24.378378 | 324 | 0.542891 | [
[
[
"<a href=\"http://cocl.us/pytorch_link_top\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png\" width=\"750\" alt=\"IBM Product \" />\n</a> \n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png\" width=\"200\" alt=\"cognitiveclass.ai logo\" />\n",
"_____no_output_____"
],
[
"<h1>Multiple Linear Regression</h1>\n",
"_____no_output_____"
],
[
"<h2>Objective</h2><ul><li> How to make the prediction for multiple inputs.</li><li> How to use linear class to build more complex models.</li><li> How to build a custom module.</li></ul> \n",
"_____no_output_____"
],
[
"<h2>Table of Contents</h2>\n<p>In this lab, you will review how to make a prediction in several different ways by using PyTorch.</p>\n\n<ul>\n <li><a href=\"#Prediction\">Prediction</a></li>\n <li><a href=\"#Linear\">Class Linear</a></li>\n <li><a href=\"#Cust\">Build Custom Modules</a></li>\n</ul>\n\n<p>Estimated Time Needed: <strong>15 min</strong></p>\n\n<hr>\n",
"_____no_output_____"
],
[
"<h2>Preparation</h2>\n",
"_____no_output_____"
],
[
"Import the libraries and set the random seed.\n",
"_____no_output_____"
]
],
[
[
"# Import the libraries and set the random seed\n\nfrom torch import nn\nimport torch\ntorch.manual_seed(1)",
"_____no_output_____"
]
],
[
[
"<!--Empty Space for separating topics-->\n",
"_____no_output_____"
],
[
"<h2 id=\"Prediction\">Prediction</h2>\n",
"_____no_output_____"
],
[
"Set weight and bias.\n",
"_____no_output_____"
]
],
[
[
"# Set the weight and bias\n\nw = torch.tensor([[2.0], [3.0]], requires_grad=True)\nb = torch.tensor([[1.0]], requires_grad=True)",
"_____no_output_____"
]
],
[
[
"Define the parameters. <code>torch.mm</code> uses matrix multiplication instead of scaler multiplication.\n",
"_____no_output_____"
]
],
[
[
"# Define Prediction Function\n\ndef forward(x):\n yhat = torch.mm(x, w) + b\n return yhat",
"_____no_output_____"
]
],
[
[
"The function <code>forward</code> implements the following equation:\n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1_matrix_eq.png\" width=\"600\" alt=\"Matrix Linear Regression\"/>\n",
"_____no_output_____"
],
[
"If we input a <i>1x2</i> tensor, because we have a <i>2x1</i> tensor as <code>w</code>, we will get a <i>1x1</i> tensor: \n",
"_____no_output_____"
]
],
[
[
"# Calculate yhat\n\nx = torch.tensor([[1.0, 2.0]])\nyhat = forward(x)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"<img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1example.png\" width = \"300\" alt=\"Linear Regression Matrix Sample One\" />\n",
"_____no_output_____"
],
[
"# Each row of the following tensor represents a sample:\n",
"_____no_output_____"
]
],
[
[
"# Sample tensor X\n\nX = torch.tensor([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])",
"_____no_output_____"
],
[
"# Make the prediction of X \n\nyhat = forward(X)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"<!--Empty Space for separating topics-->\n",
"_____no_output_____"
],
[
"<h2 id=\"Linear\">Class Linear</h2>\n",
"_____no_output_____"
],
[
"We can use the linear class to make a prediction. You'll also use the linear class to build more complex models.\n",
"_____no_output_____"
],
[
"Let us create a model.\n",
"_____no_output_____"
]
],
[
[
"# Make a linear regression model using build-in function\n\nmodel = nn.Linear(2, 1)",
"_____no_output_____"
]
],
[
[
"Make a prediction with the first sample:\n",
"_____no_output_____"
]
],
[
[
"# Make a prediction of x\n\nyhat = model(x)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"Predict with multiple samples <code>X</code>: \n",
"_____no_output_____"
]
],
[
[
"# Make a prediction of X\n\nyhat = model(X)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"The function performs matrix multiplication as shown in this image:\n",
"_____no_output_____"
],
[
"<img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1multi_sample_example.png\" width = \"600\" alt=\"Linear Regression Matrix Sample One\" />\n",
"_____no_output_____"
],
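The same multiplication can be reproduced by hand from the layer's parameters to verify this; a small check using the `model` and `X` defined above:

```python
# Reproduce model(X) manually: yhat = X W^T + b
W = model.weight   # shape (1, 2)
b = model.bias     # shape (1,)

yhat_manual = torch.mm(X, W.t()) + b
print("Manual result: ", yhat_manual)
```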
[
"<!--Empty Space for separating topics-->\n",
"_____no_output_____"
],
[
"<h2 id=\"Cust\">Build Custom Modules </h2>\n",
"_____no_output_____"
],
[
"Now, you'll build a custom module. You can make more complex models by using this method later. \n",
"_____no_output_____"
]
],
[
[
"# Create linear_regression Class\n\nclass linear_regression(nn.Module):\n \n # Constructor\n def __init__(self, input_size, output_size):\n super(linear_regression, self).__init__()\n self.linear = nn.Linear(input_size, output_size)\n \n # Prediction function\n def forward(self, x):\n yhat = self.linear(x)\n return yhat",
"_____no_output_____"
]
],
[
[
"Build a linear regression object. The input feature size is two. \n",
"_____no_output_____"
]
],
[
[
"model = linear_regression(2, 1)",
"_____no_output_____"
]
],
[
[
"This will input the following equation:\n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1_matrix_eq.png\" width=\"600\" alt=\"Matrix Linear Regression\" />\n",
"_____no_output_____"
],
[
"You can see the randomly initialized parameters by using the <code>parameters()</code> method:\n",
"_____no_output_____"
]
],
[
[
"# Print model parameters\n\nprint(\"The parameters: \", list(model.parameters()))",
"_____no_output_____"
]
],
[
[
"You can also see the parameters by using the <code>state_dict()</code> method:\n",
"_____no_output_____"
]
],
[
[
"# Print model parameters\n\nprint(\"The parameters: \", model.state_dict())",
"_____no_output_____"
]
],
[
[
"Now we input a 1x2 tensor, and we will get a 1x1 tensor.\n",
"_____no_output_____"
]
],
[
[
"# Make a prediction of x\n\nyhat = model(x)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"The shape of the output is shown in the following image: \n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1_matrix_eq.png\" width=\"600\" alt=\"Matrix Linear Regression\" />\n",
"_____no_output_____"
],
[
"Make a prediction for multiple samples:\n",
"_____no_output_____"
]
],
[
[
"# Make a prediction of X\n\nyhat = model(X)\nprint(\"The result: \", yhat)",
"_____no_output_____"
]
],
[
[
"The shape is shown in the following image: \n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.1Multi_sample.png\" width=\"600\" alt=\"Multiple Samples Linear Regression\" />\n",
"_____no_output_____"
],
[
"<!--Empty Space for separating topics-->\n",
"_____no_output_____"
],
[
"<h3>Practice</h3>\n",
"_____no_output_____"
],
[
"Build a model or object of type <code>linear_regression</code>. Using the <code>linear_regression</code> object will predict the following tensor: \n",
"_____no_output_____"
]
],
[
[
"# Practice: Build a model to predict the follow tensor.\n\nX = torch.tensor([[11.0, 12.0, 13, 14], [11, 12, 13, 14]])",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- Your answer is below:\nmodel = linear_regression(4, 1)\nyhat = model(X)\nprint(\"The result: \", yhat)\n-->\n",
"_____no_output_____"
],
[
"<!--Empty Space for separating topics-->\n",
"_____no_output_____"
],
[
"<a href=\"http://cocl.us/pytorch_link_bottom\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png\" width=\"750\" alt=\"PyTorch Bottom\" />\n</a>\n",
"_____no_output_____"
],
[
"<h2>About the Authors:</h2> \n\n<a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.\n",
"_____no_output_____"
],
[
"Other contributors: <a href=\"https://www.linkedin.com/in/michelleccarey/\">Michelle Carey</a>, <a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>\n",
"_____no_output_____"
],
[
"## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ---------- | ----------------------------------------------------------- |\n| 2020-09-23 | 2.0 | Shubham | Migrated Lab to Markdown and added to course repo in GitLab |\n",
"_____no_output_____"
],
[
"<hr>\n",
"_____no_output_____"
],
[
"Copyright © 2018 <a href=\"cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu\">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href=\"https://bigdatauniversity.com/mit-license/\">MIT License</a>.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d09807cfe34cc7ffe01d386a193e90808dc00ea6 | 44,663 | ipynb | Jupyter Notebook | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | philip-le/deep-learning-v2-pytorch | 2618ebf2feb519b592cc3aa352f52dd1659f721a | [
"MIT"
] | null | null | null | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | philip-le/deep-learning-v2-pytorch | 2618ebf2feb519b592cc3aa352f52dd1659f721a | [
"MIT"
] | null | null | null | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | philip-le/deep-learning-v2-pytorch | 2618ebf2feb519b592cc3aa352f52dd1659f721a | [
"MIT"
] | null | null | null | 86.724272 | 25,868 | 0.830956 | [
[
[
"# Classifying Fashion-MNIST\n\nNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.\n\n<img src='assets/fashion-mnist-sprite.png' width=500px>\n\nIn this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.\n\nFirst off, let's load the dataset through torchvision.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torchvision import datasets, transforms\nimport helper\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)",
"Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to /home/philip/.pytorch/F_MNIST_data/FashionMNIST/raw/train-images-idx3-ubyte.gz\n"
],
[
"trainset.classes",
"_____no_output_____"
],
[
"trainset",
"_____no_output_____"
]
],
[
[
"Here we can see one of the images.",
"_____no_output_____"
]
],
[
[
"image, label = next(iter(trainloader))\nprint(image.shape, label.shape)\nhelper.imshow(image[0,:]);",
"torch.Size([64, 1, 28, 28]) torch.Size([64])\n"
]
],
[
[
"## Building the network\n\nHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.",
"_____no_output_____"
]
],
[
[
"# TODO: Define your network architecture here\nfrom torch import nn\n\nmodel = nn.Sequential(nn.Linear(784, 128),\n nn.ReLU(),\n nn.Linear(128,32),\n nn.ReLU(),\n nn.Linear(32,10))\n\n\nmodel.parameters\n",
"_____no_output_____"
]
],
[
[
"# Train the network\n\nNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) ( something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).\n\nThen write the training code. Remember the training pass is a fairly straightforward process:\n\n* Make a forward pass through the network to get the logits \n* Use the logits to calculate the loss\n* Perform a backward pass through the network with `loss.backward()` to calculate the gradients\n* Take a step with the optimizer to update the weights\n\nBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.",
"_____no_output_____"
]
],
[
[
"# TODO: Create the network, define the criterion and optimizer\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(params=model.parameters(), lr=0.01)\n",
"_____no_output_____"
],
[
"len(trainloader), 60000/64",
"_____no_output_____"
],
[
"# TODO: Train the network here\n\nepochs = 10\nfor e in range(epochs):\n running_loss = 0\n for image,label in iter(trainloader):\n optimizer.zero_grad()\n \n output = model(image.view(image.shape[0],-1))\n loss = criterion(output, label)\n loss.backward()\n \n optimizer.step()\n running_loss += loss.item()\n \n else:\n print(f\"Epoch {e} - loss {running_loss/len(trainloader)}\")\n \n",
"Epoch 0 - loss 0.4166704870776327\nEpoch 1 - loss 0.4005795487685244\nEpoch 2 - loss 0.3926830308428451\nEpoch 3 - loss 0.38288127825553736\nEpoch 4 - loss 0.36766664337501853\nEpoch 5 - loss 0.3670633032377849\nEpoch 6 - loss 0.3610582384251074\nEpoch 7 - loss 0.3601057148739092\nEpoch 8 - loss 0.3558250844144999\nEpoch 9 - loss 0.3507081931556212\n"
],
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\n\n# Test out your network!\n\ndataiter = iter(testloader)\nimages, labels = dataiter.next()\nimg = images[2]\n# Convert 2D image to 1D vector\nimg = img.resize_(1, 784)\n\n# TODO: Calculate the class probabilities (softmax) for img\nwith torch.no_grad():\n ps = model(img) \n\n# Plot the image and probabilities\nhelper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')",
"_____no_output_____"
]
]
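The single-image check above only gives a qualitative impression. A short sketch for measuring overall accuracy on the test set, reusing the `model` and `testloader` defined earlier:

```python
# classification accuracy over the whole test set
correct, total = 0, 0
with torch.no_grad():
    for images, labels in testloader:
        logits = model(images.view(images.shape[0], -1))
        preds = logits.argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.shape[0]

print(f"Test accuracy: {correct / total:.3f}")
```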
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d09826e9b651463055701e0f639d2d5bf59c3627 | 30,306 | ipynb | Jupyter Notebook | notebook_utils/generate_data.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | 1 | 2021-03-26T08:40:15.000Z | 2021-03-26T08:40:15.000Z | notebook_utils/generate_data.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | notebook_utils/generate_data.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | 30,306 | 30,306 | 0.73134 | [
[
[
"**Create Train / Dev / Test files. <br> Each file is a dictionary where each key represent the ID of a certain Author and each value is a dict where the keys are : <br> - author_embedding : the Node embedding that correspond to the author (tensor of shape (128,)) <br> - papers_embedding : the abstract embedding of every papers (tensor of shape (10,dim)) (dim depend on the embedding model taken into account) <br> - features : the graph structural features (tensor of shape (4,)) <br> - y : the target (tensor of shape (1,))**",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport networkx as nx\nfrom tqdm import tqdm_notebook as tqdm\nfrom sklearn.utils import shuffle\nimport gzip\nimport pickle\nimport torch",
"_____no_output_____"
],
[
"def load_dataset_file(filename):\n with gzip.open(filename, \"rb\") as f:\n loaded_object = pickle.load(f)\n return loaded_object\ndef save(object, filename, protocol = 0):\n \"\"\"Saves a compressed object to disk\n \"\"\"\n file = gzip.GzipFile(filename, 'wb')\n file.write(pickle.dumps(object, protocol))\n file.close()",
"_____no_output_____"
]
],
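A minimal round-trip sketch of the two helpers above (the file name and the example object are arbitrary):

```python
# compress an object to disk and read it back
example = {'author_embedding': torch.zeros(1, 128), 'target': torch.Tensor([12.0])}
save(example, 'example.pkl.gz', protocol=4)

restored = load_dataset_file('example.pkl.gz')
print(restored['target'])
```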
[
[
"# Roberta Embedding",
"_____no_output_____"
]
],
[
[
"# Load the paper's embedding\nembedding_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/embedding_per_paper_clean.txt')\n# Load the node's embedding\nembedding_per_nodes = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/Node2Vec.txt')\n# read the file to create a dictionary with author key and paper list as value\nf = open(\"/content/drive/MyDrive/altegrad_datachallenge/author_papers.txt\",\"r\")\npapers_per_author = {}\nfor l in f:\n auth_paps = [paper_id.strip() for paper_id in l.split(\":\")[1].replace(\"[\",\"\").replace(\"]\",\"\").replace(\"\\n\",\"\").replace(\"\\'\",\"\").replace(\"\\\"\",\"\").split(\",\")]\n papers_per_author[l.split(\":\")[0]] = auth_paps\n# Load train set\ndf_train = shuffle(pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/train.csv', dtype={'authorID': np.int64, 'h_index': np.float32})).reset_index(drop=True)\n# Load test set\ndf_test = pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/test.csv', dtype={'authorID': np.int64}) \n# Load Graph\nG = nx.read_edgelist('/content/drive/MyDrive/altegrad_datachallenge/collaboration_network.edgelist', delimiter=' ', nodetype=int)",
"_____no_output_____"
],
[
"# computes structural features for each node\ncore_number = nx.core_number(G)\navg_neighbor_degree = nx.average_neighbor_degree(G)\n# Split into train/valid\ndf_valid = df_train.iloc[int(len(df_train)*0.9):, :]\ndf_train = df_train.iloc[:int(len(df_train)*0.9), :]",
"_____no_output_____"
]
],
[
[
"## Train",
"_____no_output_____"
]
],
[
[
"train_data = {}\nfor i, row in tqdm(df_train.iterrows()):\n author_id, y = str(int(row['authorID'])), row['h_index']\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n print(f\"Missing paper for {author_id}\")\n papers_embedding.append(torch.zeros((1,768)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n y = torch.Tensor([y])\n train_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}",
"_____no_output_____"
],
[
"# Saving\nsave(train_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.train')\n# Deleting (memory)\ndel train_data",
"_____no_output_____"
]
],
[
[
"## Validation",
"_____no_output_____"
]
],
[
[
"valid_data = {}\nfor i, row in tqdm(df_valid.iterrows()):\n author_id, y = str(int(row['authorID'])), row['h_index']\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n papers_embedding.append(torch.zeros((1,768)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n y = torch.Tensor([y])\n valid_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}",
"_____no_output_____"
],
[
"save(valid_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.valid')\ndel valid_data",
"_____no_output_____"
]
],
[
[
"## Test",
"_____no_output_____"
]
],
[
[
"test_data = {}\nfor i, row in tqdm(df_test.iterrows()):\n author_id = str(int(row['authorID']))\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n papers_embedding.append(torch.zeros((1,768)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n test_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features}",
"_____no_output_____"
],
[
"del G\ndel df_test\ndel embedding_per_paper\ndel papers_per_author\ndel core_number\ndel avg_neighbor_degree\ndel embedding_per_nodes",
"_____no_output_____"
],
[
"save(test_data, '/content/drive/MyDrive/altegrad_datachallenge/data/data.test', 4)\ndel test_data",
"_____no_output_____"
]
],
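[
[
"# Hedged refactoring sketch (not part of the original pipeline): the train/valid/test loops above repeat the same per-author record construction. A helper like this could factor it out and be reused for the Doc2Vec section that follows; the function name and the emb_dim argument are assumptions, and the inputs are the objects already built above.\ndef build_author_record(author_id, G, core_number, avg_neighbor_degree, embedding_per_nodes, embedding_per_paper, papers_per_author, emb_dim=768, target=None):\n    node = int(author_id)\n    degree = G.degree(node)  # structural feature of the author node\n    author_embedding = torch.from_numpy(embedding_per_nodes[node].reshape(1, -1))\n    papers_embedding = []\n    for id_paper in papers_per_author[author_id]:\n        try:\n            papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1, -1)))\n        except KeyError:\n            papers_embedding.append(torch.zeros((1, emb_dim)))  # zero vector when a paper has no embedding\n    papers_embedding = torch.cat(papers_embedding, dim=0)\n    features = torch.from_numpy(np.array([degree, core_number[node], avg_neighbor_degree[node], len(papers_per_author[author_id])]).reshape(1, -1))\n    record = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': features}\n    if target is not None:\n        record['target'] = torch.Tensor([target])\n    return record",
"_____no_output_____"
]
],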
[
[
"# Doc2Vec",
"_____no_output_____"
]
],
[
[
"# Load the paper's embedding\nembedding_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/doc2vec_paper_embedding.txt')\n# Load the node's embedding\nembedding_per_nodes = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/Node2Vec.txt')\n# read the file to create a dictionary with author key and paper list as value\nf = open(\"/content/drive/MyDrive/altegrad_datachallenge/data/author_papers.txt\",\"r\")\npapers_per_author = {}\nfor l in f:\n auth_paps = [paper_id.strip() for paper_id in l.split(\":\")[1].replace(\"[\",\"\").replace(\"]\",\"\").replace(\"\\n\",\"\").replace(\"\\'\",\"\").replace(\"\\\"\",\"\").split(\",\")]\n papers_per_author[l.split(\":\")[0]] = auth_paps\n# Load train set\ndf_train = shuffle(pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/data/train.csv', dtype={'authorID': np.int64, 'h_index': np.float32})).reset_index(drop=True)\n# Load test set\ndf_test = pd.read_csv('/content/drive/MyDrive/altegrad_datachallenge/data/test.csv', dtype={'authorID': np.int64}) \n# Load Graph\nG = nx.read_edgelist('/content/drive/MyDrive/altegrad_datachallenge/data/collaboration_network.edgelist', delimiter=' ', nodetype=int)",
"_____no_output_____"
],
[
"# computes structural features for each node\ncore_number = nx.core_number(G)\navg_neighbor_degree = nx.average_neighbor_degree(G)\n# Split into train/valid\ndf_valid = df_train.iloc[int(len(df_train)*0.9):, :]\ndf_train = df_train.iloc[:int(len(df_train)*0.9), :]",
"_____no_output_____"
]
],
[
[
"## Train",
"_____no_output_____"
]
],
[
[
"train_data = {}\nfor i, row in tqdm(df_train.iterrows()):\n author_id, y = str(int(row['authorID'])), row['h_index']\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n print(f\"Missing paper for {author_id}\")\n papers_embedding.append(torch.zeros((1,256)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n y = torch.Tensor([y])\n train_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}",
"_____no_output_____"
],
[
"# Saving\nsave(train_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.train')\n# Deleting (memory)\ndel train_data",
"_____no_output_____"
]
],
[
[
"## Dev",
"_____no_output_____"
]
],
[
[
"valid_data = {}\nfor i, row in tqdm(df_valid.iterrows()):\n author_id, y = str(int(row['authorID'])), row['h_index']\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n papers_embedding.append(torch.zeros((1,256)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n y = torch.Tensor([y])\n valid_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features, 'target': y}",
"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0\nPlease use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`\n \n"
],
[
"save(valid_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.valid')\ndel valid_data",
"_____no_output_____"
]
],
[
[
"## Test",
"_____no_output_____"
]
],
[
[
"test_data = {}\nfor i, row in tqdm(df_test.iterrows()):\n author_id = str(int(row['authorID']))\n degree, core_number_, avg_neighbor_degree_ = G.degree(int(author_id)), core_number[int(author_id)], avg_neighbor_degree[int(author_id)]\n author_embedding = torch.from_numpy(embedding_per_nodes[int(author_id)].reshape(1,-1))\n papers_ids = papers_per_author[author_id]\n papers_embedding = []\n num_papers = 0\n for id_paper in papers_ids:\n num_papers += 1\n try:\n papers_embedding.append(torch.from_numpy(embedding_per_paper[id_paper].reshape(1,-1)))\n except KeyError:\n papers_embedding.append(torch.zeros((1,256)))\n papers_embedding = torch.cat(papers_embedding, dim=0)\n additional_features = torch.from_numpy(np.array([degree, core_number_, avg_neighbor_degree_, num_papers]).reshape(1,-1))\n test_data[author_id] = {'author_embedding': author_embedding, 'papers_embedding': papers_embedding, 'features': additional_features}",
"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0\nPlease use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`\n \n"
],
[
"del G\ndel df_test\ndel embedding_per_paper\ndel papers_per_author\ndel core_number\ndel avg_neighbor_degree\ndel embedding_per_nodes",
"_____no_output_____"
],
[
"save(test_data, '/content/drive/MyDrive/altegrad_datachallenge/data/d2v.test', 4)\ndel test_data",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d098286f2218d452cf9723e7c115fbf19078cbc2 | 502,480 | ipynb | Jupyter Notebook | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge | 8a1f784ae8eb2306db7dac91f467ec279144121d | [
"MIT"
] | null | null | null | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge | 8a1f784ae8eb2306db7dac91f467ec279144121d | [
"MIT"
] | null | null | null | WeatherPy/WeatherPy.ipynb | ball4410/python-api-challenge | 8a1f784ae8eb2306db7dac91f467ec279144121d | [
"MIT"
] | null | null | null | 233.820382 | 46,516 | 0.903698 | [
[
[
"# WeatherPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"!pip3 install citipy",
"Collecting citipy\n Downloading citipy-0.0.5.tar.gz (557 kB)\n\u001b[K |████████████████████████████████| 557 kB 3.1 MB/s eta 0:00:01\n\u001b[?25hCollecting kdtree>=0.12\n Downloading kdtree-0.16-py2.py3-none-any.whl (7.7 kB)\nBuilding wheels for collected packages: citipy\n Building wheel for citipy (setup.py) ... \u001b[?25ldone\n\u001b[?25h Created wheel for citipy: filename=citipy-0.0.5-py3-none-any.whl size=559701 sha256=373da9914e2058b01b84a4d1e43ce30ad4c7d3ec5eca036f2d4ebe652b3e424e\n Stored in directory: /Users/chelseaball/Library/Caches/pip/wheels/6d/df/5e/ad8eb9cc5ee7f4ba76865167c09f9a7edff405c669111d8353\nSuccessfully built citipy\nInstalling collected packages: kdtree, citipy\nSuccessfully installed citipy-0.0.5 kdtree-0.16\n"
],
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport time\nfrom scipy.stats import linregress\n\n\n# Import API key\nfrom api_keys import weather_api_key\n\n# Incorporated citipy to determine city based on latitude and longitude\nfrom citipy import citipy\n\n# Output File (CSV)\noutput_data_file = \"output_data/cities.csv\"\n\n# Range of latitudes and longitudes\nlat_range = (-90, 90)\nlng_range = (-180, 180)",
"_____no_output_____"
]
],
[
[
"## Generate Cities List",
"_____no_output_____"
]
],
[
[
"# List for holding lat_lngs and cities\nlat_lngs = []\ncities = []\n\n# Create a set of random lat and lng combinations\nlats = np.random.uniform(lat_range[0], lat_range[1], size=1500)\nlngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)\nlat_lngs = zip(lats, lngs)\n\n# Identify nearest city for each lat, lng combination\nfor lat_lng in lat_lngs:\n city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name\n \n # If the city is unique, then add it to a our cities list\n if city not in cities:\n cities.append(city)\n\n# Print the city count to confirm sufficient count\nlen(cities) #617",
"_____no_output_____"
]
],
[
[
"### Perform API Calls\n* Perform a weather check on each city using a series of successive API calls.\n* Include a print log of each city as it'sbeing processed (with the city number and city name).\n",
"_____no_output_____"
]
],
[
[
"\n",
"_____no_output_____"
],
[
"# Save config information\nurl = \"http://api.openweathermap.org/data/2.5/weather?units=Imperial&\"\nbase_url = f\"{url}APPID={weather_api_key}&q=\"\n\n\ncity_data = []\n\nprint(\"Beginning Data Retrieval\")\nprint(\"--------------------------\")\n# use iterrows to iterate through pandas dataframe\nfor index, city in enumerate(cities):\n \n print(f\"Processing record {index}: {city}\")\n \n try:\n # assemble url and make API request\n response = requests.get(base_url + city).json()\n \n# if index == 1:\n# print(base_url + city)\n \n city_lat = response['coord'][\"lat\"]\n city_lon = response['coord'][\"lon\"]\n max_temp = response['main']['temp_max']\n humidity = response['main']['humidity']\n cloudiness = response['clouds']['all']\n wind_speed = response['wind']['speed']\n country = response['sys']['country']\n date = response['dt']\n \n #Store data for each city found\n city_data.append({\"City\": city, \n \"Lat\": city_lat,\n \"Lon\": city_lon,\n \"Max Temp\": max_temp,\n \"Humidity\": humidity,\n \"Cloudiness\": cloudiness,\n \"Wind Speed\": wind_speed,\n \"Country\": country,\n \"Date\": date})\n \n except (KeyError, IndexError):\n print(\"City Not found.Skipping...\")\n \nprint(\"--------------------------\")\nprint(\"Data Retrieval Complete\")\nprint(\"--------------------------\")\n",
"Beginning Data Retrieval\n--------------------------\nProcessing record 0: souillac\nProcessing record 1: shubarkuduk\nhttp://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID=9ffa18eb275e20275c32d718a9a31ad7&q=shubarkuduk\nProcessing record 2: bukavu\nProcessing record 3: tuktoyaktuk\nProcessing record 4: westport\nProcessing record 5: hermanus\nProcessing record 6: east london\nProcessing record 7: sibolga\nProcessing record 8: kaitangata\nProcessing record 9: sitka\nProcessing record 10: punta arenas\nProcessing record 11: malwan\nCity Not found.Skipping...\nProcessing record 12: beringovskiy\nProcessing record 13: rikitea\nProcessing record 14: busselton\nProcessing record 15: srednekolymsk\nProcessing record 16: ushuaia\nProcessing record 17: leningradskiy\nProcessing record 18: bethel\nProcessing record 19: puerto madero\nProcessing record 20: port alfred\nProcessing record 21: lebu\nProcessing record 22: tsihombe\nCity Not found.Skipping...\nProcessing record 23: fortuna\nProcessing record 24: niquelandia\nProcessing record 25: saint-philippe\nProcessing record 26: sayyan\nProcessing record 27: butaritari\nProcessing record 28: nogliki\nProcessing record 29: puerto ayora\nProcessing record 30: deputatskiy\nProcessing record 31: bluff\nProcessing record 32: geraldton\nProcessing record 33: mahibadhoo\nProcessing record 34: karratha\nProcessing record 35: longyearbyen\nProcessing record 36: laguna\nProcessing record 37: siguiri\nProcessing record 38: beroroha\nProcessing record 39: upernavik\nProcessing record 40: esperance\nProcessing record 41: kapuskasing\nProcessing record 42: fukuma\nProcessing record 43: tuatapere\nProcessing record 44: jamestown\nProcessing record 45: clyde river\nProcessing record 46: sisimiut\nProcessing record 47: arraial do cabo\nProcessing record 48: tasiilaq\nProcessing record 49: nikolskoye\nProcessing record 50: bredasdorp\nProcessing record 51: taolanaro\nCity Not found.Skipping...\nProcessing record 52: thompson\nProcessing record 53: mataura\nProcessing record 54: kundiawa\nProcessing record 55: grand river south east\nCity Not found.Skipping...\nProcessing record 56: cidreira\nProcessing record 57: pevek\nProcessing record 58: amderma\nCity Not found.Skipping...\nProcessing record 59: ribeira grande\nProcessing record 60: albany\nProcessing record 61: yar-sale\nProcessing record 62: bitung\nProcessing record 63: zadar\nProcessing record 64: belgrade\nProcessing record 65: damavand\nProcessing record 66: attawapiskat\nCity Not found.Skipping...\nProcessing record 67: atuona\nProcessing record 68: castro\nProcessing record 69: seredka\nProcessing record 70: saskylakh\nProcessing record 71: hithadhoo\nProcessing record 72: mahebourg\nProcessing record 73: teknaf\nProcessing record 74: cape town\nProcessing record 75: cherskiy\nProcessing record 76: mys shmidta\nCity Not found.Skipping...\nProcessing record 77: hobart\nProcessing record 78: ilo\nProcessing record 79: salinopolis\nProcessing record 80: carnarvon\nProcessing record 81: bonavista\nProcessing record 82: dikson\nProcessing record 83: christchurch\nProcessing record 84: saint-joseph\nProcessing record 85: toamasina\nProcessing record 86: mar del plata\nProcessing record 87: arman\nProcessing record 88: kodiak\nProcessing record 89: louisbourg\nCity Not found.Skipping...\nProcessing record 90: bilibino\nProcessing record 91: provideniya\nProcessing record 92: bambous virieux\nProcessing record 93: shangrao\nProcessing record 94: vaini\nProcessing record 95: aguimes\nProcessing 
record 96: sur\nProcessing record 97: anaco\nProcessing record 98: mancio lima\nProcessing record 99: katsuura\nProcessing record 100: utiroa\nCity Not found.Skipping...\nProcessing record 101: barrow\nProcessing record 102: sokna\nProcessing record 103: ericeira\nProcessing record 104: dakar\nProcessing record 105: moose factory\nProcessing record 106: georgetown\nProcessing record 107: khatanga\nProcessing record 108: abu samrah\nProcessing record 109: sindor\nProcessing record 110: mayo\nProcessing record 111: san patricio\nProcessing record 112: san rafael del sur\nProcessing record 113: salalah\nProcessing record 114: barentsburg\nCity Not found.Skipping...\nProcessing record 115: saint pete beach\nProcessing record 116: aktash\nProcessing record 117: karauzyak\nCity Not found.Skipping...\nProcessing record 118: airai\nProcessing record 119: inongo\nProcessing record 120: muskegon\nProcessing record 121: torbay\nProcessing record 122: rungata\nCity Not found.Skipping...\nProcessing record 123: bayanday\nProcessing record 124: natal\nProcessing record 125: jiuquan\nProcessing record 126: port elizabeth\nProcessing record 127: iqaluit\nProcessing record 128: stornoway\nProcessing record 129: oxapampa\nProcessing record 130: kavaratti\nProcessing record 131: hasaki\nProcessing record 132: nizhniy baskunchak\nProcessing record 133: chuy\nProcessing record 134: kudahuvadhoo\nProcessing record 135: conceicao do araguaia\nProcessing record 136: cheuskiny\nCity Not found.Skipping...\nProcessing record 137: walvis bay\nProcessing record 138: nanortalik\nProcessing record 139: namatanai\nProcessing record 140: lagoa\nProcessing record 141: dalmeny\nProcessing record 142: chokurdakh\nProcessing record 143: ossora\nProcessing record 144: boende\nProcessing record 145: ancud\nProcessing record 146: ibra\nProcessing record 147: banda aceh\nProcessing record 148: moindou\nProcessing record 149: colac\nProcessing record 150: kruisfontein\nProcessing record 151: anloga\nProcessing record 152: belushya guba\nCity Not found.Skipping...\nProcessing record 153: coquimbo\nProcessing record 154: severo-kurilsk\nProcessing record 155: nuevitas\nProcessing record 156: naze\nProcessing record 157: norman wells\nProcessing record 158: porto santo\nProcessing record 159: wanxian\nProcessing record 160: jalu\nProcessing record 161: shiyan\nProcessing record 162: vysokogornyy\nProcessing record 163: la ronge\nProcessing record 164: del rio\nProcessing record 165: takoradi\nProcessing record 166: kilindoni\nProcessing record 167: kegayli\nCity Not found.Skipping...\nProcessing record 168: barinas\nProcessing record 169: tabiauea\nCity Not found.Skipping...\nProcessing record 170: iskateley\nProcessing record 171: sioux city\nProcessing record 172: farafangana\nProcessing record 173: valley\nProcessing record 174: nanzhang\nProcessing record 175: solnechnyy\nProcessing record 176: haibowan\nCity Not found.Skipping...\nProcessing record 177: bengkulu\nProcessing record 178: vostok\nProcessing record 179: kapaa\nProcessing record 180: huarmey\nProcessing record 181: katherine\nProcessing record 182: cabo san lucas\nProcessing record 183: lensk\nProcessing record 184: hilo\nProcessing record 185: saint george\nProcessing record 186: vila\nProcessing record 187: manokwari\nProcessing record 188: praia\nProcessing record 189: bilma\nProcessing record 190: new norfolk\nProcessing record 191: touros\nProcessing record 192: marsa matruh\nProcessing record 193: ixtapa\nProcessing record 194: pascagoula\nProcessing record 
195: san francisco\nProcessing record 196: husavik\nProcessing record 197: gulshat\nCity Not found.Skipping...\nProcessing record 198: auray\nProcessing record 199: avarua\nProcessing record 200: korla\nProcessing record 201: burica\nCity Not found.Skipping...\nProcessing record 202: washington\nProcessing record 203: roald\nProcessing record 204: paulo afonso\nProcessing record 205: samusu\nCity Not found.Skipping...\nProcessing record 206: lompoc\nProcessing record 207: kulhudhuffushi\nProcessing record 208: mnogovershinnyy\nProcessing record 209: naftah\nCity Not found.Skipping...\nProcessing record 210: casino\nProcessing record 211: haapiti\nProcessing record 212: sechura\nProcessing record 213: los llanos de aridane\nProcessing record 214: mlonggo\nProcessing record 215: seddon\nProcessing record 216: vardo\nProcessing record 217: muros\nProcessing record 218: qaanaaq\nProcessing record 219: inhambane\nProcessing record 220: tiksi\nProcessing record 221: kyra\nProcessing record 222: portland\nProcessing record 223: burkhala\nCity Not found.Skipping...\nProcessing record 224: te anau\nProcessing record 225: dyakonovo\nCity Not found.Skipping...\nProcessing record 226: makat\nProcessing record 227: teneguiban\nCity Not found.Skipping...\nProcessing record 228: yeppoon\nProcessing record 229: zelenyy bor\nProcessing record 230: kargasok\n"
]
],
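[
[
"# Hedged sketch (optional, not part of the assignment): the loop above issues one request per city with no pause, and the time module imported earlier is never used. A small wrapper could throttle and retry the OpenWeatherMap calls; the one-second pause and the retry count are assumptions, not documented limits.\ndef get_weather(city, pause=1.0, retries=3):\n    for attempt in range(retries):\n        try:\n            response = requests.get(base_url + city, timeout=10)\n            time.sleep(pause)  # be gentle with the free API tier\n            return response.json()\n        except requests.exceptions.RequestException:\n            time.sleep(pause * (attempt + 1))  # simple backoff before retrying\n    return {}\n\n# Example usage (hypothetical city name):\n# data = get_weather('london')",
"_____no_output_____"
]
],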
[
[
"### Convert Raw Data to DataFrame\n* Export the city data into a .csv.\n* Display the DataFrame",
"_____no_output_____"
]
],
[
[
"# Make city data into DF and export the cities DataFrame to a CSV file\ncity_df = pd.DataFrame(city_data)\ncity_df.describe()\n\ncity_df.to_csv(output_data_file, index_label=\"City ID\")\ncity_df.head()\n",
"_____no_output_____"
],
[
"city_df.count()\ncity_df.describe()",
"_____no_output_____"
]
],
[
[
"## Inspect the data and remove the cities where the humidity > 100%.\n----\nSkip this step if there are no cities that have humidity > 100%. ",
"_____no_output_____"
]
],
[
[
"city_df.loc[city_df[\"Humidity\"] > 100] # No rows returned\n",
"_____no_output_____"
],
[
"# Get the indices of cities that have humidity over 100%.\n",
"_____no_output_____"
],
[
"# Make a new DataFrame equal to the city data to drop all humidity outliers by index.\n# Passing \"inplace=False\" will make a copy of the city_data DataFrame, which we call \"clean_city_data\".\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"## Plotting the Data\n* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.\n* Save the plotted figures as .pngs.",
"_____no_output_____"
],
[
"## Latitude vs. Temperature Plot",
"_____no_output_____"
]
],
[
[
"#Make plot\ntemp = city_df[\"Max Temp\"]\nlat = city_df[\"Lat\"]\n\nplt.scatter(lat, temp, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min temp\nplt.ylim(min(temp) - 5, max(temp) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Latitude v Temperature Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Temperature (Fahrenheit)\")\n\nplt.savefig(\"Latitude_Temperature.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#The plot shows that the further away a location is from the equater, the lower the max temperate\n#The more extreme cold temperates (< 0 degrees F) are all in the Northern hemisphere. ",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Humidity Plot",
"_____no_output_____"
]
],
[
[
"#Make plot\nhumid = city_df[\"Humidity\"]\nlat = city_df[\"Lat\"]\n\nplt.scatter(lat, humid, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(humid) - 5, max(humid) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Latitude v Humidity Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Humity (%)\")\n\nplt.savefig(\"Latitude_Humidity.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#There isn't too much of a trend for this scatter plot\n#The cities with a higher percent of humidity are near the equater and around a latitude of 50 ",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Cloudiness Plot",
"_____no_output_____"
]
],
[
[
"#Make plot\ncloud = city_df[\"Cloudiness\"]\nlat = city_df[\"Lat\"]\n\nplt.scatter(lat, cloud, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min cloudiness\nplt.ylim(min(cloud) - 5, max(cloud) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Latitude v Cloudiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Cloudiness (%)\")\n\nplt.savefig(\"Latitude_Cloudiness.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#Once again not too much of a general trend \n#But there does seem to be either high cloudines or no cloudiness for most the data (some in between)",
"_____no_output_____"
]
],
[
[
"## Latitude vs. Wind Speed Plot",
"_____no_output_____"
]
],
[
[
"#Make plot\nwind = city_df[\"Wind Speed\"]\nlat = city_df[\"Lat\"]\n\nplt.scatter(lat, wind, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min cloudiness\nplt.ylim(-0.75, max(wind) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Latitude v Cloudiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Windiness (mph)\")\n\nplt.savefig(\"Latitude_WindSpeed.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#Most of the data has wind speeds between 0 and 15 mph\n#Latitude does not seem to have an effect on wind speed",
"_____no_output_____"
]
],
[
[
"## Linear Regression",
"_____no_output_____"
]
],
[
[
"#Separate our data frame into different hemispheres\ncity_df.head()\nnor_hem_df = city_df.loc[city_df[\"Lat\"] >= 0]\n\nso_hem_df = city_df.loc[city_df[\"Lat\"] <= 0]",
"_____no_output_____"
]
],
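[
[
"# Hedged sketch (an optional refactor, not required by the assignment): the eight hemisphere plots below all repeat the same scatter + linregress recipe. A helper like this could produce any of them; the function name and arguments are assumptions.\ndef plot_regression(df, y_col, title, y_label, annot_xy, filename=None):\n    lat = df['Lat']\n    y = df[y_col]\n    plt.scatter(lat, y, marker='o', facecolors='blue', edgecolors='black', alpha=0.75)\n    plt.title(title)\n    plt.xlabel('Latitude')\n    plt.ylabel(y_label)\n    slope, intercept, rvalue, pvalue, stderr = linregress(lat, y)\n    regress_values = lat * slope + intercept\n    line_eq = 'y = ' + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2))\n    plt.plot(lat, regress_values, 'r-')\n    plt.annotate(line_eq, annot_xy, fontsize=15, color='red')\n    if filename is not None:\n        plt.savefig(filename)\n    plt.show()\n\n# Example usage (equivalent to the first plot below):\n# plot_regression(nor_hem_df, 'Max Temp', 'Northern Hemisphere - Latitude v Max Temperature Plot', 'Max Temperature (Fahrenheit)', (6, 10), 'NH_Latitude_Tem.png')",
"_____no_output_____"
]
],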
[
[
"#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\ntemp = nor_hem_df[\"Max Temp\"]\nlat = nor_hem_df[\"Lat\"]\n\nplt.scatter(lat, temp, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min temp\nplt.ylim(min(temp) - 5, max(temp) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Northern Hemisphere - Latitude v Max Temperature Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Max Temperature (Fahrenheit)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, temp)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(6,10),fontsize=15,color=\"red\")\n\nplt.savefig(\"NH_Latitude_Tem.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Northern Hemisphere and their max temperature\n#The further a city is from the equater(x > 0), the lower the max temp",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\ntemp = so_hem_df[\"Max Temp\"]\nlat = so_hem_df[\"Lat\"]\n\nplt.scatter(lat, temp, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min temp\nplt.ylim(min(temp) - 5, max(temp) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 2, max(lat) + 2)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Southern Hemisphere - Latitude v Max Temperature Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Max Temperature (Fahrenheit)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, temp)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(0, 80),fontsize=15,color=\"red\")\n\nplt.savefig(\"SH_Latitude_Temp.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Southern Hemisphere and their max temperature\n#The further a city is from the equater(x < 0), the lower the max temp",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\nhumid = nor_hem_df[\"Humidity\"]\nlat = nor_hem_df[\"Lat\"]\n\nplt.scatter(lat, humid, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(humid) - 5, max(humid) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Northern Hemisphere - Latitude v Humidity Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Humity (%)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, humid)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(48, 40),fontsize=15,color=\"red\")\n\nplt.savefig(\"NH_Latitude_Humidity.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Northern Hemisphere and their humidity percentage\n#The further a city is from the equater(x > 0), the higher percent humidity",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\nhumid = so_hem_df[\"Humidity\"]\nlat = so_hem_df[\"Lat\"]\n\nplt.scatter(lat, humid, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(humid) - 5, max(humid) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Southern Hemisphere - Latitude v Humidity Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Humity (%)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, humid)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(5,85),fontsize=15,color=\"red\")\n\nplt.savefig(\"SH_Latitude_Humidity.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Southern Hemisphere and their humidty percentage\n#The further a city is from the equater(x < 0), the lower the percent of humdity\n",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\ncloud = nor_hem_df[\"Cloudiness\"]\nlat = nor_hem_df[\"Lat\"]\n\nplt.scatter(lat, cloud, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(cloud) - 5, max(cloud) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Northern Hemisphere - Latitude v Cloudiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Humity (%)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, cloud)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(60,50),fontsize=15,color=\"red\")\n\nplt.savefig(\"NH_Latitude_Cloudiness.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Northern Hemisphere and their cloudiness percentage\n#In general, the further a city is from the equater(x > 0), the higher the cloudiness percentage",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\ncloud = so_hem_df[\"Cloudiness\"]\nlat = so_hem_df[\"Lat\"]\n\nplt.scatter(lat, cloud, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(cloud) - 5, max(cloud) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Southern Hemisphere - Latitude v Cloudiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Humity (%)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, cloud)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(0,60),fontsize=15,color=\"red\")\n\nplt.savefig(\"SH_Latitude_Cloudiness.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n#This plot is showing the relationship between latitude of cities in the Southern Hemisphere and their cloudiness percentage\n#In general, the further a city is from the equater(x < 0), the lower the cloudiness percentage",
"_____no_output_____"
]
],
[
[
"#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\nwind = nor_hem_df[\"Wind Speed\"]\nlat = nor_hem_df[\"Lat\"]\n\nplt.scatter(lat, wind, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(wind) - 5, max(wind) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Northern Hemisphere - Latitude v Windiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Windiness (mph)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, wind)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(6,30),fontsize=15,color=\"red\")\n\nplt.savefig(\"NH_Latitude_Windiness.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Northern Hemisphere and their wind speed\n#The slope in this case is very small. There is not a significant change in wind spped the further a city is from the equater",
"_____no_output_____"
]
],
[
[
"#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
],
[
[
"#Make plot\nwind = so_hem_df[\"Wind Speed\"]\nlat = so_hem_df[\"Lat\"]\n\nplt.scatter(lat, wind, marker=\"o\", facecolors=\"blue\", edgecolors=\"black\", alpha=0.75)\n\n# Set y lim based on max and min humudity\nplt.ylim(min(wind) - 5, max(wind) + 5)\n\n# Set the x lim based on max and min lat\nplt.xlim(min(lat) - 5, max(lat) + 5)\n\n# Create a title, x label, and y label for our chart\nplt.title(\"Southern Hemisphere - Latitude v Windiness Plot\")\nplt.xlabel(\"Latitude\")\nplt.ylabel(\"Windiness (mph)\")\n\n# Add linear regression Line\n(slope, intercept, rvalue, pvalue, stderr) = linregress(lat, wind)\nregress_values = lat * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.plot(lat,regress_values,\"r-\")\nplt.annotate(line_eq,(-50,23),fontsize=15,color=\"red\")\n\nplt.savefig(\"SH_Latitude_Windiness.png\")\n\n# Prints the scatter plot to the screen\nplt.show()\n\n#This plot is showing the relationship between latitude of cities in the Southern Hemisphere and their wind speed\n#The slope in this case is also small but there is a slight change in wind speed the further a city is from the equater",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d098548d0a7e46054b34364c453e7ad699125903 | 3,806 | ipynb | Jupyter Notebook | nidm/nidm-experiment/fbirn_clinical/.ipynb_checkpoints/abide_terms-checkpoint.ipynb | jbpoline/ni-dm | eb8859af2ee3ba3d7dcce4c92985af96ff8e12fe | [
"Apache-2.0"
] | 1 | 2015-10-29T07:12:01.000Z | 2015-10-29T07:12:01.000Z | nidm/nidm-experiment/fbirn_clinical/.ipynb_checkpoints/abide_terms-checkpoint.ipynb | jbpoline/ni-dm | eb8859af2ee3ba3d7dcce4c92985af96ff8e12fe | [
"Apache-2.0"
] | null | null | null | nidm/nidm-experiment/fbirn_clinical/.ipynb_checkpoints/abide_terms-checkpoint.ipynb | jbpoline/ni-dm | eb8859af2ee3ba3d7dcce4c92985af96ff8e12fe | [
"Apache-2.0"
] | 2 | 2016-11-28T05:55:55.000Z | 2021-10-10T22:54:06.000Z | 25.891156 | 128 | 0.533106 | [
[
[
"import os\nimport rdflib as rdf",
"_____no_output_____"
],
[
"#create graph\ng = rdf.Graph()",
"_____no_output_____"
],
[
"#add namespaces\nnidm = rdf.Namespace(\"http://nidm.nidash.org/\")\nprov = rdf.Namespace(\"http://www.w3.org/ns/prov#\")\nncit = rdf.Namespace(\"http://ncitt.ncit.nih.gov/\")\nnidash = rdf.Namespace(\"http://purl.org/nidash/nidm/\")\nabide = rdf.Namespace(\"http://fcon_1000.projects.nitrc.org/indi/abide/\")",
"_____no_output_____"
],
[
"g.bind('nidm', nidm)\ng.bind('prov', prov)\ng.bind('ncit', ncit)\ng.bind('nidash', nidash)\ng.bind('abide', abide)",
"_____no_output_____"
],
[
"#create entities for term definitions\ng.add((nidash[\"entity_FSIQ\"], rdf.RDF.type, prov[\"Entity\"]))\ng.add((nidash[\"entity_FSIQ\"], abide[\"term\"], rdf.Literal(\"ABIDE_FIQ\")))\ng.add((nidash[\"entity_FSIQ\"], prov[\"label\"], rdf.Literal(\"ABIDE vocabulary term\")))\ng.add((nidash[\"entity_FSIQ\"], prov[\"definition\"], rdf.Literal(\"Estimated Full Scale IQ = 127.8 - .78 * errors\")))\ng.add((nidash[\"entity_FSIQ\"], abide[\"form\"], rdf.Literal(\"WASI\")))",
"_____no_output_____"
],
[
"g.add((nidash[\"entity_PIQ\"], rdf.RDF.type, prov[\"Entity\"]))\ng.add((nidash[\"entity_PIQ\"], abide[\"term\"], rdf.Literal(\"ABIDE_PIQ\")))\ng.add((nidash[\"entity_PIQ\"], prov[\"label\"], rdf.Literal(\"ABIDE vocabulary term\")))\ng.add((nidash[\"entity_PIQ\"], prov[\"definition\"], rdf.Literal(\"Estimated Performance IQ = 119.4 - .42 * errors\")))\ng.add((nidash[\"entity_PIQ\"], abide[\"form\"], rdf.Literal(\"WASI\")))",
"_____no_output_____"
],
[
"g.add((nidash[\"entity_VIQ\"], rdf.RDF.type, prov[\"Entity\"]))\ng.add((nidash[\"entity_VIQ\"], abide[\"term\"], rdf.Literal(\"ABIDE_VIQ\")))\ng.add((nidash[\"entity_VIQ\"], prov[\"label\"], rdf.Literal(\"ABIDE vocabulary term\")))\ng.add((nidash[\"entity_VIQ\"], prov[\"definition\"], rdf.Literal(\"Estimated Verbal IQ = 128.7 - .89 * errors\")))\ng.add((nidash[\"entity_VIQ\"], abide[\"form\"], rdf.Literal(\"WASI\")))",
"_____no_output_____"
],
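[
"# Hedged sketch (a possible refactor, not part of the original graph): the three WASI term blocks above repeat the same five g.add() calls. A helper like this could add further ABIDE terms with less repetition; the function name is an assumption and only rdflib calls already used above are reused.\ndef add_abide_term(graph, entity_name, term, definition, form='WASI'):\n    subject = nidash['entity_' + entity_name]\n    graph.add((subject, rdf.RDF.type, prov['Entity']))\n    graph.add((subject, abide['term'], rdf.Literal(term)))\n    graph.add((subject, prov['label'], rdf.Literal('ABIDE vocabulary term')))\n    graph.add((subject, prov['definition'], rdf.Literal(definition)))\n    graph.add((subject, abide['form'], rdf.Literal(form)))\n\n# Example usage (equivalent to the VIQ block above):\n# add_abide_term(g, 'VIQ', 'ABIDE_VIQ', 'Estimated Verbal IQ = 128.7 - .89 * errors')",
"_____no_output_____"
],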
[
"g.serialize(\"abide_terms.ttl\", format='turtle')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d09856d63a8dbb3ddf6465c04a57140eb8db82a6 | 469,861 | ipynb | Jupyter Notebook | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox | ba1fea113065256d4981a71f7b4bece7299effd1 | [
"MIT"
] | 158 | 2017-11-09T14:56:31.000Z | 2022-03-26T17:26:20.000Z | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox | ba1fea113065256d4981a71f7b4bece7299effd1 | [
"MIT"
] | 8 | 2017-11-28T11:14:46.000Z | 2021-05-03T00:23:57.000Z | hmc/example4-leapfrog.ipynb | bjlkeng/sandbox | ba1fea113065256d4981a71f7b4bece7299effd1 | [
"MIT"
] | 77 | 2017-11-21T15:27:52.000Z | 2022-02-17T16:37:34.000Z | 846.596396 | 158,340 | 0.948291 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"# x = Acos(k/m t + \\theta) = 1\n# p = mx' = Ak/m sin(k/m t + \\theta)\n\nt = np.linspace(0, 2 * np.pi, 100)\nt",
"_____no_output_____"
]
],
[
[
"# Exact Equation",
"_____no_output_____"
]
],
[
[
"x, p = np.cos(t - np.pi), -np.sin(t - np.pi)\n\n\nfig = plt.figure(figsize=(5, 5))\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)",
"_____no_output_____"
]
],
[
[
"# Euler's Method Equation",
"_____no_output_____"
],
[
"# Euler's Method",
"_____no_output_____"
]
],
[
[
"# x' = p/m = p\n# p' = -kx + mg = -x\n# x = x + \\eps * p' = x + \\eps*(p)\n# p = p + \\eps * x' = p - \\eps*(x)\nfig = plt.figure(figsize=(5, 5))\nplt.title(\"Euler's Method (eps=0.1)\")\nplt.xlabel(\"position (q)\")\nplt.ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.1\nsteps = 100\n\nfor i in range(0, steps, 1):\n x_next = x_prev + eps * p_prev\n p_next = p_prev - eps * x_prev\n plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
]
],
[
[
"# Modified Euler's Method",
"_____no_output_____"
]
],
[
[
"# x' = p/m = p\n# p' = -kx + mg = -x\n# x = x + \\eps * p' = x + \\eps*(p)\n# p = p + \\eps * x' = p - \\eps*(x)\nfig = plt.figure(figsize=(5, 5))\nplt.title(\"Modified Euler's Method (eps=0.2)\")\nplt.xlabel(\"position (q)\")\nplt.ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.2\nsteps = int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n p_next = p_prev - eps * x_prev\n x_next = x_prev + eps * p_next\n plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
],
[
"# x' = p/m = p\n# p' = -kx + mg = -x\n# x = x + \\eps * p' = x + \\eps*(p)\n# p = p + \\eps * x' = p - \\eps*(x)\nfig = plt.figure(figsize=(5, 5))\nplt.title(\"Modified Euler's Method (eps=0.2)\")\nplt.xlabel(\"position (q)\")\nplt.ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0.1\np_prev = 1\neps = 1.31827847281\n#eps = 1.31827847281\nsteps = 50 #int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n p_next = p_prev - eps * x_prev\n x_next = x_prev + eps * p_next\n plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
]
],
[
[
"# Leapfrog Method",
"_____no_output_____"
]
],
[
[
"# x' = p/m = p\n# p' = -kx + mg = -x\n# x = x + \\eps * p' = x + \\eps*(p)\n# p = p + \\eps * x' = p - \\eps*(x)\nfig = plt.figure(figsize=(5, 5))\nplt.title(\"Leapfrog Method (eps=0.2)\")\nplt.xlabel(\"position (q)\")\nplt.ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.2\nsteps = int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n p_half = p_prev - eps/2 * x_prev\n x_next = x_prev + eps * p_half\n p_next = p_half - eps/2 * x_next\n plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
],
[
"# x' = p/m = p\n# p' = -kx + mg = -x\n# x = x + \\eps * p' = x + \\eps*(p)\n# p = p + \\eps * x' = p - \\eps*(x)\nfig = plt.figure(figsize=(5, 5))\nplt.title(\"Leapfrog Method (eps=0.9)\")\nplt.xlabel(\"position (q)\")\nplt.ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n plt.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.9\nsteps = 3 * int(2*np.pi / eps + 0.1)\n\nfor i in range(0, steps, 1):\n p_half = p_prev - eps/2 * x_prev\n x_next = x_prev + eps * p_half\n p_next = p_half - eps/2 * x_next\n plt.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
]
],
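[
[
"# Hedged sketch (a generalization, not from the original notebook): the cells above hard-code the harmonic-oscillator updates. A generic leapfrog step only needs the two gradients; for H = x^2/2 + p^2/2 they are grad_U(x) = x and grad_K(p) = p, which reproduces the updates used above.\ndef leapfrog(grad_U, grad_K, x0, p0, eps, steps):\n    xs, ps = [x0], [p0]\n    x, p = x0, p0\n    for _ in range(steps):\n        p = p - eps / 2 * grad_U(x)  # half step in momentum\n        x = x + eps * grad_K(p)      # full step in position\n        p = p - eps / 2 * grad_U(x)  # second half step in momentum\n        xs.append(x)\n        ps.append(p)\n    return np.array(xs), np.array(ps)\n\n# Example: the same trajectory as the eps=0.2 leapfrog cell above\nxs, ps = leapfrog(lambda q: q, lambda p: p, 0.0, 1.0, 0.2, int(2 * np.pi / 0.2))\nplt.figure(figsize=(5, 5))\nplt.plot(np.cos(t - np.pi), -np.sin(t - np.pi), color='black')\nplt.plot(xs, ps, 'bo-', markersize=5)\nplt.xlabel('position (q)')\nplt.ylabel('momentum (p)')\nplt.show()",
"_____no_output_____"
]
],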
[
[
"# Combined Figure",
"_____no_output_____"
]
],
[
[
"fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15,15))\n\n# subplot1\nax1.set_title(\"Euler's Method (eps=0.1)\")\nax1.set_xlabel(\"position (q)\")\nax1.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax1.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.1\nsteps = 100\n\nfor i in range(0, steps, 1):\n x_next = x_prev + eps * p_prev\n p_next = p_prev - eps * x_prev\n ax1.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next\n \n# subplot2 \nax2.set_title(\"Modified Euler's Method (eps=0.2)\")\nax2.set_xlabel(\"position (q)\")\nax2.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax2.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.2\nsteps = int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n p_next = p_prev - eps * x_prev\n x_next = x_prev + eps * p_next\n ax2.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next\n \n# subplot3\nax3.set_title(\"Leapfrog Method (eps=0.2)\")\nax3.set_xlabel(\"position (q)\")\nax3.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax3.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.2\nsteps = int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n p_half = p_prev - eps/2 * x_prev\n x_next = x_prev + eps * p_half\n p_next = p_half - eps/2 * x_next\n ax3.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next\n \n# subplot4\nax4.set_title(\"Leapfrog Method (eps=0.9)\")\nax4.set_xlabel(\"position (q)\")\nax4.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax4.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = 0\np_prev = 1\neps = 0.9\nsteps = 3 * int(2*np.pi / eps + 0.1)\n\nfor i in range(0, steps, 1):\n p_half = p_prev - eps/2 * x_prev\n x_next = x_prev + eps * p_half\n p_next = p_half - eps/2 * x_next\n ax4.plot([x_prev, x_next], [p_prev, p_next], marker='o', color='blue', markersize=5)\n x_prev, p_prev = x_next, p_next",
"_____no_output_____"
]
],
[
[
"# Combined Figure - Square",
"_____no_output_____"
]
],
[
[
"fig, ((ax1, ax2)) = plt.subplots(1, 2, figsize=(15, 7.5))\n\n# subplot1\nax1.set_title(\"Euler's Method (eps=0.2)\")\nax1.set_xlabel(\"position (q)\")\nax1.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax1.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n \ndef draw_square(ax, x, p, **args):\n assert len(x) == len(p) == 4\n x = list(x) + [x[0]]\n p = list(p) + [p[0]]\n ax.plot(x, p, **args)\n \ndef euler_update(x, p, eps):\n assert len(x) == len(p) == 4\n x_next = [0.]* 4\n p_next = [0.]* 4\n for i in range(4):\n x_next[i] = x[i] + eps * p[i]\n p_next[i] = p[i] - eps * x[i]\n return x_next, p_next\n\ndef mod_euler_update(x, p, eps):\n assert len(x) == len(p) == 4\n x_next = [0.]* 4\n p_next = [0.]* 4\n for i in range(4):\n x_next[i] = x[i] + eps * p[i]\n p_next[i] = p[i] - eps * x_next[i]\n return x_next, p_next\n\ndelta = 0.1\neps = 0.2\n\n\nx_prev = np.array([0.0, 0.0, delta, delta]) + 0.0\np_prev = np.array([0.0, delta, delta, 0.0]) + 1.0\nsteps = int(2*np.pi / eps)\n\nfor i in range(0, steps, 1):\n draw_square(ax1, x_prev, p_prev, marker='o', color='blue', markersize=5)\n x_next, p_next = euler_update(x_prev, p_prev, eps)\n x_prev, p_prev = x_next, p_next\n \n# subplot2 \nax2.set_title(\"Modified Euler's Method (eps=0.2)\")\nax2.set_xlabel(\"position (q)\")\nax2.set_ylabel(\"momentum (p)\")\nfor i in range(0, len(t), 1):\n ax2.plot(x[i:i+2], p[i:i+2], color='black', markersize=0)\n\nx_prev = np.array([0.0, 0.0, delta, delta]) + 0.0\np_prev = np.array([0.0, delta, delta, 0.0]) + 1.0\n\nfor i in range(0, steps, 1):\n draw_square(ax2, x_prev, p_prev, marker='o', color='blue', markersize=5)\n x_next, p_next = mod_euler_update(x_prev, p_prev, eps)\n x_prev, p_prev = x_next, p_next\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0986a9685543b6dafea9f5834139199c0174617 | 857,380 | ipynb | Jupyter Notebook | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks | a07b70c2c542b28dcb34144246e0a02527cbe7eb | [
"MIT"
] | null | null | null | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks | a07b70c2c542b28dcb34144246e0a02527cbe7eb | [
"MIT"
] | null | null | null | Measure_velocities.ipynb | stephenpardy/Offsets_Notebooks | a07b70c2c542b28dcb34144246e0a02527cbe7eb | [
"MIT"
] | null | null | null | 83.958089 | 820 | 0.823404 | [
[
[
"# Import and settings",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.path import Path\nimport matplotlib.patches as patches\nfrom snaptools import manipulate as man\nfrom snaptools import snapio\nfrom snaptools import plot_tools\nfrom snaptools import utils\nfrom scipy.stats import binned_statistic\nfrom mpl_toolkits.axes_grid1 import Grid\n\nfrom snaptools import simulation\nfrom snaptools import snapshot\nfrom snaptools import measure\nfrom pathos.multiprocessing import ProcessingPool as Pool\n\n\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport h5py\nimport pandas as PD\nfrom scipy.interpolate import interp2d\n\ncolors = ['#332288', '#CC6677', '#6699CC', '#117733']\nimport matplotlib \nmatplotlib.rc('xtick', labelsize=10) \nmatplotlib.rc('ytick', labelsize=10) \nmatplotlib.rc('lines', linewidth=3)\n%matplotlib inline\n",
"_____no_output_____"
]
],
[
[
"# Snapshot",
"_____no_output_____"
]
],
[
[
"settings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)\nsnap = io.load_snap('/usr/users/spardy/coors2/hpc_backup/working/Gas/Dehnen_LMC/collision/output_Dehnen_smc_45deg/snap_007.hdf5')\nvelfield = snap.to_velfield(parttype='gas', write=False, first_only=True, com=True)\ncentDict = snap.find_centers(settings)\ncom1, com2, gal1id, gal2id = snap.center_of_mass('stars')\n\nvelx = snap.vel['stars'][gal1id, 0]\nvely = snap.vel['stars'][gal1id, 1]\nvelz = snap.vel['stars'][gal1id, 2]\nposx = snap.pos['stars'][gal1id, 0]\nposy = snap.pos['stars'][gal1id, 1]\nposz = snap.pos['stars'][gal1id, 2]\n\nposx -= com1[0]\nposy -= com1[1]\nposz -= com1[2]\n\nx_axis = np.linspace(-15, 15, 512)\ny_axis = x_axis\nX, Y = np.meshgrid(x_axis, y_axis)\nangle = np.arctan2(X, Y)\nR = np.sqrt(X**2 + Y**2)*(-1)**(angle < 0) \n# Use arctan to make all R values negative on other side of Y axis\n",
"_____no_output_____"
],
[
"#sparse_vfield = snap.to_velfield(lengthX=10, lengthY=10, BINS=128, write=False, first_only=True, com=True)\n\nsettings = plot_tools.make_defaults(first_only=True, com=True, xlen=10, ylen=10, in_min=0, BINS=128)\nZ2 = snap.to_cube(theta=45, write=False, first_only=True, com=True, BINS=128, lengthX=10, lengthY=10)\nmom1 = np.zeros((128, 128))\nvelocities = np.linspace(-200, 200, 100)\nfor i in xrange(Z2.shape[2]):\n mom1 += Z2[:,:,i]*velocities[i]\n\nmom1 /= np.sum(Z2, axis=2)\nsparse_vfield = mom1\n\nsparse_vfield[sparse_vfield != sparse_vfield] = 0\nsparse_X, sparse_Y = np.meshgrid(np.linspace(-10, 10, 128), np.linspace(-10, 10, 128))\nwith file('./vels_i45deg.txt', 'w') as velfile:\n velfile.write(' X Y VEL EVEL\\n')\n velfile.write(' asec asec km/s km/s\\n')\n velfile.write('-----------------------------------------\\n')\n for xi, yi, vi in zip(sparse_X.flatten(), sparse_Y.flatten(), sparse_vfield.flatten()):\n velfile.write('%3.2f %3.2f %3.2f 0.001\\n' % (xi, yi, vi))",
"_____no_output_____"
],
[
"com1, com2, gal1id, gal2id = snap.center_of_mass('stars')\nv1 = snap.vel['stars'][gal1id, :].mean(axis=0)\nv2 = snap.vel['stars'][gal2id, :].mean(axis=0)\nprint(np.sqrt(np.sum((v1-v2)**2)))",
"255.248\n"
]
],
[
[
"# Measure Velocities from Velfield",
"_____no_output_____"
]
],
[
[
"# Now try with the velfield\nsettings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)\nbinDict = snap.bin_snap(settings)\nZ2 = binDict['Z2']\nmeasurements = man.fit_contours(Z2, settings, plot=True)\n#measurementsV2 = man.fit_contours(~np.isnan(velfield), settingsV, plot=True, numcontours=1)\n\nlength = 10\nthick = 0.1 \ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n\nfig, axes = plt.subplots(2, 2, figsize=(10, 10))\naxes = axes.flatten()\nif len(np.where(measurements['eccs'] > 0.5)[0]) > 0:\n bar_ind = np.max(np.where(measurements['eccs'] > 0.5)[0])\n theta = measurements['angles'][bar_ind]\nelse:\n theta = measurements['angles'][measurements['angles'] == measurements['angles']][-1]\n \n#print(theta)\nr = 0\n\nim = axes[1].imshow(velfield, origin='lower', extent=[-20, 20, -20, 20], cmap='gnuplot')\nim = axes[3].imshow(velfield, origin='lower', extent=[-20, 20, -20, 20], cmap='gnuplot')\n\n#fig.colorbar(im)\n#axes[1].add_artist(measurementsV['ellipses'][0])\n#axes[3].add_artist(measurementsV2['ellipses'][0])\n\n\nfor i, t in enumerate(np.radians([theta, theta+90])):\n x = r*np.sin(t)\n y = r*np.cos(t)\n verts = [\n [length*np.cos(t)-thick*np.sin(t)-x,\n length*np.sin(t)+thick*np.cos(t)+y],\n [length*np.cos(t)+thick*np.sin(t)-x,\n length*np.sin(t)-thick*np.cos(t)+y],\n [-length*np.cos(t)+thick*np.sin(t)-x,\n -length*np.sin(t)-thick*np.cos(t)+y],\n [-length*np.cos(t)-thick*np.sin(t)-x,\n -length*np.sin(t)+thick*np.cos(t)+y],\n [0, 0]]\n path = Path(verts, codes)\n within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)\n s = R.flatten()[within_box].argsort()\n dist = R.flatten()[within_box][s]\n vel = velfield.flatten()[within_box][s]\n vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)\n \n rcoord = binEdges[np.nanargmin(np.abs(vel))] \n print(np.abs(vel))\n print(rcoord)\n axes[i*2].set_title(str(i))\n divider = make_axes_locatable(axes[i*2])\n axOff = divider.append_axes(\"bottom\", size=1.5, pad=0.1)\n axes[i*2].set_xticks([])\n axOff.set_xlabel('R [kpc]')\n axOff.set_ylabel('Velocity [km s$^{-1}$]')\n axOff.axvline(x=rcoord)\n \n axOff.plot(binEdges[:-1], vel, 'b.')\n axes[i*2].plot(binEdges[:-2], np.diff(vel), 'b.')\n #diffs = np.abs(np.diff(vel))\n #if np.any(diffs == diffs):\n \n #rcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]\n xcoord = np.cos(t)*rcoord\n ycoord = np.sin(t)*rcoord\n\n patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.5)\n axes[1+2*i].add_patch(patch)\n #axes[1+2*i].text((1+length)*np.cos(t)-thick*np.sin(t)-x,\n # (1+length)*np.sin(t)+thick*np.cos(t)+y,\n # str(i), fontsize=15, color='black')\n\n axes[1+2*i].plot(xcoord, ycoord, 'k+', markersize=15, markeredgewidth=2)\n axes[1+2*i].plot(centDict['barCenter'][0], centDict['barCenter'][1], 'g^', markersize=15, markeredgewidth=1, markerfacecolor=None)\n axes[1+2*i].plot(centDict['haloCenter'][0], centDict['haloCenter'][1], 'bx', markersize=15, markeredgewidth=2)\n axes[1+2*i].plot(centDict['diskCenters'][0], centDict['diskCenters'][1], 'c*', markersize=15, markeredgewidth=2)\n\n \n centDict\n \n#plt.tight_layout()\nplt.show()",
"[ nan nan nan nan nan\n nan nan nan nan nan\n nan 6.46727908 6.31208713 6.91340601 nan\n 12.430347 nan 18.06400333 20.09083625 22.40071167\n 26.33220346 27.68547241 24.03372598 16.96297342 12.68978974\n 7.77657353 0.34139155 8.32692741 16.42985223 24.54370133\n 33.14937827 41.9204495 45.05136749 47.80114574 nan\n nan 50.64748577 50.67692256 51.60935358 nan\n nan nan nan nan nan\n nan nan nan nan nan]\n0.399565865949\n[ nan nan nan nan nan\n nan nan nan nan nan\n nan nan 109.75483322 107.53620024 104.92182311\n 101.33595623 97.22460616 87.78463693 68.85196684 55.13417006\n 49.65257987 39.54559096 27.11274878 12.92937899 4.8741186\n 15.32059164 31.38895331 45.07653135 56.33495987 64.39997895\n 71.6718257 75.99781949 79.2800214 nan 87.16691045\n nan nan nan nan nan\n nan nan nan nan nan\n nan nan nan nan nan]\n-0.399565865949\n"
],
[
"print np.nanargmin(np.abs(vel))\nprint binEdges[np.nanargmin(np.abs(vel))]",
"24\n-0.399565865949\n"
],
[
"#fig, axes = plt.subplots(1, 2, figsize=(20, 10))\n\nfig, axis = plt.subplots(1, figsize=(10,10))\n#plot_tools.plot_contours(density, measurements, 0, -1, [0, 0], settings, axis=axis)\n\nim = axis.imshow(velfield, origin='lower', extent=[-15, 15, -15, 15], cmap='gnuplot')\n#axes[1].imshow(mom1, origin='lower', extent=[-15, 15, -15, 15], cmap='gnuplot')\nfig.colorbar(im)\n\nlength = 10\nthick = 0.1 \n\ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n \ntheta = 110\nfor i, r in enumerate(xrange(-5, 5, 1)):\n x = r*np.sin(np.radians(theta))\n y = r*np.cos(np.radians(theta))\n verts = [\n [length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,\n length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y],\n [length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta))-x,\n length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))+y],\n [-length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta))-x,\n -length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))+y],\n [-length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,\n -length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y],\n [0, 0]]\n path = Path(verts, codes)\n patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.75)\n axis.add_patch(patch)\n axis.text((1+length)*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta))-x,\n (1+length)*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))+y,\n str(i), fontsize=15, color='black')\n\n#axes[0].set_xlim(-15,15)\n#axes[0].set_ylim(-15,15)",
"_____no_output_____"
],
[
"# Now try with the velfield\nlength = 10\nthick = 0.1 \ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n\nfig, axes = plt.subplots(2, 5, figsize=(20, 6))\naxes = axes.flatten()\ntheta = np.radians(110)\nfor i, r in enumerate(xrange(-5, 5, 1)):\n x = r*np.sin(theta)\n y = r*np.cos(theta)\n verts = [\n [length*np.cos(theta)-thick*np.sin(theta)-x,\n length*np.sin(theta)+thick*np.cos(theta)+y],\n [length*np.cos(theta)+thick*np.sin(theta)-x,\n length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)+thick*np.sin(theta)-x,\n -length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)-thick*np.sin(theta)-x,\n -length*np.sin(theta)+thick*np.cos(theta)+y],\n [0, 0]]\n path = Path(verts, codes)\n within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)\n s = Y.flatten()[within_box].argsort()\n dist = Y.flatten()[within_box][s]\n vel = velfield.flatten()[within_box][s]\n vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)\n axes[i].set_title(str(i))\n divider = make_axes_locatable(axes[i])\n axOff = divider.append_axes(\"bottom\", size=1, pad=0.1)\n axes[i].set_xticks([])\n\n axOff.plot(binEdges[:-1], vel, 'b.')\n axes[i].plot(binEdges[:-2], np.diff(vel), 'b.')\n xcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]\n ycoord = np.tan(theta)*xcoord\n print(xcoord, ycoord) \n \n#plt.tight_layout()\nplt.show()\n",
"_____no_output_____"
],
[
"!ls ../",
"Astro715_HW3.ipynb\t Illustris.ipynb\t plottingtest.ipynb\r\nData\t\t\t Impact_parameter.ipynb PythonNotebooks\r\nDehnen_Burkert_fit.ipynb isima_notebooks\t Stream_Notebooks\r\nEllipse_Stuff.ipynb\t llustrisPlots.ipynb\t test.out\r\nGalCoords.ipynb\t\t Mathematica_Notebooks TestPotential.ipynb\r\nHaro11.ipynb\t\t Offsets_Notebooks\t vels_i45deg.txt\r\nHaro11_planning.ipynb\t outfile.txt\r\n"
]
],
[
[
"##Plot 2d velocities vs. disk fit",
"_____no_output_____"
]
],
[
[
"names = [r'$\\theta = 45$', r'$\\theta = 90$',\n r'$\\theta = 0$', r'$\\theta = 0$ - Retrograde']\n\nfig = plt.figure(figsize=(15, 10))\ncolors = ['#332288', '#CC6677', '#6699CC', '#117733']\ngrid = Grid(fig, 111,\n nrows_ncols=(2, 2),\n axes_pad=0.0,\n label_mode=\"L\",\n share_all=True\n )\n\ngroups = ['45deg',\n '90deg',\n '0deg',\n '0deg_retro']\n\nfor group, ax in zip(groups, grid):\n with h5py.File('../Data/offSetsDehnen_best.hdf5', 'r') as offsets:\n centers = offsets['/stars/%s/' % group]\n haloCenters = centers['halo_pos'][()]\n diskCenters = centers['disk_pos'][()]\n times = centers['time'][()]\n \n velcents2d = np.loadtxt('/usr/users/spardy/coors/data/2dVels/xy_%s.txt' % group)\n velcents2d = np.array(velcents2d).reshape(len(velcents2d)/2, 2, order='F')\n\n\n ax.plot(times[:-1],\n np.sqrt(np.sum((diskCenters[:-1, :]-haloCenters[:-1, :])**2, axis=1)),\n label='Photometric')\n\n ax.plot(times[:-1],\n np.sqrt(np.sum((velcents2d-haloCenters[:-1, :])**2, axis=1)),\n label='2D Velocity', color=colors[1]) \n \nfor i, (ax, name) in enumerate(zip(grid, names)):\n if i == 0: \n yticks = ax.yaxis.get_major_ticks()\n yticks[0].label1.set_visible(False)\n ax.set_xlim(0, 1.9)\n #ax.set_ylim(0, 4.0)\n #ax.errorbar([-0.75], [1.1], yerr=distErrs, label='Typical Error')\n ax.legend(fancybox=True, loc='upper right')\n if (i == 0) or (i == 2):\n ax.set_ylabel('Offset from Halo \\nCenter [kpc]', fontsize=20)\n #axOff.set_ylabel('D$_{Disk}$ - D$_{Bar}$ \\n [kpc]', fontsize=20)\n ax.set_xlabel(\"Time [Gyr]\", fontsize=20)\n ax.annotate(name, xy=(0.05, 0.8), color='black', xycoords='axes fraction',\n bbox=dict(facecolor='gray', edgecolor='black',\n boxstyle='round, pad=1', alpha=0.5))\n \n\nplt.subplots_adjust(wspace=0.04) # Default is 0.2\nplt.savefig('../../Offsets_paper/plots/velocity_centers.pdf', dpi=600)",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(1, 3, figsize=(22.5, 7.5))\n\nwith h5py.File('/usr/users/spardy/velocity_offsets.hdf5', 'r') as velFile:\n grp = velFile['Dehnen_45deg/']\n\n velcents2d = np.loadtxt('/usr/users/spardy/coors/data/2dVels/xy.txt')\n velcents2d = np.array(velcents2d).reshape(len(velcents2d)/2, 2, order='F')\n\n # Minor Axis\n \n velCenters = np.sqrt(np.sum(grp['velCenters'][()]**2, axis=1))\n velCent = velCenters[:, 1]\n axes[0].plot(times, velCent, zorder=-1, label='Minor-Axis', color=colors[1], linestyle='--')\n \n velCent = PD.rolling_mean(velCenters[:, 1], 3)\n \n times = grp['time'][()]\n axes[0].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')\n\n axes[0].plot(times, velCent, zorder=-1, label='Avg.', color='gray')\n\n # major axis\n \n velCent = velCenters[:, 0]\n axes[1].plot(times, velCent, zorder=-1, label='Major-Axis', color=colors[1], linestyle='--')\n \n velCent = PD.rolling_mean(velCenters[:, 1], 3)\n \n axes[1].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')\n\n axes[1].plot(times, velCent, zorder=-1, label='Avg.', color='gray') \n\n # 2d fit\n \n axes[2].plot(times, np.sqrt(np.sum(grp['diskCenters'][()]**2, axis=1)), label='Disk')\n\n axes[2].plot(times, np.sqrt(np.sum(velcents2d**2, axis=1)), label='2D Velocity', color=colors[1])\n\n \nfor axis in axes:\n axis.legend() \n axis.set_xlabel('Time [Gyr]')\n axis.set_ylabel('Distance from Frame Center [kpc]')",
"_____no_output_____"
],
[
"data = np.loadtxt(\"/usr/users/spardy/coors/data/2dVels/vel008_0.txt\", skiprows=3, usecols=(0,1,2))\nmodel = np.loadtxt(\"/usr/users/spardy/coors/data/2dVels/LMC_OUT_0/vel008_0.mod\", skiprows=2, usecols=(0,1,2))\n\n#dataX = data[:, 0].reshape(256, 256)\n#dataY = data[:, 1].reshape(256, 256)\ndataZ = data[:, 2].reshape(256, 256)\nprint dataZ.shape\n#ataF = interp2d(data[:, 0], data[:, 1], data[:, 2]\n\nprint model.shape\nbinsize = 20./256.\nXind = np.array(np.floor((model[:, 0]+10)/binsize)).astype(int)\nYind = np.array(np.floor((model[:, 1]+10)/binsize)).astype(int)\n\n#modelX = model[:, 0].reshape(sz, sz)\n#modelY = model[:, 1].reshape(sz, sz)\n#modelZ = model[:, 2].reshape(sz, sz)\n\n#XIND, YIND = np.meshgrid(Xind, Yind)\n\nsparseImg = np.ones((256, 256))*np.nan\n#sparseImg[XIND, YIND] = dataZ[XIND, YIND]\n\nsparseModel = np.ones((256, 256))*np.nan\n\nfor xi, yi, z in zip(Xind, Yind, model[:, 2]):\n sparseModel[xi, yi] = z\n sparseImg[Xind, Yind] = dataZ[Xind, Yind]",
"(256, 256)\n(10885, 3)\n"
],
[
"fig, axes = plt.subplots(1, 3, figsize=(15, 5))\naxes[0].imshow(sparseImg, extent=[-10, 10, -10, 10])\naxes[1].imshow(sparseModel.T, extent=[-10, 10, -10, 10])\naxes[2].imshow(sparseModel.T-sparseImg, extent=[-10, 10, -10, 10])\n#axes[1].plot(model[:, 0], model[:, 2])",
"_____no_output_____"
]
],
[
[
"# OLD STUFF",
"_____no_output_____"
]
],
[
[
"theta = np.radians(20)\nr = 0\nx = r*np.sin(theta)\ny = r*np.cos(theta)\nverts = [\n [length*np.cos(theta)-thick*np.sin(theta)-x,\n length*np.sin(theta)+thick*np.cos(theta)+y],\n [length*np.cos(theta)+thick*np.sin(theta)-x,\n length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)+thick*np.sin(theta)-x,\n -length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)-thick*np.sin(theta)-x,\n -length*np.sin(theta)+thick*np.cos(theta)+y],\n [0, 0]]\npath = Path(verts, codes)\nwithin_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)\ns = Y.flatten()[within_box].argsort()\ndist = Y.flatten()[within_box][s]\nvel = velfield.flatten()[within_box][s]\nvel, binEdges, binNum = binned_statistic(dist, vel, bins=50)\nxcoord = binEdges[np.nanargmax(np.abs(np.diff(vel)))]\nycoord = np.tan(theta)*xcoord\nprint(xcoord, ycoord)",
"_____no_output_____"
],
[
"# MOM1 maps\nsettings = plot_tools.make_defaults(first_only=True, com=True, xlen=20, ylen=20, in_min=0)\nZ2 = snap.to_cube(theta=20, write=False, first_only=True, com=True)\nmom1 = np.zeros((512, 512))\nvelocities = np.linspace(-200, 200, 100)\nfor i in xrange(Z2.shape[2]):\n mom1 += Z2[:,:,i]*velocities[i]\n\nmom1 /= np.sum(Z2, axis=2)\nx_axis = np.linspace(-15, 15, 512)\ny_axis = x_axis\nX, Y = np.meshgrid(x_axis, y_axis)\ndensity = np.sum(Z2, axis=2)\ndensity[density > 0] = np.log10(density[density > 0])\nsettings = plot_tools.make_defaults(xlen=20, ylen=20, in_min=0, in_max=6)\nmeasurements = man.fit_contours(density, settings, plot=True)",
"_____no_output_____"
],
[
"#Using the moment1 map\nlength = 10\nthick = 0.1 \ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n\nfig, axes = plt.subplots(2, 5, figsize=(20, 6))\naxes = axes.flatten()\ntheta = np.radians(measurements['angles'][0]-90)\nfor i, r in enumerate(xrange(-5, 5, 1)):\n x = r*np.sin(theta)\n y = r*np.cos(theta)\n verts = [\n [length*np.cos(theta)-thick*np.sin(theta)-x,\n length*np.sin(theta)+thick*np.cos(theta)+y],\n [length*np.cos(theta)+thick*np.sin(theta)-x,\n length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)+thick*np.sin(theta)-x,\n -length*np.sin(theta)-thick*np.cos(theta)+y],\n [-length*np.cos(theta)-thick*np.sin(theta)-x,\n -length*np.sin(theta)+thick*np.cos(theta)+y],\n [0, 0]]\n path = Path(verts, codes)\n within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)\n s = X.flatten()[within_box].argsort()\n dist = X.flatten()[within_box][s]\n vel = mom1.flatten()[within_box][s]\n vel, binEdges, binNum = binned_statistic(dist, vel, bins=50)\n axes[i].set_title(str(i))\n divider = make_axes_locatable(axes[i])\n axOff = divider.append_axes(\"bottom\", size=1, pad=0.1)\n axes[i].set_xticks([])\n\n axOff.plot(binEdges[:-1], vel, 'b.')\n axes[i].plot(binEdges[:-2], np.diff(vel), 'b.')\n \n \n#plt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"for i, theta in enumerate(xrange(0, 180, 18)):\n verts = [\n [length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),\n length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))],\n [length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta)),\n length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))],\n [-length*np.cos(np.radians(theta))+thick*np.sin(np.radians(theta)),\n -length*np.sin(np.radians(theta))-thick*np.cos(np.radians(theta))],\n [-length*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),\n -length*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta))],\n [0, 0]]\n path = Path(verts, codes)\n patch = patches.PathPatch(path, facecolor='none', lw=2, alpha=0.75)\n axes[1].add_patch(patch)\n axes[1].text((1+length)*np.cos(np.radians(theta))-thick*np.sin(np.radians(theta)),\n (1+length)*np.sin(np.radians(theta))+thick*np.cos(np.radians(theta)),\n str(i), fontsize=15)",
"_____no_output_____"
],
[
"x1 = 10\ndy = 0.1\n\ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n\nfig, axes = plt.subplots(2, 5, figsize=(20, 4))\naxes = axes.flatten()\nfor i, theta in enumerate(xrange(0, 180, 18)):\n verts = [\n [x1*np.cos(np.radians(theta))-dy*np.sin(np.radians(theta)),\n x1*np.sin(np.radians(theta))+dy*np.cos(np.radians(theta))],\n [x1*np.cos(np.radians(theta))+dy*np.sin(np.radians(theta)),\n x1*np.sin(np.radians(theta))-dy*np.cos(np.radians(theta))],\n [-x1*np.cos(np.radians(theta))+dy*np.sin(np.radians(theta)),\n -x1*np.sin(np.radians(theta))-dy*np.cos(np.radians(theta))],\n [-x1*np.cos(np.radians(theta))-dy*np.sin(np.radians(theta)),\n -x1*np.sin(np.radians(theta))+dy*np.cos(np.radians(theta))],\n [0, 0]]\n path = Path(verts, codes)\n within_box = path.contains_points(np.array([X.flatten(), Y.flatten()]).T)\n axes[i].plot(X.flatten()[within_box], mom1.flatten()[within_box], 'b.')\n axes[i].set_title(str(i))\n \nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"x1 = 10\ndy = 0.1\n\ncodes = [Path.MOVETO,\nPath.LINETO,\nPath.LINETO,\nPath.LINETO,\nPath.CLOSEPOLY,\n]\n\nfig, axes = plt.subplots(2, 5, figsize=(20, 4))\naxes = axes.flatten()\nfor i, y in enumerate(xrange(-10, 10, 2)):\n verts = [\n [x1, dy+y],\n [x1, -dy+y],\n [-x1, -dy+y],\n [-x1, dy+y],\n [0, 0]]\n path = Path(verts, codes)\n patch = patches.PathPatch(path, facecolor='none', lw=2)\n#axes[i].add_patch(patch)\n#axes[i].set_xlim(-20,20)\n#axes[i].set_ylim(-20,20)\n within_box = path.contains_points(np.array([posx, posy]).T)\n axes[i].plot(posx[within_box[::10]], vely[within_box[::10]], 'b.')\n\nplt.show()\n\n#fig = plt.figure()\n#ax = fig.add_subplot(111)\n#patch = patches.PathPatch(path, facecolor='none', lw=2)\n#ax.add_patch(patch)\n#ax.set_xlim(-20,20)\n#ax.set_ylim(-20,20)\n#plt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d098728638307ce05c2cb49f3fc5e067de0dd106 | 315,149 | ipynb | Jupyter Notebook | RobotUtadeo/urdf/rtb_serial_RobotUtadeo.ipynb | olmerg/rtb_serial_robot | c25828e50dca2498dd39ae57767be528fc87b556 | [
"MIT"
] | 2 | 2021-05-06T16:24:41.000Z | 2021-11-24T11:09:32.000Z | RobotUtadeo/urdf/rtb_serial_RobotUtadeo.ipynb | olmerg/rtb_serial_robot | c25828e50dca2498dd39ae57767be528fc87b556 | [
"MIT"
] | 2 | 2021-05-18T16:45:32.000Z | 2021-05-18T17:30:25.000Z | RobotUtadeo/urdf/rtb_serial_RobotUtadeo.ipynb | olmerg/rtb_serial_robot | c25828e50dca2498dd39ae57767be528fc87b556 | [
"MIT"
] | 3 | 2021-04-26T21:23:08.000Z | 2021-10-10T05:50:32.000Z | 39.373938 | 148 | 0.236066 | [
[
[
"# -*- coding: utf-8 -*-\n\"\"\"\nThis is the example to execute the RobotUtadeo\n@author olmerg\n\"\"\"\nimport sys \nsys.path.append('RobotUtadeo')\n\n# ../ se devuelve hasta escontrar la carpeta Swift_serial\nsys.path.append( '../../Swift_serial')\nimport numpy as np\nimport roboticstoolbox as rtb\nfrom RobotUtadeo import RobotUtadeo\nfrom Swift_serial import Swift_serial\nfrom math import pi\n\nif __name__ == '__main__': # pragma nocover\n\n env = Swift_serial('COM6',115200)\n \n #posicion inicial (aqui cambiar por el robot realizado)\n robot=RobotUtadeo()\n print(robot)\n print(robot.to_dict)\n # the robot should start in home \n env.launch()\n env.add(robot)\n \n p0=np.array([0, 0, 0, 0, 0]) \n p1=np.array([pi/2, 0, 0, 0, 0])\n p2=np.array([pi/2, pi/2, pi/2,-pi/2, 0])\n p3=np.array([pi/2, pi/2, -(5*pi/36), pi/12, 0])\n p4=np.array([pi/2, 0, pi/4, (2*pi/9), 0])\n p5=np.array([-pi/2, 0, pi/4, (2*pi/9), 0])\n p6=np.array([-pi/2, pi/2, -pi/4, (2*pi/9), 0])\n p7=np.array([-pi/2, pi/2, -pi/4, (2*pi/9), pi/2]) \n\n q0 = rtb.tools.trajectory.jtraj(p0,p1,80)\n q1 = rtb.tools.trajectory.jtraj(p1,p2,180)\n q2 = rtb.tools.trajectory.jtraj(p2,p3,195)\n q3 = rtb.tools.trajectory.jtraj(p3,p4,195)\n q4 = rtb.tools.trajectory.jtraj(p4,p5,190)\n q5 = rtb.tools.trajectory.jtraj(p5,p6,215) \n q6 = rtb.tools.trajectory.jtraj(p6,p7,225) \n \n #env.reset()\nfor i in [0, 1, 2]:\n for q in q0.y:\n #print(q)\n robot.q=q\n env.step(0.01)\n for q in q1.y:\n #print(q)\n robot.q=q\n env.step(0.01) \n for q in q2.y:\n #print(q)\n robot.q=q\n env.step(0.01) \n for q in q3.y:\n #print(q)\n robot.q=q\n env.step(0.01) \n for q in q4.y:\n #print(q)\n robot.q=q\n env.step(0.01) \n for q in q5.y:\n #print(q)\n robot.q=q\n env.step(0.01)\n for q in q6.y:\n #print(q)\n robot.q=q\n env.step(0.01)\n # return to home\n env.reset()\n #env.close()\n #del env \n #del robot\n #qt.plot(block=True) ",
"init serial COM6 speed 115200\n┌───┬───────────┬───────────┬───────────────┬────────────────────────────────┐\n│id │ link │ parent │ joint │ ETS │\n├───┼───────────┼───────────┼───────────────┼────────────────────────────────┤\n│ 0\u001b[0m │ \u001b[38;5;4mworld\u001b[0m │ -\u001b[0m │ \u001b[0m │ \u001b[0m │\n│ 1\u001b[0m │ \u001b[38;5;4mbase_link\u001b[0m │ world\u001b[0m │ Join\u001b[0m │ \u001b[0m │\n│ 2\u001b[0m │ link_1\u001b[0m │ base_link\u001b[0m │ joint_1\u001b[0m │ tz(0.75) * Rz(q0)\u001b[0m │\n│ 3\u001b[0m │ link_2\u001b[0m │ link_1\u001b[0m │ joint_2\u001b[0m │ tz(0.55) * Rz(89.95°) * Ry(q1)\u001b[0m │\n│ 4\u001b[0m │ link_3\u001b[0m │ link_2\u001b[0m │ joint_3\u001b[0m │ tz(1.2) * Ry(q2)\u001b[0m │\n│ 5\u001b[0m │ link_4\u001b[0m │ link_3\u001b[0m │ joint_4\u001b[0m │ tz(1.0) * Ry(q3)\u001b[0m │\n│ 6\u001b[0m │ @Gripper\u001b[0m │ link_4\u001b[0m │ joint_Gripper\u001b[0m │ tz(0.7) * Rz(89.95°) * Rz(q4)\u001b[0m │\n└───┴───────────┴───────────┴───────────────┴────────────────────────────────┘\n\n┌─────┬──────┬──────┬──────┬──────┬──────┐\n│name │ q0 │ q1 │ q2 │ q3 │ q4 │\n├─────┼──────┼──────┼──────┼──────┼──────┤\n│ qz\u001b[0m │ 90°\u001b[0m │ 90°\u001b[0m │ 90°\u001b[0m │ 90°\u001b[0m │ 90°\u001b[0m │\n│ qr\u001b[0m │ 0°\u001b[0m │ 0°\u001b[0m │ 0°\u001b[0m │ 0°\u001b[0m │ 0°\u001b[0m │\n└─────┴──────┴──────┴──────┴──────┴──────┘\n\n<bound method ERobot.to_dict of <RobotUtadeo.RobotUtadeo object at 0x0000024E9966DF40>>\n[0. 0. 0. 0. 0.]\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['0', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['2', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['3', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['4', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['5', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['6', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['7', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['8', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['9', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['10', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['11', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['12', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['13', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['15', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['17', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['19', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['21', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['23', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['25', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['27', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['29', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['31', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['33', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['35', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['37', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['39', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['41', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 
0.]\n['43', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['45', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['47', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['49', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['51', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['53', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['55', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['57', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['59', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['61', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['63', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['65', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['67', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['69', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['71', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['73', '7', '70', '-7', '0', '90']\n[2. 0. 0. 0. 0.]\n['75', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['76', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['77', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['78', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['79', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['80', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['81', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['82', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['83', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['84', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['85', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['86', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['87', '7', '70', '-7', '0', '90']\n[1. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[0. 0. 0. 0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. 
-0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 0. 0. -0. 0.]\n['88', '7', '70', '-7', '0', '90']\n[ 0. 1. 1. -1. 0.]\n['88', '8', '71', '-8', '0', '90']\n[ 0. 1. 1. -1. 0.]\n['88', '9', '72', '-9', '0', '90']\n[ 0. 1. 1. -1. 0.]\n['88', '10', '73', '-10', '0', '90']\n[ 0. 1. 1. -1. 0.]\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0987e3135ee5563d86fc459ff2394e7879e51a6 | 981,571 | ipynb | Jupyter Notebook | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT | 8e10465bb7cefd2dee6d30d850451c528a4eb58b | [
"MIT"
] | null | null | null | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT | 8e10465bb7cefd2dee6d30d850451c528a4eb58b | [
"MIT"
] | null | null | null | THE_LOAN_APPROVAL_PROJECT.ipynb | Ciiku-Kihara/LOAN-APPROVAL-PROJECT | 8e10465bb7cefd2dee6d30d850451c528a4eb58b | [
"MIT"
] | null | null | null | 183.642844 | 71,226 | 0.860761 | [
[
[
"<a href=\"https://colab.research.google.com/github/Ciiku-Kihara/LOAN-APPROVAL-PROJECT/blob/main/THE_LOAN_APPROVAL_PROJECT.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## A CASE STUDY OF FACTORS AFFECTING LOAN APPROVAL",
"_____no_output_____"
],
[
"## 1. Defining the question",
"_____no_output_____"
],
[
"### a) Specifying the analysis question\n\nIs there a relationship between gender, credit history and the area one lives and loan status?",
"_____no_output_____"
],
[
"### b) Defining the metric for success\n\nBe able to obtain and run statistically correct hypothesis tests, and come to a meaningful conclusion",
"_____no_output_____"
],
[
"### c) Understanding the context\n\nIn finance, a loan is the lending of money by one or more individuals, organizations, or other entities to other individuals and organizations.\n\nBorrowing a Loan will build your confidence in securing a loan. If you repay well your loan, you will have a good credit history and stand a chance of more loan. Borrowing loan is important. It helps you when you don't have cash on hand and will are of great help whenever you are in a fix.",
"_____no_output_____"
],
[
"### d) Recording the experimental design\n\nWe will be conducting Exploratory data analysis which includes Univariate analysis, Bivariate and multivariate analysis.\n\nIn order to answer our research question we will be carrying out hypothesis testing using Chi-square test to get the relationships and differences between our independent and target variables hence coming up with significant conclusions.\n\n",
"_____no_output_____"
],
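[
"For each test, the Chi-square statistic is computed from the observed counts $O_i$ and expected counts $E_i$ of a contingency table as\n\n$$\\chi^2 = \\sum_i \\frac{(O_i - E_i)^2}{E_i},$$\n\nwith $(r-1)(c-1)$ degrees of freedom for an $r \\times c$ table, and the null hypothesis is rejected when the statistic exceeds the critical value at the chosen significance level.",
"_____no_output_____"
],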
[
"### e) Data Relevance\n\nThe dataset contains demographic information on factors that determine whether one gets a loan or not. \nThis data was extracted from Kaggle, which is a reputable organization.\nThe information contained in our dataset was relevant for our analysis.",
"_____no_output_____"
],
[
"## 2. Importing relevant libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy.stats import f_oneway\nfrom scipy.stats import ttest_ind\nimport scipy.stats as stats\nfrom sklearn.decomposition import PCA",
"_____no_output_____"
]
],
[
[
"## 3. Loading and checking the data",
"_____no_output_____"
]
],
[
[
"# Loading our dataset\n\nloans_df = pd.read_csv('loans.csv')",
"_____no_output_____"
],
[
"# Getting a preview of the first 10 rows\n\nloans_df.head(10)",
"_____no_output_____"
],
[
"# Determining the number of rows and columns in the dataset\n\nloans_df.shape",
"_____no_output_____"
],
[
"# Determining the names of the columns present in the dataset\n\nloans_df.columns",
"_____no_output_____"
],
[
"# Description of the quantitative columns\n\nloans_df.describe()",
"_____no_output_____"
],
[
"# Description of the qualitative columns\n\nloans_df.describe(include = 'object')",
"_____no_output_____"
],
[
"# Checking if each column is of the appropriate data type\n\nloans_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 614 entries, 0 to 613\nData columns (total 13 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Loan_ID 614 non-null object \n 1 Gender 601 non-null object \n 2 Married 611 non-null object \n 3 Dependents 599 non-null object \n 4 Education 614 non-null object \n 5 Self_Employed 582 non-null object \n 6 ApplicantIncome 614 non-null int64 \n 7 CoapplicantIncome 614 non-null float64\n 8 LoanAmount 592 non-null float64\n 9 Loan_Amount_Term 600 non-null float64\n 10 Credit_History 564 non-null float64\n 11 Property_Area 614 non-null object \n 12 Loan_Status 614 non-null object \ndtypes: float64(4), int64(1), object(8)\nmemory usage: 62.5+ KB\n"
]
],
[
[
"## 4. External data source validation",
"_____no_output_____"
],
[
"> We validated our dataset using information from the following link:\n\n> http://calcnet.mth.cmich.edu/org/spss/prj_loan_data.htm",
"_____no_output_____"
],
[
"## 5. Data cleaning\n",
"_____no_output_____"
],
[
"Uniformity",
"_____no_output_____"
]
],
[
[
"# Changing all column names to lowercase, stripping white spaces\n# and removing all underscores\n\nloans_df.columns = loans_df.columns.str.lower().str.strip().str.replace(\"_\",\"\")",
"_____no_output_____"
],
[
"# Confirming the changes made\n\nloans_df.head(5)",
"_____no_output_____"
]
],
[
[
"Data Completeness",
"_____no_output_____"
]
],
[
[
"# Determining the number of null values in each column\n\nloans_df.isnull().sum()",
"_____no_output_____"
],
[
"#Imputing Loan Amount with mean\n\nloans_df['loanamount'] = loans_df['loanamount'].fillna(loans_df['loanamount'].mean())",
"_____no_output_____"
],
[
"#FowardFill For LoanTerm \n\nloans_df['loanamountterm'] = loans_df['loanamountterm'].fillna(method = \"ffill\")",
"_____no_output_____"
],
[
"#Assuming Missing values imply bad credit History - replacing nulls with 0\n\nloans_df['credithistory'] = loans_df['credithistory'].fillna(0)",
"_____no_output_____"
],
[
"#Imputing gender, married, and selfemployed\n\nloans_df['dependents']=loans_df['dependents'].fillna(loans_df['dependents'].mode()[0])\nloans_df['gender']=loans_df['gender'].fillna(loans_df['gender'].mode()[0])\nloans_df['married']=loans_df['married'].fillna(loans_df['married'].mode()[0])\nloans_df['selfemployed']=loans_df['selfemployed'].fillna(loans_df['selfemployed'].mode()[0])",
"_____no_output_____"
],
[
"# Confirming our changes after dealing with null values\n\nloans_df.isnull().sum()",
"_____no_output_____"
],
[
"# Previewing the data\n\nloans_df.head(10)",
"_____no_output_____"
]
],
[
[
"Data Consistency",
"_____no_output_____"
]
],
[
[
"# Checking if there are any duplicated rows\n\nloans_df.duplicated().sum()\n",
"_____no_output_____"
],
[
"# Checking for any anomalies in the qualitative variables\n\nqcol = ['gender', 'married', 'dependents', 'education',\n 'selfemployed','credithistory', 'propertyarea', 'loanstatus']\n\nfor col in qcol:\n print(col, ':', loans_df[col].unique())",
"gender : ['Male' 'Female']\nmarried : ['No' 'Yes']\ndependents : ['0' '1' '2' '3+']\neducation : ['Graduate' 'Not Graduate']\nselfemployed : ['No' 'Yes']\ncredithistory : [1. 0.]\npropertyarea : ['Urban' 'Rural' 'Semiurban']\nloanstatus : ['Y' 'N']\n"
],
[
"#Checking for Outliers\ncols = ['applicantincome','coapplicantincome', 'loanamount', 'loanamountterm']\n\nfor column in cols:\n plt.figure()\n loans_df.boxplot([column], fontsize= 12)\n plt.ylabel('count', fontsize = 12)\n plt.title('Boxplot - {}'.format(column), fontsize = 16)",
"_____no_output_____"
],
[
"# Determining how many rows would be lost if outliers were removed\n\n# Calculating our first, third quantiles and then later our IQR\n# ---\nQ1 = loans_df.quantile(0.25)\nQ3 = loans_df.quantile(0.75)\nIQR = Q3 - Q1\n\n# Removing outliers based on the IQR range and stores the result in the data frame 'auto'\n# ---\n# \nloans_df_new = loans_df[~((loans_df < (Q1 - 1.5 * IQR)) | (loans_df > (Q3 + 1.5 * IQR))).any(axis=1)]\n\n# Printing the shape of our new dataset\n# ---\n# \nprint(loans_df_new.shape)\n\n# Printing the shape of our old dataset\n# ---\n#\nprint(loans_df.shape)\n\n# Number of rows removed\n\nrows_removed = loans_df.shape[0] - loans_df_new.shape[0]\nrows_removed\n\n# Percentage of rows removed of the percentage\nrow_percent = (rows_removed/loans_df.shape[0]) * 100\nrow_percent",
"(355, 13)\n(614, 13)\n"
],
[
"# Exporting our data\n\nloans_df.to_csv('loanscleaned.csv')",
"_____no_output_____"
]
],
[
[
"## 6. Exploratory Data Analysis",
"_____no_output_____"
],
[
"### a) Univariate Analysis",
"_____no_output_____"
]
],
[
[
"# Previewing the dataset\n\nloans_df.head(4)",
"_____no_output_____"
],
[
"# Loan Status\n\nYes = loans_df[loans_df[\"loanstatus\"] == 'Y'].shape[0]\nNo = loans_df[loans_df[\"loanstatus\"] == 'N'].shape[0]\nprint(f\"Yes = {Yes}\")\nprint(f\"No = {No}\")\nprint(' ')\nprint(f\"Proportion of Yes = {(Yes / len(loans_df['loanstatus'])) * 100:.2f}%\")\nprint(f\"Proportion of No = {(No / len(loans_df['loanstatus'])) * 100:.2f}%\")\nprint(' ') \nplt.figure(figsize=(10, 8))\nsns.countplot(x = loans_df[\"loanstatus\"])\nplt.xticks((0, 1), [\"Yes\", \"No\"], fontsize = 14)\nplt.xlabel(\"Loan Approval Status\", fontsize = 14)\nplt.ylabel(\"Frequency\", fontsize = 14)\nplt.title(\"Number of Approved and Disapproved Loans\", y=1, fontdict={\"fontsize\": 20});",
"Yes = 422\nNo = 192\n \nProportion of Yes = 68.73%\nProportion of No = 31.27%\n \n"
],
[
"# Pie Chart for Gender\n\ngender = loans_df.gender.value_counts()\nplt.figure(figsize= (8,5), dpi=100)\n\n# Highlighting yes\nexplode = (0.1, 0) \ncolors = ['blue', 'orange']\n\n# Plotting our pie chart\ngender.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)\n\nplt.axis('equal')\nplt.title('Pie chart of Gender Distribution')\nplt.show()",
"_____no_output_____"
],
[
"# Pie Chart for Education\n\neducation = loans_df.education.value_counts()\nplt.figure(figsize= (8,5), dpi=100)\n\n# Highlighting yes\nexplode = (0.1, 0) \ncolors = ['blue', 'orange']\n\n# Plotting our pie chart\neducation.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)\n\nplt.axis('equal')\nplt.title('Pie chart of Education')\nplt.show()",
"_____no_output_____"
],
[
"# Marital status\n\nYes = loans_df[loans_df[\"married\"] == 'Yes'].shape[0]\nNo = loans_df[loans_df[\"married\"] == 'No'].shape[0]\nprint(f\"Yes = {Yes}\")\nprint(f\"No = {No}\")\nprint(' ')\nprint(f\"Proportion of Yes = {(Yes / len(loans_df['married'])) * 100:.2f}%\")\nprint(f\"Proportion of No = {(No / len(loans_df['married'])) * 100:.2f}%\") \nprint(' ') \nplt.figure(figsize=(10, 8))\nsns.countplot(x = loans_df[\"married\"])\nplt.xticks((0, 1), [\"No\", \"Yes\"], fontsize = 14)\nplt.xlabel(\"Marital Status\", fontsize = 14)\nplt.ylabel(\"Frequency\", fontsize = 14)\nplt.title(\"Marital Status\", y=1, fontdict={\"fontsize\": 20});",
"Yes = 401\nNo = 213\n \nProportion of Yes = 65.31%\nProportion of No = 34.69%\n \n"
],
[
"# Frequency table for Property Area in percentage\n\nround(loans_df.propertyarea.value_counts(normalize = True),2)",
"_____no_output_____"
],
[
"# Pie Chart for Credit History\n\ncredit = loans_df.credithistory.value_counts()\nplt.figure(figsize= (8,5), dpi=100)\n\n# Highlighting yes\nexplode = (0.1, 0) \ncolors = ['blue', 'orange']\n\n# Plotting our pie chart\ncredit.plot.pie(explode = explode, colors = colors, autopct='%1.1f%%', shadow=True, startangle=140)\n\nplt.axis('equal')\nplt.title('Pie chart of Credit History')\nplt.show()",
"_____no_output_____"
],
[
"# Frequency table for Self Employed status in percentage\n\nround(loans_df.selfemployed.value_counts(normalize = True),2)",
"_____no_output_____"
],
[
"# Frequency table for Dependents in percentage\n\nround(loans_df.dependents.value_counts(normalize = True),2)",
"_____no_output_____"
],
[
"# Histogram for Applicant Income\n\ndef histogram(var1, bins):\n plt.figure(figsize= (10,8)),\n sns.set_style('darkgrid'),\n sns.set_palette('colorblind'),\n sns.histplot(x = var1, data=loans_df, bins = bins , shrink= 0.9, kde = True)\n\nhistogram('applicantincome', 50)\nplt.title('Histogram of the Applicant Income', fontsize = 16)\nplt.xlabel('Applicant Income', fontsize = 14)\nplt.ylabel('Count', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Checking on coefficent of variance, skewness and kurtosis\n\nprint('The skewness is:', loans_df['applicantincome'].skew())\nprint('The kurtosis is:', loans_df['applicantincome'].kurt())\nprint('The coefficient of variation is:', loans_df['applicantincome'].std()/loans_df['applicantincome'].mean())",
"The skewness is: 6.539513113994625\nThe kurtosis is: 60.54067593369113\nThe coefficient of variation is: 1.1305797551151708\n"
],
[
"# Histogram for Loan Amount\n\nhistogram('loanamount', 50)\n\nplt.title('Histogram of the Loan Amount Given', fontsize = 16)\nplt.xlabel('Applicant Income', fontsize = 14)\nplt.ylabel('Count', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Checking on coefficent of variance, skewness and kurtosis\n\nprint('The skewness is:', loans_df['loanamount'].skew())\nprint('The kurtosis is:', loans_df['loanamount'].kurt())\nprint('The coefficient of variation is:', loans_df['loanamount'].std()/loans_df['loanamount'].mean())",
"The skewness is: 2.726601144105299\nThe kurtosis is: 10.896456468091559\nThe coefficient of variation is: 0.5739787353875622\n"
],
[
"# Histogram for Co-applicant Income\n\nhistogram('coapplicantincome', 50)\n\nplt.title('Histogram of the Co-applicant Income', fontsize = 16)\nplt.xlabel('Co-applicant Income', fontsize = 14)\nplt.ylabel('Count', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Checking on coefficent of variance, skewness and kurtosis\n\nprint('The skewness is:', loans_df['coapplicantincome'].skew())\nprint('The kurtosis is:', loans_df['coapplicantincome'].kurt())\nprint('The coefficient of variation is:', loans_df['coapplicantincome'].std()/loans_df['coapplicantincome'].mean())",
"The skewness is: 7.491531216657306\nThe kurtosis is: 84.95638421103374\nThe coefficient of variation is: 1.8049381363301926\n"
],
[
"# Looking at the unique variables in amount\n\nloans_df.loanamountterm.unique()",
"_____no_output_____"
],
[
"# Measures of central tendency for our quantitative variables\n\nloans_df.describe()",
"_____no_output_____"
]
],
[
[
"### b) Bivariate Analysis",
"_____no_output_____"
]
],
[
[
"# Preview of dataset\n\nloans_df.head(3)",
"_____no_output_____"
],
[
"# Comparison of Self employment Status and Loan Status \n\ntable=pd.crosstab(loans_df['selfemployed'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize= (10,8), stacked=False)\nplt.title('Stacked Bar Chart of Self Employed to Loan Status', fontsize = 16)\nplt.xlabel('Self Employed', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Education and Loan Status\n\ntable=pd.crosstab(loans_df['education'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)\nplt.title('Stacked Bar Chart of Education and Loan Status', fontsize = 16)\nplt.xlabel('Education', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Gender and Loan Status\n\ntable=pd.crosstab(loans_df['gender'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar',figsize = (10,8), stacked=False)\nplt.title('Bar Chart of Gender to loanstatus', fontsize = 16)\nplt.xlabel('Gender', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Marital Status and Loan Status\n\ntable=pd.crosstab(loans_df['married'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)\nplt.title('Bar Chart of Marital Status to Loan Status', fontsize = 16)\nplt.xlabel('Marital Status',fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Credit History and Loan Status\n\ntable=pd.crosstab(loans_df['credithistory'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)\nplt.title('Bar Chart of Credit History and Loanstatus', fontsize = 16)\nplt.xlabel('Credit History', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Property Area and Loan Status\n\ntable=pd.crosstab(loans_df['propertyarea'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)\nplt.title('Bar Chart of Area and Loan Status', fontsize = 16)\nplt.xlabel('Area', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Comparison of Dependents and Loan Status\n\ntable=pd.crosstab(loans_df['dependents'],loans_df['loanstatus'])\ntable.div(table.sum(1).astype(float), axis=0).plot(kind='bar', figsize = (10,8), stacked=False)\nplt.title('Bar Chart of Dependents and Loan Status', fontsize = 16)\nplt.xlabel('Dependents', fontsize = 14)\nplt.ylabel('Proportion of Respondents', fontsize = 14)\nplt.xticks(rotation = 360, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"#Scatterplot to show correlation between Applicant Income and Loan amount\n\nplt.figure(figsize= (10,8))\nsns.scatterplot(x= loans_df.applicantincome, y = loans_df.loanamount)\nplt.title('Applicant Income Vs Loan Amount', fontsize = 16)\nplt.ylabel('Loan Amount', fontsize=14)\nplt.xlabel('Applicant Income', fontsize=14)\nplt.xticks(rotation = 75, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Correlation coefficient between applicant income and loan amount\n\nloans_df['applicantincome'].corr(loans_df['loanamount'])",
"_____no_output_____"
],
[
"#Scatterplot to show correlation between Co-Applicant Income and Loan amount\n\nplt.figure(figsize= (10,8))\nsns.scatterplot(x= loans_df.coapplicantincome, y = loans_df.loanamount)\nplt.title('Co-Applicant Income Vs Loan Amount', fontsize = 16)\nplt.ylabel('Loan Amount', fontsize=14)\nplt.xlabel('Co-Applicant Income', fontsize=14)\nplt.xticks(rotation = 75, fontsize = 14)\nplt.yticks(fontsize = 14)\nplt.show()",
"_____no_output_____"
],
[
"# Correlation coefficient between loan amount and co-applicant income\n\nloans_df['coapplicantincome'].corr(loans_df['loanamount'])",
"_____no_output_____"
],
[
"# Scatterplot between Co-applicant income and Loan amount for \n# income less that 2000\n\nloans_df[loans_df['coapplicantincome'] < 2000].sample(200).plot.scatter(x='applicantincome', y='loanamount')",
"*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\n"
],
[
"# Correlation Heatmap\n\nplt.figure(figsize=(7,4)) \nsns.heatmap(loans_df.corr(),annot=True,cmap='cubehelix_r') \nplt.show()",
"_____no_output_____"
]
],
[
[
"### c) Multivariate Analysis ",
"_____no_output_____"
]
],
[
[
"# Analysis of Loan Status, Applicant income and Loan Amount\n\nplt.figure(figsize=(10,8))\nsns.scatterplot(x= loans_df['loanamount'], y=loans_df['applicantincome'], hue= loans_df['loanstatus'])\nplt.title('Loan Amount vs Applicant Income vs Loan Status', fontsize = 16)\nplt.xlabel('Loan Amount', fontsize = 14)\nplt.ylabel('Applicant Income', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)",
"_____no_output_____"
],
[
"# Analysis of Loan Status, Applicant income and Credit History\n\nplt.figure(figsize=(10,8))\nsns.scatterplot(x= loans_df['loanamount'], y=loans_df['applicantincome'], hue= loans_df['credithistory'])\nplt.title('Loan Amount vs Applicant Income vs Credit History', fontsize = 16)\nplt.xlabel('Loan Amount', fontsize = 14)\nplt.ylabel('Applicant Income', fontsize = 14)\nplt.xticks(fontsize = 14)\nplt.yticks(fontsize = 14)",
"_____no_output_____"
]
],
[
[
"## 7. Hypothesis testing",
"_____no_output_____"
],
[
"- The Chi-square test will be used for all hypothesis tests in our analysis.\n- The level of significance to be used in all tests below will be 0.05 or 5% ",
"_____no_output_____"
],
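[
"The cells that follow build each contingency table and work through the statistic, degrees of freedom, critical value and p-value step by step. For reference, the same decision rule can be written compactly with `scipy.stats.chi2_contingency`; the minimal sketch below assumes the cleaned `loans_df` from section 5 is still in memory (note that scipy applies Yates' continuity correction to 2x2 tables by default, so its statistic can differ slightly from the manual calculation).",
"_____no_output_____"
],
[
"# Compact sketch of the Chi-square decision rule used in the tests below.\n# Assumes the cleaned loans_df from section 5 is still in memory.\n# Note: chi2_contingency applies Yates' continuity correction to 2x2 tables by default.\nfrom scipy.stats import chi2_contingency\n\ntab = pd.crosstab(loans_df['loanstatus'], loans_df['credithistory'])\nchi2_stat, p_value, dof, expected = chi2_contingency(tab)\nprint('chi-square statistic:', chi2_stat)\nprint('degrees of freedom:', dof)\nprint('p-value:', p_value)\n\nif p_value < 0.05:\n    print('Reject Null Hypothesis')\nelse:\n    print('Do not Reject Null Hypothesis')",
"_____no_output_____"
],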
[
"**Hypothesis 1:**\n\nHo : There is no relationship between credit history and the loan status\n\nHa : There is a relationship between credit history and the loan status\n",
"_____no_output_____"
]
],
[
[
"# Creating a crosstab\n\ntab = pd.crosstab(loans_df['loanstatus'], loans_df['credithistory'])\ntab",
"_____no_output_____"
],
[
"# Obtaining the observed values\n\nobserved_values = tab.values\nprint('Observed values: -\\n', observed_values)",
"Observed values: -\n [[ 95 97]\n [ 44 378]]\n"
],
[
"# Creating the chi square contingency table\n\nval = stats.chi2_contingency(tab)\nval",
"_____no_output_____"
],
[
"# Obtaining the expected values\n\nexpected_values = val[3]\nexpected_values",
"_____no_output_____"
],
[
"# Obtaining the degrees of freedom\n\nrows = len(tab.iloc[0:2, 0])\ncolumns = len(tab.iloc[0, 0:2])\ndof = (rows-1)*(columns-1)\nprint('Degrees of Freedom', dof)",
"Degrees of Freedom 1\n"
],
[
"# Obtaining the chi-square statistic\n\nchi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])\nchi_square\nchi_square_statistic = chi_square[0]+chi_square[1]\nchi_square_statistic",
"_____no_output_____"
],
[
"# Getting the critical value\nalpha = 0.05\ncritical_value = stats.chi2.ppf(q = 1-alpha, df = dof)\n\nprint('Critical Value:', critical_value)",
"Critical Value: 3.841458820694124\n"
],
[
"# Getting p value\n\np_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)\np_value\n\n",
"_____no_output_____"
],
[
"# Conclusion\n\nif chi_square_statistic>=critical_value:\n print('Reject Null Hypothesis')\nelse:\n print('Do not Reject Null Hypothesis')",
"Reject Null Hypothesis\n"
]
],
[
[
"The chi-square statistic is greater than the critical value hence we reject the null hypothesis that there is no relationship between credit history and loan status\n\nAt 5% level of significance, there is enough evidence to conclude that there is a relationship between credit history and loan status",
"_____no_output_____"
],
[
"**Hypothesis 2 :**\n\nHo : There is no relationship between area and the loan status\n\nHa : There is a relationship between area and the loan status",
"_____no_output_____"
]
],
[
[
"# Creating a crosstab\n\ntab = pd.crosstab(loans_df['loanstatus'], loans_df['propertyarea'])\ntab",
"_____no_output_____"
],
[
"# Obtaining the observed values\n\nobserved_values = tab.values\nprint('Observed values: -\\n', observed_values)",
"Observed values: -\n [[ 69 54 69]\n [110 179 133]]\n"
],
[
"# Creating the chi square contingency table\n\nval = stats.chi2_contingency(tab)\nval",
"_____no_output_____"
],
[
"# Obtaining the expected values\n\nexpected_values = val[3]\nexpected_values",
"_____no_output_____"
],
[
"# Obtaining the degrees of freedom\n\nrows = len(tab.iloc[0:2, 0])\ncolumns = len(tab.iloc[0, 0:2])\ndof = (rows-1)*(columns-1)\nprint('Degrees of Freedom', dof)",
"Degrees of Freedom 1\n"
],
[
"# Obtaining the chi-square statistic\n\nchi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])\nchi_square\nchi_square_statistic = chi_square[0]+chi_square[1]\nchi_square_statistic",
"_____no_output_____"
],
[
"# Getting the critical value\nalpha = 0.05\ncritical_value = stats.chi2.ppf(q = 1-alpha, df = dof)\n\nprint('Critical Value:', critical_value)",
"Critical Value: 3.841458820694124\n"
],
[
"# Getting p value\n\np_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)\np_value",
"_____no_output_____"
],
[
"# Conclusion\n\nif chi_square_statistic>=critical_value:\n print('Reject Null Hypothesis')\nelse:\n print('Do not Reject Null Hypothesis')",
"Reject Null Hypothesis\n"
]
],
[
[
"The chi-square statistic is greater than the critical value hence we reject the null hypothesis that there is no relationship between area and loan status\n\nAt 5% level of significance, there is enough evidence to conclude that there is a relationship between credit area and loan status",
"_____no_output_____"
],
[
"**Hypothesis 3 :**\n\nHo : There is no relationship between gender and the loan status\n\nHa : There is a relationship between gender and the loan status",
"_____no_output_____"
]
],
[
[
"# Creating a crosstab\n\ntab = pd.crosstab(loans_df['loanstatus'], loans_df['gender'])\ntab",
"_____no_output_____"
],
[
"# Obtaining the observed values\n\nobserved_values = tab.values\nprint('Observed values: -\\n', observed_values)",
"Observed values: -\n [[ 37 155]\n [ 75 347]]\n"
],
[
"# Creating the chi square contingency table\n\nval = stats.chi2_contingency(tab)\nval",
"_____no_output_____"
],
[
"# Obtaining the expected values\n\nexpected_values = val[3]\nexpected_values",
"_____no_output_____"
],
[
"# Obtaining the degrees of freedom\n\nrows = len(tab.iloc[0:2, 0])\ncolumns = len(tab.iloc[0, 0:2])\ndof = (rows-1)*(columns-1)\nprint('Degrees of Freedom', dof)",
"Degrees of Freedom 1\n"
],
[
"# Obtaining the chi-square statistic\n\nchi_square =sum([(o-e)**2./e for o,e in zip(observed_values,expected_values)])\nchi_square\nchi_square_statistic = chi_square[0]+chi_square[1]\nchi_square_statistic",
"_____no_output_____"
],
[
"# Getting the critical value\nalpha = 0.05\ncritical_value = stats.chi2.ppf(q = 1-alpha, df = dof)\n\nprint('Critical Value:', critical_value)",
"Critical Value: 3.841458820694124\n"
],
[
"# Getting p value\n\np_value = 1 - stats.chi2.cdf(x = chi_square_statistic, df= dof)\np_value",
"_____no_output_____"
],
[
"# Conclusion\n\nif chi_square_statistic>=critical_value:\n print('Reject Null Hypothesis')\nelse:\n print('Do not Reject Null Hypothesis')",
"Do not Reject Null Hypothesis\n"
]
],
[
[
"The chi-square statistic is less than the critical value hence we do not reject the null hypothesis that there is no relationship between area and loan status\n\nAt 5% level of significance, there is not enough evidence to conclude that there is a relationship between credit area and loan status",
"_____no_output_____"
],
[
"## 8. Dimensionality reduction",
"_____no_output_____"
]
],
[
[
"# PCA analysis with One Hot Encoding\n\ndummy_Gender = pd.get_dummies(loans_df['gender'], prefix = 'Gender')\ndummy_Married = pd.get_dummies(loans_df['married'], prefix = \"Married\")\ndummy_Education = pd.get_dummies(loans_df['education'], prefix = \"Education\")\ndummy_Self_Employed = pd.get_dummies(loans_df['selfemployed'], prefix = \"Selfemployed\")\ndummy_Property_Area = pd.get_dummies(loans_df['propertyarea'], prefix = \"Property\")\ndummy_Dependents = pd.get_dummies(loans_df['dependents'], prefix = \"Dependents\")\ndummy_Loan_status = pd.get_dummies(loans_df['loanstatus'], prefix = \"Approve\")",
"_____no_output_____"
],
[
"# Creating a list of our dummy data\n\nframes = [loans_df,dummy_Gender,dummy_Married,dummy_Education,dummy_Self_Employed,dummy_Property_Area,dummy_Dependents,dummy_Loan_status]",
"_____no_output_____"
],
[
"# Combining the dummy data with our dataframe\n\ndf_train = pd.concat(frames, axis = 1)",
"_____no_output_____"
],
[
"# Previewing our training dataset\n\ndf_train.head(10)",
"_____no_output_____"
],
[
"# Dropping of non-numeric columns as part of pre-processing\n\ndf_train = df_train.drop(columns = ['loanid', 'gender', 'married', 'dependents', 'education','selfemployed', 'propertyarea','loanstatus','Approve_N'])",
"_____no_output_____"
],
[
"# Previewing the final dataset for our analysis\n\ndf_train",
"_____no_output_____"
],
[
"# Preprocessing \n\nX=df_train.drop(['Approve_Y'],axis=1)\ny=df_train['Approve_Y']",
"_____no_output_____"
],
[
"# Splitting into training and test\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"_____no_output_____"
],
[
"# Normalization\n\n# Dependents had an issue because of the +\n\nfrom sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)",
"_____no_output_____"
],
[
"# Applying PCA\n\nfrom sklearn.decomposition import PCA\n\npca = PCA(n_components=6)\nX_train = pca.fit_transform(X_train)\nX_test = pca.transform(X_test)",
"_____no_output_____"
],
[
"# Obtaining the explained variance ratio which returns the variance caused by each of the principal components. \n# We execute the following line of code to find the \"explained variance ratio\"\n\nexplained_variance = pca.explained_variance_ratio_\n\nexplained_variance",
"_____no_output_____"
],
[
"# Plotting our scree plot\n\nplt.plot(pca.explained_variance_ratio_)\nplt.xlabel('Number of components', fontsize = 14)\nplt.ylabel('Explained variance', fontsize = 14)\nplt.title('Scree Plot', fontsize = 16)\nplt.show()",
"_____no_output_____"
],
[
"# Training and Making Predictions\nfrom sklearn.ensemble import RandomForestClassifier\n\nclassifier = RandomForestClassifier(max_depth=2, random_state=0)\nclassifier.fit(X_train, y_train)\n\n# Predicting the Test set results\ny_pred = classifier.predict(X_test)",
"_____no_output_____"
],
[
"# Performance Evaluation\n# \nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\n\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\nprint('Accuracy' , accuracy_score(y_test, y_pred))",
"[[ 0 33]\n [ 0 90]]\nAccuracy 0.7317073170731707\n"
]
],
[
[
"## 9. Challenging the solution\n\nIf we had more knowledge on machine learning, we would have built a classification model to accompany the hypothesis tests and hence strengthen our analysis.",
"_____no_output_____"
],
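[
"As a rough illustration of the kind of baseline classifier that could accompany the hypothesis tests, the sketch below fits a logistic regression on the PCA-transformed split created in section 8; it assumes `X_train`, `X_test`, `y_train` and `y_test` are still in memory.",
"_____no_output_____"
],
[
"# Minimal baseline classifier sketch to accompany the hypothesis tests.\n# Assumes the PCA-transformed splits from section 8 (X_train, X_test, y_train, y_test)\n# are still in memory.\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\n\nlog_reg = LogisticRegression(max_iter=1000)\nlog_reg.fit(X_train, y_train)\ny_pred_lr = log_reg.predict(X_test)\nprint('Accuracy', accuracy_score(y_test, y_pred_lr))",
"_____no_output_____"
],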
[
"## 10. Follow-up questions\n\nAt this point, we can refine our question or collect new data, all in an iterative process to get at the truth.\n",
"_____no_output_____"
],
[
"### a) Did we have the right data?\n> Yes. The data was relevant in context to our research.\n\n### b) Do we need other data to answer the question?\n> Our data was quite right but if there was a way we could collect more data on the same it would be better.\n\n### c) Did we have the right question?\n> Yes",
"_____no_output_____"
],
[
"## 11. Conclusions and Recommendations\n> From our exploratory data analysis and statistical analysis techniques we have made some observations therefore we can draw some conclusions.\n\n> - In consideration of this dataset containing a substantial amount of outliers standing at 42%, it deemed best not to drop them because in the case of loans, some questions are sensitive eg gender.\n- The number of loans approved was 69% and disapproved loans were 31%\n- It appears that males apply for more loans than women because males are 82% and females are 18%. Same case applies to graduates. This is because they have high numbers in loan applications. It would make one wonder if graduates are not being paid well or they need the loans to facilitate other things like investments\n- There was no significant difference between number of approved loans considering that a person is self employed or not. This implies that loan approval is more dependent on applicant income.\n- About 78% of loans were approved for applicants with good credit history.\n- There was a high number of approved loans for applicant coming from semi-urban areas. This would suggest that they could have higher income.\n- The number of dependents affects an applicants loan approval rate. It was interesting to find that people with 2 dependents had more loan approvals.\n- We also find out that credit history have a significantly high relationship with loan approval.\n- There is a relationship between area and loan status\n- Undoubtedly, despite more males applying for loans, gender does not affect whether a loan is approved or not.\n- It is discovered that those earning between 0 and 20,000 go for smaller loans amounts. If the bank capitalizes on offering good rates, they can make good profits since smaller loans have shorter repayment periods.\n\nWe recommend that the lender should prioritize people with good credit history and high income earners. \nThe lenders should have better data collection methods to capture more data in order for the models to make more accurate predictions.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d098966c000ecdf4a52efbd98c302e09ccda8644 | 4,277 | ipynb | Jupyter Notebook | notebook/ode_extras/feagin.ipynb | KZiemian/DiffEqTutorials.jl | 97c2bce7039bd976522f06d7e1c9900bb63c2aff | [
"MIT"
] | 1 | 2019-03-22T12:30:52.000Z | 2019-03-22T12:30:52.000Z | notebook/ode_extras/feagin.ipynb | KZiemian/DiffEqTutorials.jl | 97c2bce7039bd976522f06d7e1c9900bb63c2aff | [
"MIT"
] | null | null | null | notebook/ode_extras/feagin.ipynb | KZiemian/DiffEqTutorials.jl | 97c2bce7039bd976522f06d7e1c9900bb63c2aff | [
"MIT"
] | null | null | null | 36.87069 | 785 | 0.59855 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d098981e46e1c2b3d56c9f3973043cf65c1ba817 | 415,905 | ipynb | Jupyter Notebook | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection | e524fd69f83a1252710076c78b6a5236849cd885 | [
"MIT"
] | 23 | 2019-09-08T17:19:16.000Z | 2022-02-02T16:20:09.000Z | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection | e524fd69f83a1252710076c78b6a5236849cd885 | [
"MIT"
] | 1 | 2020-03-10T18:42:12.000Z | 2020-09-18T22:02:38.000Z | Model backlog/ResNet50/23 - ResNet50 - Data augmentation fill constant.ipynb | ThinkBricks/APTOS2019BlindnessDetection | e524fd69f83a1252710076c78b6a5236849cd885 | [
"MIT"
] | 16 | 2019-09-21T12:29:59.000Z | 2022-03-21T00:42:26.000Z | 217.182768 | 107,744 | 0.862933 | [
[
[
"# Dependencies",
"_____no_output_____"
]
],
[
[
"import os\nimport random\nimport warnings\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.utils import class_weight\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, cohen_kappa_score\nfrom keras import backend as K\nfrom keras.models import Model\nfrom keras import optimizers, applications\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback\nfrom keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input\n\n# Set seeds to make the experiment more reproducible.\nfrom tensorflow import set_random_seed\ndef seed_everything(seed=0):\n random.seed(seed)\n os.environ['PYTHONHASHSEED'] = str(seed)\n np.random.seed(seed)\n set_random_seed(0)\nseed_everything()\n\n%matplotlib inline\nsns.set(style=\"whitegrid\")\nwarnings.filterwarnings(\"ignore\")",
"Using TensorFlow backend.\n"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"train = pd.read_csv('../input/aptos2019-blindness-detection/train.csv')\ntest = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')\nprint('Number of train samples: ', train.shape[0])\nprint('Number of test samples: ', test.shape[0])\n\n# Preprocecss data\ntrain[\"id_code\"] = train[\"id_code\"].apply(lambda x: x + \".png\")\ntest[\"id_code\"] = test[\"id_code\"].apply(lambda x: x + \".png\")\ntrain['diagnosis'] = train['diagnosis'].astype('str')\ndisplay(train.head())",
"Number of train samples: 3662\nNumber of test samples: 1928\n"
]
],
[
[
"# Model parameters",
"_____no_output_____"
]
],
[
[
"# Model parameters\nBATCH_SIZE = 8\nEPOCHS = 30\nWARMUP_EPOCHS = 2\nLEARNING_RATE = 1e-4\nWARMUP_LEARNING_RATE = 1e-3\nHEIGHT = 512\nWIDTH = 512\nCANAL = 3\nN_CLASSES = train['diagnosis'].nunique()\nES_PATIENCE = 5\nRLROP_PATIENCE = 3\nDECAY_DROP = 0.5",
"_____no_output_____"
],
[
"def kappa(y_true, y_pred, n_classes=5):\n y_trues = K.cast(K.argmax(y_true), K.floatx())\n y_preds = K.cast(K.argmax(y_pred), K.floatx())\n n_samples = K.cast(K.shape(y_true)[0], K.floatx())\n distance = K.sum(K.abs(y_trues - y_preds))\n max_distance = n_classes - 1\n \n kappa_score = 1 - ((distance**2) / (n_samples * (max_distance**2)))\n\n return kappa_score",
"_____no_output_____"
]
],
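The `kappa` function above is only a training-time monitoring metric; the score usually reported for this grading task is the quadratic weighted Cohen's kappa. Below is a minimal sketch of an offline check with `sklearn.metrics.cohen_kappa_score` (imported near the top of this notebook); the `y_true`/`y_pred` arrays are illustrative placeholders, not notebook variables.

```python
# Sketch: offline quadratic weighted kappa on hard (integer) labels.
# The arrays below are illustrative placeholders, not notebook variables.
import numpy as np
from sklearn.metrics import cohen_kappa_score

y_true = np.array([0, 1, 2, 3, 4, 2, 1, 0])
y_pred = np.array([0, 1, 2, 2, 4, 3, 1, 1])

# weights='quadratic' penalises a prediction by the squared distance
# between the predicted and the true severity grade.
print(cohen_kappa_score(y_true, y_pred, weights='quadratic'))
```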
[
[
"# Train test split",
"_____no_output_____"
]
],
[
[
"X_train, X_val = train_test_split(train, test_size=0.25, random_state=0)",
"_____no_output_____"
]
],
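The split above is purely random; with five imbalanced diagnosis grades, a stratified split keeps the class proportions similar in the training and validation folds. A minimal sketch follows, assuming the same `train` dataframe with its `diagnosis` column is in scope; stratifying here is a suggestion, not what the original notebook does.

```python
# Sketch: stratified variant of the split above, assuming the same `train`
# dataframe (with a 'diagnosis' column) is in scope.
from sklearn.model_selection import train_test_split

X_train, X_val = train_test_split(
    train,
    test_size=0.25,
    random_state=0,
    stratify=train['diagnosis'])  # keep the grade distribution in both folds
```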
[
[
"# Data generator",
"_____no_output_____"
]
],
[
[
"train_datagen=ImageDataGenerator(rescale=1./255, \n rotation_range=360,\n brightness_range=[0.5, 1.5],\n zoom_range=[1, 1.2],\n zca_whitening=True,\n horizontal_flip=True,\n vertical_flip=True,\n fill_mode='constant',\n cval=0.)\n\ntrain_generator=train_datagen.flow_from_dataframe(\n dataframe=X_train,\n directory=\"../input/aptos2019-blindness-detection/train_images/\",\n x_col=\"id_code\",\n y_col=\"diagnosis\",\n batch_size=BATCH_SIZE,\n class_mode=\"categorical\",\n target_size=(HEIGHT, WIDTH))\n\nvalid_generator=train_datagen.flow_from_dataframe(\n dataframe=X_val,\n directory=\"../input/aptos2019-blindness-detection/train_images/\",\n x_col=\"id_code\",\n y_col=\"diagnosis\",\n batch_size=BATCH_SIZE,\n class_mode=\"categorical\", \n target_size=(HEIGHT, WIDTH))\n\ntest_datagen = ImageDataGenerator(rescale=1./255)\n\ntest_generator = test_datagen.flow_from_dataframe( \n dataframe=test,\n directory = \"../input/aptos2019-blindness-detection/test_images/\",\n x_col=\"id_code\",\n target_size=(HEIGHT, WIDTH),\n batch_size=1,\n shuffle=False,\n class_mode=None)",
"Found 2746 validated image filenames belonging to 5 classes.\nFound 916 validated image filenames belonging to 5 classes.\nFound 1928 validated image filenames.\n"
]
],
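In the cell above, the augmenting `train_datagen` is reused for the validation generator, so validation images are also randomly rotated, flipped and brightness-shifted; note as well that `zca_whitening=True` generally has no effect unless the generator is first fitted on sample data. Below is a minimal sketch of a separate, rescale-only validation generator, assuming `X_val`, `BATCH_SIZE`, `HEIGHT` and `WIDTH` from the cells above are in scope; using it is a suggestion, not what the original notebook does.

```python
# Sketch: augmentation-free generator for validation, assuming X_val,
# BATCH_SIZE, HEIGHT and WIDTH from the cells above are in scope.
from keras.preprocessing.image import ImageDataGenerator

valid_datagen = ImageDataGenerator(rescale=1./255)  # rescale only, no augmentation

valid_generator = valid_datagen.flow_from_dataframe(
    dataframe=X_val,
    directory="../input/aptos2019-blindness-detection/train_images/",
    x_col="id_code",
    y_col="diagnosis",
    batch_size=BATCH_SIZE,
    class_mode="categorical",
    target_size=(HEIGHT, WIDTH))
```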
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"def create_model(input_shape, n_out):\n input_tensor = Input(shape=input_shape)\n base_model = applications.ResNet50(weights=None, \n include_top=False,\n input_tensor=input_tensor)\n base_model.load_weights('../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')\n\n x = GlobalAveragePooling2D()(base_model.output)\n x = Dropout(0.5)(x)\n x = Dense(2048, activation='relu')(x)\n x = Dropout(0.5)(x)\n final_output = Dense(n_out, activation='softmax', name='final_output')(x)\n model = Model(input_tensor, final_output)\n \n return model",
"_____no_output_____"
],
[
"model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)\n\nfor layer in model.layers:\n layer.trainable = False\n\nfor i in range(-5, 0):\n model.layers[i].trainable = True\n \nclass_weights = class_weight.compute_class_weight('balanced', np.unique(train['diagnosis'].astype('int').values), train['diagnosis'].astype('int').values)\n\nmetric_list = [\"accuracy\", kappa]\noptimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)\nmodel.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)\nmodel.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 512, 512, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 518, 518, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 256, 256, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 256, 256, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 256, 256, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 258, 258, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 128, 128, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 128, 128, 64) 4160 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 128, 128, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_2[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 128, 128, 64) 0 bn2a_branch2b[0][0] \n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_3[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 128, 128, 256 16640 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 128, 128, 256 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 128, 128, 256 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 128, 128, 256 0 add_1[0][0] \n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 128, 128, 64) 16448 
activation_4[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 128, 128, 64) 0 bn2b_branch2a[0][0] \n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_5[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 128, 128, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_6[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 128, 128, 256 0 bn2b_branch2c[0][0] \n activation_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 128, 128, 256 0 add_2[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 128, 128, 64) 16448 activation_7[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 128, 128, 64) 0 bn2c_branch2a[0][0] \n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_8[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 128, 128, 64) 0 bn2c_branch2b[0][0] \n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_9[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 128, 128, 256 0 bn2c_branch2c[0][0] \n activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 128, 128, 256 0 add_3[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 64, 64, 128) 32896 activation_10[0][0] 
\n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 64, 64, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_11[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 64, 64, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_12[0][0] \n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 64, 64, 512) 131584 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 64, 64, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 64, 64, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 64, 64, 512) 0 add_4[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_13[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 64, 64, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_14[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 64, 64, 128) 0 bn3b_branch2b[0][0] \n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_15[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 64, 64, 512) 0 bn3b_branch2c[0][0] \n activation_13[0][0] 
\n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 64, 64, 512) 0 add_5[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_16[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 64, 64, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_17[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 64, 64, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_18[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 64, 64, 512) 0 bn3c_branch2c[0][0] \n activation_16[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 64, 64, 512) 0 add_6[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_19[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 64, 64, 128) 0 bn3d_branch2a[0][0] \n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_20[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 64, 64, 128) 0 bn3d_branch2b[0][0] \n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_21[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 64, 64, 512) 0 bn3d_branch2c[0][0] \n activation_19[0][0] 
\n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 64, 64, 512) 0 add_7[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 32, 32, 256) 131328 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 32, 32, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_23[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 32, 32, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_24[0][0] \n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 32, 32, 1024) 525312 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 32, 32, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 32, 32, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 32, 32, 1024) 0 add_8[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_25[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 32, 32, 256) 0 bn4b_branch2a[0][0] \n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_26[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 32, 32, 256) 0 bn4b_branch2b[0][0] \n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_27[0][0] 
\n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 32, 32, 1024) 0 bn4b_branch2c[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 32, 32, 1024) 0 add_9[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_28[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 32, 32, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_29[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 32, 32, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_30[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 32, 32, 1024) 0 bn4c_branch2c[0][0] \n activation_28[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 32, 32, 1024) 0 add_10[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_31[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 32, 32, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_32[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 32, 32, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_33[0][0] 
\n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_11 (Add) (None, 32, 32, 1024) 0 bn4d_branch2c[0][0] \n activation_31[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 32, 32, 1024) 0 add_11[0][0] \n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_34[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 32, 32, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_35[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 32, 32, 256) 0 bn4e_branch2b[0][0] \n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_36[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 32, 32, 1024) 0 bn4e_branch2c[0][0] \n activation_34[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 32, 32, 1024) 0 add_12[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_37[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 32, 32, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_38[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 32, 32, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_39[0][0] 
\n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 32, 32, 1024) 0 bn4f_branch2c[0][0] \n activation_37[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 32, 32, 1024) 0 add_13[0][0] \n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 16, 16, 512) 524800 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 16, 16, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_41[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 16, 16, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_42[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 16, 16, 2048) 2099200 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 16, 16, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_14 (Add) (None, 16, 16, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 16, 16, 2048) 0 add_14[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_43[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 16, 16, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_44[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 16, 16, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_45[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 16, 16, 2048) 0 bn5b_branch2c[0][0] \n activation_43[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 16, 16, 2048) 0 add_15[0][0] \n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_46[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 16, 16, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_47[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 16, 16, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_48[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_16 (Add) (None, 16, 16, 2048) 0 bn5c_branch2c[0][0] \n activation_46[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 16, 16, 2048) 0 add_16[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_1 (Glo (None, 2048) 0 activation_49[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 2048) 0 global_average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 2048) 4196352 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 2048) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nfinal_output (Dense) (None, 5) 10245 dropout_2[0][0] \n==================================================================================================\nTotal params: 
27,794,309\nTrainable params: 4,206,597\nNon-trainable params: 23,587,712\n__________________________________________________________________________________________________\n"
]
],
[
[
"# Train top layers",
"_____no_output_____"
]
],
[
[
"STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size\nSTEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size\n\nhistory_warmup = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=WARMUP_EPOCHS,\n class_weight=class_weights,\n verbose=1).history",
"Epoch 1/2\n343/343 [==============================] - 564s 2s/step - loss: 1.3429 - acc: 0.6112 - kappa: 0.7074 - val_loss: 1.8074 - val_acc: 0.4890 - val_kappa: 0.2442\nEpoch 2/2\n343/343 [==============================] - 532s 2s/step - loss: 0.8423 - acc: 0.7008 - kappa: 0.8523 - val_loss: 2.3498 - val_acc: 0.4890 - val_kappa: 0.2517\n"
]
],
[
[
"# Fine-tune the complete model",
"_____no_output_____"
]
],
[
[
"for layer in model.layers:\n layer.trainable = True\n\nes = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)\nrlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)\n\ncallback_list = [es, rlrop]\noptimizer = optimizers.Adam(lr=LEARNING_RATE)\nmodel.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)\nmodel.summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 512, 512, 3) 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, 518, 518, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 256, 256, 64) 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 256, 256, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 256, 256, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, 258, 258, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 128, 128, 64) 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 128, 128, 64) 4160 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 128, 128, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_2[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 128, 128, 64) 0 bn2a_branch2b[0][0] \n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_3[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 128, 128, 256 16640 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 128, 128, 256 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 128, 128, 256 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 128, 128, 256 0 add_1[0][0] \n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 128, 128, 64) 16448 
activation_4[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 128, 128, 64) 0 bn2b_branch2a[0][0] \n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_5[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 128, 128, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_6[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 128, 128, 256 0 bn2b_branch2c[0][0] \n activation_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 128, 128, 256 0 add_2[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 128, 128, 64) 16448 activation_7[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 128, 128, 64) 0 bn2c_branch2a[0][0] \n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 128, 128, 64) 36928 activation_8[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 128, 128, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 128, 128, 64) 0 bn2c_branch2b[0][0] \n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 128, 128, 256 16640 activation_9[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 128, 128, 256 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 128, 128, 256 0 bn2c_branch2c[0][0] \n activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 128, 128, 256 0 add_3[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 64, 64, 128) 32896 activation_10[0][0] 
\n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 64, 64, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_11[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 64, 64, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_12[0][0] \n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 64, 64, 512) 131584 activation_10[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 64, 64, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 64, 64, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 64, 64, 512) 0 add_4[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_13[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 64, 64, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_14[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 64, 64, 128) 0 bn3b_branch2b[0][0] \n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_15[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 64, 64, 512) 0 bn3b_branch2c[0][0] \n activation_13[0][0] 
\n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 64, 64, 512) 0 add_5[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_16[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 64, 64, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_17[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 64, 64, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_18[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 64, 64, 512) 0 bn3c_branch2c[0][0] \n activation_16[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 64, 64, 512) 0 add_6[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 64, 64, 128) 65664 activation_19[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 64, 64, 128) 0 bn3d_branch2a[0][0] \n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 64, 64, 128) 147584 activation_20[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 64, 64, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 64, 64, 128) 0 bn3d_branch2b[0][0] \n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 64, 64, 512) 66048 activation_21[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 64, 64, 512) 2048 res3d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 64, 64, 512) 0 bn3d_branch2c[0][0] \n activation_19[0][0] 
\n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 64, 64, 512) 0 add_7[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 32, 32, 256) 131328 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 32, 32, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_23[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 32, 32, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_24[0][0] \n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 32, 32, 1024) 525312 activation_22[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 32, 32, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 32, 32, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 32, 32, 1024) 0 add_8[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_25[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 32, 32, 256) 0 bn4b_branch2a[0][0] \n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_26[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 32, 32, 256) 0 bn4b_branch2b[0][0] \n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_27[0][0] 
\n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_9 (Add) (None, 32, 32, 1024) 0 bn4b_branch2c[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 32, 32, 1024) 0 add_9[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_28[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 32, 32, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_29[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 32, 32, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_30[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_10 (Add) (None, 32, 32, 1024) 0 bn4c_branch2c[0][0] \n activation_28[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 32, 32, 1024) 0 add_10[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_31[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 32, 32, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_32[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 32, 32, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_33[0][0] 
\n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_11 (Add) (None, 32, 32, 1024) 0 bn4d_branch2c[0][0] \n activation_31[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 32, 32, 1024) 0 add_11[0][0] \n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_34[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 32, 32, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_35[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 32, 32, 256) 0 bn4e_branch2b[0][0] \n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_36[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_12 (Add) (None, 32, 32, 1024) 0 bn4e_branch2c[0][0] \n activation_34[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 32, 32, 1024) 0 add_12[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 32, 32, 256) 262400 activation_37[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 32, 32, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 32, 32, 256) 590080 activation_38[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 32, 32, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 32, 32, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 32, 32, 1024) 263168 activation_39[0][0] 
\n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 32, 32, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_13 (Add) (None, 32, 32, 1024) 0 bn4f_branch2c[0][0] \n activation_37[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 32, 32, 1024) 0 add_13[0][0] \n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 16, 16, 512) 524800 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 16, 16, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_41[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 16, 16, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_42[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 16, 16, 2048) 2099200 activation_40[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 16, 16, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_14 (Add) (None, 16, 16, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 16, 16, 2048) 0 add_14[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_43[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 16, 16, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_44[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5b_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 16, 16, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_45[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_15 (Add) (None, 16, 16, 2048) 0 bn5b_branch2c[0][0] \n activation_43[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 16, 16, 2048) 0 add_15[0][0] \n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 16, 16, 512) 1049088 activation_46[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 16, 16, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 16, 16, 512) 2359808 activation_47[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 16, 16, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 16, 16, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 16, 16, 2048) 1050624 activation_48[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 16, 16, 2048) 8192 res5c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_16 (Add) (None, 16, 16, 2048) 0 bn5c_branch2c[0][0] \n activation_46[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 16, 16, 2048) 0 add_16[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_1 (Glo (None, 2048) 0 activation_49[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 2048) 0 global_average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 2048) 4196352 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 2048) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nfinal_output (Dense) (None, 5) 10245 dropout_2[0][0] \n==================================================================================================\nTotal params: 
27,794,309\nTrainable params: 27,741,189\nNon-trainable params: 53,120\n__________________________________________________________________________________________________\n"
],
[
"history_finetunning = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=EPOCHS,\n callbacks=callback_list,\n class_weight=class_weights,\n verbose=1).history",
"Epoch 1/30\n343/343 [==============================] - 567s 2s/step - loss: 0.7127 - acc: 0.7340 - kappa: 0.8797 - val_loss: 0.9326 - val_acc: 0.6927 - val_kappa: 0.8542\nEpoch 2/30\n343/343 [==============================] - 549s 2s/step - loss: 0.6119 - acc: 0.7693 - kappa: 0.9201 - val_loss: 0.4930 - val_acc: 0.8095 - val_kappa: 0.9478\nEpoch 3/30\n343/343 [==============================] - 549s 2s/step - loss: 0.5501 - acc: 0.7915 - kappa: 0.9390 - val_loss: 0.6268 - val_acc: 0.7335 - val_kappa: 0.8974\nEpoch 4/30\n343/343 [==============================] - 547s 2s/step - loss: 0.5339 - acc: 0.7937 - kappa: 0.9403 - val_loss: 0.6106 - val_acc: 0.7621 - val_kappa: 0.9342\nEpoch 5/30\n343/343 [==============================] - 544s 2s/step - loss: 0.5076 - acc: 0.8109 - kappa: 0.9487 - val_loss: 0.5792 - val_acc: 0.7797 - val_kappa: 0.9184\n\nEpoch 00005: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.\nEpoch 6/30\n343/343 [==============================] - 544s 2s/step - loss: 0.4532 - acc: 0.8254 - kappa: 0.9557 - val_loss: 0.4168 - val_acc: 0.8502 - val_kappa: 0.9576\nEpoch 7/30\n343/343 [==============================] - 547s 2s/step - loss: 0.4337 - acc: 0.8331 - kappa: 0.9593 - val_loss: 0.4258 - val_acc: 0.8447 - val_kappa: 0.9537\nEpoch 8/30\n343/343 [==============================] - 544s 2s/step - loss: 0.4126 - acc: 0.8469 - kappa: 0.9644 - val_loss: 0.4385 - val_acc: 0.8425 - val_kappa: 0.9597\nEpoch 9/30\n343/343 [==============================] - 549s 2s/step - loss: 0.3963 - acc: 0.8437 - kappa: 0.9615 - val_loss: 0.4241 - val_acc: 0.8458 - val_kappa: 0.9615\n\nEpoch 00009: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.\nEpoch 10/30\n343/343 [==============================] - 548s 2s/step - loss: 0.3610 - acc: 0.8637 - kappa: 0.9725 - val_loss: 0.3777 - val_acc: 0.8623 - val_kappa: 0.9705\nEpoch 11/30\n343/343 [==============================] - 551s 2s/step - loss: 0.3568 - acc: 0.8637 - kappa: 0.9712 - val_loss: 0.4111 - val_acc: 0.8480 - val_kappa: 0.9640\nEpoch 12/30\n343/343 [==============================] - 551s 2s/step - loss: 0.3377 - acc: 0.8688 - kappa: 0.9740 - val_loss: 0.4150 - val_acc: 0.8491 - val_kappa: 0.9636\nEpoch 13/30\n343/343 [==============================] - 550s 2s/step - loss: 0.3335 - acc: 0.8776 - kappa: 0.9751 - val_loss: 0.4253 - val_acc: 0.8612 - val_kappa: 0.9674\n\nEpoch 00013: ReduceLROnPlateau reducing learning rate to 1.249999968422344e-05.\nEpoch 14/30\n343/343 [==============================] - 549s 2s/step - loss: 0.3144 - acc: 0.8808 - kappa: 0.9775 - val_loss: 0.3882 - val_acc: 0.8656 - val_kappa: 0.9687\nEpoch 15/30\n343/343 [==============================] - 549s 2s/step - loss: 0.3115 - acc: 0.8827 - kappa: 0.9779 - val_loss: 0.3922 - val_acc: 0.8689 - val_kappa: 0.9705\nRestoring model weights from the end of the best epoch\nEpoch 00015: early stopping\n"
]
],
[
[
"# Model loss graph ",
"_____no_output_____"
]
],
[
[
"history = {'loss': history_warmup['loss'] + history_finetunning['loss'], \n 'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'], \n 'acc': history_warmup['acc'] + history_finetunning['acc'], \n 'val_acc': history_warmup['val_acc'] + history_finetunning['val_acc'], \n 'kappa': history_warmup['kappa'] + history_finetunning['kappa'], \n 'val_kappa': history_warmup['val_kappa'] + history_finetunning['val_kappa']}\n\nsns.set_style(\"whitegrid\")\nfig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col', figsize=(20, 18))\n\nax1.plot(history['loss'], label='Train loss')\nax1.plot(history['val_loss'], label='Validation loss')\nax1.legend(loc='best')\nax1.set_title('Loss')\n\nax2.plot(history['acc'], label='Train accuracy')\nax2.plot(history['val_acc'], label='Validation accuracy')\nax2.legend(loc='best')\nax2.set_title('Accuracy')\n\nax3.plot(history['kappa'], label='Train kappa')\nax3.plot(history['val_kappa'], label='Validation kappa')\nax3.legend(loc='best')\nax3.set_title('Kappa')\n\nplt.xlabel('Epochs')\nsns.despine()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Model Evaluation",
"_____no_output_____"
]
],
[
[
"lastFullTrainPred = np.empty((0, N_CLASSES))\nlastFullTrainLabels = np.empty((0, N_CLASSES))\nlastFullValPred = np.empty((0, N_CLASSES))\nlastFullValLabels = np.empty((0, N_CLASSES))\n\nfor i in range(STEP_SIZE_TRAIN+1):\n im, lbl = next(train_generator)\n scores = model.predict(im, batch_size=train_generator.batch_size)\n lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)\n lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)\n\nfor i in range(STEP_SIZE_VALID+1):\n im, lbl = next(valid_generator)\n scores = model.predict(im, batch_size=valid_generator.batch_size)\n lastFullValPred = np.append(lastFullValPred, scores, axis=0)\n lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)",
"_____no_output_____"
]
],
[
[
"# Threshold optimization",
"_____no_output_____"
]
],
[
[
"def find_best_fixed_threshold(preds, targs, do_plot=True):\n best_thr_list = [0 for i in range(preds.shape[1])]\n for index in reversed(range(1, preds.shape[1])):\n score = []\n thrs = np.arange(0, 1, 0.01)\n for thr in thrs:\n preds_thr = [index if x[index] > thr else np.argmax(x) for x in preds]\n score.append(cohen_kappa_score(targs, preds_thr))\n score = np.array(score)\n pm = score.argmax()\n best_thr, best_score = thrs[pm], score[pm].item()\n best_thr_list[index] = best_thr\n print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')\n if do_plot:\n plt.plot(thrs, score)\n plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())\n plt.text(best_thr+0.03, best_score-0.01, ('Kappa[%s]=%.3f'%(index, best_score)), fontsize=14);\n plt.show()\n return best_thr_list\n\nlastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))\nlastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))\ncomplete_labels = [np.argmax(label) for label in lastFullComLabels]\n\nthreshold_list = find_best_fixed_threshold(lastFullComPred, complete_labels, do_plot=True)\nthreshold_list[0] = 0 # In last instance assign label 0\n\ntrain_preds = [np.argmax(pred) for pred in lastFullTrainPred]\ntrain_labels = [np.argmax(label) for label in lastFullTrainLabels]\nvalidation_preds = [np.argmax(pred) for pred in lastFullValPred]\nvalidation_labels = [np.argmax(label) for label in lastFullValLabels]\n\ntrain_preds_opt = [0 for i in range(lastFullTrainPred.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(lastFullTrainPred):\n if pred[idx] > thr:\n train_preds_opt[idx2] = idx\n\nvalidation_preds_opt = [0 for i in range(lastFullValPred.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(lastFullValPred):\n if pred[idx] > thr:\n validation_preds_opt[idx2] = idx",
"thr=0.390 F2=0.812\n"
]
],
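A note on how the per-class thresholds found above are applied: the nested loops walk the classes in index order, so a later (higher-severity) class whose score clears its threshold overrides earlier ones, and class 0 acts as the fallback. A small self-contained sketch of the same rule, using made-up scores and thresholds rather than the notebook's variables:

```python
import numpy as np

# Hypothetical class scores for 3 samples over 5 classes (not the notebook's real outputs).
preds = np.array([
    [0.50, 0.20, 0.15, 0.10, 0.05],
    [0.30, 0.25, 0.25, 0.15, 0.05],
    [0.10, 0.15, 0.20, 0.25, 0.30],
])
# Hypothetical per-class thresholds; index 0 stays at 0 so class 0 acts as the fallback label.
threshold_list = [0.0, 0.22, 0.18, 0.20, 0.25]

labels = np.zeros(len(preds), dtype=int)
for idx, thr in enumerate(threshold_list):
    # Samples whose score for class `idx` clears its threshold are relabelled to `idx`;
    # later classes override earlier ones, mirroring the notebook's nested loops.
    labels[preds[:, idx] > thr] = idx

print(labels)  # -> [0 2 4] for these made-up numbers
```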
[
[
"## Confusion Matrix",
"_____no_output_____"
]
],
[
[
"fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))\nlabels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']\ntrain_cnf_matrix = confusion_matrix(train_labels, train_preds)\nvalidation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)\n\ntrain_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]\nvalidation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]\n\ntrain_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)\nvalidation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)\n\nsns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap=\"Blues\",ax=ax1).set_title('Train')\nsns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=\"Blues\",ax=ax2).set_title('Validation')\nplt.show()",
"_____no_output_____"
],
[
"fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))\nlabels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']\ntrain_cnf_matrix = confusion_matrix(train_labels, train_preds_opt)\nvalidation_cnf_matrix = confusion_matrix(validation_labels, validation_preds_opt)\n\ntrain_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]\nvalidation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]\n\ntrain_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)\nvalidation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)\n\nsns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap=\"Blues\",ax=ax1).set_title('Train optimized')\nsns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=\"Blues\",ax=ax2).set_title('Validation optimized')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Quadratic Weighted Kappa",
"_____no_output_____"
]
],
[
[
"print(\"Train Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))\nprint(\"Validation Cohen Kappa score: %.3f\" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))\nprint(\"Complete set Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds+validation_preds, train_labels+validation_labels, weights='quadratic'))\nprint(\"Train optimized Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds_opt, train_labels, weights='quadratic'))\nprint(\"Validation optimized Cohen Kappa score: %.3f\" % cohen_kappa_score(validation_preds_opt, validation_labels, weights='quadratic'))\nprint(\"Complete optimized set Cohen Kappa score: %.3f\" % cohen_kappa_score(train_preds_opt+validation_preds_opt, train_labels+validation_labels, weights='quadratic'))",
"Train Cohen Kappa score: 0.930\nValidation Cohen Kappa score: 0.906\nComplete set Cohen Kappa score: 0.924\nTrain optimized Cohen Kappa score: 0.915\nValidation optimized Cohen Kappa score: 0.896\nComplete optimized set Cohen Kappa score: 0.910\n"
]
],
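For readers unfamiliar with the metric reported above: quadratic weighting penalises a prediction in proportion to the squared distance between the predicted and true grade, so adjacent-grade confusions cost far less than distant ones. A short sketch of the standard weight matrix (illustrative, not notebook code):

```python
import numpy as np

N = 5  # number of diagnosis grades (0-4)
# Standard quadratic-weight matrix: w[i, j] = (i - j)^2 / (N - 1)^2
w = np.array([[(i - j) ** 2 / (N - 1) ** 2 for j in range(N)] for i in range(N)])
print(np.round(w, 3))
# Confusing grade 0 with grade 1 costs 0.0625, while grade 0 vs. grade 4 costs 1.0,
# so the metric rewards models whose errors stay close to the true severity.
```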
[
[
"# Apply model to test set and output predictions",
"_____no_output_____"
]
],
[
[
"test_generator.reset()\nSTEP_SIZE_TEST = test_generator.n//test_generator.batch_size\npreds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)\npredictions = [np.argmax(pred) for pred in preds]\n\npredictions_opt = [0 for i in range(preds.shape[0])]\nfor idx, thr in enumerate(threshold_list):\n for idx2, pred in enumerate(preds):\n if pred[idx] > thr:\n predictions_opt[idx2] = idx\n\nfilenames = test_generator.filenames\nresults = pd.DataFrame({'id_code':filenames, 'diagnosis':predictions})\nresults['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])\n\nresults_opt = pd.DataFrame({'id_code':filenames, 'diagnosis':predictions_opt})\nresults_opt['id_code'] = results_opt['id_code'].map(lambda x: str(x)[:-4])",
"_____no_output_____"
]
],
[
[
"# Predictions class distribution",
"_____no_output_____"
]
],
[
[
"fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 8.7))\nsns.countplot(x=\"diagnosis\", data=results, palette=\"GnBu_d\", ax=ax1)\nsns.countplot(x=\"diagnosis\", data=results_opt, palette=\"GnBu_d\", ax=ax2)\nsns.despine()\nplt.show()",
"_____no_output_____"
],
[
"val_kappa = cohen_kappa_score(validation_preds, validation_labels, weights='quadratic')\nval_opt_kappa = cohen_kappa_score(validation_preds_opt, validation_labels, weights='quadratic')\nif val_kappa > val_opt_kappa:\n results_name = 'submission.csv'\n results_opt_name = 'submission_opt.csv'\nelse:\n results_name = 'submission_norm.csv'\n results_opt_name = 'submission.csv'\n\nresults.to_csv(results_name, index=False)\nresults.head(10)",
"_____no_output_____"
],
[
"results_opt.to_csv(results_opt_name, index=False)\nresults_opt.head(10)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d098a251b0068c02df0c806c916ce34eb12f57cc | 37,515 | ipynb | Jupyter Notebook | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco | 8e00cc397f322ea6ae5a793c10fe788446f4fb85 | [
"MIT"
] | 25 | 2018-06-27T14:35:11.000Z | 2020-12-06T13:16:41.000Z | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco | 8e00cc397f322ea6ae5a793c10fe788446f4fb85 | [
"MIT"
] | null | null | null | application/part_3/CausalMF-questions.ipynb | vishalbelsare/intro-to-reco | 8e00cc397f322ea6ae5a793c10fe788446f4fb85 | [
"MIT"
] | 13 | 2018-07-02T12:29:34.000Z | 2020-03-17T22:57:13.000Z | 47.850765 | 1,062 | 0.58358 | [
[
[
"# Task: Predict User Item response under uniform exposure while learning from biased training data\n\nMany current applications use recommendations in order to modify the natural user behavior, such as to increase the number of sales or the time spent on a website. This results in a gap between the final recommendation objective and the classical setup where recommendation candidates are evaluated by their coherence with past user behavior, by predicting either the missing entries in the user-item matrix, or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome versus the organic user behavior. We show this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes from a biased recommendation policy and predicts recommendation outcomes according to random exposure. We compare our method against state-of-the-art factorization methods and new approaches of causal recommendation and show significant improvements.\n\n\n# Dataset\n\n**MovieLens 100k dataset** was collected by the GroupLens Research Project at the University of Minnesota.\n \nThis data set consists of:\n\t* 100,000 ratings (1-5) from 943 users on 1682 movies. \n\t* Each user has rated at least 20 movies. \n\nThe data was collected through the MovieLens web site (movielens.umn.edu) during the seven-month period from September 19th, 1997 through April 22nd, 1998.\n\n\n\n# Solution:\n\n**Causal Matrix Factorization** - for more details see: https://arxiv.org/abs/1706.07639\n\n\n\n\n# Metrics:\n\n### * MSE - Mean Squared Error\n### * NLL - Negative Log Likelihood\n### * AUC - Area Under the Curve\n\n\n-----------------------------\n-----------------------------\n\n\n\n# Questions:\n\n\n### Q1: Add the definition for create_counterfactual_regularizer() method\n### Q2: Compare the results of using variable values for cf_pen hyperparameter (0 vs. bigger)\n### Q3: Compare different types of optimizers\n### Q4: Push the performance as high as possible!",
"_____no_output_____"
]
],
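The three metrics listed above can also be computed offline from a vector of binary labels and predicted probabilities; the notebook itself computes them inside the TensorFlow graph, but a minimal scikit-learn sketch with made-up values shows what each one measures:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, log_loss, roc_auc_score

# Hypothetical click labels and predicted probabilities, just to illustrate the three calls.
y_true = np.array([1, 0, 0, 1, 0, 1])
y_prob = np.array([0.8, 0.3, 0.1, 0.6, 0.4, 0.9])

print("MSE :", mean_squared_error(y_true, y_prob))  # squared error against the 0/1 labels
print("NLL :", log_loss(y_true, y_prob))            # negative log-likelihood (log loss)
print("AUC :", roc_auc_score(y_true, y_prob))       # ranking quality, 0.5 = random
```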
[
[
"%%javascript\nIPython.OutputArea.prototype._should_scroll = function(lines) {\n return false;\n}",
"_____no_output_____"
],
[
"import os\nimport string\nimport tempfile\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport csv\nimport random\n\nimport tensorflow as tf\nfrom tensorflow.contrib.tensorboard.plugins import projector\nfrom tensorboard import summary as summary_lib\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\ntf.set_random_seed(42)\n\ntf.logging.set_verbosity(tf.logging.INFO)\nprint(tf.__version__)",
"_____no_output_____"
],
[
"# Hyper-Parameters\nflags = tf.app.flags \n\ntf.app.flags.DEFINE_string('f', '', 'kernel')\n\nflags.DEFINE_string('data_set', 'user_prod_dict.skew.', 'Dataset string.') # Reg Skew\nflags.DEFINE_string('adapt_stat', 'adapt_2i', 'Adapt String.') # Adaptation strategy\nflags.DEFINE_string('model_name', 'cp2v', 'Name of the model for saving.')\nflags.DEFINE_float('learning_rate', 1.0, 'Initial learning rate.')\nflags.DEFINE_integer('num_epochs', 1, 'Number of epochs to train.')\nflags.DEFINE_integer('num_steps', 100, 'Number of steps after which to test.')\nflags.DEFINE_integer('embedding_size', 100, 'Size of each embedding vector.')\nflags.DEFINE_integer('batch_size', 512, 'How big is a batch of training.')\nflags.DEFINE_float('cf_pen', 10.0, 'Counterfactual regularizer hyperparam.')\nflags.DEFINE_float('l2_pen', 0.0, 'L2 regularizer hyperparam.')\nflags.DEFINE_string('cf_loss', 'l1', 'Use L1 or L2 for the loss .')\nFLAGS = tf.app.flags.FLAGS\n",
"_____no_output_____"
],
[
"#_DATA_PATH = \"/Users/f.vasile/MyFolders/MyProjects/1.MyPapers/2018_Q2_DS3_Course/code/cp2v/src/Data/\"\n_DATA_PATH = \"./data/\"\n\ntrain_data_set_location = _DATA_PATH + FLAGS.data_set + \"train.\" + FLAGS.adapt_stat + \".csv\" # Location of train dataset\ntest_data_set_location = _DATA_PATH + FLAGS.data_set + \"test.\" + FLAGS.adapt_stat + \".csv\" # Location of the test dataset\nvalidation_test_set_location = _DATA_PATH + FLAGS.data_set + \"valid_test.\" + FLAGS.adapt_stat + \".csv\" # Location of the validation dataset\nvalidation_train_set_location = _DATA_PATH + FLAGS.data_set + \"valid_train.\" + FLAGS.adapt_stat + \".csv\" #Location of the validation dataset\nmodel_name = FLAGS.model_name + \".ckpt\"\n\nprint(train_data_set_location)\n\n\ndef calculate_vocab_size(file_location):\n \"\"\"Calculate the total number of unique elements in the dataset\"\"\"\n\n with open(file_location, 'r') as csvfile:\n reader = csv.reader(csvfile, delimiter=',')\n useridtemp = []\n productid = []\n for row in reader:\n useridtemp.append(row[0])\n productid.append(row[1])\n\n userid_size = len(set(useridtemp))\n productid_size = len(set(productid))\n\n return userid_size, productid_size\n\n\nuserid_size, productid_size = calculate_vocab_size(train_data_set_location) # Calculate the total number of unique elements in the dataset\n\nprint(str(userid_size))\nprint(str(productid_size))\n\nplot_gradients = False # Plot the gradients\ncost_val = []\ntf.set_random_seed(42)",
"_____no_output_____"
],
[
"def load_train_dataset(dataset_location, batch_size, num_epochs):\n \"\"\"Load the training data using TF Dataset API\"\"\"\n\n with tf.name_scope('train_dataset_loading'):\n\n record_defaults = [[1], [1], [0.]] # Sets the type of the resulting tensors and default values\n # Dataset is in the format - UserID ProductID Rating\n dataset = tf.data.TextLineDataset(dataset_location).map(lambda line: tf.decode_csv(line, record_defaults=record_defaults))\n dataset = dataset.shuffle(buffer_size=10000)\n dataset = dataset.batch(batch_size)\n dataset = dataset.cache()\n dataset = dataset.repeat(num_epochs)\n iterator = dataset.make_one_shot_iterator()\n user_batch, product_batch, label_batch = iterator.get_next()\n label_batch = tf.expand_dims(label_batch, 1)\n\n return user_batch, product_batch, label_batch\n\n\ndef load_test_dataset(dataset_location):\n \"\"\"Load the test and validation datasets\"\"\"\n\n user_list = []\n product_list = []\n labels = []\n\n with open(dataset_location, 'r') as f:\n reader = csv.reader(f)\n for row in reader:\n user_list.append(row[0])\n product_list.append(row[1])\n labels.append(row[2])\n\n labels = np.reshape(labels, [-1, 1])\n cr = compute_empirical_cr(labels)\n\n return user_list, product_list, labels, cr\n\n\ndef compute_2i_regularization_id(prods, num_products):\n \"\"\"Compute the ID for the regularization for the 2i approach\"\"\"\n\n reg_ids = []\n # Loop through batch and compute if the product ID is greater than the number of products\n for x in np.nditer(prods):\n if x >= num_products:\n reg_ids.append(x)\n elif x < num_products:\n reg_ids.append(x + num_products) # Add number of products to create the 2i representation \n\n return np.asarray(reg_ids)\n\n\ndef generate_bootstrap_batch(seed, data_set_size):\n \"\"\"Generate the IDs for the bootstap\"\"\"\n\n random.seed(seed)\n ids = [random.randint(0, data_set_size-1) for j in range(int(data_set_size*0.8))]\n\n return ids\n\n\ndef compute_empirical_cr(labels):\n \"\"\"Compute the cr from the empirical data\"\"\"\n\n labels = labels.astype(np.float)\n clicks = np.count_nonzero(labels)\n views = len(np.where(labels==0)[0])\n cr = float(clicks)/float(views)\n\n return cr\n\n\ndef create_average_predictor_tensors(label_list_placeholder, logits_placeholder):\n \"\"\"Create the tensors required to run the averate predictor for the bootstraps\"\"\"\n\n with tf.device('/cpu:0'):\n \n with tf.variable_scope('ap_logits'):\n ap_logits = tf.reshape(logits_placeholder, [tf.shape(label_list_placeholder)[0], 1])\n\n with tf.name_scope('ap_losses'):\n \n ap_mse_loss = tf.losses.mean_squared_error(labels=label_list_placeholder, predictions=ap_logits)\n ap_log_loss = tf.losses.log_loss(labels=label_list_placeholder, predictions=ap_logits)\n\n with tf.name_scope('ap_metrics'):\n # Add performance metrics to the tensorflow graph\n ap_correct_predictions = tf.equal(tf.round(ap_logits), label_list_placeholder)\n ap_accuracy = tf.reduce_mean(tf.cast(ap_correct_predictions, tf.float32))\n\n return ap_mse_loss, ap_log_loss\n\ndef compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss):\n \"\"\"Compute the bootstraps for the 2i model\"\"\"\n \n data_set_size = len(test_user_batch)\n mse = []\n llh = []\n ap_mse = []\n ap_llh = []\n auc_list = []\n mse_diff = []\n llh_diff = []\n\n # Compute the bootstrap values for the test split - this compute the empirical CR as well for comparision\n for i in range(30):\n\n ids = 
generate_bootstrap_batch(i*2, data_set_size)\n test_user_batch = np.asarray(test_user_batch)\n test_product_batch = np.asarray(test_product_batch)\n test_label_batch = np.asarray(test_label_batch)\n\n # Reset the running variables used for the AUC\n sess.run(running_vars_initializer)\n\n # Construct the feed-dict for the model and the average predictor\n feed_dict = {model.user_list_placeholder : test_user_batch[ids], model.product_list_placeholder: test_product_batch[ids], model.label_list_placeholder: test_label_batch[ids], model.logits_placeholder: test_logits[ids], model.reg_list_placeholder: test_product_batch[ids]}\n\n # Run the model test step updating the AUC object\n _, loss_val, mse_loss_val, log_loss_val = sess.run([model.auc_update_op, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict)\n auc_score = sess.run(model.auc, feed_dict=feed_dict)\n\n # Run the Average Predictor graph\n ap_mse_val, ap_log_val = sess.run([ap_mse_loss, ap_log_loss], feed_dict=feed_dict)\n\n mse.append(mse_loss_val)\n llh.append(log_loss_val)\n ap_mse.append(ap_mse_val)\n ap_llh.append(ap_log_val)\n auc_list.append(auc_score)\n\n for i in range(30):\n mse_diff.append((ap_mse[i]-mse[i]) / ap_mse[i])\n llh_diff.append((ap_llh[i]-llh[i]) / ap_llh[i])\n\n print(\"MSE Mean Score On The Bootstrap = \", np.mean(mse))\n print(\"MSE Mean Lift Over Average Predictor (%) = \", np.round(np.mean(mse_diff)*100, decimals=2))\n print(\"MSE STD (%) =\" , np.round(np.std(mse_diff)*100, decimals=2))\n\n print(\"LLH Mean Over Average Predictor (%) =\", np.round(np.mean(llh_diff)*100, decimals=2))\n print(\"LLH STD (%) = \", np.round(np.std(llh_diff)*100, decimals=2))\n\n print(\"Mean AUC Score On The Bootstrap = \", np.round(np.mean(auc_list), decimals=4), \"+/-\", np.round(np.std(auc_list), decimals=4))",
"_____no_output_____"
]
],
[
[
"### About Supervised Prod2vec \n\n- Class to define MF of the implicit feedback matrix (1/0/unk) of Users x Products\n\n- When called it creates the TF graph for the associated NN:\n\nStep1: self.create_placeholders() => Creates the input placeholders\n\nStep2: self.build_graph() => Creates the 3 layers: \n - the user embedding layer\n - the product embedding layer \n - the output prediction layer\n\nStep3: self.create_losses() => Defines the loss function for prediction\n\nStep4: self.add_optimizer() => Defines the optimizer\n\nStep5: self.add_performance_metrics() => Defines the logging performance metrics ???\n\nStep6: self.add_summaries() => Defines the final performance stats\n\n ",
"_____no_output_____"
]
],
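In equation form, the prediction built by `build_graph()` in the class defined in the next cell is a scaled dot product plus biases passed through a sigmoid. A tiny NumPy sketch of the same scoring rule, with illustrative (not learned) values:

```python
import numpy as np

def predict_score(user_vec, prod_vec, user_bias, prod_bias, global_bias, alpha):
    """Same scoring rule as SupervisedProd2vec: sigmoid(alpha * <u, v> + b_u + b_i + b)."""
    logit = alpha * np.dot(user_vec, prod_vec) + user_bias + prod_bias + global_bias
    return 1.0 / (1.0 + np.exp(-logit))

# Illustrative numbers only, not trained parameters.
u = np.array([0.1, -0.2, 0.3])
v = np.array([0.2, 0.1, -0.1])
print(predict_score(u, v, user_bias=0.05, prod_bias=-0.1, global_bias=0.0, alpha=1.0))
```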
[
[
"class SupervisedProd2vec():\n def __init__(self, userid_size, productid_size, embedding_size, l2_pen, learning_rate):\n\n self.userid_size = userid_size\n self.productid_size = productid_size\n self.embedding_size = embedding_size\n self.l2_pen = l2_pen\n self.learning_rate = learning_rate\n\n # Build the graph\n self.create_placeholders()\n self.build_graph()\n self.create_losses()\n self.add_optimizer()\n self.add_performance_metrics()\n self.add_summaries()\n \n def create_placeholders(self):\n \"\"\"Create the placeholders to be used \"\"\"\n \n self.user_list_placeholder = tf.placeholder(tf.int32, [None], name=\"user_list_placeholder\")\n self.product_list_placeholder = tf.placeholder(tf.int32, [None], name=\"product_list_placeholder\")\n self.label_list_placeholder = tf.placeholder(tf.float32, [None, 1], name=\"label_list_placeholder\")\n\n # logits placeholder used to store the test CR for the bootstrapping process\n self.logits_placeholder = tf.placeholder(tf.float32, [None], name=\"logits_placeholder\")\n\n \n def build_graph(self):\n \"\"\"Build the main tensorflow graph with embedding layers\"\"\"\n\n with tf.name_scope('embedding_layer'):\n\n # User matrix and current batch\n self.user_embeddings = tf.get_variable(\"user_embeddings\", shape=[self.userid_size, self.embedding_size], initializer=tf.contrib.layers.xavier_initializer(), trainable=True)\n self.user_embed = tf.nn.embedding_lookup(self.user_embeddings, self.user_list_placeholder) # Lookup the Users for the given batch\n self.user_b = tf.Variable(tf.zeros([self.userid_size]), name='user_b', trainable=True)\n self.user_bias_embed = tf.nn.embedding_lookup(self.user_b, self.user_list_placeholder)\n\n # Product embedding\n self.product_embeddings = tf.get_variable(\"product_embeddings\", shape=[self.productid_size, self.embedding_size], initializer=tf.contrib.layers.xavier_initializer(), trainable=True)\n self.product_embed = tf.nn.embedding_lookup(self.product_embeddings, self.product_list_placeholder) # Lookup the embeddings2 for the given batch\n self.prod_b = tf.Variable(tf.zeros([self.productid_size]), name='prod_b', trainable=True)\n self.prod_bias_embed = tf.nn.embedding_lookup(self.prod_b, self.product_list_placeholder)\n\n with tf.variable_scope('logits'):\n\n self.b = tf.get_variable('b', [1], initializer=tf.constant_initializer(0.0, dtype=tf.float32), trainable=True)\n self.alpha = tf.get_variable('alpha', [], initializer=tf.constant_initializer(0.00000001, dtype=tf.float32), trainable=True)\n \n #alpha * (<user_i, prod_j> \n self.emb_logits = self.alpha * tf.reshape(tf.reduce_sum(tf.multiply(self.user_embed, self.product_embed), 1), [tf.shape(self.user_list_placeholder)[0], 1])\n \n #prod_bias + user_bias + global_bias\n self.logits = tf.reshape(tf.add(self.prod_bias_embed, self.user_bias_embed), [tf.shape(self.user_list_placeholder)[0], 1]) + self.b\n \n self.logits = self.emb_logits + self.logits\n\n self.prediction = tf.sigmoid(self.logits, name='sigmoid_prediction')\n\n \n def create_losses(self):\n \"\"\"Create the losses\"\"\"\n\n with tf.name_scope('losses'):\n #Sigmoid loss between the logits and labels\n self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))\n \n #Adding the regularizer term on user vct and prod vct\n self.loss = self.loss + self.l2_pen * tf.nn.l2_loss(self.user_embeddings) + self.l2_pen * tf.nn.l2_loss(self.product_embeddings) + self.l2_pen * tf.nn.l2_loss(self.prod_b) + self.l2_pen * tf.nn.l2_loss(self.user_b)\n\n 
#Compute MSE loss\n self.mse_loss = tf.losses.mean_squared_error(labels=self.label_list_placeholder, predictions=tf.sigmoid(self.logits))\n \n #Compute Log loss\n self.log_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))\n \n \n def add_optimizer(self):\n \"\"\"Add the required optimiser to the graph\"\"\"\n\n with tf.name_scope('optimizer'):\n # Global step variable to keep track of the number of training steps\n self.global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step') \n self.apply_grads = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss, global_step=self.global_step)\n\n \n def add_performance_metrics(self):\n \"\"\"Add the required performance metrics to the graph\"\"\"\n \n with tf.name_scope('performance_metrics'):\n # Add performance metrics to the tensorflow graph\n correct_predictions = tf.equal(tf.round(self.prediction), self.label_list_placeholder)\n self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name=\"accuracy\")\n self.auc, self.auc_update_op = tf.metrics.auc(labels=self.label_list_placeholder, predictions=self.prediction, num_thresholds=1000, name=\"auc_metric\")\n\n \n def add_summaries(self):\n \"\"\"Add the required summaries to the graph\"\"\"\n\n with tf.name_scope('summaries'):\n # Add loss to the summaries\n tf.summary.scalar('total_loss', self.loss)\n tf.summary.histogram('histogram_total_loss', self.loss)\n\n # Add weights to the summaries\n tf.summary.histogram('user_embedding_weights', self.user_embeddings)\n tf.summary.histogram('product_embedding_weights', self.product_embeddings)\n tf.summary.histogram('logits', self.logits)\n tf.summary.histogram('prod_b', self.prod_b)\n tf.summary.histogram('user_b', self.user_b)\n tf.summary.histogram('global_bias', self.b)\n\n tf.summary.scalar('alpha', self.alpha)\n",
"_____no_output_____"
]
],
[
[
"### CausalProd2Vec2i - inherits from SupervisedProd2vec\n\n- Class to define the causal version of MF of the implicit feedback matrix (1/0/unk) of Users x Products\n\n- When called it creates the TF graph for the associated NN:\n\n**Step1: Changed: +regularizer placeholder** self.create_placeholders() => Creates the input placeholders \n\n**Step2:** self.build_graph() => Creates the 3 layers: \n - the user embedding layer\n - the product embedding layer \n - the output prediction layer\n\n**New:**\n\n self.create_control_embeddings()\n self.create_counter_factual_loss()\n\n\n**Step3: Changed: +add regularizer between embeddings** self.create_losses() => Defines the loss function for prediction\n\n**Step4:** self.add_optimizer() => Defines the optimizer\n\n**Step5:** self.add_performance_metrics() => Defines the logging performance metrics ???\n\n**Step6:** self.add_summaries() => Defines the final performance stats\n\n",
"_____no_output_____"
]
],
[
[
"class CausalProd2Vec2i(SupervisedProd2vec):\n def __init__(self, userid_size, productid_size, embedding_size, l2_pen, learning_rate, cf_pen, cf='l1'):\n\n self.userid_size = userid_size\n self.productid_size = productid_size * 2 # Doubled to accommodate the treatment embeddings \n self.embedding_size = embedding_size\n self.l2_pen = l2_pen\n self.learning_rate = learning_rate\n self.cf_pen = cf_pen\n self.cf = cf\n\n # Build the graph\n self.create_placeholders()\n self.build_graph()\n self.create_control_embeddings()\n #self.create_counterfactual_regularizer()\n self.create_losses()\n self.add_optimizer()\n self.add_performance_metrics()\n self.add_summaries()\n \n def create_placeholders(self):\n \"\"\"Create the placeholders to be used \"\"\"\n \n self.user_list_placeholder = tf.placeholder(tf.int32, [None], name=\"user_list_placeholder\")\n self.product_list_placeholder = tf.placeholder(tf.int32, [None], name=\"product_list_placeholder\")\n self.label_list_placeholder = tf.placeholder(tf.float32, [None, 1], name=\"label_list_placeholder\")\n self.reg_list_placeholder = tf.placeholder(tf.int32, [None], name=\"reg_list_placeholder\")\n\n # logits placeholder used to store the test CR for the bootstrapping process\n self.logits_placeholder = tf.placeholder(tf.float32, [None], name=\"logits_placeholder\")\n\n \n def create_control_embeddings(self):\n \"\"\"Create the control embeddings\"\"\"\n\n with tf.name_scope('control_embedding'):\n # Get the control embedding at id 0\n self.control_embed = tf.stop_gradient(tf.nn.embedding_lookup(self.product_embeddings, self.reg_list_placeholder))\n \n\n #################################\n ## SOLUTION TO Q1 GOES HERE! ##\n #################################\n #def create_counterfactual_regularizer(self):\n \n # self.cf_reg\n \n \n \n def create_losses(self):\n \"\"\"Create the losses\"\"\"\n\n with tf.name_scope('losses'):\n #Sigmoid loss between the logits and labels\n self.log_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.label_list_placeholder))\n \n #Adding the regularizer term on user vct and prod vct and their bias terms\n reg_term = self.l2_pen * ( tf.nn.l2_loss(self.user_embeddings) + tf.nn.l2_loss(self.product_embeddings) )\n reg_term_biases = self.l2_pen * ( tf.nn.l2_loss(self.prod_b) + tf.nn.l2_loss(self.user_b) )\n self.loss = self.log_loss + reg_term + reg_term_biases\n \n #Adding the counterfactual regualizer term\n # Q1: Write the method that computes the counterfactual regularizer\n #self.create_counterfactual_regularizer()\n #self.loss = self.loss + (self.cf_pen * self.cf_reg)\n\n #Compute addtionally the MSE loss\n self.mse_loss = tf.losses.mean_squared_error(labels=self.label_list_placeholder, predictions=tf.sigmoid(self.logits))\n \n",
"_____no_output_____"
]
],
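Q1 asks for the body of `create_counterfactual_regularizer()` in the class just above. One possible shape of the answer — a sketch only, written in the same TF 1.x graph-mode style as the notebook, that ties each product's treatment embedding to its gradient-stopped control counterpart (`self.control_embed`) with an L1 or L2 distance. It reuses only attributes the class already defines, and it is not the official solution to the exercise:

```python
# A possible completion of the commented-out Q1 method in CausalProd2Vec2i (a sketch, not the official answer).
def create_counterfactual_regularizer(self):
    """Penalise the distance between treatment product embeddings and their
    control counterparts (self.control_embed is already wrapped in stop_gradient)."""
    with tf.name_scope('counterfactual_regularizer'):
        diff = self.product_embed - self.control_embed
        if self.cf == 'l1':
            self.cf_reg = tf.reduce_sum(tf.abs(diff))
        else:  # 'l2'
            self.cf_reg = tf.nn.l2_loss(diff)
```

Wiring it in would then amount to un-commenting the `self.create_counterfactual_regularizer()` call in `__init__` and the `self.loss = self.loss + (self.cf_pen * self.cf_reg)` line in `create_losses()`. Recall that `reg_list_placeholder` is fed by `compute_2i_regularization_id`, which maps each product ID to its counterpart representation in the doubled embedding table, so `product_embed` and `control_embed` refer to the two views of the same product.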
[
[
"### Create the TF Graph",
"_____no_output_____"
]
],
[
[
"# Create graph object\ngraph = tf.Graph()\nwith graph.as_default():\n\n with tf.device('/cpu:0'):\n # Load the required graph\n\n ### Number of products and users\n productid_size = 1683\n userid_size = 944\n\n model = CausalProd2Vec2i(userid_size, productid_size+1, FLAGS.embedding_size, FLAGS.l2_pen, FLAGS.learning_rate, FLAGS.cf_pen, cf=FLAGS.cf_loss)\n\n ap_mse_loss, ap_log_loss = create_average_predictor_tensors(model.label_list_placeholder, model.logits_placeholder)\n \n # Define initializer to initialize/reset running variables\n running_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope=\"performance_metrics/auc_metric\")\n running_vars_initializer = tf.variables_initializer(var_list=running_vars)\n\n # Get train data batch from queue\n next_batch = load_train_dataset(train_data_set_location, FLAGS.batch_size, FLAGS.num_epochs)\n test_user_batch, test_product_batch, test_label_batch, test_cr = load_test_dataset(test_data_set_location)\n val_test_user_batch, val_test_product_batch, val_test_label_batch, val_cr = load_test_dataset(validation_test_set_location)\n val_train_user_batch, val_train_product_batch, val_train_label_batch, val_cr = load_test_dataset(validation_train_set_location)\n\n # create the empirical CR test logits \n test_logits = np.empty(len(test_label_batch))\n test_logits.fill(test_cr)\n",
"_____no_output_____"
]
],
[
[
"### Launch the Session: Train the model",
"_____no_output_____"
]
],
[
[
"# Launch the Session\nwith tf.Session(graph=graph, config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)) as sess:\n\n # initialise all the TF variables\n init_op = tf.global_variables_initializer()\n sess.run(init_op)\n\n # Setup tensorboard: tensorboard --logdir=/tmp/tensorboard\n time_tb = str(time.ctime(int(time.time())))\n train_writer = tf.summary.FileWriter('/tmp/tensorboard' + '/train' + time_tb, sess.graph)\n test_writer = tf.summary.FileWriter('/tmp/tensorboard' + '/test' + time_tb, sess.graph)\n merged = tf.summary.merge_all()\n\n # Embeddings viz (Possible to add labels for embeddings later)\n saver = tf.train.Saver()\n config = projector.ProjectorConfig()\n embedding = config.embeddings.add()\n embedding.tensor_name = model.product_embeddings.name\n projector.visualize_embeddings(train_writer, config)\n\n # Variables used in the training loop\n t = time.time()\n step = 0\n average_loss = 0\n average_mse_loss = 0\n average_log_loss = 0\n\n # Start the training loop---------------------------------------------------------------------------------------------\n print(\"Starting Training On Causal Prod2Vec\")\n print(FLAGS.cf_loss)\n print(\"Num Epochs = \", FLAGS.num_epochs)\n print(\"Learning Rate = \", FLAGS.learning_rate)\n print(\"L2 Reg = \", FLAGS.l2_pen)\n print(\"CF Reg = \", FLAGS.cf_pen)\n\n try:\n while True:\n # Run the TRAIN for this step batch ---------------------------------------------------------------------\n # Construct the feed_dict\n user_batch, product_batch, label_batch = sess.run(next_batch)\n # Treatment is the small set of samples from St, Control is the larger set of samples from Sc\n reg_ids = compute_2i_regularization_id(product_batch, productid_size) # Compute the product ID's for regularization\n feed_dict = {model.user_list_placeholder : user_batch, model.product_list_placeholder: product_batch, model.reg_list_placeholder: reg_ids, model.label_list_placeholder: label_batch}\n \n # Run the graph\n _, sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([model.apply_grads, merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict)\n\n step +=1\n average_loss += loss_val\n average_mse_loss += mse_loss_val\n average_log_loss += log_loss_val\n\n # Every num_steps print average loss\n if step % FLAGS.num_steps == 0:\n if step > FLAGS.num_steps:\n # The average loss is an estimate of the loss over the last set batches.\n average_loss /= FLAGS.num_steps\n average_mse_loss /= FLAGS.num_steps\n average_log_loss /= FLAGS.num_steps\n print(\"Average Training Loss on S_c (FULL, MSE, NLL) at step \", step, \": \", average_loss, \": \", average_mse_loss, \": \", average_log_loss, \"Time taken (S) = \" + str(round(time.time() - t, 1)))\n\n average_loss = 0\n t = time.time() # reset the time\n train_writer.add_summary(sum_str, step) # Write the summary\n\n # Run the VALIDATION for this step batch ---------------------------------------------------------------------\n val_train_product_batch = np.asarray(val_train_product_batch, dtype=np.float32)\n val_test_product_batch = np.asarray(val_test_product_batch, dtype=np.float32)\n vaL_train_reg_ids = compute_2i_regularization_id(val_train_product_batch, productid_size) # Compute the product ID's for regularization\n vaL_test_reg_ids = compute_2i_regularization_id(val_test_product_batch, productid_size) # Compute the product ID's for regularization\n feed_dict_test = {model.user_list_placeholder : val_test_user_batch, model.product_list_placeholder: val_test_product_batch, 
model.reg_list_placeholder: vaL_test_reg_ids, model.label_list_placeholder: val_test_label_batch}\n feed_dict_train = {model.user_list_placeholder : val_train_user_batch, model.product_list_placeholder: val_train_product_batch, model.reg_list_placeholder: vaL_train_reg_ids, model.label_list_placeholder: val_train_label_batch}\n \n sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict_train)\n print(\"Validation loss on S_c (FULL, MSE, NLL) at step \", step, \": \", loss_val, \": \", mse_loss_val, \": \", log_loss_val)\n \n sum_str, loss_val, mse_loss_val, log_loss_val = sess.run([merged, model.loss, model.mse_loss, model.log_loss], feed_dict=feed_dict_test)\n cost_val.append(loss_val)\n print(\"Validation loss on S_t(FULL, MSE, NLL) at step \", step, \": \", loss_val, \": \", mse_loss_val, \": \", log_loss_val)\n \n print(\"####################################################################################################################\") \n\n test_writer.add_summary(sum_str, step) # Write the summary\n \n except tf.errors.OutOfRangeError:\n print(\"Reached the number of epochs\")\n\n finally:\n saver.save(sess, os.path.join('/tmp/tensorboard', model_name), model.global_step) # Save model\n\n train_writer.close()\n print(\"Training Complete\")\n\n # Run the bootstrap for this model ---------------------------------------------------------------------------------------------------------------\n print(\"Begin Bootstrap process...\")\n print(\"Running BootStrap On The Control Representations\")\n compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss)\n\n print(\"Running BootStrap On The Treatment Representations\")\n test_product_batch = [int(x) + productid_size for x in test_product_batch]\n compute_bootstraps_2i(sess, model, test_user_batch, test_product_batch, test_label_batch, test_logits, running_vars_initializer, ap_mse_loss, ap_log_loss)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d098af59246922be945d142b37dbd3225cc53886 | 28,321 | ipynb | Jupyter Notebook | notebooks/basketball-shooting-practice.ipynb | callysto/interesting-problems | f90c21f7533e105d94fa5bf35f2a82cd81ca4edc | [
"CC-BY-4.0"
] | null | null | null | notebooks/basketball-shooting-practice.ipynb | callysto/interesting-problems | f90c21f7533e105d94fa5bf35f2a82cd81ca4edc | [
"CC-BY-4.0"
] | null | null | null | notebooks/basketball-shooting-practice.ipynb | callysto/interesting-problems | f90c21f7533e105d94fa5bf35f2a82cd81ca4edc | [
"CC-BY-4.0"
] | null | null | null | 74.528947 | 17,764 | 0.741535 | [
[
[
"\n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Finteresting-problems&branch=main&subPath=notebooks/basketball-shooting-practice.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>",
"_____no_output_____"
],
[
"# Basketball Shooting Practice\n\n[Watch on YouTube](https://www.youtube.com/watch?v=Tm2ruZQLcqE&list=PL-j7ku2URmjZYtWzMCS4AqFS5SXPXRHwf)\n\nWhen Nick shoots a basketball, he either sinks the shot or misses. For each shot\nNick sinks, he is given 5 points by his father. For each missed shot, Nick’s Dad\ntakes 2 points away.\n\nNick attempts a total of 28 shots and ends up with zero points (i.e. he breaks\neven). How many shots did Nick sink?\n\nfrom https://www.cemc.uwaterloo.ca/resources/potw/2019-20/English/POTWB-19-NN-01-P.pdf",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nshots = 28\nfinal_score = 0\nshots_dataframe = pd.DataFrame(columns=['Sunk', 'Missed', 'Points'])\n\nfor sunk in range(0,shots+1):\n missed = shots - sunk\n points = sunk * 5 - missed * 2\n shots_dataframe = shots_dataframe.append({'Sunk':sunk, 'Missed':missed, 'Points':points}, ignore_index=True)\n if points == final_score:\n print('Nick ends up with', final_score, 'points if he sinks', sunk, 'shots.')\n\nshots_dataframe",
"Nick ends up with 0 points if he sinks 8 shots.\n"
],
[
"%matplotlib inline\nshots_dataframe.plot()",
"_____no_output_____"
]
],
[
[
"[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d098c80c034a5823adf7e84029692d8969539ab6 | 39,178 | ipynb | Jupyter Notebook | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial | baa827af20ea4e788da45545ce99a578f03ebb91 | [
"Apache-2.0"
] | 2 | 2021-03-16T11:51:31.000Z | 2021-03-16T11:51:36.000Z | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial | baa827af20ea4e788da45545ce99a578f03ebb91 | [
"Apache-2.0"
] | null | null | null | ray-rllib/01-Introduction-to-Reinforcement-Learning.ipynb | alexy/ray-qiskit-tutorial | baa827af20ea4e788da45545ce99a578f03ebb91 | [
"Apache-2.0"
] | 1 | 2021-06-15T23:37:34.000Z | 2021-06-15T23:37:34.000Z | 39.573737 | 653 | 0.635663 | [
[
[
"# Ray RLlib - Introduction to Reinforcement Learning\n\n© 2019-2021, Anyscale. All Rights Reserved\n\n\n\n_Reinforcement Learning_ is the category of machine learning that focuses on training one or more _agents_ to achieve maximal _rewards_ while operating in an environment. This lesson discusses the core concepts of RL, while subsequent lessons explore RLlib in depth. We'll use two examples with exercises to give you a taste of RL. If you already understand RL concepts, you can either skim this lesson or skip to the [next lesson](02-Introduction-to-RLlib.ipynb).",
"_____no_output_____"
],
[
"## What Is Reinforcement Learning?\n\nLet's explore the basic concepts of RL, specifically the _Markov Decision Process_ abstraction, and to show its use in Python.\n\nConsider the following image:\n\n\n\nIn RL, one or more **agents** interact with an **environment** to maximize a **reward**. The agents make **observations** about the **state** of the environment and take **actions** that are believed will maximize the long-term reward. However, at any particular moment, the agents can only observe the immediate reward. So, the training process usually involves lots and lot of replay of the game, the robot simulator traversing a virtual space, etc., so the agents can learn from repeated trials what decisions/actions work best to maximize the long-term, cumulative reward.\n\nThe trail and error search and delayed reward are the distinguishing characterists of RL vs. other ML methods ([Sutton 2018](06-RL-References.ipynb#Books)).\n\nThe way to formalize trial and error is the **exploitation vs. exploration tradeoff**. When an agent finds what appears to be a \"rewarding\" sequence of actions, the agent may naturally want to continue to **exploit** these actions. However, even better actions may exist. An agent won't know whether alternatives are better or not unless some percentage of actions taken **explore** the alternatives. So, all RL algorithms include a strategy for exploitation and exploration.",
"_____no_output_____"
],
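[
"A tiny illustration of the exploration vs. exploitation tradeoff is the _epsilon-greedy_ rule: with probability epsilon the agent explores a random action, otherwise it exploits the action it currently believes is best. The sketch below is for illustration only and is not part of the lesson's code; `estimated_values` is a made-up stand-in for whatever value estimates an agent keeps.\n\n```python\nimport numpy as np\n\ndef epsilon_greedy(estimated_values, epsilon=0.1):\n    # Explore with probability epsilon, otherwise exploit the best-known action.\n    if np.random.random() < epsilon:\n        return np.random.randint(len(estimated_values))  # explore\n    return int(np.argmax(estimated_values))              # exploit\n\nprint(epsilon_greedy(np.array([0.2, 0.5])))\n```",
"_____no_output_____"
],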
[
"## RL Applications\n\nRL has many potential applications. RL became \"famous\" due to these successes, including achieving expert game play, training robots, autonomous vehicles, and other simulated agents:\n\n\n\n\n\n\n\n\n\n\nCredits:\n* [AlphaGo](https://www.youtube.com/watch?v=l7ngy56GY6k)\n* [Breakout](https://towardsdatascience.com/tutorial-double-deep-q-learning-with-dueling-network-architectures-4c1b3fb7f756) ([paper](https://arxiv.org/abs/1312.5602))\n* [Stacking Legos with Sawyer](https://robohub.org/soft-actor-critic-deep-reinforcement-learning-with-real-world-robots/)\n* [Walking Man](https://openai.com/blog/openai-baselines-ppo/)\n* [Autonomous Vehicle](https://www.daimler.com/innovation/case/autonomous/intelligent-drive-2.html)\n* [\"Cassie\": Two-legged Robot](https://mime.oregonstate.edu/research/drl/robots/cassie/) (Uses Ray!)",
"_____no_output_____"
],
[
"Recently other industry applications have emerged, include the following:\n\n* **Process optimization:** industrial processes (factories, pipelines) and other business processes, routing problems, cluster optimization.\n* **Ad serving and recommendations:** Some of the traditional methods, including _collaborative filtering_, are hard to scale for very large data sets. RL systems are being developed to do an effective job more efficiently than traditional methods.\n* **Finance:** Markets are time-oriented _environments_ where automated trading systems are the _agents_. ",
"_____no_output_____"
],
[
"## Markov Decision Processes\n\nAt its core, Reinforcement learning builds on the concepts of [Markov Decision Process (MDP)](https://en.wikipedia.org/wiki/Markov_decision_process), where the current state, the possible actions that can be taken, and overall goal are the building blocks.\n\nAn MDP models sequential interactions with an external environment. It consists of the following:\n\n- a **state space** where the current state of the system is sometimes called the **context**.\n- a set of **actions** that can be taken at a particular state $s$ (or sometimes the same set for all states).\n- a **transition function** that describes the probability of being in a state $s'$ at time $t+1$ given that the MDP was in state $s$ at time $t$ and action $a$ was taken. The next state is selected stochastically based on these probabilities.\n- a **reward function**, which determines the reward received at time $t$ following action $a$, based on the decision of **policy** $\\pi$.",
"_____no_output_____"
],
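[
"To make these pieces concrete, here is a toy MDP written as plain Python dictionaries. It is an illustrative sketch only; the two states, the transition probabilities, and the rewards are invented and are not used anywhere else in this lesson.\n\n```python\n# transitions[state][action] is a list of (next_state, probability) pairs.\ntransitions = {\n    's0': {'a0': [('s0', 0.7), ('s1', 0.3)], 'a1': [('s1', 1.0)]},\n    's1': {'a0': [('s0', 0.4), ('s1', 0.6)], 'a1': [('s1', 1.0)]},\n}\n# rewards[state][action] is the immediate reward for taking that action.\nrewards = {'s0': {'a0': 0.0, 'a1': 1.0}, 's1': {'a0': 5.0, 'a1': 0.0}}\n# A (deterministic) policy simply maps each state to an action.\npolicy = {'s0': 'a1', 's1': 'a0'}\n```",
"_____no_output_____"
],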
[
"The goal of MDP is to develop a **policy** $\\pi$ that specifies what action $a$ should be chosen for a given state $s$ so that the cumulative reward is maximized. When it is possible for the policy \"trainer\" to fully observe all the possible states, actions, and rewards, it can define a deterministic policy, fixing a single action choice for each state. In this scenario, the transition probabilities reduce to the probability of transitioning to state $s'$ given the current state is $s$, independent of actions, because the state now leads to a deterministic action choice. Various algorithms can be used to compute this policy. \n\nPut another way, if the policy isn't deterministic, then the transition probability to state $s'$ at a time $t+1$ when action $a$ is taken for state $s$ at time $t$, is given by:\n\n\\begin{equation}\nP_a(s',s) = P(s_{t+1} = s'|s_t=s,a)\n\\end{equation}\n\nWhen the policy is deterministic, this transition probability reduces to the following, independent of $a$:\n\n\\begin{equation}\nP(s',s) = P(s_{t+1} = s'|s_t=s)\n\\end{equation}\n\nTo be clear, a deterministic policy means that one and only one action will always be selected for a given state $s$, but the next state $s'$ will still be selected stochastically.\n\nIn the general case of RL, it isn't possible to fully know all this information, some of which might be hidden and evolving, so it isn't possible to specify a fully-deterministic policy.",
"_____no_output_____"
],
[
"Often this cumulative reward is computed using the **discounted sum** over all rewards observed:\n\n\\begin{equation}\n\\arg\\max_{\\pi} \\sum_{t=1}^T \\gamma^t R_t(\\pi),\n\\end{equation}\n\nwhere $T$ is the number of steps taken in the MDP (this is a random variable and may depend on $\\pi$), $R_t$ is the reward received at time $t$ (also a random variable which depends on $\\pi$), and $\\gamma$ is the **discount factor**. The value of $\\gamma$ is between 0 and 1, meaning it has the effect of \"discounting\" earlier rewards vs. more recent rewards. \n\nThe [Wikipedia page on MDP](https://en.wikipedia.org/wiki/Markov_decision_process) provides more details. Note what we said in the third bullet, that the new state only depends on the previous state and the action taken. The assumption is that we can simplify our effort by ignoring all the previous states except the last one and still achieve good results. This is known as the [Markov property](https://en.wikipedia.org/wiki/Markov_property). This assumption often works well and it greatly reduces the resources required.",
"_____no_output_____"
],
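[
"As a small worked example of the discounted sum, the helper below computes the discounted return of a list of rewards for a given discount factor (for simplicity the sum is indexed from $t=0$). This is an illustrative sketch only.\n\n```python\ndef discounted_return(rewards, gamma=0.99):\n    # Sum of gamma**t * r_t over the episode.\n    return sum((gamma ** t) * r for t, r in enumerate(rewards))\n\nprint(discounted_return([1, 1, 1], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71\n```",
"_____no_output_____"
],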
[
"## The Elements of RL\n\nHere are the elements of RL that expand on MDP concepts (see [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more details):\n\n#### Policies\n\nUnlike MDP, the **transition function** probabilities are often not known in advance, but must be learned. Learning is done through repeated \"play\", where the agent interacts with the environment.\n\nThis makes the **policy** $\\pi$ harder to determine. Because the fully state space usually can't be fully known, the choice of action $a$ for given state $s$ almostly always remains a stochastic choice, never deterministic, unlike MDP.\n\n#### Reward Signal\n\nThe idea of a **reward signal** encapsulates the desired goal for the system and provides feedback for updating the policy based on how well particular events or actions contribute rewards towards the goal.\n\n#### Value Function\n\nThe **value function** encapsulates the maximum cumulative reward likely to be achieved starting from a given state for an **episode**. This is harder to determine than the simple reward returned after taking an action. In fact, much of the research in RL over the decades has focused on finding better and more efficient implementations of value functions. To illustrate the challenge, repeatedly taking one sequence of actions may yield low rewards for a while, but eventually provide large rewards. Conversely, always choosing a different sequence of actions may yield a good reward at each step, but be suboptimal for the cumulative reward.\n\n#### Episode\n\nA sequence of steps by the agent starting in an initial state. At each step, the agent observes the current state, chooses the next action, and receives the new reward. Episodes are used for both training policies and replaying with an existing policy (called _rollout_).\n\n#### Model\n\nAn optional feature, some RL algorithms develop or use a **model** of the environment to anticipate the resulting states and rewards for future actions. Hence, they are useful for _planning_ scenarios. Methods for solving RL problems that use models are called _model-based methods_, while methods that learn by trial and error are called _model-free methods_.",
"_____no_output_____"
],
[
"## Reinforcement Learning Example\n\nLet's finish this introduction let's learn about the popular \"hello world\" (1) example environment for RL, balancing a pole vertically on a moving cart, called `CartPole`. Then we'll see how to use RLlib to train a policy using a popular RL algorithm, _Proximal Policy Optimization_, again using `CartPole`.\n\n(1) In books and tutorials on programming languages, it is a tradition that the very first program shown prints the message \"Hello World!\".",
"_____no_output_____"
],
[
"### CartPole and OpenAI\n\nThe popular [OpenAI \"gym\" environment](https://gym.openai.com/) provides MDP interfaces to a variety of simulated environments. Perhaps the most popular for learning RL is `CartPole`, a simple environment that simulates the physics of balancing a pole on a moving cart. The `CartPole` problem is described at https://gym.openai.com/envs/CartPole-v1. Here is an image from that website, where the pole is currently falling to the right, which means the cart will need to move to the right to restore balance:\n\n",
"_____no_output_____"
],
[
"This example fits into the MDP framework as follows:\n- The **state** consists of the position and velocity of the cart (moving in one dimension from left to right) as well as the angle and angular velocity of the pole that is balancing on the cart.\n- The **actions** are to decrease or increase the cart's velocity by one unit. A negative velocity means it is moving to the left.\n- The **transition function** is deterministic and is determined by simulating physical laws. Specifically, for a given **state**, what should we choose as the next velocity value? In the RL context, the correct velocity value to choose has to be learned. Hence, we learn a _policy_ that approximates the optimal transition function that could be calculated from the laws of physics.\n- The **reward function** is a constant 1 as long as the pole is upright, and 0 once the pole has fallen over. Therefore, maximizing the reward means balancing the pole for as long as possible.\n- The **discount factor** in this case can be taken to be 1, meaning we treat the rewards at all time steps equally and don't discount any of them.\n\nMore information about the `gym` Python module is available at https://gym.openai.com/. The list of all the available Gym environments is in [this wiki page](https://github.com/openai/gym/wiki/Table-of-environments). We'll use a few more of them and even create our own in subsequent lessons.",
"_____no_output_____"
]
],
[
[
"import gym\nimport numpy as np\nimport pandas as pd\nimport json",
"_____no_output_____"
]
],
[
[
"The code below illustrates how to create and manipulate MDPs in Python. An MDP can be created by calling `gym.make`. Gym environments are identified by names like `CartPole-v1`. A **catalog of built-in environments** can be found at https://gym.openai.com/envs.",
"_____no_output_____"
]
],
[
[
"env = gym.make(\"CartPole-v1\")\nprint(\"Created env:\", env)",
"_____no_output_____"
]
],
[
[
"Reset the state of the MDP by calling `env.reset()`. This call returns the initial state of the MDP.",
"_____no_output_____"
]
],
[
[
"state = env.reset()\nprint(\"The starting state is:\", state)",
"_____no_output_____"
]
],
[
[
"Recall that the state is the position of the cart, its velocity, the angle of the pole, and the angular velocity of the pole.",
"_____no_output_____"
],
[
"The `env.step` method takes an action. In the case of the `CartPole` environment, the appropriate actions are 0 or 1, for pushing the cart to the left or right, respectively. `env.step()` returns a tuple of four things:\n1. the new state of the environment\n2. a reward\n3. a boolean indicating whether the simulation has finished\n4. a dictionary of miscellaneous extra information\n\nLet's show what happens if we take one step with an action of 0.",
"_____no_output_____"
]
],
[
[
"action = 0\nstate, reward, done, info = env.step(action)\nprint(state, reward, done, info)",
"_____no_output_____"
]
],
[
[
"A **rollout** is a simulation of a policy in an environment. It is used both during training and when running simulations with a trained policy. \n\nThe code below performs a rollout in a given environment. It takes **random actions** until the simulation has finished and returns the cumulative reward.",
"_____no_output_____"
]
],
[
[
"def random_rollout(env):\n state = env.reset()\n \n done = False\n cumulative_reward = 0\n\n # Keep looping as long as the simulation has not finished.\n while not done:\n # Choose a random action (either 0 or 1).\n action = np.random.choice([0, 1])\n \n # Take the action in the environment.\n state, reward, done, _ = env.step(action)\n \n # Update the cumulative reward.\n cumulative_reward += reward\n \n # Return the cumulative reward.\n return cumulative_reward ",
"_____no_output_____"
]
],
[
[
"Try rerunning the following cell a few times. How much do the answers change? Note that the maximum possible reward for `CartPole-v1` is 500. You'll probably get numbers well under 500.",
"_____no_output_____"
]
],
[
[
"reward = random_rollout(env)\nprint(reward)\n\nreward = random_rollout(env)\nprint(reward)",
"_____no_output_____"
]
],
[
[
"### Exercise 1\n\nChoosing actions at random in `random_rollout` is not a very effective policy, as the previous results showed. Finish implementing the `rollout_policy` function below, which takes an environment *and* a policy. Recall that the *policy* is a function that takes in a *state* and returns an *action*. The main difference is that instead of choosing a **random action**, like we just did (with poor results), the action should be chosen **with the policy** (as a function of the state).\n\n> **Note:** Exercise solutions for this tutorial can be found [here](solutions/Ray-RLlib-Solutions.ipynb).",
"_____no_output_____"
]
],
[
[
"def rollout_policy(env, policy):\n state = env.reset()\n \n done = False\n cumulative_reward = 0\n\n # EXERCISE: Fill out this function by copying the appropriate part of 'random_rollout'\n # and modifying it to choose the action using the policy.\n raise NotImplementedError\n\n # Return the cumulative reward.\n return cumulative_reward\n\ndef sample_policy1(state):\n return 0 if state[0] < 0 else 1\n\ndef sample_policy2(state):\n return 1 if state[0] < 0 else 0\n\nreward1 = np.mean([rollout_policy(env, sample_policy1) for _ in range(100)])\nreward2 = np.mean([rollout_policy(env, sample_policy2) for _ in range(100)])\n\nprint('The first sample policy got an average reward of {}.'.format(reward1))\nprint('The second sample policy got an average reward of {}.'.format(reward2))\n\nassert 5 < reward1 < 15, ('Make sure that rollout_policy computes the action '\n 'by applying the policy to the state.')\nassert 25 < reward2 < 35, ('Make sure that rollout_policy computes the action '\n 'by applying the policy to the state.')",
"_____no_output_____"
]
],
[
[
"We'll return to `CartPole` in lesson [01: Application Cart Pole](explore-rllib/01-Application-Cart-Pole.ipynb) in the `explore-rllib` section.",
"_____no_output_____"
],
[
"### RLlib Reinforcement Learning Example: Cart Pole with Proximal Policy Optimization\n\nThis section demonstrates how to use the _proximal policy optimization_ (PPO) algorithm implemented by [RLlib](http://rllib.io). PPO is a popular way to develop a policy. RLlib also uses [Ray Tune](http://tune.io), the Ray Hyperparameter Tuning framework, which is covered in the [Ray Tune Tutorial](../ray-tune/00-Ray-Tune-Overview.ipynb).\n\nWe'll provide relatively little explanation of **RLlib** concepts for now, but explore them in greater depth in subsequent lessons. For more on RLlib, see the documentation at http://rllib.io.",
"_____no_output_____"
],
[
"PPO is described in detail in [this paper](https://arxiv.org/abs/1707.06347). It is a variant of _Trust Region Policy Optimization_ (TRPO) described in [this earlier paper](https://arxiv.org/abs/1502.05477). [This OpenAI post](https://openai.com/blog/openai-baselines-ppo/) provides a more accessible introduction to PPO.\n\nPPO works in two phases. In the first phase, a large number of rollouts are performed in parallel. The rollouts are then aggregated on the driver and a surrogate optimization objective is defined based on those rollouts. In the second phase, we use SGD (_stochastic gradient descent_) to find the policy that maximizes that objective with a penalty term for diverging too much from the current policy.\n\n\n\n> **NOTE:** The SGD optimization step is best performed in a data-parallel manner over multiple GPUs. This is exposed through the `num_gpus` field of the `config` dictionary. Hence, for normal usage, one or more GPUs is recommended.\n\n(The original version of this example can be found [here](https://raw.githubusercontent.com/ucbrise/risecamp/risecamp2018/ray/tutorial/rllib_exercises/)).",
"_____no_output_____"
]
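,
[
"As a rough illustration of the surrogate objective, the sketch below writes out the _clipped_ form from the PPO paper in plain NumPy. It is a simplified, standalone example and not RLlib's actual implementation; the log-probability and advantage arrays are assumed to come from the collected rollouts.\n\n```python\nimport numpy as np\n\ndef ppo_clip_objective(new_logp, old_logp, advantages, clip=0.2):\n    # ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities\n    ratio = np.exp(new_logp - old_logp)\n    unclipped = ratio * advantages\n    clipped = np.clip(ratio, 1 - clip, 1 + clip) * advantages\n    # PPO maximizes the mean of the element-wise minimum of the two terms\n    return np.mean(np.minimum(unclipped, clipped))\n```",
"_____no_output_____"
]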
],
[
[
"import ray\nfrom ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG\nfrom ray.tune.logger import pretty_print",
"_____no_output_____"
]
],
[
[
"Initialize Ray. If you are running these tutorials on your laptop, then a single-node Ray cluster will be started by the next cell. If you are running in the Anyscale platform, it will connect to the running Ray cluster.",
"_____no_output_____"
]
],
[
[
"info = ray.init(ignore_reinit_error=True, log_to_driver=False)\nprint(info)",
"_____no_output_____"
]
],
[
[
"> **Tip:** Having trouble starting Ray? See the [Troubleshooting](../reference/Troubleshooting-Tips-Tricks.ipynb) tips.",
"_____no_output_____"
],
[
"The next cell prints the URL for the Ray Dashboard. **This is only correct if you are running this tutorial on a laptop.** Click the link to open the dashboard.\n\nIf you are running on the Anyscale platform, use the URL provided by your instructor to open the Dashboard.",
"_____no_output_____"
]
],
[
[
"print(\"Dashboard URL: http://{}\".format(info[\"webui_url\"]))",
"_____no_output_____"
]
],
[
[
"Instantiate a PPOTrainer object. We pass in a config object that specifies how the network and training procedure should be configured. Some of the parameters are the following.\n\n- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used. In a cluster, these actors will be spread over the available nodes.\n- `num_sgd_iter` is the number of epochs of SGD (stochastic gradient descent, i.e., passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO, for each _minibatch_ (\"chunk\") of training data. Using minibatches is more efficient than training with one record at a time.\n- `sgd_minibatch_size` is the SGD minibatch size (batches of data) that will be used to optimize the PPO surrogate objective.\n- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers. Here, we have two hidden layers of size 100, each.\n- `num_cpus_per_worker` when set to 0 prevents Ray from pinning a CPU core to each worker, which means we could run out of workers in a constrained environment like a laptop or a cloud VM.",
"_____no_output_____"
]
],
[
[
"config = DEFAULT_CONFIG.copy()\nconfig['num_workers'] = 1\nconfig['num_sgd_iter'] = 30\nconfig['sgd_minibatch_size'] = 128\nconfig['model']['fcnet_hiddens'] = [100, 100]\nconfig['num_cpus_per_worker'] = 0 ",
"_____no_output_____"
],
[
"agent = PPOTrainer(config, 'CartPole-v1')",
"_____no_output_____"
]
],
[
[
"Now let's train the policy on the `CartPole-v1` environment for `N` steps. The JSON object returned by each call to `agent.train()` contains a lot of information we'll inspect below. For now, we'll extract information we'll graph, such as `episode_reward_mean`. The _mean_ values are more useful for determining successful training.",
"_____no_output_____"
]
],
[
[
"N = 10\nresults = []\nepisode_data = []\nepisode_json = []\n\nfor n in range(N):\n result = agent.train()\n results.append(result)\n \n episode = {'n': n, \n 'episode_reward_min': result['episode_reward_min'], \n 'episode_reward_mean': result['episode_reward_mean'], \n 'episode_reward_max': result['episode_reward_max'], \n 'episode_len_mean': result['episode_len_mean']} \n \n episode_data.append(episode)\n episode_json.append(json.dumps(episode))\n \n print(f'{n:3d}: Min/Mean/Max reward: {result[\"episode_reward_min\"]:8.4f}/{result[\"episode_reward_mean\"]:8.4f}/{result[\"episode_reward_max\"]:8.4f}')",
"_____no_output_____"
]
],
[
[
"Now let's convert the episode data to a Pandas `DataFrame` for easy manipulation. The results indicate how much reward the policy is receiving (`episode_reward_*`) and how many time steps of the environment the policy ran (`episode_len_mean`). The maximum possible reward for this problem is `500`. The reward mean and trajectory length are very close because the agent receives a reward of one for every time step that it survives. However, this is specific to this environment and not true in general.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(data=episode_data)\ndf",
"_____no_output_____"
],
[
"df.columns.tolist()",
"_____no_output_____"
]
],
[
[
"Let's plot the data. Since the length and reward means are equal, we'll only plot one line:",
"_____no_output_____"
]
],
[
[
"df.plot(x=\"n\", y=[\"episode_reward_mean\", \"episode_reward_min\", \"episode_reward_max\"], secondary_y=True)",
"_____no_output_____"
]
],
[
[
"The model is quickly able to hit the maximum value of 500, but the mean is what's most valuable. After 10 steps, we're more than half way there.",
"_____no_output_____"
],
[
"FYI, here are two views of the whole value for one result. First, a \"pretty print\" output.\n\n> **Tip:** The output will be long. When this happens for a cell, right click and select _Enable scrolling for outputs_.",
"_____no_output_____"
]
],
[
[
"print(pretty_print(results[-1]))",
"_____no_output_____"
]
],
[
[
"We'll learn about more of these values as continue the tutorial.\n\nThe whole, long JSON blob, which includes the historical stats about episode rewards and lengths:",
"_____no_output_____"
]
],
[
[
"results[-1]",
"_____no_output_____"
]
],
[
[
"Let's plot the `episode_reward` values:",
"_____no_output_____"
]
],
[
[
"episode_rewards = results[-1]['hist_stats']['episode_reward']\ndf_episode_rewards = pd.DataFrame(data={'episode':range(len(episode_rewards)), 'reward':episode_rewards})\n\ndf_episode_rewards.plot(x=\"episode\", y=\"reward\")",
"_____no_output_____"
]
],
[
[
"For a well-trained model, most runs do very well while occasional runs do poorly. Try plotting other results episodes by changing the array index in `results[-1]` to another number between `0` and `9`. (The length of `results` is `10`.)",
"_____no_output_____"
],
[
"### Exercise 2\n\nThe current network and training configuration are too large and heavy-duty for a simple problem like `CartPole`. Modify the configuration to use a smaller network (the `config['model']['fcnet_hiddens']` setting) and to speed up the optimization of the surrogate objective. (Fewer SGD iterations and a larger batch size should help.)",
"_____no_output_____"
]
],
[
[
"# Make edits here:\nconfig = DEFAULT_CONFIG.copy()\nconfig['num_workers'] = 3\nconfig['num_sgd_iter'] = 30\nconfig['sgd_minibatch_size'] = 128\nconfig['model']['fcnet_hiddens'] = [100, 100]\nconfig['num_cpus_per_worker'] = 0\n\nagent = PPOTrainer(config, 'CartPole-v1')",
"_____no_output_____"
]
],
[
[
"Train the agent and try to get a reward of 500. If it's training too slowly you may need to modify the config above to use fewer hidden units, a larger `sgd_minibatch_size`, a smaller `num_sgd_iter`, or a larger `num_workers`.\n\nThis should take around `N` = 20 or 30 training iterations.",
"_____no_output_____"
]
],
[
[
"N = 5\nresults = []\nepisode_data = []\nepisode_json = []\n\nfor n in range(N):\n result = agent.train()\n results.append(result)\n \n episode = {'n': n, \n 'episode_reward_mean': result['episode_reward_mean'], \n 'episode_reward_max': result['episode_reward_max'], \n 'episode_len_mean': result['episode_len_mean']} \n \n episode_data.append(episode)\n episode_json.append(json.dumps(episode))\n \n print(f'Max reward: {episode[\"episode_reward_max\"]}')",
"_____no_output_____"
]
],
[
[
"# Using Checkpoints\n\nYou checkpoint the current state of a trainer to save what it has learned. Checkpoints are used for subsequent _rollouts_ and also to continue training later from a known-good state. Calling `agent.save()` creates the checkpoint and returns the path to the checkpoint file, which can be used later to restore the current state to a new trainer. Here we'll load the trained policy into the same process, but often it would be loaded in a new process, for example on a production cluster for serving that is separate from the training cluster.",
"_____no_output_____"
]
],
[
[
"checkpoint_path = agent.save()\nprint(checkpoint_path)",
"_____no_output_____"
]
],
[
[
"Now load the checkpoint in a new trainer:",
"_____no_output_____"
]
],
[
[
"trained_config = config.copy()\ntest_agent = PPOTrainer(trained_config, \"CartPole-v1\")\ntest_agent.restore(checkpoint_path)",
"_____no_output_____"
]
],
[
[
"Use the previously-trained policy to act in an environment. The key line is the call to `test_agent.compute_action(state)` which uses the trained policy to choose an action. This is an example of _rollout_, which we'll study in a subsequent lesson.\n\nVerify that the cumulative reward received roughly matches up with the reward printed above. It will be at or near 500.",
"_____no_output_____"
]
],
[
[
"env = gym.make(\"CartPole-v1\")\nstate = env.reset()\ndone = False\ncumulative_reward = 0\n\nwhile not done:\n action = test_agent.compute_action(state) # key line; get the next action\n state, reward, done, _ = env.step(action)\n cumulative_reward += reward\n\nprint(cumulative_reward)",
"_____no_output_____"
],
[
"ray.shutdown()",
"_____no_output_____"
]
],
[
[
"The next lesson, [02: Introduction to RLlib](02-Introduction-to-RLlib.ipynb) steps back to introduce to RLlib, its goals and the capabilities it provides.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d098e2d613ead2795dd7ee7546cf3faa26ea2252 | 2,383 | ipynb | Jupyter Notebook | src/17_Metric.ipynb | fkubota/kaggle-Predicting-Molecular-Properties | ceaf401a2bfab10a3314f3122b12cf07b7c6bf2c | [
"MIT"
] | null | null | null | src/17_Metric.ipynb | fkubota/kaggle-Predicting-Molecular-Properties | ceaf401a2bfab10a3314f3122b12cf07b7c6bf2c | [
"MIT"
] | null | null | null | src/17_Metric.ipynb | fkubota/kaggle-Predicting-Molecular-Properties | ceaf401a2bfab10a3314f3122b12cf07b7c6bf2c | [
"MIT"
] | 2 | 2020-09-26T08:38:36.000Z | 2021-01-10T10:56:57.000Z | 19.858333 | 123 | 0.496853 | [
[
[
"# Introduction\n- メトリックをちゃんと定義する\n- ref:\n > exploring-molecular-properties-data.ipynb",
"_____no_output_____"
],
[
"# Import everything I nead :)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot\nimport pandas as pd\nfrom sklearn.metrics import mean_absolute_error ",
"_____no_output_____"
]
],
[
[
"# Metric",
"_____no_output_____"
],
[
"- このコンペでは、以下のスコアを用いる\n- type ごとの平均\n\n$$\nscore = \\frac{1}{T} \\sum_{t=1}^{T} \\log \\left ( \\frac{1}{n_t} \\sum_{i=1}^{n_i} |{y_i - \\hat{y_i}|} \\right)\n$$",
"_____no_output_____"
]
],
[
[
"def metric(df, preds):\n df[\"prediction\"] = preds\n maes = []\n for t in df.type.unique():\n y_true = df[df.type==t].scalar_coupling_constant.values\n y_pred = df[df.type==t].prediction.values\n mae = np.log(mean_absolute_error(y_true, y_pred))\n maes.append(mae)\n return np.mean(maes)",
"_____no_output_____"
]
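,
[
"# Quick sanity check of the metric with a tiny toy dataframe.\n# The values below are illustrative placeholders, not competition data.\ntoy = pd.DataFrame({\n    'type': ['1JHC', '1JHC', '2JHN'],\n    'scalar_coupling_constant': [80.0, 90.0, 3.0],\n})\nprint(metric(toy.copy(), np.array([81.0, 88.0, 2.5])))",
"_____no_output_____"
]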
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d098e775fdcc0be348fd8f84722b6bc501012d71 | 22,644 | ipynb | Jupyter Notebook | t81_558_class_05_2_kfold.ipynb | machevres6/t81_558_deep_learning | 08c5e84a518d2f017cb5aab5c6d7bb84559a4738 | [
"Apache-2.0"
] | 2 | 2020-06-21T19:09:53.000Z | 2020-10-03T18:45:03.000Z | t81_558_class_05_2_kfold.ipynb | shantanusl15150/t81_558_deep_learning | 08c5e84a518d2f017cb5aab5c6d7bb84559a4738 | [
"Apache-2.0"
] | null | null | null | t81_558_class_05_2_kfold.ipynb | shantanusl15150/t81_558_deep_learning | 08c5e84a518d2f017cb5aab5c6d7bb84559a4738 | [
"Apache-2.0"
] | null | null | null | 40.220249 | 823 | 0.610802 | [
[
[
"<a href=\"https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_2_kfold.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# T81-558: Applications of Deep Neural Networks\n**Module 5: Regularization and Dropout**\n* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)\n* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).",
"_____no_output_____"
],
[
"# Module 5 Material\n\n* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)\n* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)\n* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)\n* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)\n* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb)\n",
"_____no_output_____"
],
[
"# Google CoLab Instructions\n\nThe following code ensures that Google CoLab is running the correct version of TensorFlow.",
"_____no_output_____"
]
],
[
[
"try:\n %tensorflow_version 2.x\n COLAB = True\n print(\"Note: using Google CoLab\")\nexcept:\n print(\"Note: not using Google CoLab\")\n COLAB = False",
"Note: not using Google CoLab\n"
]
],
[
[
"# Part 5.2: Using K-Fold Cross-validation with Keras\n\nCross-validation can be used for a variety of purposes in predictive modeling. These include:\n\n* Generating out-of-sample predictions from a neural network\n* Estimate a good number of epochs to train a neural network for (early stopping)\n* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer counts\n\nCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross validation is shown in Figure 5.CROSS.\n\n**Figure 5.CROSS: K-Fold Crossvalidation**\n\n\nIt is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:\n\n* Choose the model that had the highest validation score as the final model.\n* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).\n* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.\n\nGenerally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process.\n\n### Regression vs Classification K-Fold Cross-Validation\n\nRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**.\n\nCross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:\n\n* **KFold** When dealing with a regression problem.\n* **StratifiedKFold** When dealing with a classification problem.\n\nThe following two sections demonstrate cross-validation with classification and regression. \n\n### Out-of-Sample Regression Predictions with K-Fold Cross-Validation\n\nThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. 
We begin by preparing a feature vector using the jh-simple-dataset to predict age. This is a regression problem.",
"_____no_output_____"
]
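,
[
"As a minimal illustration of the two splitter objects (a sketch only; `X` and `y` below are small random stand-ins, not the course data):\n\n```python\nimport numpy as np\nfrom sklearn.model_selection import KFold, StratifiedKFold\n\nX = np.random.rand(10, 3)\ny = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])\n\nkf = KFold(n_splits=5, shuffle=True, random_state=42)             # regression targets\nskf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # class labels\n\nfor train_idx, test_idx in skf.split(X, y):  # note: y must be passed to split()\n    print(test_idx, y[test_idx])             # each test fold keeps the 0/1 class balance\n```",
"_____no_output_____"
]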
],
[
[
"import pandas as pd\nfrom scipy.stats import zscore\nfrom sklearn.model_selection import train_test_split\n\n# Read the data set\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv\",\n na_values=['NA','?'])\n\n# Generate dummies for job\ndf = pd.concat([df,pd.get_dummies(df['job'],prefix=\"job\")],axis=1)\ndf.drop('job', axis=1, inplace=True)\n\n# Generate dummies for area\ndf = pd.concat([df,pd.get_dummies(df['area'],prefix=\"area\")],axis=1)\ndf.drop('area', axis=1, inplace=True)\n\n# Generate dummies for product\ndf = pd.concat([df,pd.get_dummies(df['product'],prefix=\"product\")],axis=1)\ndf.drop('product', axis=1, inplace=True)\n\n# Missing values for income\nmed = df['income'].median()\ndf['income'] = df['income'].fillna(med)\n\n# Standardize ranges\ndf['income'] = zscore(df['income'])\ndf['aspect'] = zscore(df['aspect'])\ndf['save_rate'] = zscore(df['save_rate'])\ndf['subscriptions'] = zscore(df['subscriptions'])\n\n# Convert to numpy - Classification\nx_columns = df.columns.drop('age').drop('id')\nx = df[x_columns].values\ny = df['age'].values",
"_____no_output_____"
]
],
[
[
"Now that the feature vector is created a 5-fold cross-validation can be performed to generate out of sample predictions. We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport os\nimport numpy as np\nfrom sklearn import metrics\nfrom scipy.stats import zscore\nfrom sklearn.model_selection import KFold\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\n# Cross-Validate\nkf = KFold(5, shuffle=True, random_state=42) # Use for KFold classification\n \noos_y = []\noos_pred = []\n\nfold = 0\nfor train, test in kf.split(x):\n fold+=1\n print(f\"Fold #{fold}\")\n \n x_train = x[train]\n y_train = y[train]\n x_test = x[test]\n y_test = y[test]\n \n model = Sequential()\n model.add(Dense(20, input_dim=x.shape[1], activation='relu'))\n model.add(Dense(10, activation='relu'))\n model.add(Dense(1))\n model.compile(loss='mean_squared_error', optimizer='adam')\n \n model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,\n epochs=500)\n \n pred = model.predict(x_test)\n \n oos_y.append(y_test)\n oos_pred.append(pred) \n\n # Measure this fold's RMSE\n score = np.sqrt(metrics.mean_squared_error(pred,y_test))\n print(f\"Fold score (RMSE): {score}\")\n\n# Build the oos prediction list and calculate the error.\noos_y = np.concatenate(oos_y)\noos_pred = np.concatenate(oos_pred)\nscore = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))\nprint(f\"Final, out of sample score (RMSE): {score}\") \n \n# Write the cross-validated prediction\noos_y = pd.DataFrame(oos_y)\noos_pred = pd.DataFrame(oos_pred)\noosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )\n#oosDF.to_csv(filename_write,index=False)\n",
"Fold #1\nFold score (RMSE): 0.6245484893737087\nFold #2\nFold score (RMSE): 0.5802295511082306\nFold #3\nFold score (RMSE): 0.6300965769274195\nFold #4\nFold score (RMSE): 0.4550931884841248\nFold #5\nFold score (RMSE): 1.0517027192572377\nFinal, out of sample score (RMSE): 0.6981314007708873\n"
]
],
[
[
"As you can see, the above code also reports the average number of epochs needed. A common technique is to then train on the entire dataset for the average number of epochs needed.",
"_____no_output_____"
],
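[
"A minimal sketch of that final step is shown below. The list `epochs_needed` is a hypothetical placeholder for the per-fold epoch counts you would collect if early stopping were used; it is not defined by the code above.\n\n```python\nepochs_needed = [87, 92, 78, 90, 85]  # hypothetical per-fold epoch counts\navg_epochs = int(np.mean(epochs_needed))\n\nfinal_model = Sequential()\nfinal_model.add(Dense(20, input_dim=x.shape[1], activation='relu'))\nfinal_model.add(Dense(10, activation='relu'))\nfinal_model.add(Dense(1))\nfinal_model.compile(loss='mean_squared_error', optimizer='adam')\nfinal_model.fit(x, y, verbose=0, epochs=avg_epochs)  # train on the full dataset\n```",
"_____no_output_____"
],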
[
"### Classification with Stratified K-Fold Cross-Validation\n\nThe following code trains and fits the jh-simple-dataset dataset with cross-validation to generate out-of-sample . It also writes out the out of sample (predictions on the test set) results.\n\nIt is good to perform a stratified k-fold cross validation with classification data. This ensures that the percentages of each class remains the same across all folds. To do this, make use of the **StratifiedKFold** object, instead of the **KFold** object used in regression.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom scipy.stats import zscore\n\n# Read the data set\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv\",\n na_values=['NA','?'])\n\n# Generate dummies for job\ndf = pd.concat([df,pd.get_dummies(df['job'],prefix=\"job\")],axis=1)\ndf.drop('job', axis=1, inplace=True)\n\n# Generate dummies for area\ndf = pd.concat([df,pd.get_dummies(df['area'],prefix=\"area\")],axis=1)\ndf.drop('area', axis=1, inplace=True)\n\n# Missing values for income\nmed = df['income'].median()\ndf['income'] = df['income'].fillna(med)\n\n# Standardize ranges\ndf['income'] = zscore(df['income'])\ndf['aspect'] = zscore(df['aspect'])\ndf['save_rate'] = zscore(df['save_rate'])\ndf['age'] = zscore(df['age'])\ndf['subscriptions'] = zscore(df['subscriptions'])\n\n# Convert to numpy - Classification\nx_columns = df.columns.drop('product').drop('id')\nx = df[x_columns].values\ndummies = pd.get_dummies(df['product']) # Classification\nproducts = dummies.columns\ny = dummies.values",
"_____no_output_____"
]
],
[
[
"We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport os\nimport numpy as np\nfrom sklearn import metrics\nfrom sklearn.model_selection import StratifiedKFold\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\n\n# np.argmax(pred,axis=1)\n# Cross-validate\n# Use for StratifiedKFold classification\nkf = StratifiedKFold(5, shuffle=True, random_state=42) \n \noos_y = []\noos_pred = []\nfold = 0\n\n# Must specify y StratifiedKFold for\nfor train, test in kf.split(x,df['product']): \n fold+=1\n print(f\"Fold #{fold}\")\n \n x_train = x[train]\n y_train = y[train]\n x_test = x[test]\n y_test = y[test]\n \n model = Sequential()\n model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1\n model.add(Dense(25, activation='relu')) # Hidden 2\n model.add(Dense(y.shape[1],activation='softmax')) # Output\n model.compile(loss='categorical_crossentropy', optimizer='adam')\n\n model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)\n \n pred = model.predict(x_test)\n \n oos_y.append(y_test)\n # raw probabilities to chosen class (highest probability)\n pred = np.argmax(pred,axis=1) \n oos_pred.append(pred) \n\n # Measure this fold's accuracy\n y_compare = np.argmax(y_test,axis=1) # For accuracy calculation\n score = metrics.accuracy_score(y_compare, pred)\n print(f\"Fold score (accuracy): {score}\")\n\n# Build the oos prediction list and calculate the error.\noos_y = np.concatenate(oos_y)\noos_pred = np.concatenate(oos_pred)\noos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation\n\nscore = metrics.accuracy_score(oos_y_compare, oos_pred)\nprint(f\"Final score (accuracy): {score}\") \n \n# Write the cross-validated prediction\noos_y = pd.DataFrame(oos_y)\noos_pred = pd.DataFrame(oos_pred)\noosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )\n#oosDF.to_csv(filename_write,index=False)\n\n",
"Fold #1\nFold score (accuracy): 0.6766169154228856\nFold #2\nFold score (accuracy): 0.6691542288557214\nFold #3\nFold score (accuracy): 0.6907730673316709\nFold #4\nFold score (accuracy): 0.6733668341708543\nFold #5\nFold score (accuracy): 0.654911838790932\nFinal score (accuracy): 0.673\n"
]
],
[
[
"### Training with both a Cross-Validation and a Holdout Set\n\nIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This hold out set will be the final evaluation before you make use of your model for its real-world use. Figure 5.HOLDOUT shows this division.\n\n**Figure 5.HOLDOUT: Cross Validation and a Holdout Set**\n\n\nThe following program makes use of a holdout set, and then still cross-validates. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom scipy.stats import zscore\nfrom sklearn.model_selection import train_test_split\n\n# Read the data set\ndf = pd.read_csv(\n \"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv\",\n na_values=['NA','?'])\n\n# Generate dummies for job\ndf = pd.concat([df,pd.get_dummies(df['job'],prefix=\"job\")],axis=1)\ndf.drop('job', axis=1, inplace=True)\n\n# Generate dummies for area\ndf = pd.concat([df,pd.get_dummies(df['area'],prefix=\"area\")],axis=1)\ndf.drop('area', axis=1, inplace=True)\n\n# Generate dummies for product\ndf = pd.concat([df,pd.get_dummies(df['product'],prefix=\"product\")],axis=1)\ndf.drop('product', axis=1, inplace=True)\n\n# Missing values for income\nmed = df['income'].median()\ndf['income'] = df['income'].fillna(med)\n\n# Standardize ranges\ndf['income'] = zscore(df['income'])\ndf['aspect'] = zscore(df['aspect'])\ndf['save_rate'] = zscore(df['save_rate'])\ndf['subscriptions'] = zscore(df['subscriptions'])\n\n# Convert to numpy - Classification\nx_columns = df.columns.drop('age').drop('id')\nx = df[x_columns].values\ny = df['age'].values",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nimport pandas as pd\nimport os\nimport numpy as np\nfrom sklearn import metrics\nfrom scipy.stats import zscore\nfrom sklearn.model_selection import KFold\n\n# Keep a 10% holdout\nx_main, x_holdout, y_main, y_holdout = train_test_split( \n x, y, test_size=0.10) \n\n\n# Cross-validate\nkf = KFold(5)\n \noos_y = []\noos_pred = []\nfold = 0\nfor train, test in kf.split(x_main): \n fold+=1\n print(f\"Fold #{fold}\")\n \n x_train = x_main[train]\n y_train = y_main[train]\n x_test = x_main[test]\n y_test = y_main[test]\n \n model = Sequential()\n model.add(Dense(20, input_dim=x.shape[1], activation='relu'))\n model.add(Dense(5, activation='relu'))\n model.add(Dense(1))\n model.compile(loss='mean_squared_error', optimizer='adam')\n \n model.fit(x_train,y_train,validation_data=(x_test,y_test),\n verbose=0,epochs=500)\n \n pred = model.predict(x_test)\n \n oos_y.append(y_test)\n oos_pred.append(pred) \n\n # Measure accuracy\n score = np.sqrt(metrics.mean_squared_error(pred,y_test))\n print(f\"Fold score (RMSE): {score}\")\n\n\n# Build the oos prediction list and calculate the error.\noos_y = np.concatenate(oos_y)\noos_pred = np.concatenate(oos_pred)\nscore = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))\nprint()\nprint(f\"Cross-validated score (RMSE): {score}\") \n \n# Write the cross-validated prediction (from the last neural network)\nholdout_pred = model.predict(x_holdout)\n\nscore = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))\nprint(f\"Holdout score (RMSE): {score}\") \n",
"Fold #1\nFold score (RMSE): 24.299626704604506\nFold #2\nFold score (RMSE): 0.6609159891625663\nFold #3\nFold score (RMSE): 0.4997884237817687\nFold #4\nFold score (RMSE): 1.1084218284103058\nFold #5\nFold score (RMSE): 0.614899992174395\n\nCross-validated score (RMSE): 10.888206072135832\nHoldout score (RMSE): 0.6283593821273058\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d098ee030200b90f2aaa88016281e1f999576fdd | 665,668 | ipynb | Jupyter Notebook | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility | 4d564ac9f4c5b12421ec6c596169eb8675b08f8b | [
"MIT"
] | null | null | null | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility | 4d564ac9f4c5b12421ec6c596169eb8675b08f8b | [
"MIT"
] | null | null | null | notebooks/1_building_and_annotating_the_atlas_core/11_figure_2_data_overview.ipynb | LungCellAtlas/HLCA_reproducibility | 4d564ac9f4c5b12421ec6c596169eb8675b08f8b | [
"MIT"
] | null | null | null | 777.649533 | 207,516 | 0.951473 | [
[
[
"# HLCA Figure 2",
"_____no_output_____"
],
[
"Here we will generate the figures from the HLCA pre-print, figure 2. Figure 2d was generated separately in R, using code from integration benchmarking framework 'scIB'.",
"_____no_output_____"
],
[
"### import modules, set paths and parameters:",
"_____no_output_____"
]
],
[
[
"import scanpy as sc\nimport pandas as pd\nimport numpy as np\nimport sys\nimport os\nfrom collections import Counter\n\nsys.path.append(\"../../scripts/\")\nimport reference_based_harmonizing\nimport celltype_composition_plotting\nimport plotting\nimport sankey\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import to_hex\nimport ast",
"_____no_output_____"
],
[
"sc.set_figure_params(\n dpi=140,\n fontsize=12,\n frameon=False,\n transparent=True,\n)",
"_____no_output_____"
],
[
"sns.set_style(style=\"white\")\nsns.set_context(context=\"paper\")",
"_____no_output_____"
]
],
[
[
"for pretty code formatting (not needed to run notebook):",
"_____no_output_____"
]
],
[
[
"%load_ext lab_black",
"_____no_output_____"
]
],
[
[
"paths:",
"_____no_output_____"
]
],
[
[
"path_HLCA = \"../../data/HLCA_core_h5ads/HLCA_v1.h5ad\"\npath_celltype_reference = \"../../supporting_files/metadata_harmonization/HLCA_cell_type_reference_mapping_20211103.csv\"\ndir_figures = \"../../figures\"",
"_____no_output_____"
]
],
[
[
"## Generate figures:",
"_____no_output_____"
],
[
"initiate empty dictionary in which to store paper figures.",
"_____no_output_____"
]
],
[
[
"FIGURES = dict()",
"_____no_output_____"
]
],
[
[
"for automatic script updating and pretty coding (not necessary for code to run!)",
"_____no_output_____"
]
],
[
[
"adata = sc.read(path_HLCA)",
"_____no_output_____"
]
],
[
[
"#### Overview of stats (number of studies, cells, annotations etc.):",
"_____no_output_____"
],
[
"Number of studies, datasets, subjects, samples, cells:",
"_____no_output_____"
]
],
[
[
"print(\"Number of studies:\", len(set(adata.obs.study)))\nprint(\"Number of datasets:\", len(set(adata.obs.dataset)))\nprint(\"Number of subjects:\", len(set(adata.obs.subject_ID)))\nprint(\"Number of samples:\", len(set(adata.obs[\"sample\"])))\nprint(\"Number of cells:\", adata.obs.shape[0])",
"Number of studies: 11\nNumber of datasets: 14\nNumber of subjects: 107\nNumber of samples: 166\nNumber of cells: 584884\n"
]
],
[
[
"Proportions of cell compartments in the HLCA:",
"_____no_output_____"
]
],
[
[
"original_ann_lev_1_percs = np.round(\n adata.obs.original_ann_level_1.value_counts() / adata.n_obs * 100, 1\n)\nprint(\"Original annotation proportions (level 1):\")\nprint(original_ann_lev_1_percs)",
"Original annotation proportions (level 1):\nEpithelial 48.1\nImmune 38.7\nEndothelial 8.5\nStroma 4.3\nProliferating cells 0.3\nName: original_ann_level_1, dtype: float64\n"
]
],
[
[
"Perc. of cells annotated per level:",
"_____no_output_____"
]
],
[
[
"for level in range(1, 6):\n n_unannotated = np.sum(\n [\n isnone or isnull\n for isnone, isnull in zip(\n adata.obs[f\"original_ann_level_{level}_clean\"].values == \"None\",\n pd.isnull(adata.obs[f\"original_ann_level_{level}_clean\"].values),\n )\n ]\n )\n n_annotated = adata.n_obs - n_unannotated\n print(\n f\"Perc. originally annotated at level {level}: {round(n_annotated/adata.n_obs*100,1)}\"\n )",
"Perc. originally annotated at level 1: 100.0\nPerc. originally annotated at level 2: 98.8\nPerc. originally annotated at level 3: 93.6\nPerc. originally annotated at level 4: 65.7\nPerc. originally annotated at level 5: 6.8\n"
]
],
[
[
"Distribution of demographics:",
"_____no_output_____"
]
],
[
[
"print(f\"Min. and max. age: {adata.obs.age.min()}, {adata.obs.age.max()}\")",
"Min. and max. age: 10.0, 76.0\n"
],
[
"adata.obs.sex.value_counts() / adata.n_obs * 100",
"_____no_output_____"
],
[
"adata.obs.ethnicity.value_counts() / adata.n_obs * 100",
"_____no_output_____"
],
[
"print(f\"Min. and max. BMI: {adata.obs.BMI.min()}, {adata.obs.BMI.max()}\")",
"Min. and max. BMI: 19.9, 48.9\n"
],
[
"adata.obs.smoking_status.value_counts() / adata.n_obs * 100",
"_____no_output_____"
]
],
[
[
"## figures:",
"_____no_output_____"
],
[
"Overview of subjects, samples, and cells per study (not in the paper):",
"_____no_output_____"
]
],
[
[
"plotting.plot_dataset_statistics(adata, fontsize=8, figheightscale=3.5)",
"_____no_output_____"
]
],
[
[
"### 2a Subject/sample distributions",
"_____no_output_____"
],
[
"Re-map ethnicities:",
"_____no_output_____"
]
],
[
[
"ethnicity_remapper = {\n \"asian\": \"asian\",\n \"black\": \"black\",\n \"latino\": \"latino\",\n \"mixed\": \"mixed\",\n \"nan\": \"nan\",\n \"pacific islander\": \"other\",\n \"white\": \"white\",\n}",
"_____no_output_____"
],
[
"adata.obs.ethnicity = adata.obs.ethnicity.map(ethnicity_remapper)",
"_____no_output_____"
]
],
[
[
"Plot subject demographic and sample anatomical location distributions:",
"_____no_output_____"
]
],
[
[
"FIGURES[\"2a_subject_and_sample_stats\"] = plotting.plot_subject_and_sample_stats_incl_na(\n adata, return_fig=True\n)",
"age: 99% annotated\nBMI: 70% annotated\nsex: 100% annotated)\nethnicity 93% annotated\nsmoking_status: 92% annotated\n"
]
],
[
[
"## 2b Cell type composition sankey plot, level 1-3:",
"_____no_output_____"
],
[
"First, generate a color mapping. We want to map cell types from the same compartment in the same shade (e.g. epithilial orange/red, endothelial purple), at all levels. We'll need to incorporate our hierarchical cell type reference for that, and then calculate the colors per level. That is done with the code below:",
"_____no_output_____"
]
],
[
[
"harmonizing_df = reference_based_harmonizing.load_harmonizing_table(\n path_celltype_reference\n)\nconsensus_df = reference_based_harmonizing.create_consensus_table(harmonizing_df)\nmax_level = 5\ncolor_prop_df = celltype_composition_plotting.calculate_hierarchical_coloring_df(\n adata,\n consensus_df,\n max_level,\n lev1_colormap_dict={\n \"Epithelial\": \"Oranges\",\n \"Immune\": \"Greens\",\n \"Endothelial\": \"Purples\",\n \"Stroma\": \"Blues\",\n \"Proliferating cells\": \"Reds\",\n },\n ann_level_name_prefix=\"original_ann_level_\",\n)",
"/home/icb/lisa.sikkema/miniconda3/envs/scRNAseq_analysis/lib/python3.7/site-packages/pandas/core/indexing.py:671: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self._setitem_with_indexer(indexer, value)\n"
]
],
[
[
"Set minimum percentage among plotted cells for a cell type to be included. This prevents the plot from becoming overcrowded with labels and including lines that are too thin to even see:",
"_____no_output_____"
]
],
[
[
"min_ct_perc = 0.02",
"_____no_output_____"
]
],
[
[
"Now generate the two sankey plots. ",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8, 8))\ncts_ordered_left_lev1 = [\n ct\n for ct in color_prop_df.l1_label\n if ct in adata.obs.original_ann_level_1_clean.values\n]\nct_to_color_lev1 = {\n ct: col for ct, col in zip(color_prop_df.l1_label, color_prop_df.l1_rgba)\n}\n# get level 1 anns:\ny_lev1 = adata.obs.original_ann_level_1_clean\nlev1_percs = {ct: n / len(y_lev1) * 100 for ct, n in Counter(y_lev1).items()}\nlev1_ct_to_keep = [ct for ct, perc in lev1_percs.items() if perc > min_ct_perc]\n\n\n# get level 1 anns, set \"None\" in level 2 compartment specific,\n# remove cell types that make up less than min_ct_perc of cells plotted\ny_lev2 = adata.obs.original_ann_level_2_clean.cat.remove_unused_categories()\ny_lev2 = [\n f\"{ct} ({lev1ann})\" if ct == \"None\" else ct\n for ct, lev1ann in zip(y_lev2, adata.obs.original_ann_level_1_clean)\n]\nlev2_percs = {ct: n / len(y_lev2) * 100 for ct, n in Counter(y_lev2).items()}\nlev2_ct_to_keep = [ct for ct, perc in lev2_percs.items() if perc > min_ct_perc]\n# plot sankeyy\nsankey.sankey(\n x=[\n lev1\n for lev1, lev2 in zip(y_lev1, list(y_lev2))\n if lev1 in lev1_ct_to_keep and lev2 in lev2_ct_to_keep\n ],\n y=[\n lev2\n for lev1, lev2 in zip(y_lev1, list(y_lev2))\n if lev1 in lev1_ct_to_keep and lev2 in lev2_ct_to_keep\n ],\n title=\"Hierarchical cell type annotation\",\n title_left=\"Level 1\",\n title_right=\"Level 2\",\n ax=ax,\n fontsize=\"x-small\",\n left_order=cts_ordered_left_lev1,\n colors={\n ct: to_hex(ast.literal_eval(ct_to_color_lev1[ct]))\n for ct in cts_ordered_left_lev1\n },\n alpha=0.8,\n)\nplt.show()\nplt.close()\nFIGURES[\"2b_sankey_1_2\"] = fig",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(8, 8))\n# use order from earlier sankey plot\ncts_ordered_left_lev2 = [\n ct\n for ct in [\n \"Airway epithelium\",\n \"Alveolar epithelium\",\n \"Submucosal Gland\",\n \"None (Epithelial)\",\n \"Myeloid\",\n \"Lymphoid\",\n \"Megakaryocytic and erythroid\",\n \"Granulocytes\",\n \"Blood vessels\",\n \"Lymphatic EC\",\n \"None (Endothelial)\",\n \"Fibroblast lineage\",\n \"Smooth muscle\",\n \"None (Stroma)\",\n \"Mesothelium\",\n \"None (Proliferating cells)\",\n ]\n if ct in lev2_ct_to_keep\n]\n# ct for ct in color_prop_df.l2_label if ct in adata.obs.ann_level_2_clean.values\n# ]\nct_to_color_lev2 = {\n ct: col for ct, col in zip(color_prop_df.l2_label, color_prop_df.l2_rgba)\n}\n# manually locate colors fo \"None\" cell type annotations:\nfor none_ct in \"Epithelial\", \"Endothelial\", \"Stroma\", \"Proliferating cells\":\n ct_to_color_lev2[f\"None ({none_ct})\"] = color_prop_df.loc[\n color_prop_df.l1_label == none_ct, \"l1_rgba\"\n ].values[0]\ny_lev3 = adata.obs.original_ann_level_3_clean\ny_lev3 = [\n f\"{ct} ({lev1ann})\" if ct.startswith(\"None\") else ct\n for ct, lev1ann in zip(y_lev3, adata.obs.original_ann_level_1_clean)\n]\nlev3_percs = {ct: n / len(y_lev3) * 100 for ct, n in Counter(y_lev3).items()}\nlev3_ct_to_keep = [ct for ct, perc in lev3_percs.items() if perc > min_ct_perc]\nsankey.sankey(\n x=[\n lev2\n for lev2, lev3 in zip(y_lev2, list(y_lev3))\n if lev2 in lev2_ct_to_keep and lev3 in lev3_ct_to_keep\n ],\n y=[\n lev3\n for lev2, lev3 in zip(y_lev2, list(y_lev3))\n if lev2 in lev2_ct_to_keep and lev3 in lev3_ct_to_keep\n ],\n title=\"Hierarchical cell type annotation\",\n title_left=\"Level 2\",\n title_right=\"Level 3\",\n ax=ax,\n fontsize=5, # \"xx-small\",\n left_order=cts_ordered_left_lev2,\n colors={\n ct: to_hex(ast.literal_eval(ct_to_color_lev2[ct]))\n for ct in cts_ordered_left_lev2\n },\n alpha=0.8,\n)\nplt.show()\nplt.close()\nFIGURES[\"2b_sankey_2_3\"] = fig",
"_____no_output_____"
]
],
[
[
"### 2c Sample compositions:",
"_____no_output_____"
],
[
"In the paper we use ann level 2 and group by sample:",
"_____no_output_____"
]
],
[
[
"ann_level_number = \"2\"\ngrouping_covariate = \"sample\" # choose e.g. \"dataset\" or \"subject_ID\" or \"sample\"",
"_____no_output_____"
]
],
[
[
"Use the \"clean\" version, i.e. without forward-propagated labels for cells not annotated at the chosen label, but leaving those cells set to \"None\":",
"_____no_output_____"
]
],
[
[
"if ann_level_number == \"1\":\n ann_level = \"original_ann_level_\" + ann_level_number\nelse:\n ann_level = \"original_ann_level_\" + ann_level_number + \"_clean\"",
"_____no_output_____"
]
],
[
[
"Now plot:",
"_____no_output_____"
]
],
[
[
"FIGURES[\n \"2c_sample_compositions\"\n] = celltype_composition_plotting.plot_celltype_composition_per_sample(\n adata,\n ann_level_number,\n color_prop_df,\n return_fig=True,\n title=\"original cell type annotations (level 2) per sample\",\n ann_level_name_prefix=\"original_ann_level_\",\n)",
"_____no_output_____"
]
],
[
[
"# Store figures",
"_____no_output_____"
]
],
[
[
"# for figname, fig in FIGURES.items():\n# print(\"Saving\", figname)\n# fig.savefig(os.path.join(dir_figures, f\"{figname}.png\"), bbox_inches=\"tight\", dpi=140)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d098f25321445d6d030a6fd0a0bcdfe1768bcc9a | 1,353 | ipynb | Jupyter Notebook | dlwp/dlwp-keras-1.ipynb | mgfeller/tensorflow | eae672c3547e2b9fca276a2f228a64c39ac46132 | [
"Apache-2.0"
] | null | null | null | dlwp/dlwp-keras-1.ipynb | mgfeller/tensorflow | eae672c3547e2b9fca276a2f228a64c39ac46132 | [
"Apache-2.0"
] | null | null | null | dlwp/dlwp-keras-1.ipynb | mgfeller/tensorflow | eae672c3547e2b9fca276a2f228a64c39ac46132 | [
"Apache-2.0"
] | null | null | null | 16.301205 | 34 | 0.475979 | [
[
[
"import keras\nprint (keras.__version__)",
"Using Theano backend.\n"
],
[
"from keras import backend\nprint(backend._BACKEND)",
"theano\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d098f37f60f49b452fd322c85551f6f83b7a3430 | 7,389 | ipynb | Jupyter Notebook | w2_variables_expressions.ipynb | kelseypatterson817/su21-it161 | c2698e45d58b53efcf4392ed4a806e76ddc7519d | [
"Apache-2.0"
] | null | null | null | w2_variables_expressions.ipynb | kelseypatterson817/su21-it161 | c2698e45d58b53efcf4392ed4a806e76ddc7519d | [
"Apache-2.0"
] | null | null | null | w2_variables_expressions.ipynb | kelseypatterson817/su21-it161 | c2698e45d58b53efcf4392ed4a806e76ddc7519d | [
"Apache-2.0"
] | null | null | null | 24.712375 | 250 | 0.413858 | [
[
[
"<a href=\"https://colab.research.google.com/github/kelseypatterson817/su21-it161/blob/main/w2_variables_expressions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"**Reading**\n\nzyBooks Ch 2 Variables and Expressions\n\n**Reference**\n\n[Python Basics Cheatsheet](https://www.pythoncheatsheet.org/#Python-Basics)\n\n**Practice** \n\nIn class practice: zyBooks Lab 1.15 - No parking sign\nzyBooks Ch 2 Practice (graded participation activity)\n\n**Learning Outcomes** \n\nUpon successful completion of this chapter, students will be familiar with and able to apply the following concepts in their programs\n \n* Variables and Assignments\n\n* Identifiers\n\n* Objects\n\n* Numeric data types: Floating point\n\n* Expressions \n\n* Operators: Division and Modulo\n\n* Import and call functions from the math module\n\n* Text (string) data type\n \n",
"_____no_output_____"
]
],
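[
[
"The learning outcomes above list the division and modulo operators, which the cells below do not demonstrate, so here is a minimal illustration (for practice only; the example values are arbitrary):\n\n```python\n# / is true division, // is floor division, % is the modulo (remainder) operator\nprint(7 / 2)   # 3.5\nprint(7 // 2)  # 3\nprint(7 % 2)   # 1\n```",
"_____no_output_____"
]
],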
[
[
"print (7+5)\nprint(7, 5, 7+5)\nprint()\nprint(7, 5, end=\" \")\nprint(\"Seven plus five is\", 12)\n\n",
"12\n7 5 12\n\n7 5 Seven plus five is 12\n"
],
[
"# Assignments\nx = 2 ** 3\nprint (x)\ny = 5\nprint(y)\n# Variable values can be changed\ny = 8\nprint(y)\n",
"8\n5\n8\n"
],
[
"# type() function tells the type of object \nprint(type(4))\nprint(type(3.14))\nprint(type(\"Hello\"))\nprint(type(True))",
"<class 'int'>\n<class 'float'>\n<class 'str'>\n<class 'bool'>\n"
],
[
"# id() function returns the memory address (identity) of an object\nprint(id(3+4))\n\nprint(id(\"Hello!\"))",
"10914688\n139799019686408\n"
],
[
"# Calculate the value of your coins change in dollars\nquarters = int(input(\"Quarters: \"))\ndimes = int(input(\"Dimes: \"))\nnickels = int(input(\"Nickels: \"))\npennies = int(input(\"Pennies: \"))\ndollars = quarters * 0.25 + dimes * 0.10 + nickels * 0.05 + pennies * 0.01\nprint()\nprint(\"You have\", round(dollars, 2), \"dollars.\")\n",
"Quarters: 1\nDimes: 2\nNickels: 5\nPennies: 6\n\nYou have 0.76 dollars.\n"
],
[
"# math library\nimport math \n\nprint(math.pow(2,3))\nprint(math.sqrt(144))",
"8.0\n12.0\n"
],
[
"# Text\nprint(\"Welcome \\nto \\nIT 111\")\n\ns = \"How are you?\"\nprint(len(s))\nprint(s.upper())",
"Welcome \nto \nIT 111\n12\nHOW ARE YOU?\n"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d098fa40be7fec984dceafdefb1309fb26469044 | 16,594 | ipynb | Jupyter Notebook | notebooks/Object Oriented Programming/2.OOP_syntax_pants_practice/exercise.ipynb | jesussantana/AWS-Machine-Learning-Foundations | 526eddb486fe8398cafcc30184c4ecce49df5816 | [
"MIT"
] | 1 | 2021-10-31T07:39:22.000Z | 2021-10-31T07:39:22.000Z | notebooks/Object Oriented Programming/2.OOP_syntax_pants_practice/exercise.ipynb | jesussantana/AWS-Machine-Learning-Foundations | 526eddb486fe8398cafcc30184c4ecce49df5816 | [
"MIT"
] | null | null | null | notebooks/Object Oriented Programming/2.OOP_syntax_pants_practice/exercise.ipynb | jesussantana/AWS-Machine-Learning-Foundations | 526eddb486fe8398cafcc30184c4ecce49df5816 | [
"MIT"
] | null | null | null | 34.004098 | 303 | 0.530312 | [
[
[
"# OOP Syntax Exercise - Part 2\n\nNow that you've had some practice instantiating objects, it's time to write your own class from scratch. This lesson has two parts. In the first part, you'll write a Pants class. This class is similar to the shirt class with a couple of changes. Then you'll practice instantiating Pants objects\n\nIn the second part, you'll write another class called SalesPerson. You'll also instantiate objects for the SalesPerson.\n\nFor this exercise, you can do all of your work in this Jupyter notebook. You will not need to import the class because all of your code will be in this Jupyter notebook.\n\nAnswers are also provided. If you click on the Jupyter icon, you can open a folder called 2.OOP_syntax_pants_practice, which contains this Jupyter notebook ('exercise.ipynb') and a file called answer.py.",
"_____no_output_____"
],
[
"# Pants class\n\nWrite a Pants class with the following characteristics:\n* the class name should be Pants\n* the class attributes should include\n * color\n * waist_size\n * length\n * price\n* the class should have an init function that initializes all of the attributes\n* the class should have two methods\n * change_price() a method to change the price attribute\n * discount() to calculate a discount",
"_____no_output_____"
]
],
[
[
"### TODO:\n# - code a Pants class with the following attributes\n# - color (string) eg 'red', 'yellow', 'orange'\n# - waist_size (integer) eg 8, 9, 10, 32, 33, 34\n# - length (integer) eg 27, 28, 29, 30, 31\n# - price (float) eg 9.28\n\n### TODO: Declare the Pants Class \n\n### TODO: write an __init__ function to initialize the attributes\n\n### TODO: write a change_price method:\n# Args:\n# new_price (float): the new price of the shirt\n# Returns:\n# None\n\n### TODO: write a discount method:\n# Args:\n# discount (float): a decimal value for the discount. \n# For example 0.05 for a 5% discount.\n#\n# Returns:\n# float: the discounted price",
"_____no_output_____"
],
[
"class Pants:\n \"\"\"The Pants class represents an article of clothing sold in a store\n \"\"\"\n \n def __init__(self, color, waist_size, length, price):\n \"\"\"Method for initializing a Pants object\n \n Args: \n color (str)\n waist_size (int)\n length (int)\n price (float)\n \n Attributes:\n color (str): color of a pants object\n waist_size (str): waist size of a pants object\n length (str): length of a pants object\n price (float): price of a pants object\n \"\"\"\n \n self.color = color\n self.waist_size = waist_size\n self.length = length\n self.price = price\n \n def change_price(self, new_price):\n \"\"\"The change_price method changes the price attribute of a pants object\n \n Args: \n new_price (float): the new price of the pants object\n \n Returns: None\n \n \"\"\"\n self.price = new_price\n \n def discount(self, percentage):\n \"\"\"The discount method outputs a discounted price of a pants object\n\n Args:\n percentage (float): a decimal representing the amount to discount\n\n Returns:\n float: the discounted price\n \"\"\"\n return self.price * (1 - percentage)\n\n\nclass SalesPerson:\n \"\"\"The SalesPerson class represents an employee in the store\n\n \"\"\"\n\n def __init__(self, first_name, last_name, employee_id, salary):\n \"\"\"Method for initializing a SalesPerson object\n \n Args: \n first_name (str)\n last_name (str)\n employee_id (int)\n salary (float)\n\n Attributes:\n first_name (str): first name of the employee\n last_name (str): last name of the employee\n employee_id (int): identification number of the employee\n salary (float): yearly salary of the employee\n pants_sold (list): a list of pants objects sold by the employee\n total_sales (float): sum of all sales made by the employee\n\n \"\"\"\n self.first_name = first_name\n self.last_name = last_name\n self.employee_id = employee_id\n self.salary = salary\n self.pants_sold = []\n self.total_sales = 0\n\n def sell_pants(self, pants_object):\n \"\"\"The sell_pants method appends a pants object to the pants_sold attribute\n\n Args: \n pants_object (obj): a pants object that was sold\n\n Returns: None\n\n \"\"\"\n\n self.pants_sold.append(pants_object)\n\n def display_sales(self):\n \"\"\"The display_sales method prints out all pants that have been sold\n\n Args: None\n\n Returns: None\n\n \"\"\"\n\n for pants in self.pants_sold:\n print('color: {}, waist_size: {}, length: {}, price: {}'\\\n .format(pants.color, pants.waist_size, pants.length, pants.price))\n \n def calculate_sales(self):\n \"\"\"The calculate_sales method sums the total price of all pants sold\n\n Args: None\n\n Returns:\n float: sum of the price for all pants sold\n \n \"\"\"\n\n total = 0\n for pants in self.pants_sold:\n total += pants.price\n \n self.total_sales = total\n \n return total\n \n def calculate_commission(self, percentage):\n \"\"\"The calculate_commission method outputs the commission based on sales\n\n Args:\n percentage (float): the commission percentage as a decimal\n\n Returns:\n float: the commission due\n \"\"\"\n\n sales_total = self.calculate_sales()\n return sales_total * percentage ",
"_____no_output_____"
]
],
[
[
"# Run the code cell below to check results\n\nIf you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.",
"_____no_output_____"
]
],
[
[
"def check_results():\n pants = Pants('red', 35, 36, 15.12)\n assert pants.color == 'red'\n assert pants.waist_size == 35\n assert pants.length == 36\n assert pants.price == 15.12\n \n pants.change_price(10) == 10\n assert pants.price == 10 \n \n assert pants.discount(.1) == 9\n \n print('You made it to the end of the check. Nice job!')\n\ncheck_results()",
"You made it to the end of the check. Nice job!\n"
]
],
[
[
"# SalesPerson class\n\nThe Pants class and Shirt class are quite similar. Here is an exercise to give you more practice writing a class. **This exercise is trickier than the previous exercises.**\n\nWrite a SalesPerson class with the following characteristics:\n* the class name should be SalesPerson\n* the class attributes should include\n * first_name \n * last_name\n * employee_id\n * salary\n * pants_sold\n * total_sales\n* the class should have an init function that initializes all of the attributes\n* the class should have four methods\n * sell_pants() a method to change the price attribute\n * calculate_sales() a method to calculate the sales\n * display_sales() a method to print out all the pants sold with nice formatting\n * calculate_commission() a method to calculate the salesperson commission based on total sales and a percentage",
"_____no_output_____"
]
],
[
[
"### TODO:\n# Code a SalesPerson class with the following attributes\n# - first_name (string), the first name of the salesperson\n# - last_name (string), the last name of the salesperson\n# - employee_id (int), the employee ID number like 5681923\n# - salary (float), the monthly salary of the employee\n# - pants_sold (list of Pants objects), \n# pants that the salesperson has sold \n# - total_sales (float), sum of sales of pants sold\n\n### TODO: Declare the SalesPerson Class \n\n### TODO: write an __init__ function to initialize the attributes\n### Input Args for the __init__ function:\n# first_name (str)\n# last_name (str)\n# employee_id (int)\n# . salary (float)\n#\n# You can initialize pants_sold as an empty list\n# You can initialize total_sales to zero.\n#\n###\n\n### TODO: write a sell_pants method:\n#\n# This method receives a Pants object and appends\n# the object to the pants_sold attribute list\n#\n# Args:\n# pants (Pants object): a pants object\n# Returns:\n# None\n\n### TODO: write a display_sales method:\n# \n# This method has no input or outputs. When this method \n# is called, the code iterates through the pants_sold list\n# and prints out the characteristics of each pair of pants\n# line by line. The print out should look something like this\n#\n# color: blue, waist_size: 34, length: 34, price: 10\n# color: red, waist_size: 36, length: 30, price: 14.15\n#\n#\n#\n###\n\n### TODO: write a calculate_sales method:\n# This method calculates the total sales for the sales person.\n# The method should iterate through the pants_sold attribute list\n# and sum the prices of the pants sold. The sum should be stored\n# in the total_sales attribute and then return the total.\n# \n# Args:\n# None\n# Returns:\n# float: total sales\n#\n### \n\n\n### TODO: write a calculate_commission method:\n#\n# The salesperson receives a commission based on the total\n# sales of pants. The method receives a percentage, and then\n# calculate the total sales of pants based on the price,\n# and then returns the commission as (percentage * total sales)\n#\n# Args:\n# percentage (float): comission percentage as a decimal\n#\n# Returns:\n# float: total commission\n#\n#\n###",
"_____no_output_____"
]
],
[
[
"# Run the code cell below to check results\n\nIf you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.",
"_____no_output_____"
]
],
[
[
"def check_results():\n pants_one = Pants('red', 35, 36, 15.12)\n pants_two = Pants('blue', 40, 38, 24.12)\n pants_three = Pants('tan', 28, 30, 8.12)\n \n salesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)\n \n assert salesperson.first_name == 'Amy'\n assert salesperson.last_name == 'Gonzalez'\n assert salesperson.employee_id == 2581923\n assert salesperson.salary == 40000\n assert salesperson.pants_sold == []\n assert salesperson.total_sales == 0\n \n salesperson.sell_pants(pants_one)\n salesperson.pants_sold[0] == pants_one.color\n \n salesperson.sell_pants(pants_two)\n salesperson.sell_pants(pants_three)\n \n assert len(salesperson.pants_sold) == 3\n assert round(salesperson.calculate_sales(),2) == 47.36\n assert round(salesperson.calculate_commission(.1),2) == 4.74\n \n print('Great job, you made it to the end of the code checks!')\n \ncheck_results()",
"Great job, you made it to the end of the code checks!\n"
]
],
[
[
"### Check display_sales() method\n\nIf you run the code cell below, you should get output similar to this:\n\n```python\ncolor: red, waist_size: 35, length: 36, price: 15.12\ncolor: blue, waist_size: 40, length: 38, price: 24.12\ncolor: tan, waist_size: 28, length: 30, price: 8.12\n```",
"_____no_output_____"
]
],
[
[
"pants_one = Pants('red', 35, 36, 15.12)\npants_two = Pants('blue', 40, 38, 24.12)\npants_three = Pants('tan', 28, 30, 8.12)\n\nsalesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)\n\nsalesperson.sell_pants(pants_one) \nsalesperson.sell_pants(pants_two)\nsalesperson.sell_pants(pants_three)\n\nsalesperson.display_sales()",
"color: red, waist_size: 35, length: 36, price: 15.12\ncolor: blue, waist_size: 40, length: 38, price: 24.12\ncolor: tan, waist_size: 28, length: 30, price: 8.12\n"
]
],
[
[
"# Solution \n\nAs a reminder, answers are also provided. If you click on the Jupyter icon, you can open a folder called 2.OOP_syntax_pants_practice, which contains this Jupyter notebook and a file called answer.py.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d09903fc4d6efb8057ca3afe6a57cdb662dea89e | 2,089 | ipynb | Jupyter Notebook | Shreyansh_Open_CV/Face n Eye detection.ipynb | Shreyansh-Gupta/Open-contributions | e72a9ce2b0aa6a48081921bf8138b91ad259c422 | [
"MIT"
] | null | null | null | Shreyansh_Open_CV/Face n Eye detection.ipynb | Shreyansh-Gupta/Open-contributions | e72a9ce2b0aa6a48081921bf8138b91ad259c422 | [
"MIT"
] | null | null | null | Shreyansh_Open_CV/Face n Eye detection.ipynb | Shreyansh-Gupta/Open-contributions | e72a9ce2b0aa6a48081921bf8138b91ad259c422 | [
"MIT"
] | null | null | null | 29.422535 | 92 | 0.540929 | [
[
[
"\n##import OpenCV library\nimport cv2 \n##load the required XML classifiers\nface_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')\neye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml') \n##capture the frames from camera\ncap = cv2.VideoCapture(0) \n##using while loop fetch each frame from camera\nwhile 1: \n ret, img = cap.read() \n##convert frames into grayscale \n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) \n faces = face_cascade.detectMultiScale(gray, 1.3, 5)\n##form a rectangle around the face and eye of the detected image from the camera \n for (x,y,w,h) in faces: \n cv2.rectangle(img,(x,y),(x+w,y+h),(255,255,0),2) \n roi_gray = gray[y:y+h, x:x+w] \n roi_color = img[y:y+h, x:x+w]\n eyes = eye_cascade.detectMultiScale(roi_gray) \n for (ex,ey,ew,eh) in eyes: \n cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,127,255),2)\n##display the detected face and eye from the camera \n cv2.imshow('img',img) \n###break the loop by pressing esc button \n k = cv2.waitKey(30) & 0xff\n if k == 27: \n break\ncap.release() \ncv2.destroyAllWindows()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d09904c440c9251db00365bb473292c95a90dc9b | 497,399 | ipynb | Jupyter Notebook | 19_ybita_fraud-detection/identity_pre-processing.ipynb | ski02049/repos | ce5975877c8766f47b5f2f25b1a286f8dd2f4be4 | [
"MIT"
] | 1 | 2020-02-05T11:16:13.000Z | 2020-02-05T11:16:13.000Z | 19_ybita_fraud-detection/identity_pre-processing.ipynb | ski02049/repos | ce5975877c8766f47b5f2f25b1a286f8dd2f4be4 | [
"MIT"
] | null | null | null | 19_ybita_fraud-detection/identity_pre-processing.ipynb | ski02049/repos | ce5975877c8766f47b5f2f25b1a286f8dd2f4be4 | [
"MIT"
] | null | null | null | 74.774354 | 28,532 | 0.635536 | [
[
[
"# YBIGTA ML PROJECT / 염정운",
"_____no_output_____"
],
[
"## Setting",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\npd.set_option(\"max_columns\", 999)\npd.set_option(\"max_rows\", 999)",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n#sns.set(rc={'figure.figsize':(11.7,10)})",
"_____no_output_____"
]
],
[
[
"## Identity data",
"_____no_output_____"
],
[
"Variables in this table are identity information – network connection information (IP, ISP, Proxy, etc) and digital signature \n<br>\n(UA/browser/os/version, etc) associated with transactions. \n<br>\nThey're collected by Vesta’s fraud protection system and digital security partners.\n<br>\nThe field names are masked and pairwise dictionary will not be provided for privacy protection and contract agreement)\n\nCategorical Features:\n<br>\nDeviceType\n<br>\nDeviceInfo\n<br>\nid12 - id38",
"_____no_output_____"
]
],
[
[
"#train_identity가 불편해서 나는 i_merged라는 isFraud를 merge하고 column 순서를 조금 바꾼 새로운 Dataframe을 만들었어! 이건 그 코드!\n\n#i_merged = train_i.merge(train_t[['TransactionID', 'isFraud']], how = 'left', on = 'TransactionID')\n#order_list =['TransactionID', 'isFraud', 'DeviceInfo', 'DeviceType', 'id_01', 'id_02', 'id_03', 'id_04', 'id_05', 'id_06', 'id_07', 'id_08',\n# 'id_09', 'id_10', 'id_11', 'id_12', 'id_13', 'id_14', 'id_15', 'id_16', 'id_17', 'id_18', 'id_19', 'id_20', 'id_21', \n# 'id_22', 'id_23', 'id_24', 'id_25', 'id_26', 'id_27', 'id_28', 'id_29', 'id_30', 'id_31', 'id_32', 'id_33', 'id_34', \n# 'id_35', 'id_36', 'id_37', 'id_38']\n \n\n#i_merged = i_merged[order_list]\n#i_merged.head()\n#i_merged.to_csv('identity_merged.csv', index = False)",
"_____no_output_____"
],
[
"save = pd.read_csv('identity_merged.csv')",
"_____no_output_____"
],
[
"i_merged = pd.read_csv('identity_merged.csv')",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>NaN 비율</font> \n",
"_____no_output_____"
]
],
[
[
"nullrate = (((i_merged.isnull().sum() / len(i_merged)))*100).sort_values(ascending = False)",
"_____no_output_____"
],
[
"nullrate.plot(kind='barh', figsize=(15, 9))",
"_____no_output_____"
],
[
"i_merged.head()",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>DeviceType</font> \n\nnan(3.1%) < desktop(6.5%) < mobile(10.1%) 순으로 isFraud 증가 추이\n<br>\n*전체 datatset에서 isFraud = 1의 비율 7.8%",
"_____no_output_____"
]
],
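[
[
"A small sketch (for illustration only, not part of the original pipeline) of how the percentages quoted above can be reproduced; it assumes the `i_merged` DataFrame defined earlier, with `isFraud` encoded as 0/1:\n\n```python\n# Fraud rate per DeviceType, with missing values counted as their own group\ndevice = i_merged['DeviceType'].fillna('nan')\nprint((i_merged.groupby(device)['isFraud'].mean() * 100).round(1))\n```",
"_____no_output_____"
]
],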
[
[
"#DeviceType\ni_merged.groupby(['DeviceType', 'isFraud']).size().unstack()",
"_____no_output_____"
],
[
"i_merged[i_merged.DeviceType.isnull()].groupby('isFraud').size()",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>Null count in row</font> \n\n결측치 정도와 isFraud의 유의미한 상관관계 찾지 못함",
"_____no_output_____"
]
],
[
[
"i_merged = i_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1))",
"_____no_output_____"
],
[
"print(i_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1)).groupby('isFraud')['NaN_count'].mean(),\ni_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1)).groupby('isFraud')['NaN_count'].std(),\ni_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1)).groupby('isFraud')['NaN_count'].min(),\ni_merged.assign(NaN_count = i_merged.isnull().sum(axis = 1)).groupby('isFraud')['NaN_count'].max())",
"isFraud\n0 14.569838\n1 14.804471\nName: NaN_count, dtype: float64 isFraud\n0 5.317299\n1 4.350247\nName: NaN_count, dtype: float64 isFraud\n0 0\n1 0\nName: NaN_count, dtype: int64 isFraud\n0 37\n1 37\nName: NaN_count, dtype: int64\n"
],
[
"#isFraud = 1\ni_merged[i_merged.isFraud == 1].hist('NaN_count')",
"_____no_output_____"
],
[
"#isFraud = 0\ni_merged[i_merged.isFraud == 0].hist('NaN_count')",
"_____no_output_____"
],
[
"i_merged.head()",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>변수별 EDA - Continous</font> \n",
"_____no_output_____"
]
],
[
[
"#Correlation Matrix\nrs = np.random.RandomState(0)\ndf = pd.DataFrame(rs.rand(10, 10))\ncorr = i_merged.corr()\ncorr.style.background_gradient(cmap='coolwarm')",
"_____no_output_____"
],
[
"#id_01 : 0 이하의 값들을 가지며 skewed 형태. 필요시 log 변환을 통한 처리가 가능할 듯.\ni_merged.id_01.plot(kind='hist', bins=22, figsize=(12,6), title='id_01 dist.')\nprint(i_merged.groupby('isFraud')['id_01'].mean(),\n i_merged.groupby('isFraud')['id_01'].std(),\n i_merged.id_01.min(),\n i_merged.id_01.max(), sep = '\\n')",
"isFraud\n0 -9.667667\n1 -16.075632\nName: id_01, dtype: float64\nisFraud\n0 13.592128\n1 20.397506\nName: id_01, dtype: float64\n-100.0\n0.0\n"
],
[
"Fraud = (i_merged[i_merged.isFraud == 1]['id_01'])\nnotFraud = i_merged[i_merged.isFraud == 0]['id_01']\nplt.hist([Fraud, notFraud],bins = 5, label=['Fraud', 'notFraud'])\nplt.legend(loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"#id02: 최솟값 1을 가지며 skewed 형태. 마찬가지로 로그 변환 가능\ni_merged.id_02.plot(kind='hist', bins=22, figsize=(12,6), title='id_02 dist.')\nprint(i_merged.groupby('isFraud')['id_02'].mean(),\n i_merged.groupby('isFraud')['id_02'].std(),\n i_merged.id_02.min(),\n i_merged.id_02.max(), sep = '\\n')",
"isFraud\n0 172396.362892\n1 201522.569239\nName: id_02, dtype: float64\nisFraud\n0 158181.415938\n1 173522.003842\nName: id_02, dtype: float64\n1.0\n999595.0\n"
],
[
"Fraud = (i_merged[i_merged.isFraud == 1]['id_02'])\nnotFraud = i_merged[i_merged.isFraud == 0]['id_02']\nplt.hist([Fraud, notFraud],bins = 5, label=['Fraud', 'notFraud'])\nplt.legend(loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"#id_05\ni_merged.id_05.plot(kind='hist', bins=22, figsize=(9,6), title='id_05 dist.')\nprint(i_merged.groupby('isFraud')['id_05'].mean(),\n i_merged.groupby('isFraud')['id_05'].std())",
"isFraud\n0 1.627956\n1 1.473775\nName: id_05, dtype: float64 isFraud\n0 5.245558\n1 5.297058\nName: id_05, dtype: float64\n"
],
[
"Fraud = (i_merged[i_merged.isFraud == 1]['id_05'])\nnotFraud = i_merged[i_merged.isFraud == 0]['id_05']\nplt.hist([Fraud, notFraud],bins = 10, label=['Fraud', 'notFraud'])\nplt.legend(loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"#id_06\ni_merged.id_06.plot(kind='hist', bins=22, figsize=(12,6), title='id_06 dist.')\nprint(i_merged.groupby('isFraud')['id_06'].mean(),\n i_merged.groupby('isFraud')['id_06'].std())",
"isFraud\n0 -6.566518\n1 -8.213987\nName: id_06, dtype: float64 isFraud\n0 16.442454\n1 16.966203\nName: id_06, dtype: float64\n"
],
[
"Fraud = (i_merged[i_merged.isFraud == 1]['id_06'])\nnotFraud = i_merged[i_merged.isFraud == 0]['id_06']\nplt.hist([Fraud, notFraud],bins = 20, label=['Fraud', 'notFraud'])\nplt.legend(loc='upper left')\nplt.show()",
"_____no_output_____"
],
[
"#id_11\ni_merged.id_11.plot(kind='hist', bins=22, figsize=(12,6), title='id_11 dist.')\nprint(i_merged.groupby('isFraud')['id_11'].mean(),\n i_merged.groupby('isFraud')['id_11'].std())",
"isFraud\n0 99.742701\n1 99.775677\nName: id_11, dtype: float64 isFraud\n0 1.137140\n1 1.010306\nName: id_11, dtype: float64\n"
],
[
"Fraud = (i_merged[i_merged.isFraud == 1]['id_11'])\nnotFraud = i_merged[i_merged.isFraud == 0]['id_11']\nplt.hist([Fraud, notFraud],bins = 20, label=['Fraud', 'notFraud'])\nplt.legend(loc='upper left')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>변수별 EDA - Categorical</font> \n",
"_____no_output_____"
]
],
[
[
"sns.jointplot(x = 'id_09', y = 'id_03', data = i_merged)",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>Feature Engineering</font> \n\n<br>\n<br>\n** Categorical이지만 가짓수가 많은 경우 정보가 있을 때 1, 아닐 때 0으로 처리함. BaseModel 돌리기 위해 이렇게 설정하였지만, 전처리를 바꿔가는 작업에서는 이 변수들을 다른 방식으로 처리 할 필요가 더 생길 수도 있음.\n<br>\n** Pair 관계가 있음. id03,04 / id05,06 / id07,08, 21~26 / id09, 10 ::함께 데이터가 존재하거나(1) NaN이거나(0). 한편 EDA-Category를 보면 id03, 09의 경우 상관관계가 있는 것으로 추정되어 추가적인 변형을 하지 않았음.\n<br>\n** https://www.kaggle.com/pablocanovas/exploratory-analysis-tidyverse 에서 변수별 EDA 시각화 참고하였고, nan값 제외하고는 Fraud 비율이 낮은 변수부터 1,2..차례로 할당함\n<br>\n<br>\n<br>\n### $Contionous Features$\n<br>\nid01:: 결측치가 없으며 로그변형을 통해 양수화 및 Scailing 시킴. 5의 배수임을 감안할 때 5로 나누는 scailing을 진행해봐도 좋을 듯.\n<br>\nid02:: 결측치가 존재하나, 로그 변형을 통해 정규분포에 흡사한 모양으로 만들고 매우 큰 단위를 Scailing하였음. 결측치는 Random 방식을 이용하여 채웠으나 가장 위험한 방식으로 imputation으로 한 것이므로 주의가 필요함.\n<br>\n<br>\n<br>\n### $Categorical Features$\n<br>\nDeviceType:: {NaN: 0, 'desktop': 1, 'mobile': 2} \n<br>\nDeviceInfo:: {Nan: 0, 정보있음:1}\n<br>\nid12::{0:0, 'Found': 1, 'NotFound': 2}\n<br>\nid13::{Nan: 0, 정보있음:1}\n<br>\nid14::{Nan: 0, 정보있음:1}\n<br>\nid15::{Nan:0, 'New':1, 'Unknown':2, 'Found':3} #15, 16은 연관성이 보임 \n<br>\nid16::{Nan:0, 'NotFound':1, 'Found':2}\n<br>\nid17::{Nan: 0, 정보있음:1}\n<br>\nid18::{Nan: 0, 정보있음:1} #가짓수 다소 적음\n<br>\nid19::{Nan: 0, 정보있음:1}\n<br>\nid20::{Nan: 0, 정보있음:1} #id 17, 19, 20은 Pair\n<br>\nid21\n<br>\nid22\n<br>\nid23::{IP_PROXY:ANONYMOUS:2, else:1, nan:0} #id 7,8 21~26은 Pair. Anonymous만 유독 Fraud 비율이 높기에 고려함. 우선은 베이스 모델에서는 id_23만 사용\n<br>\nid24\n<br>\nid25\n<br>\nid26\n<br>\nid27:: {Nan:0, 'NotFound':1, 'Found':2}\n<br>\nid28:: {0:0, 'New':1, 'Found':2}\n<br>\nid29:: {0:0, 'NotFound':1, 'Found':2}\n<br>\nid30(OS):: {Nan: 0, 정보있음:1}, 데이터가 있다 / 없다로 처리하였지만 Safari Generic에서 사기 확률이 높다 등의 조건을 고려해야한다면 다른 방식으로 전처리 필요할 듯\n<br>\nid31(browser):: {Nan: 0, 정보있음:1}, id30과 같음\n<br>\nid32::{nan:0, 24:1, 32:2, 16:3, 0:4}\n<br>\nid33(해상도)::{Nan: 0, 정보있음:1} \n<br>\nid34:: {nan:0, matchstatus= -1:1, matchstatus=0 :2, matchstatus=1 :3, matchstatus=2 :4} , matchstatus가 -1이면 fraud일 확률 매우 낮음\n<br>\nid35:: {Nan:0, 'T':1, 'F':2}\n<br>\nid36:: {Nan:0, 'T':1, 'F':2}\n<br>\nid37:: {Nan:0, 'T':2, 'F':1}\n<br>\nid38:: {Nan:0, 'T':1, 'F':2}\n<br>\n",
"_____no_output_____"
]
],
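[
[
"A minimal sketch (for illustration only; not executed as part of the pipeline) of the alternative id_01 scaling suggested above — dividing by 5 instead of the log transform. It assumes the raw, untransformed `id_01` values (non-positive multiples of 5); the column name `id_01_div5` is made up for illustration:\n\n```python\n# Alternative to np.log(-id_01 + 1): since raw id_01 is <= 0 and a multiple of 5,\n# dividing its magnitude by 5 yields small non-negative integers.\ni_merged['id_01_div5'] = (-i_merged['id_01']) / 5\n```",
"_____no_output_____"
]
],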
[
[
"#Continous Features\ni_merged.id_01 = np.log(-i_merged.id_01 + 1)\ni_merged.id_02 = np.log(i_merged.id_02) ",
"_____no_output_____"
],
[
"medi = i_merged.id_02.median()",
"_____no_output_____"
],
[
"i_merged.id_02 = i_merged.id_02.fillna(medi)",
"_____no_output_____"
],
[
"i_merged.id_02.hist()",
"_____no_output_____"
],
[
"#id_02의 NaN값을 random하게 채워줌\n#i_merged['id_02_filled'] = i_merged['id_02']\n#temp = (i_merged['id_02'].dropna()\n# .sample(i_merged['id_02'].isnull().sum())\n# )\n#temp.index = i_merged[lambda x: x.id_02.isnull()].index\n#i_merged.loc[i_merged['id_02'].isnull(), 'id_02_filled'] = temp",
"_____no_output_____"
],
[
"#Categorical Features\n\ni_merged.DeviceType = i_merged.DeviceType.fillna(0).map({0:0, 'desktop': 1, 'mobile': 2})\ni_merged.DeviceInfo = i_merged.DeviceInfo.notnull().astype(int)\ni_merged.id_12 = i_merged.id_12.fillna(0).map({0:0, 'Found': 1, 'NotFound': 2})\ni_merged.id_13 = i_merged.id_13.notnull().astype(int)\ni_merged.id_14 = i_merged.id_14.notnull().astype(int)\ni_merged.id_14 = i_merged.id_14.notnull().astype(int)\ni_merged.id_15 = i_merged.id_15.fillna(0).map({0:0, 'New':1, 'Unknown':2, 'Found':3})\ni_merged.id_16 = i_merged.id_16.fillna(0).map({0:0, 'NotFound':1, 'Found':2})\ni_merged.id_17 = i_merged.id_17.notnull().astype(int)\ni_merged.id_18 = i_merged.id_18.notnull().astype(int) \ni_merged.id_19 = i_merged.id_19.notnull().astype(int)\ni_merged.id_20 = i_merged.id_20.notnull().astype(int)\ni_merged.id_23 = i_merged.id_23.fillna('temp').map({'temp':0, 'IP_PROXY:ANONYMOUS':2}).fillna(1)\ni_merged.id_27 = i_merged.id_27.fillna(0).map({0:0, 'NotFound':1, 'Found':2})\ni_merged.id_28 = i_merged.id_28.fillna(0).map({0:0, 'New':1, 'Found':2})\ni_merged.id_29 = i_merged.id_29.fillna(0).map({0:0, 'NotFound':1, 'Found':2})\ni_merged.id_30 = i_merged.id_30.notnull().astype(int)\ni_merged.id_31 = i_merged.id_31.notnull().astype(int)\ni_merged.id_32 = i_merged.id_32.fillna('temp').map({'temp':0, 24:1, 32:2, 16:3, 0:4})\ni_merged.id_33 = i_merged.id_33.notnull().astype(int)\ni_merged.id_34 = i_merged.id_34.fillna('temp').map({'temp':0, 'match_status:-1':1, 'match_status:0':3, 'match_status:1':4, 'match_status:2':2})\ni_merged.id_35 = i_merged.id_35.fillna(0).map({0:0, 'T':1, 'F':2})\ni_merged.id_36 = i_merged.id_38.fillna(0).map({0:0, 'T':1, 'F':2})\ni_merged.id_37 = i_merged.id_38.fillna(0).map({0:0, 'T':2, 'F':1})\ni_merged.id_38 = i_merged.id_38.fillna(0).map({0:0, 'T':1, 'F':2})",
"_____no_output_____"
]
],
[
[
"Identity_Device FE",
"_____no_output_____"
]
],
[
[
"i_merged['Device_info_clean'] = i_merged['DeviceInfo']\ni_merged['Device_info_clean'] = i_merged['Device_info_clean'].fillna('unknown')",
"_____no_output_____"
],
[
"def name_divide(name):\n if name == 'Windows':\n return 'Windows'\n elif name == 'iOS Device':\n return 'iOS Device'\n elif name == 'MacOS':\n return 'MacOS'\n elif name == 'Trident/7.0':\n return 'Trident/rv'\n elif \"rv\" in name:\n return 'Trident/rv'\n elif \"SM\" in name:\n return 'SM/moto/lg'\n elif name == 'SAMSUNG':\n return 'SM'\n elif 'LG' in name:\n return 'SM/Moto/LG'\n elif 'Moto' in name:\n return 'SM/Moto/LG'\n elif name == 'unknown':\n return 'unknown'\n else:\n return 'others'",
"_____no_output_____"
],
[
"i_merged['Device_info_clean'] = i_merged['Device_info_clean'].apply(name_divide)\ni_merged['Device_info_clean'].value_counts()",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>Identity_feature engineered_dataset</font>",
"_____no_output_____"
]
],
[
[
"i_merged.columns",
"_____no_output_____"
],
[
"selected = []\nselected.extend(['TransactionID', 'isFraud', 'id_01', 'id_02', 'DeviceType','Device_info_clean'])",
"_____no_output_____"
],
[
"id_exist = i_merged[selected].assign(Exist = 1)",
"_____no_output_____"
],
[
"id_exist.DeviceType.fillna('unknown', inplace = True)",
"_____no_output_____"
],
[
"id_exist.to_csv('identity_first.csv',index = False)",
"_____no_output_____"
]
],
[
[
"### <font color='blue'>Test: Decision Tree / Random Forest Test</font> ",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score, roc_auc_score",
"_____no_output_____"
],
[
"X = id_exist.drop(['isFraud'], axis = 1)\nY = id_exist['isFraud']\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)",
"_____no_output_____"
],
[
"tree_clf = DecisionTreeClassifier(max_depth=10)\ntree_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"pred = tree_clf.predict(X_test)\nprint('F1:{}'.format(f1_score(y_test, pred)))",
"F1:0.22078820004609356\n"
]
],
[
[
"--------------------------",
"_____no_output_____"
]
],
[
[
"param_grid = {\n 'max_depth': list(range(10,51,10)),\n 'n_estimators': [20, 20, 20]\n}\n\nrf = RandomForestClassifier()\ngs = GridSearchCV(estimator = rf, param_grid = param_grid, \n cv = 5, n_jobs = -1, verbose = 2)\ngs.fit(X_train,y_train)\nbest_rf = gs.best_estimator_",
"Fitting 5 folds for each of 15 candidates, totalling 75 fits\n"
],
[
"print('best parameter: \\n',gs.best_params_)",
"best parameter: \n {'max_depth': 20, 'n_estimators': 20}\n"
],
[
"y_pred = best_rf.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy:{}'.format(accuracy_score(y_test, y_pred)),\n 'Precision:{}'.format(precision_score(y_test, y_pred)),\n 'Recall:{}'.format(recall_score(y_test, y_pred)),\n 'F1:{}'.format(f1_score(y_test, y_pred)),\n 'ROC_AUC:{}'.format(roc_auc_score(y_test, y_pred)), sep = '\\n')",
"Accuracy:0.9298821354287035\nPrecision:0.6993006993006993\nRecall:0.20390329158170697\nF1:0.31574199368516015\nROC_AUC:0.5981737508690471\n"
]
],
[
[
"-----------------------",
"_____no_output_____"
],
[
"### <font color='blue'>거래 + ID merge</font> ",
"_____no_output_____"
]
],
[
[
"transaction_c = pd.read_csv('train_combined.csv')\nid_c = pd.read_csv('identity_first.csv')",
"_____no_output_____"
],
[
"region = pd.read_csv('region.csv')\ncountry = region[['TransactionID', 'Country_code']]\ncountry.head()",
"_____no_output_____"
],
[
"f_draft = transaction_c.merge(id_c.drop(['isFraud'], axis = 1) ,how = 'left', on = 'TransactionID')",
"_____no_output_____"
],
[
"f_draft.drop('DeviceInfo', axis = 1, inplace = True)",
"_____no_output_____"
],
[
"f_draft = f_draft.merge(country, how = 'left', on = 'TransactionID')\nf_draft.head()",
"_____no_output_____"
],
[
"f_draft.dtypes",
"_____no_output_____"
]
],
[
[
"Categorical: 'ProductCD', 'card4', 'card6', 'D15', 'DeviceType', 'Device_info_clean'",
"_____no_output_____"
]
],
[
[
"print(\nf_draft.ProductCD.unique(),\nf_draft.card4.unique(),\nf_draft.card6.unique(),\nf_draft.D15.unique(),\nf_draft.DeviceType.unique(),\nf_draft.Device_info_clean.unique(),\n)",
"['W' 'H' 'C' 'S' 'R'] ['discover' 'mastercard' 'visa' 'american express'] ['credit' 'debit' 'debit or credit' 'charge card'] [ 0. 315. 111. 318. 107. 45. 62. 109. 65. 26. 244. 391. 259. 121.\n 245. 290. 477. 541. 389. 22. 289. 2. 406. 458. 20. 5. 35. 12.\n 104. 248. 237. 466. 284. 46. 455. 456. 218. 77. 450. 403. 444. 71.\n 9. 39. 428. 327. 40. 249. 143. 292. 416. 36. 362. 454. 72. 479.\n 120. 426. 247. 453. 457. 124. 335. 7. 145. 14. 100. 413. 232. 268.\n 63. 37. 591. 30. 363. 190. 374. 76. 151. 152. 10. 32. 82. 17.\n 299. 163. 233. 66. 81. 55. 102. 211. 462. 242. 485. 142. 338. 321.\n 125. 127. 302. 48. 93. 137. 304. 421. 330. 471. 212. 6. 27. 264.\n 117. 461. 439. 90. 15. 173. 126. 401. 449. 347. 11. 440. 451. 97.\n 420. 49. 101. 280. 481. 332. 385. 79. 149. 470. 380. 164. 1. 204.\n 483. 319. 8. 394. 309. 43. 480. 92. 58. 438. 448. 69. 350. 42.\n 367. 67. 566. 314. 105. 99. 467. 371. 28. 351. 56. 89. 459. 74.\n 230. 274. 91. 256. 255. 172. 213. 51. 19. 293. 398. 474. 484. 18.\n 475. 167. 160. 346. 29. 473. 60. 222. 44. 388. 201. 312. 59. 425.\n 103. 266. 260. 108. 75. 262. 179. 382. 472. 365. 129. 57. 277. 402.\n 240. 464. 323. 47. 287. 251. 227. 3. 84. 147. 80. 468. 352. 313.\n 433. 83. 54. 476. 13. 334. 183. 434. 465. 469. 478. 169. 214. 189.\n 112. 364. 61. 241. 156. 73. 85. 70. 378. 178. 298. 122. 16. 395.\n 181. 361. 197. 427. 133. 273. 234. 275. 215. 303. 52. 392. 23. 225.\n 153. 203. 349. 98. 336. 216. 131. 182. 344. 252. 154. 320. 279. 134.\n 114. 33. 87. 195. 162. 452. 86. 4. 296. 340. 442. 295. 226. 188.\n 144. 168. 356. 492. 78. 286. 307. 490. 269. 250. 25. 446. 422. 415.\n 390. 228. 50. 115. 317. 424. 138. 443. 376. 423. 38. 331. 301. 41.\n 96. 399. 436. 343. 202. 161. 288. 31. 238. 486. 306. 185. 368. 184.\n 155. 357. 397. 210. 396. 460. 235. 243. 187. 221. 482. 272. 180. 206.\n 322. 106. 24. 193. 354. 263. 348. 118. 447. 325. 130. 435. 270. 291.\n 95. 68. 165. 217. 379. 373. 310. 53. 177. 407. 355. 170. 208. 239.\n 166. 236. 305. 200. 191. 253. 316. 258. 360. 353. 387. 265. 123. 34.\n 194. 171. 192. 393. 487. 219. 94. 175. 281. 404. 409. 186. 429. 308.\n 207. 543. 119. 140. 377. 328. 326. 136. 283. 176. 278. 271. 209. 148.\n 430. 300. 431. 21. 174. -15. 491. 116. -30. 437. 337. 196. 400. 369.\n 88. 383. 297. 489. 418. 626. 410. 358. 329. 139. 555. 493. 432. 267.\n 582. 417. 345. 64. 665. 294. 113. 141. 311. 110. 366. 359. -13. 381.\n 282. 695. 463. 158. 445. 198. -83. 135. 132. 556. 159. 657. 128. 339.\n 333. 257. 567. 488. -1. 254. 342. 412. 150. 261. 229. 199. 324. 224.\n 414. 246. 285. 441. -60. 411. 341. 375. 583. 220. 408. 231. 372. 370.\n 276. 157. 419. 405. 223. 146. 544. 384. 494. 642. -2. 205. 699. 495.\n 585. 623. 701. 684. 386. 497. 537. 700. 653. 504. 616. 514. 618. 507.\n 630. 588. 528. 598. 703. 664. 496. 499. 500. 599. 705. 526. 674. 670.\n 600. 678. 694. 698. 590. 501. 575. 502. 498. 557. 610. 633. 576. 602.\n 592. 560. 552. 542. 697. 603. 551. 709. 578. 520. 629. 635. 505. 661.\n 518. 594. 648. 512. 686. 710. 708. 604. 547. 637. 638. 574. 596. 531.\n 634. 519. 713. 715. 503. 554. 666. 506. 658. 682. 609. 683. 536. 672.\n 676. 509. 620. 525. 508. 714. 660. 584. 510. 663. 511. 553. 597. 535.\n 706. 527. 549. 631. 613. 671. 712. 696. 702. 593. 655. 716. 612. 719.\n 644. 624. 718. 615. 721. 595. 563. 704. 614. 513. 515. 548. 606. 685.\n 605. 659. 522. 545. 724. 540. 516. 617. 656. 675. 647. 667. 601. 645.\n 726. 559. 677. 651. 517. 570. 581. 643. 572. 521. 561. 728. 725. 529.\n 523. 693. 538. 649. 524. 722. 627. 731. 628. 692. 732. 736. 580. 734.\n 530. 733. 
621. 532. 533. 607. 735. 689. 730. 608. 573. 691. 534. 740.\n 742. 679. 744. 539. 625. 707. 579. 589. 641. 739. 743. 749. 727. 729.\n 750. 711. 646. 546. 564. 753. 717. 752. 640. 569. 720. 757. 758. 654.\n 687. 754. 688. 550. 759. 558. 737. 763. 680. 577. 652. 662. 562. 639.\n 770. 766. 650. 565. -6. 771. 741. 768. 568. 668. 669. 745. 571. 673.\n 774. 586. 779. 755. 781. 756. 782. 723. 619. 786. 785. 788. 587. 764.\n 790. 787. 738. 773. -53. 791. 798. 799. 765. 793. 805. 803. 806. 780.\n 778. 801. 784. 796. 812. 815. 777. 611. 622. 632. 772. 818. 821. 751.\n 802. 807. 690. -3. -29. 819. 789. 746. 827. 824. 829. 825. 748. 830.\n -28. 800. 833. 813. 636. 835. 804. 837. -74. 809. 836. 841. 842. 811.\n 843. 814. 845. 847. 844. 839. 851. 850. 810. 761. 747. 854. 823. 820.\n 859. 857. 838. 849. 855. 840. 760. 864. 861. 834. 681. 868. 769. 865.\n 867. 856. 876. 879. 878.] [nan 'mobile' 'desktop' 'unknown'] [nan 'SM/moto/lg' 'iOS Device' 'Windows' 'unknown' 'MacOS' 'others'\n 'Trident/rv']\n"
],
[
"print(map_ProductCD, map_card4,map_card6,map_D15, sep = '\\n')",
"{'W': 0, 'H': 1, 'C': 2, 'S': 3, 'R': 4}\n{'discover': 0, 'mastercard': 1, 'visa': 2, 'american express': 3}\n{'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\n{'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\n"
]
],
[
[
"map_ProductCD = {'W': 0, 'H': 1, 'C': 2, 'S': 3, 'R': 4}\n<br>\nmap_card4 = {'discover': 0, 'mastercard': 1, 'visa': 2, '}american express': 3}\n<br>\nmap_card6 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\n<br>\nmap_D15 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\n<br>\nmap_DeviceType = {'mobile':2 'desktop':1 'unknown':0}\n<br>\nmap_Device_info_clean = {'SM/moto/lg':1, 'iOS Device':2, 'Windows':3, 'unknown':0, 'MacOS':4, 'others':5,\n 'Trident/rv':6}",
"_____no_output_____"
]
],
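[
[
"The mapping dictionaries used in the next cell are not defined in any cell shown above; a reconstruction is sketched here for reference. The first four maps match the printed output above, while `map_DeviceType` and `map_Device_info_clean` follow the description in the previous cell and are assumptions to that extent:\n\n```python\nmap_ProductCD = {'W': 0, 'H': 1, 'C': 2, 'S': 3, 'R': 4}\nmap_card4 = {'discover': 0, 'mastercard': 1, 'visa': 2, 'american express': 3}\nmap_card6 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\n# Copied from the printout above; note D15 holds numeric day counts, so mapping it\n# with these string keys would produce NaN values.\nmap_D15 = {'credit': 0, 'debit': 1, 'debit or credit': 2, 'charge card': 3}\nmap_DeviceType = {'mobile': 2, 'desktop': 1, 'unknown': 0}\nmap_Device_info_clean = {'SM/moto/lg': 1, 'iOS Device': 2, 'Windows': 3, 'unknown': 0,\n                         'MacOS': 4, 'others': 5, 'Trident/rv': 6}\n```",
"_____no_output_____"
]
],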
[
[
"f_draft.ProductCD = f_draft.ProductCD.map(map_ProductCD)\nf_draft.card4 = f_draft.card4.map(map_card4)\nf_draft.card6 = f_draft.card6.map(map_card6)\nf_draft.D15 = f_draft.D15.map(map_D15)\nf_draft.DeviceType = f_draft.DeviceType.map(map_DeviceType)\nf_draft.Device_info_clean = f_draft.Device_info_clean.map(map_Device_info_clean)",
"_____no_output_____"
],
[
"f_draft.to_csv('transaction_id_combined(no_label_encoded).csv', index = False)",
"_____no_output_____"
],
[
"f_draft.ProductCD = f_draft.ProductCD.astype('category')\nf_draft.card4 = f_draft.card4.astype('category')\nf_draft.card6 = f_draft.card6.astype('category')\nf_draft.card1 = f_draft.card1.astype('category')\nf_draft.card2 = f_draft.card2.astype('category')\nf_draft.card3 = f_draft.card3.astype('category')\nf_draft.card5 = f_draft.card5.astype('category')\nf_draft.D15 = f_draft.D15.astype('category')\nf_draft.DeviceType = f_draft.DeviceType.astype('category')\nf_draft.Device_info_clean = f_draft.Device_info_clean.astype('category')\nf_draft.Country_code = f_draft.Country_code.astype('category')",
"_____no_output_____"
],
[
"f_draft.card1 = f_draft.card1.astype('category')\nf_draft.card2 = f_draft.card2.astype('category')\nf_draft.card3 = f_draft.card3.astype('category')\nf_draft.card5 = f_draft.card5.astype('category')",
"_____no_output_____"
],
[
"f_draft.dtypes",
"_____no_output_____"
],
[
"f_draft.to_csv('transaction_id_combined.csv', index = False)",
"_____no_output_____"
],
[
"f_draft.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0991984908cf4c4494a1586cdddfea0f880d422 | 18,034 | ipynb | Jupyter Notebook | scripts/.ipynb_checkpoints/getOverviews-checkpoint.ipynb | mhkoscience/leskourova-offcenter | 213686756f8935e49bc877b0276700b42439eaca | [
"MIT"
] | null | null | null | scripts/.ipynb_checkpoints/getOverviews-checkpoint.ipynb | mhkoscience/leskourova-offcenter | 213686756f8935e49bc877b0276700b42439eaca | [
"MIT"
] | null | null | null | scripts/.ipynb_checkpoints/getOverviews-checkpoint.ipynb | mhkoscience/leskourova-offcenter | 213686756f8935e49bc877b0276700b42439eaca | [
"MIT"
] | null | null | null | 31.805996 | 121 | 0.360874 | [
[
[
"import glob\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# README\n\nThis notebook extracts some information about fitting. For each molecule, it creates a CSV file.\n\nIt calculates the Euclidean distance and topological distance (number of bonds separating an atom and the halogen).",
"_____no_output_____"
]
],
[
[
"def parsePrepAc(prep_ac):\n \n # read file content\n with open(prep_ac) as stream:\n lines = stream.readlines()\n \n # browse file content\n atoms = {}\n bonds = []\n ref_at_name = None\n for line in lines:\n \n l_spl = line.split()\n \n # skip short\n if len(l_spl) == 0:\n continue\n \n # save atom\n if l_spl[0] == \"ATOM\":\n at_id = int(l_spl[1])\n at_name = l_spl[2]\n at_type = l_spl[-1]\n x = float(line[30:38])\n y = float(line[38:46])\n z = float(line[46:54])\n atoms[at_name] = [at_id, at_type, np.array((x, y, z))]\n \n if \"I\" in at_name or \"Cl\" in at_name or \"Br\" in at_name:\n ref_at_name = at_name\n continue\n \n\n if l_spl[0] == \"BOND\":\n at_name1 = l_spl[-2]\n at_name2 = l_spl[-1]\n bonds.append([at_name1, at_name2])\n \n return atoms, bonds, ref_at_name\n\n \n\ndef getNBDistances(atoms, bonds, ref_at_name):\n \n distances = []\n \n for atom in atoms:\n\n distance = findShortestNBDistance(atom, bonds, ref_at_name)\n distances.append(distance)\n \n return distances\n\n\ndef findShortestNBDistance(atom, bonds, ref_atom):\n dist = 0\n \n starts = [atom]\n \n while True:\n ends = []\n for start in starts:\n if start == ref_atom:\n return dist\n for bond in bonds:\n if start in bond:\n end = [i for i in bond if i != start][0]\n ends.append(end)\n starts = ends\n dist += 1\n\n \ndef getEuclideanDistances(atoms, ref_at_name):\n \n distances = []\n \n coords_ref = atoms[ref_at_name][2]\n \n for at_name, at_values in atoms.items():\n\n at_id, at_type, coords = at_values\n \n distance = np.linalg.norm(coords_ref - coords)\n distances.append(distance)\n \n return distances\n\n\n\n\ndef getChargesFromPunch(punch, n_atoms, sigma=False):\n \n # initialize output container\n charges = []\n \n # read file content\n with open(punch) as stream:\n lines = stream.readlines()\n\n # define, where to find atoms and charges\n lines_start = 11\n lines_end = lines_start + n_atoms\n if sigma:\n lines_end += 1\n \n # browse selected lines and save charges\n for line in lines[lines_start:lines_end]:\n l_spl = line.split()\n charge = float(l_spl[3])\n charges.append(charge)\n \n return charges\n\n\ndef sortAtoms(atoms):\n at_names = list(atoms.keys())\n at_ids = [i[0] for i in atoms.values()]\n at_types = [i[1] for i in atoms.values()]\n atoms_unsorted = list(zip(at_names, at_ids, at_types))\n atoms_sorted = sorted(atoms_unsorted, key=lambda x: x[1])\n at_names_sorted = [a[0] for a in atoms_sorted]\n at_types_sorted = [a[2] for a in atoms_sorted]\n return at_names_sorted, at_types_sorted\n\n\n\nfor halogen in \"chlorine bromine iodine\".split():\n \n mols = sorted(glob.glob(f\"../{halogen}/ZINC*\"))\n\n for mol in mols:\n\n # get info about atoms and bonds\n prep_ac = mol + \"/antechamber/ANTECHAMBER_PREP.AC\"\n atoms, bonds, ref_at_name = parsePrepAc(prep_ac)\n n_atoms = len(atoms)\n\n # number-of-bond distance from the halogen\n nb_distances = getNBDistances(atoms, bonds, ref_at_name)\n \n # eucledian distances from the halogen\n distances = getEuclideanDistances(atoms, ref_at_name)\n\n # standard RESP charges\n punch_std = mol + \"/antechamber/punch\"\n qs_std = getChargesFromPunch(punch_std, n_atoms)\n\n # modified RESP charges including sigma-hole\n punch_mod = mol + \"/mod2/punch\"\n qs_mod = getChargesFromPunch(punch_mod, n_atoms, sigma=True)\n\n # correct sorting of atoms\n atom_names_sorted, atom_types_sorted = sortAtoms(atoms)\n\n # output dataframe\n df = pd.DataFrame({\"name\": atom_names_sorted + [\"X\"],\n \"type\": atom_types_sorted + [\"x\"],\n \"nb_distance\": nb_distances + [-1],\n 
\"distance\": distances + [-1],\n \"q_std\": qs_std + [0],\n \"q_mod\": qs_mod})\n\n # save\n df.to_csv(mol + \"/overview.csv\", index=False)\n\n\"done\"",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d09926dae5fdd0fef590a05ae8f7a691ac90debe | 165,137 | ipynb | Jupyter Notebook | DataCleansingAndEda.ipynb | Pager07/A-Hackers-AI-Voice-Assistant | 7bb51da05afb08c7f0b355592484fe6e4eae8a3d | [
"MIT"
] | null | null | null | DataCleansingAndEda.ipynb | Pager07/A-Hackers-AI-Voice-Assistant | 7bb51da05afb08c7f0b355592484fe6e4eae8a3d | [
"MIT"
] | null | null | null | DataCleansingAndEda.ipynb | Pager07/A-Hackers-AI-Voice-Assistant | 7bb51da05afb08c7f0b355592484fe6e4eae8a3d | [
"MIT"
] | null | null | null | 81.791481 | 39,642 | 0.749057 | [
[
[
"<a href=\"https://colab.research.google.com/github/Pager07/A-Hackers-AI-Voice-Assistant/blob/master/DataCleansingAndEda.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"#Load Data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nisMergedDatasetAvailabel = True\nif not isMergedDatasetAvailabel:\n train_bodies_df = pd.read_csv('train_bodies.csv')\n train_stance_df = pd.read_csv('train_stances.csv')\n test_bodies_df = pd.read_csv('competition_test_bodies.csv')\n test_stance_df = pd.read_csv('competition_test_stances.csv')\n \n #merge the training dataframe\n train_merged = pd.merge(train_stance_df,train_bodies_df,on='Body ID',how='outer')\n test_merged = pd.merge(test_stance_df,test_bodies_df,on='Body ID', how='outer')\nelse:\n train_merged = pd.read_csv('train_merged.csv',index_col=0)\n test_merged = pd.read_csv('test_merged.csv',index_col=0)",
"_____no_output_____"
],
[
"train_merged.head()",
"_____no_output_____"
],
[
"test_merged.head()",
"_____no_output_____"
]
],
[
[
"#Data Cleaning",
"_____no_output_____"
]
],
[
[
"import re\nimport numpy as np\n\nfrom sklearn import feature_extraction\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nimport nltk\nfrom nltk.corpus import wordnet\nfrom nltk.tokenize import word_tokenize\n\n#downloads\nnltk.download('punkt')\nnltk.download('wordnet')\nnltk.download('stopwords')",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Unzipping corpora/wordnet.zip.\n[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n"
],
[
"wnl= nltk.WordNetLemmatizer()\ndef normalize(word):\n '''\n Helper function fo get_normalized_tokens()\n Takes a word and lemmatizes it eg. bats -> bat\n Args:\n word: str\n '''\n return wnl.lemmatize(word,wordnet.VERB).lower()\n\ndef get_normalized_tokens(seq):\n '''\n Takes a sentence and returns normalized tokens\n Args:\n seq: str, A sentece\n '''\n normalized_tokens = []\n for token in nltk.word_tokenize(seq):\n normalized_tokens.append(normalize(token))\n return normalized_tokens\n\ndef clean(seq):\n '''\n Takes a senetence and removes emojies, non-numerical, non-alphabetically words \n Args:\n seq: str, A sentece\n '''\n valid = re.findall(r'\\w+', seq, flags=re.UNICODE)\n seq = ' '.join(valid).lower()\n return seq\n\ndef remove_stopwords(token_list):\n '''\n Args:\n token_list: List, containg tokens\n '''\n filtered_token_list = []\n for w in token_list:\n if w not in feature_extraction.text.ENGLISH_STOP_WORDS:\n filtered_token_list.append(w)\n return filtered_token_list\n\n\ndef preprocess(sentence):\n '''\n This function takes in a raw body sentence|title and returns preproccesed sentence\n\n '''\n #Remove non-alphabatically, non-numerical,emojis etc..\n sentence = clean(sentence)\n #(normalization/lemmatization)\n tokens = get_normalized_tokens(sentence)\n #remove any stopwords\n tokens = remove_stopwords(tokens)\n sentence = ' '.join(tokens)\n return sentence\n\n\n\n\n\n",
"_____no_output_____"
],
[
"train_merged['articleBody']= train_merged['articleBody'].apply(preprocess)\ntest_merged['articleBody'] = test_merged['articleBody'].apply(preprocess)\ntrain_merged['Headline']=train_merged['Headline'].apply(preprocess)\ntest_merged['Headline']= test_merged['Headline'].apply(preprocess)",
"_____no_output_____"
],
[
"train_merged.to_csv('train_merged.csv')",
"_____no_output_____"
],
[
"test_merged.to_csv('test_merged.csv')",
"_____no_output_____"
]
],
[
[
"#EDA",
"_____no_output_____"
]
],
[
[
"def get_top_trigrams(corpus, n=10):\n vec = CountVectorizer(ngram_range=(3, 3)).fit(corpus) # parameter is set for 2 (bigram)\n \n bag_of_words = vec.transform(corpus)\n sum_words = bag_of_words.sum(axis=0)\n \n words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]\n words_freq = sorted(words_freq, key = lambda x: x[1], reverse=True)\n \n return words_freq[:n]\n\n",
"_____no_output_____"
],
[
"#first let us check the biagram of all the data\nplt.figure(figsize=(10, 5))\ntop_tweet_bigrams = get_top_trigrams(train_merged['Headline'],n=20)\n\ny, x = map(list, zip(*top_tweet_bigrams))\n\nsns.barplot(x=x, y=y)\nplt.title('Biagrams (Headline)')",
"_____no_output_____"
],
[
"#first let us check the biagram of all the data\nplt.figure(figsize=(10, 5))\ntop_tweet_bigrams = get_top_trigrams(train_merged['articleBody'],n=20)\n\ny, x = map(list, zip(*top_tweet_bigrams))\n\nsns.barplot(x=x, y=y)\nplt.title('Biagrams (articleBody)')",
"_____no_output_____"
],
[
"word = 'plays'\nout = normalize(word)\nassert out == 'play'\n\ntext ='hello I #like to eatsfood 123'\nout = get_normalized_tokens(text)\nassert out == ['hello', 'i', '#', 'like', 'to', 'eatsfood','123']\n\ntext ='. hello I #like to eatsfood 123 -+~@:%^&www.*😔😔'\nout = clean(text);out\nassert out == 'hello i like to eatsfood 123 www'\n\ntoken_list = ['hello', 'i', '#', 'like', 'to', 'eatsfood','123']\nout = remove_stopwords(token_list);\nassert out == ['hello', '#', 'like', 'eatsfood', '123']\n\ntext ='. hello bats,cats, alphakenny I am #like to eatsfood 123 -+~@:%^&www.*😔😔'\nout = preprocess(text); out",
"_____no_output_____"
],
[
"#Very imblanaced\ntrain_merged['Stance'].hist()",
"_____no_output_____"
],
[
"test_merged['Stance'].hist()",
"_____no_output_____"
],
[
"lens = train_merged['Headline'].str.len()\nlens.mean(), lens.std(), lens.max()",
"_____no_output_____"
],
[
"lens = test_merged['Headline'].str.len()\nlens.mean(), lens.std(), lens.max()",
"_____no_output_____"
],
[
"#The lenght seem to vary alot\nlens = train_merged['articleBody'].str.len()\nlens.mean(), lens.std(), lens.max()",
"_____no_output_____"
],
[
"lens = test_merged['articleBody'].str.len()\nlens.mean(), lens.std(), lens.max()",
"_____no_output_____"
]
],
[
[
"#1.a tf-idf feature extraction",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\nfrom scipy.sparse import hstack\n\ntotaldata= (train_merged['articleBody'].tolist() + train_merged['Headline'].tolist()+test_merged['articleBody'].tolist()+test_merged['Headline'].tolist())\ntfidf_vect = TfidfVectorizer(analyzer='word', token_pattern=r'\\w{1,}', max_features=80, stop_words='english')\ntfidf_vect.fit(totaldata)\n\n\nprint('===Starting train headline====')\ntrain_head_feature= tfidf_vect.transform(train_merged['Headline']) #(49972, 80)\nprint('===Starting Train body====')\ntrain_body_feature= tfidf_vect.transform(train_merged['articleBody']) #(49972, 80)\nprint('===Starting Test headline====')\ntest_head_feature= tfidf_vect.transform(test_merged['Headline']) #(25413, 80)\nprint('===Starting Test articleBody====')\ntest_body_feature = tfidf_vect.transform(test_merged['articleBody']) #(25413, 80)\n\ndef binary_labels(label):\n if label in ['discuss', 'agree', 'disagree']:\n return 'related'\n elif label in ['unrelated']:\n return label\n else:\n assert f'{label} not found!'\n\ntrain_merged_labels = train_merged['Stance'].apply(binary_labels)\ntest_merged_labels = test_merged['Stance'].apply(binary_labels)\nprint(train_merged_labels.unique(), test_merged_labels.unique())\nX_train_tfidf,Y_train = hstack([train_head_feature,train_body_feature]).toarray(), train_merged_labels.values\nX_test_tfidf,Y_test = hstack([test_head_feature,test_body_feature]).toarray(), test_merged_labels.values",
"===Starting train headline====\n===Starting Train body====\n===Starting Test headline====\n===Starting Test articleBody====\n['unrelated' 'related'] ['unrelated' 'related']\n"
]
],
[
[
"#Train with tf-idf features - Navie Bayes\n",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import accuracy_score\nfrom sklearn import metrics\nfrom sklearn.metrics import confusion_matrix,accuracy_score,roc_auc_score,roc_curve,auc,f1_score\nfrom sklearn.preprocessing import LabelEncoder",
"_____no_output_____"
],
[
"def binary_labels(label):\n if label in ['discuss', 'agree', 'disagree']:\n return 'related'\n elif label in ['unrelated']:\n return label\n else:\n assert f'{label} not found!'\n\ntrain_merged_labels = train_merged['Stance'].apply(binary_labels)\ntest_merged_labels = test_merged['Stance'].apply(binary_labels)\nprint(train_merged_labels.unique(), test_merged_labels.unique())",
"['unrelated' 'related'] ['unrelated' 'related']\n"
],
[
"X_train_tfidf,Y_train = hstack([train_head_feature,train_body_feature]).toarray(), train_merged_labels.values\nX_test_tfidf,Y_test = hstack([test_head_feature,test_body_feature]).toarray(), test_merged_labels.values",
"_____no_output_____"
],
[
"print(X_train_tfidf.shape,X_test_tfidf.shape )\n",
"(49972, 160) (25413, 160)\n"
],
[
"train_merged['Stance'].unique()",
"_____no_output_____"
],
[
"net = MultinomialNB(alpha=0.39)\nnet.fit(X_train_tfidf, Y_train)\nprint(\"train score:\", net.score(X_train_tfidf, Y_train))\nprint(\"validation score:\", net.score(X_test_tfidf, Y_test))",
"train score: 0.7430561114223966\nvalidation score: 0.7278164718844686\n"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sn\nplt.style.use('ggplot')\n# Create the confussion matrix\ndef plot_confussion_matrix(y_test, y_pred):\n ''' Plot the confussion matrix for the target labels and predictions '''\n cm = confusion_matrix(y_test, y_pred)\n\n # Create a dataframe with the confussion matrix values\n df_cm = pd.DataFrame(cm, range(cm.shape[0]),\n range(cm.shape[1]))\n\n # Plot the confussion matrix\n sn.set(font_scale=1.4) #for label size\n sn.heatmap(df_cm, annot=True,fmt='.0f',cmap=\"YlGnBu\",annot_kws={\"size\": 10})# font size\n plt.show()\n\n\n# ROC Curve\n# plot no skill\n# Calculate the points in the ROC curve\ndef plot_roc_curve(y_test, y_pred):\n ''' Plot the ROC curve for the target labels and predictions'''\n\n enc = LabelEncoder()\n y_test = enc.fit_transform(y_test)\n y_pred = enc.fit_transform(y_pred)\n fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label=1)\n roc_auc= auc(fpr,tpr)\n plt.figure(figsize=(12, 12))\n ax = plt.subplot(121)\n ax.set_aspect(1)\n \n plt.title('Receiver Operating Characteristic')\n plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)\n plt.legend(loc = 'lower right')\n plt.plot([0, 1], [0, 1],'r--')\n plt.xlim([0, 1])\n plt.ylim([0, 1])\n plt.ylabel('True Positive Rate')\n plt.xlabel('False Positive Rate')\n plt.show()",
"_____no_output_____"
],
[
"# Predicting the Test set results\nprediction = net.predict(X_test_tfidf)\n\n#print the classification report to highlight the accuracy with f1-score, precision and recall\nprint(metrics.classification_report(prediction, Y_test))\nplot_confussion_matrix(prediction, Y_test)\nplot_roc_curve(prediction, Y_test)",
"_____no_output_____"
]
],
[
[
"#TF-idf Binary classification with logistice regression",
"_____no_output_____"
],
[
"Steps:\n - Create k-fold starififed dataloader [x]\n - Use Sampler to have more control over the batch\n - Write a function get_dataloaders() that will return dict of shape fold_id x tuple. The tuple contains dataloader \n - Train the modle on all the splits [x]\n - How?\n - Write a function that will train for 1 single fold \n - It will take the train_loader and test_loader of that split\n - These loaders can be accessed by the get_dataloaders()\n- Evaluate the model \n - Do we need to evaluate the model after each epoch?\n - Yes we need need to\n - Print the stats \n - Track the stats\n - Use tracked stats of (fold x stats) to generate global stats \n - What is stats, in other words what are we using to measure the performance?\n - Accurracy and F-Score??\n - the class-wise and the macro-averaged F1scores \n - this metrics are not affected by the large size of the majority class. \n - What is class-wise F1score?\n - harmoic means of precison and recalls of four class\n - What is F1m meteric?\n - The macro F1 Score\n - What is macro F1 Score?\n - Draw/do compuatation across all the rows then compute average across that\n - How can we get this score?\n - Use sklearn classification report\n - set the output_dict=1\n - out['macro avg']['f1-score']\n - out['macro avg']['accuracy']\n - How will I know if the model is overfitting?\n - calcualte the test loss \n\n - At last I can send the whole test set for classification\n - then plot ROC\n - confusion matrxi\n \n - What about the class weights?\n - FNC-1 paper: \n - 0.25 reward crrectly classfiying reward R\n - 1-0.25: 0.75 (extra pentaly)\n - Total Pentatlty: 1+0.75\n - 0.25 reward crrectly classfiying reward UR\n - Train the model\n - Load the dataset\n - load the csv\n - load the X_Train,Y-train\n - load the X_text , Y_test\n - Send them into gpu\n - trian\n \n\n\n \n",
"_____no_output_____"
]
],
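As a quick illustration of the metric described in the notes above, here is a minimal sketch of pulling the macro-averaged F1 score out of scikit-learn's `classification_report` with `output_dict=True`. The label values are toy examples for illustration only, not taken from the dataset.

```python
# Minimal sketch: macro F1 from classification_report (toy labels, for illustration)
from sklearn.metrics import classification_report

y_true = ['related', 'unrelated', 'related', 'unrelated', 'related']
y_pred = ['related', 'related', 'related', 'unrelated', 'unrelated']

report = classification_report(y_true, y_pred, output_dict=True)
macro_f1 = report['macro avg']['f1-score']  # average of the per-class F1 scores
accuracy = report['accuracy']               # overall accuracy sits at the top level of the dict
print(macro_f1, accuracy)
```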
[
[
"from torch.utils.data import DataLoader,Dataset\nimport torch\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import StratifiedKFold\nfrom torch.utils.data import ConcatDataset,SubsetRandomSampler\nfrom collections import defaultdict\nclass TfidfBinaryStanceDataset(Dataset):\n def __init__(self, X,Y):\n '''\n Args:\n X: (samples x Features)\n Y: (samples). containing binary class eg. [1,0,1,1,....] \n '''\n super(TfidfBinaryStanceDataset, self).__init__()\n self.x = torch.tensor(X).float()\n self.y = torch.tensor(Y).long()\n def __len__(self):\n return len(self.x)\n def __getitem__(self,idx):\n return (self.x[idx] ,self.y[idx])\n\n\ndef get_dataloaders(x_train,y_train,x_test,y_test,bs=256,nfold=5):\n '''\n Args:\n x_train: nd.array of shape (samples x features)\n y_train: nd.array of shape (labels )\n x_test: nd.array of shape (samples x features)\n y_test: nd.array of shape (labels )\n nfold: Scalar, number of total folds, It can't be greater than number of samples in each class\n Returns:\n loaders: Dict of shape (nfolds x 2), where the keys are fold ids and tuple containing train and test loader for \n that split\n '''\n train_dataset = TfidfBinaryStanceDataset(x_train,y_train)\n test_dataset = TfidfBinaryStanceDataset(x_test,y_test)\n dataset = ConcatDataset([train_dataset,test_dataset]) #A big dataset\n \n kfold = StratifiedKFold(n_splits=nfold, shuffle=False)\n labels = [data[1] for data in dataset]\n loaders = defaultdict(tuple)\n for fold,(train_ids,test_ids) in enumerate(kfold.split(dataset,labels)):\n train_subsampler = SubsetRandomSampler(train_ids)\n test_subsampler = SubsetRandomSampler(test_ids)\n train_loader = torch.utils.data.DataLoader(dataset,batch_size=bs, sampler=train_subsampler) #\n test_loader = torch.utils.data.DataLoader(dataset,batch_size=bs, sampler=test_subsampler)\n loaders[fold] = (train_loader,test_loader)\n return loaders\n \n \n \n\n \n \n \n \n",
"_____no_output_____"
],
[
"import torch\nimport torch.nn as nn\nfrom torch.optim import Adam\nimport numpy as np\nfrom collections import defaultdict\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import classification_report\n\nclass LogisticRegression(nn.Module):\n def __init__(self, input_dim, output_dim):\n super(LogisticRegression,self).__init__()\n self.linear = nn.Linear(input_dim,output_dim)\n \n def forward(self,x):\n out = self.linear(x)\n return out\n\ndef eval_one_epoch(net,dataloader,optim,lfn,triplet_lfn,margin):\n net.eval()\n losses = []\n f1m = []\n for batch_id, (x,y) in enumerate(dataloader):\n assert len(torch.unique(y)) != 1\n x = x.to(device).float()\n y = y.to(device).long()\n hs = net(x) #(sampels x 2)\n #BCE-loss\n ce_loss = lfn(hs,y)\n \n #triplet-loss\n #generate the triplet\n probs = hs.softmax(dim=1) #(samples x 2)\n y_hat = probs.argmax(dim=1)\n anchors,positives, negatives = generate_triplets(hs,y_hat,y) #(misclassified_samples, d_model=2)\n anchors,positives, negatives = mine_hard_triplets(anchors,positives,negatives,margin) \n triplet_loss = triplet_lfn(anchors,positives,negatives)\n \n #total-loss \n loss = (ce_loss + triplet_loss)/2\n losses += [loss.item()]\n\n target_names = ['unrelated','related']\n f1m += [classification_report(y_hat.detach().cpu().numpy(),\n y.detach().cpu().numpy(), target_names=target_names,output_dict=1)['macro avg']['f1-score']]\n return np.mean(losses), np.mean(f1m)\n\n \n\ndef train_one_epoch(net,dataloader,optim,lfn,triplet_lfn,margin):\n net.train()\n losses = []\n for batch_id, (x_train,y_train) in enumerate(dataloader):\n x_train = x_train.to(device).float()\n y_train = y_train.to(device).long()\n hs = net(x_train) #(sampels x 2)\n \n #BCE-loss\n ce_loss = lfn(hs,y_train)\n \n #triplet-loss\n #generate the triplet\n probs = hs.softmax(dim=1) #(samples x 2)\n y_hat = probs.argmax(dim=1)\n anchors,positives, negatives = generate_triplets(hs,y_hat,y_train) #(misclassified_samples, d_model=2)\n anchors,positives, negatives = mine_hard_triplets(anchors,positives,negatives,margin) \n triplet_loss = triplet_lfn(anchors,positives,negatives)\n \n #total-loss \n loss = (ce_loss + triplet_loss)/2\n loss.backward()\n optim.step()\n optim.zero_grad()\n\n losses += [loss.item()]\n return sum(losses)/len(losses)\n\ndef mine_hard_triplets(anchors,positives,negatives,margin):\n '''\n Args:\n anchor: Tensor of shape (missclassified_samples x 2 )\n positive: Tensor of shape (missclassified_smaples_positive x 2)\n negative: Tensor of shape (missclassified_smaples_negative x 2)\n \n Returns:\n anchor: Tensor of shape (hard_missclassified_samples x 2 )\n positive: Tensor of shape (hard_missclassified_smaples_positive x 2)\n negative: Tensor of shape (hard_missclassified_smaples_negative x 2)\n \n '''\n #mine-semihar triplets \n l2_dist = nn.PairwiseDistance()\n d_p = l2_dist(anchors, positives) \n d_n = l2_dist(anchors, negatives) \n hard_triplets = torch.where((d_n - d_p < margin))[0]\n\n anchors = anchors[hard_triplets]\n positives = positives[hard_triplets]\n negatives = negatives[hard_triplets]\n return anchors,positives,negatives\n\ndef generate_triplets(hs,y_hat,y):\n '''\n Args:\n hs: (Samples x 2) \n y_hat: Tensor of shape (samples,), Containing predicted label eg. [1,0,1,1,1,1]\n y: Tensor of shape (samples,), Containing GT label eg. 
[1,0,1,1,1,1] \n \n Returns:\n anchor: Tensor of shape (missclassified_samples x 2 )\n positive: Tensor of shape (missclassified_smaples_positive x 2)\n negative: Tensor of shape (missclassified_smaples_negative x 2)\n '''\n mismatch_indices = torch.where(y_hat != y)[0]\n anchors = hs[mismatch_indices] #(miscalssfied_samples x 2)\n positives = get_positives(hs,mismatch_indices,y) #(miscalssfied_samples x 2)\n negatives = get_negatives(hs,mismatch_indices,y)\n return anchors,positives, negatives\n\n\ndef get_positives(hs,misclassified_indicies,y):\n '''\n For each misclassfied sample we, randomly pick 1 positive anchor\n Args:\n hs: (Samples x 2) \n mismatch_indices: A tensor of shape [misclassified], containing row indices relatie to hs\n y: Tensor of shape (samples,), Containing GT label eg. [1,0,1,1,1,1] \n \n Returns:\n positive: Tensor of shape [misclassified x 2]\n '''\n positives_indices = []\n negative_indices = []\n for anchor_index in misclassified_indicies:\n anchor_class = y[anchor_index]\n\n possible_positives = torch.where(y == anchor_class)[0]\n\n positive_index = anchor_index\n while anchor_index == positive_index:\n positive_index = np.random.choice(possible_positives.detach().cpu().numpy())\n positives_indices += [positive_index]\n \n positives = hs[positives_indices]\n return positives\n\ndef get_negatives(hs,misclassified_indicies,y):\n '''\n For each misclassfied sample we, randomly pick 1 negative anchor\n Args:\n hs: (Samples x 2) \n mismatch_indices: A tensor of shape [misclassified], containing row indices relatie to hs\n y: Tensor of shape (samples,), Containing GT label eg. [1,0,1,1,1,1] \n \n Returns:\n positive: Tensor of shape [misclassified x 2]\n '''\n negative_indices = []\n for anchor_index in misclassified_indicies:\n anchor_class = y[anchor_index]\n\n possible_negatives = torch.where(y != anchor_class)[0]\n\n negative_index = np.random.choice(possible_negatives.detach().cpu().numpy()) #possible_negatives are empty\n\n negative_indices += [negative_index]\n \n negatives = hs[negative_indices]\n return negatives\n\ndef save_model(net,macro_fs,fs):\n if fs>=max(macro_fs):\n torch.save(net,'./net.pth')\n\n\n\n#TODO: Find the class wegihts \ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nepoch = 20\nmargin= 0.5\nlr = 4.33E-02\nbs = 1024\nnfolds = 5\nenc = LabelEncoder()\nx_train = X_train_tfidf\ny_train =enc.fit_transform(Y_train)\nx_test = X_test_tfidf\ny_test = enc.fit_transform(Y_test)\ndef train():\n class_weights = torch.tensor([1.75,1]).to(device)\n lfn = nn.CrossEntropyLoss(weight=class_weights).to(device)\n triplet_lfn = nn.TripletMarginLoss(margin=margin).to(device)\n \n loaders = get_dataloaders(x_train,y_train,x_test,y_test,bs=bs, nfold=nfolds) #dict of shape (nfold x 2),2 because it consist of train_loader and test_loader\n macro_f1m = []\n for fold in range(nfolds):\n fold_macro_f1m =[]\n print(f'Starting training for fold:{fold}')\n net = LogisticRegression(input_dim= x_train.shape[1],\n output_dim= 2).to(device)\n \n optim = Adam(net.parameters(), lr=lr)\n for e in range(epoch):\n train_loss = train_one_epoch(net,\n loaders[fold][0],\n optim,\n lfn,\n triplet_lfn,\n margin)\n eval_loss,f1m = eval_one_epoch(net,\n loaders[fold][1],\n optim,\n lfn,\n triplet_lfn,\n margin)\n macro_f1m += [f1m]\n fold_macro_f1m += [f1m]\n save_model(net,macro_f1m,f1m)\n if (e+1)%5==0:\n print(f'nfold:{fold},epoch:{e},train loss:{train_loss}, eval loss:{eval_loss}, fm1:{f1m}')\n print(f'Fold:{fold}, Average 
F1-Macro:{np.mean(fold_macro_f1m)}')\n print('=======================================')\n print(f'{nfolds}-Folds Average F1-Macro:{np.mean(macro_f1m)}')\n return np.mean(macro_f1m)\n\n",
"_____no_output_____"
],
[
"#Use Cyclical Learning Rates for Training Neural Networks to roughly estimate good lr \n#!pip install torch_lr_finder\nfrom torch_lr_finder import LRFinder\nloaders = get_dataloaders(x_train,y_train,x_test,y_test,bs=256, nfold=nfolds)\ntrain_loader = loaders[0][0]\nmodel = LogisticRegression(160,2)\ncriterion = nn.CrossEntropyLoss()\noptimizer = Adam(model.parameters(), lr=1e-7, weight_decay=1e-2)\nlr_finder = LRFinder(model, optimizer, criterion, device=\"cuda\")\nlr_finder.range_test(train_loader, end_lr=100, num_iter=100)\nlr_finder.plot() # to inspect the loss-learning rate graph\nlr_finder.reset() # to reset the model and optimizer to their initial state",
"Collecting torch_lr_finder\n Downloading https://files.pythonhosted.org/packages/ea/51/1a869067989a0fdaf18e49f0ee3aebfcb63470525245aac7dc390cfc676a/torch_lr_finder-0.2.1-py3-none-any.whl\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torch_lr_finder) (4.41.1)\nRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from torch_lr_finder) (3.2.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch_lr_finder) (1.19.5)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from torch_lr_finder) (20.9)\nRequirement already satisfied: torch>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from torch_lr_finder) (1.8.0+cu101)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->torch_lr_finder) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->torch_lr_finder) (1.3.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->torch_lr_finder) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->torch_lr_finder) (0.10.0)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=0.4.1->torch_lr_finder) (3.7.4.3)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->torch_lr_finder) (1.15.0)\nInstalling collected packages: torch-lr-finder\nSuccessfully installed torch-lr-finder-0.2.1\n"
],
[
"train()",
"Starting training for fold:0\nnfold:0,epoch:4,train loss:0.6024146110324536, eval loss:0.6200127760569255, fm1:0.41559849509023383\nnfold:0,epoch:9,train loss:0.5978681485531694, eval loss:0.6050375978151957, fm1:0.4030031157333551\nnfold:0,epoch:14,train loss:0.5998595609503278, eval loss:0.5948281049728393, fm1:0.407867282161874\nnfold:0,epoch:19,train loss:0.5981861807532229, eval loss:0.6027443011601766, fm1:0.42075218038799256\nFold:0, Average F1-Macro:0.4127324674774284\nStarting training for fold:1\nnfold:1,epoch:4,train loss:0.6025787218142364, eval loss:0.6016569177309672, fm1:0.43250310569308986\nnfold:1,epoch:9,train loss:0.5973691990820028, eval loss:0.6045343279838562, fm1:0.44679272166665146\nnfold:1,epoch:14,train loss:0.6005635180715787, eval loss:0.5813541769981384, fm1:0.4268086770363271\nnfold:1,epoch:19,train loss:0.6011431914264873, eval loss:0.5997809410095215, fm1:0.42668907904661124\nFold:1, Average F1-Macro:0.44108015002020895\nStarting training for fold:2\nnfold:2,epoch:4,train loss:0.5987946845717349, eval loss:0.5846388498942058, fm1:0.42370208601695203\nnfold:2,epoch:9,train loss:0.5992476344108582, eval loss:0.6136406898498535, fm1:0.43747891095105446\nnfold:2,epoch:14,train loss:0.5968704809576778, eval loss:0.6002291361490886, fm1:0.42699673979421504\nnfold:2,epoch:19,train loss:0.6000207652479915, eval loss:0.5900104999542236, fm1:0.4247215177374668\nFold:2, Average F1-Macro:0.42944124669108985\nStarting training for fold:3\nnfold:3,epoch:4,train loss:0.5997520897348049, eval loss:0.590477712949117, fm1:0.40392972882751793\nnfold:3,epoch:9,train loss:0.5979607580071788, eval loss:0.5996671398480733, fm1:0.4073626480036444\nnfold:3,epoch:14,train loss:0.5993322851294178, eval loss:0.5894471764564514, fm1:0.4100946046554814\nnfold:3,epoch:19,train loss:0.5980879502781367, eval loss:0.5922741095225016, fm1:0.41269239065377616\nFold:3, Average F1-Macro:0.40943418809767385\nStarting training for fold:4\nnfold:4,epoch:4,train loss:0.6021876820063187, eval loss:0.6027048508326213, fm1:0.41075418397788915\nnfold:4,epoch:9,train loss:0.5996838737342317, eval loss:0.6106006304423014, fm1:0.4247308204855665\nnfold:4,epoch:14,train loss:0.5994551697019803, eval loss:0.5860225121180217, fm1:0.4144711033184684\nnfold:4,epoch:19,train loss:0.6028875496427891, eval loss:0.596651287873586, fm1:0.4246737953911698\nFold:4, Average F1-Macro:0.41547834566387065\n=======================================\n5-Folds Average F1-Macro:0.42163327959005437\n"
]
],
[
[
"#test",
"_____no_output_____"
]
],
[
[
"#net = torch.load('./net.pth')\nnet.eval()\nx_test = torch.from_numpy(X_test_tfidf).to(device).float()\nprobs = net(x_test)\nprediction = probs.argmax(dim=1).detach().cpu().numpy()\n# #print the classification report to highlight the accuracy with f1-score, precision and recall\nprediction = ['unrelated' if p else 'related' for p in prediction ]\nprint(metrics.classification_report(prediction, Y_test))\nplot_confussion_matrix(prediction, Y_test)\nplot_roc_curve(prediction, Y_test)",
" precision recall f1-score support\n\n related 0.08 0.14 0.10 4143\n unrelated 0.81 0.69 0.75 21270\n\n accuracy 0.60 25413\n macro avg 0.44 0.42 0.42 25413\nweighted avg 0.69 0.60 0.64 25413\n\n"
],
[
"#test for get_positives\ntest_hs = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.8705, 0.1356],\n [0.9723, 0.1930],\n [0.7416, 0.4498]])\ntest_mi = torch.tensor([0,1,2])\ny = torch.tensor([0,0,1,1,1])\nout = get_positives(test_hs,test_mi, y)\nassert out.shape == (3,2)\n\n#test for get_negatives\ntest_hs = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.8705, 0.1356],\n [0.9723, 0.1930],\n [0.7416, 0.4498]])\ntest_mi = torch.tensor([0,1,2])\ny = torch.tensor([0,0,1,1,1])\nout = get_negatives(test_hs,test_mi, y)\nassert out.shape == (3,2)\n\n\n#test for generate_triplets\ntest_hs = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.8705, 0.1356],\n [0.9723, 0.1930],\n [0.7416, 0.4498]])\ny_hat = torch.tensor([1,1,1,1,1]) #\ny = torch.tensor([1,1,1,0,0])\na,p,n = generate_triplets(test_hs,y_hat,y)\nassert a.shape == (2,2)\nassert p.shape == (2,2)\nassert n.shape == (2,2)\n\n#test for mine_hard_triplets\na = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.7416, 0.4498]])\np = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.7416, 0.4498]])\n\nn = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.7416, 0.4498]])\nh_a , h_p ,h_n= mine_hard_triplets(a,p,n,0.5)\nassert h_a.shape == (3,2)\nassert h_p.shape == (3,2)\nassert h_n.shape == (3,2)\n\nx_train = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.7416, 0.4498]])\ny_train = [1,1,0]\nx_test = torch.tensor([[0.8799, 0.0234],\n [0.2341, 0.8839],\n [0.7416, 0.4498]])\ny_test = [1,1,0,0]\nloader = get_dataloaders(x_train,y_train,x_test,y_test,bs=1,nfold=2)\nassert len(loader) == 2",
"_____no_output_____"
],
[
"for k,(train_loader,test_loader) in loader.items():\n print(loaders)\n for x,y in train_loader:\n print(x.shape)\n print(y.shape)\n",
"<torch.utils.data.dataloader.DataLoader object at 0x7f2aaa3bd890>\ntorch.Size([1, 2])\ntorch.Size([1])\ntorch.Size([1, 2])\ntorch.Size([1])\ntorch.Size([1, 2])\ntorch.Size([1])\n<torch.utils.data.dataloader.DataLoader object at 0x7f2aaa3bd890>\ntorch.Size([1, 2])\ntorch.Size([1])\ntorch.Size([1, 2])\ntorch.Size([1])\ntorch.Size([1, 2])\ntorch.Size([1])\n"
],
[
"x ,y = loader[0]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d09931563d3d8439d2d5d5c4f6c422318be4c5b3 | 6,862 | ipynb | Jupyter Notebook | colab-example-notebooks/colab_github_demo.ipynb | tuanavu/deep-learning-tutorials | 84cfc618ab62120d5e927ba22cf0334b3a731e83 | [
"MIT"
] | 5 | 2019-07-29T02:43:24.000Z | 2021-08-24T02:06:51.000Z | colab-example-notebooks/colab_github_demo.ipynb | tuanavu/deep-learning-tutorials | 84cfc618ab62120d5e927ba22cf0334b3a731e83 | [
"MIT"
] | null | null | null | colab-example-notebooks/colab_github_demo.ipynb | tuanavu/deep-learning-tutorials | 84cfc618ab62120d5e927ba22cf0334b3a731e83 | [
"MIT"
] | 8 | 2019-05-11T17:31:40.000Z | 2021-08-24T08:11:28.000Z | 42.358025 | 377 | 0.603906 | [
[
[
"<a href=\"https://colab.research.google.com/github/tuanavu/deep-learning-tutorials/blob/development/colab-example-notebooks/colab_github_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Using Google Colab with GitHub\n\n",
"_____no_output_____"
],
[
"\n[Google Colaboratory](http://colab.research.google.com) is designed to integrate cleanly with GitHub, allowing both loading notebooks from github and saving notebooks to github.",
"_____no_output_____"
],
[
"## Loading Public Notebooks Directly from GitHub\n\nColab can load public github notebooks directly, with no required authorization step.\n\nFor example, consider the notebook at this address: https://github.com/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb.\n\nThe direct colab link to this notebook is: https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb.\n\nTo generate such links in one click, you can use the [Open in Colab](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo) Chrome extension.",
"_____no_output_____"
],
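The mapping described above is just a prefix substitution, so a small helper like the following sketch can build the Colab link for any public GitHub-hosted notebook (the function name here is only illustrative):

```python
# Illustrative helper: turn a github.com notebook URL into the equivalent Colab URL
def github_to_colab(url: str) -> str:
    prefix = "https://github.com/"
    if not url.startswith(prefix):
        raise ValueError("expected a github.com URL")
    return "https://colab.research.google.com/github/" + url[len(prefix):]

print(github_to_colab(
    "https://github.com/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb"
))
# -> https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb
```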
[
"## Browsing GitHub Repositories from Colab\n\nColab also supports special URLs that link directly to a GitHub browser for any user/organization, repository, or branch. For example:\n\n- http://colab.research.google.com/github will give you a general github browser, where you can search for any github organization or username.\n- http://colab.research.google.com/github/googlecolab/ will open the repository browser for the ``googlecolab`` organization. Replace ``googlecolab`` with any other github org or user to see their repositories.\n- http://colab.research.google.com/github/googlecolab/colabtools/ will let you browse the main branch of the ``colabtools`` repository within the ``googlecolab`` organization. Substitute any user/org and repository to see its contents.\n- http://colab.research.google.com/github/googlecolab/colabtools/blob/master will let you browse ``master`` branch of the ``colabtools`` repository within the ``googlecolab`` organization. (don't forget the ``blob`` here!) You can specify any valid branch for any valid repository.",
"_____no_output_____"
],
[
"## Loading Private Notebooks\n\nLoading a notebook from a private GitHub repository is possible, but requires an additional step to allow Colab to access your files.\nDo the following:\n\n1. Navigate to http://colab.research.google.com/github.\n2. Click the \"Include Private Repos\" checkbox.\n3. In the popup window, sign-in to your Github account and authorize Colab to read the private files.\n4. Your private repositories and notebooks will now be available via the github navigation pane.",
"_____no_output_____"
],
[
"## Saving Notebooks To GitHub or Drive\n\nAny time you open a GitHub hosted notebook in Colab, it opens a new editable view of the notebook. You can run and modify the notebook without worrying about overwriting the source.\n\nIf you would like to save your changes from within Colab, you can use the File menu to save the modified notebook either to Google Drive or back to GitHub. Choose **File→Save a copy in Drive** or **File→Save a copy to GitHub** and follow the resulting prompts. To save a Colab notebook to GitHub requires giving Colab permission to push the commit to your repository.",
"_____no_output_____"
],
[
"## Open In Colab Badge\n\nAnybody can open a copy of any github-hosted notebook within Colab. To make it easier to give people access to live views of GitHub-hosted notebooks,\ncolab provides a [shields.io](http://shields.io/)-style badge, which appears as follows:\n\n[](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)\n\nThe markdown for the above badge is the following:\n\n```markdown\n[](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)\n```\n\nThe HTML equivalent is:\n\n```HTML\n<a href=\"https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb\">\n <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n</a>\n```\n\nRemember to replace the notebook URL in this template with the notebook you want to link to.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0993306e150e99d93d6235c42e6125125f91c1e | 1,900 | ipynb | Jupyter Notebook | notebooks/analytical_bend/.ipynb_checkpoints/discretize-checkpoint.ipynb | yizaochen/smsl_na | b4f52b358d98de500a74e9ef2165ff4904dd9e85 | [
"MIT"
] | null | null | null | notebooks/analytical_bend/.ipynb_checkpoints/discretize-checkpoint.ipynb | yizaochen/smsl_na | b4f52b358d98de500a74e9ef2165ff4904dd9e85 | [
"MIT"
] | null | null | null | notebooks/analytical_bend/.ipynb_checkpoints/discretize-checkpoint.ipynb | yizaochen/smsl_na | b4f52b358d98de500a74e9ef2165ff4904dd9e85 | [
"MIT"
] | null | null | null | 19 | 48 | 0.475263 | [
[
[
"import numpy as np",
"_____no_output_____"
],
[
"discretize_array = np.zeros((1, 5, 3))",
"_____no_output_____"
],
[
"discretize_array",
"_____no_output_____"
],
[
"discretize_array[0,0]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d09935caa42029b7864a085ca766e16e801a6709 | 841 | ipynb | Jupyter Notebook | HelloGitHub.ipynb | JakubTomaszewski/dw-matrix | 00e3f437b38217259a41de4e98369c0872b32a83 | [
"MIT"
] | null | null | null | HelloGitHub.ipynb | JakubTomaszewski/dw-matrix | 00e3f437b38217259a41de4e98369c0872b32a83 | [
"MIT"
] | null | null | null | HelloGitHub.ipynb | JakubTomaszewski/dw-matrix | 00e3f437b38217259a41de4e98369c0872b32a83 | [
"MIT"
] | null | null | null | 841 | 841 | 0.689655 | [
[
[
"print('Hello github!')",
"Hello github!\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d099516f337d8e080de1055cd23cb28697c50f7d | 9,242 | ipynb | Jupyter Notebook | src/rightsize/rightsizereport.ipynb | wpbrown/azmeta-rightsize-iaas | ec1db8c7f2557836deb707e00c79d2275eec3d43 | [
"MIT"
] | 1 | 2020-06-26T19:30:35.000Z | 2020-06-26T19:30:35.000Z | src/rightsize/rightsizereport.ipynb | wpbrown/azmeta-rightsize-iaas | ec1db8c7f2557836deb707e00c79d2275eec3d43 | [
"MIT"
] | 1 | 2020-05-22T17:55:08.000Z | 2020-05-22T17:55:08.000Z | src/rightsize/rightsizereport.ipynb | wpbrown/azmeta-rightsize-iaas | ec1db8c7f2557836deb707e00c79d2275eec3d43 | [
"MIT"
] | null | null | null | 42.200913 | 242 | 0.616533 | [
[
[
"# Parameters",
"_____no_output_____"
],
[
"# Build the dataset\n\nfrom typing import Optional\nimport pandas as pd\nimport functools\n\n\ndef add_parent_level(df: pd.DataFrame, name: str) -> None:\n df.columns = pd.MultiIndex.from_tuples([(name, x) for x in df.columns])\n\n\ndef calculate_limit(row: pd.Series, attribute: str) -> Optional[float]:\n row_analysis = local_analysis.get(row.name)\n if row_analysis is None:\n return None\n vm_spec = compute_specs.virtual_machine_by_name(row_analysis.advisor_sku)\n return getattr(vm_spec.capabilities, attribute)\n\n\ndef add_limit(df: pd.DataFrame, name: str) -> None:\n df['new_limit'] = df.apply(functools.partial(calculate_limit, attribute=name), axis=1)\n\ndrop_utilization = ['samples', 'percentile_50th', 'percentile_80th']\ndrop_disk_utilization = ['cached', 'counter_name']\n\nres_data = resources.assign(resource_name=resources.resource_id.str.extract(r'([^/]+)$'))\nres_data = res_data.drop(columns=['subscription_id', 'storage_profile'])\nres_data = res_data.set_index('resource_id')\nres_data_col = res_data.columns.to_list()\nres_data_col = res_data_col[1:-1] + res_data_col[-1:] + res_data_col[0:1]\nres_data = res_data[res_data_col]\nadd_parent_level(res_data, 'Resource')\n\nif local_analysis:\n local_data = pd.DataFrame([(k, v.advisor_sku, v.advisor_sku_invalid_reason, v.annual_savings_no_ri) for k,v in local_analysis.items()], columns=['resource_id', 'recommendation', 'invalidation', 'annual_savings']).convert_dtypes()\n local_data = local_data.set_index('resource_id')\n add_parent_level(local_data, 'AzMeta')\n\nif advisor_analysis:\n advisor_data = pd.DataFrame([(k, v.advisor_sku, v.advisor_sku_invalid_reason) for k,v in advisor_analysis.items()], dtype='string', columns=['resource_id', 'recommendation', 'invalidation'])\n advisor_data = advisor_data.set_index('resource_id')\n add_parent_level(advisor_data, 'Advisor')\n\ncpu_data = cpu_utilization.drop(columns=drop_utilization).set_index('resource_id')\nadd_limit(cpu_data, 'd_total_acus')\nadd_parent_level(cpu_data, 'CPU Used (ACUs)')\n\nmem_data = mem_utilization.drop(columns=drop_utilization).set_index('resource_id')\nmem_data = mem_data / 1024.0\nadd_limit(mem_data, 'memory_gb')\nadd_parent_level(mem_data, 'Memory Used (GiB)')\n\ndisk_tput_cached = disk_utilization[(disk_utilization.cached == True) & (disk_utilization.counter_name == 'Disk Bytes/sec')]\ndisk_tput_cached = disk_tput_cached.drop(columns=drop_utilization + drop_disk_utilization).set_index('resource_id')\nadd_limit(disk_tput_cached, 'combined_temp_disk_and_cached_read_bytes_per_second')\ndisk_tput_cached = disk_tput_cached / (1024.0 ** 2)\nadd_parent_level(disk_tput_cached, 'Cached Disk Througput (MiB/sec)')\n\ndisk_trans_cached = disk_utilization[(disk_utilization.cached == True) & (disk_utilization.counter_name == 'Disk Transfers/sec')]\ndisk_trans_cached = disk_trans_cached.drop(columns=drop_utilization + drop_disk_utilization).set_index('resource_id')\nadd_limit(disk_trans_cached, 'combined_temp_disk_and_cached_iops')\nadd_parent_level(disk_trans_cached, 'Cached Disk Operations (IOPS)')\n\ndisk_tput_uncached = disk_utilization[(disk_utilization.cached == False) & (disk_utilization.counter_name == 'Disk Bytes/sec')]\ndisk_tput_uncached = disk_tput_uncached.drop(columns=drop_utilization + drop_disk_utilization).set_index('resource_id')\nadd_limit(disk_tput_uncached, 'uncached_disk_bytes_per_second')\ndisk_tput_uncached = disk_tput_uncached / (1024.0 ** 2)\nadd_parent_level(disk_tput_uncached, 'Uncached Disk Througput (MiB/sec)')\n\ndisk_trans_uncached = 
disk_utilization[(disk_utilization.cached == False) & (disk_utilization.counter_name == 'Disk Transfers/sec')]\ndisk_trans_uncached = disk_trans_uncached.drop(columns=drop_utilization + drop_disk_utilization).set_index('resource_id')\nadd_limit(disk_trans_uncached, 'uncached_disk_iops')\nadd_parent_level(disk_trans_uncached, 'Uncached Disk Operations (IOPS)')\n\nall_joins = [cpu_data, mem_data, disk_tput_cached, disk_trans_cached, disk_tput_uncached, disk_trans_uncached]\nif local_analysis:\n all_joins.insert(0, local_data)\nif advisor_analysis:\n all_joins.append(advisor_data)\nfull_data = res_data.join(all_joins)\nfull_data.sort_index(inplace=True)\nfull_data.to_excel('final_out_test.xlsx')",
"_____no_output_____"
]
],
[
[
"# AzMeta Resize Recommendations",
"_____no_output_____"
]
],
[
[
"import datetime\n\nprint(\"Report Date:\", datetime.datetime.now().isoformat())\nprint(\"Total Annual Savings:\", \"${:,.2f}\".format(local_data[('AzMeta', 'annual_savings')].sum()), \"(Non-RI Pricing, SQL and Windows AHUB Licensing)\")",
"_____no_output_____"
],
[
"# Present the dataset\nimport matplotlib as plt\nimport itertools\nfrom matplotlib import colors\n\n\ndef background_limit_coloring(row):\n cmap=\"coolwarm\"\n text_color_threshold=0.408\n limit_index = (row.index.get_level_values(0)[0], 'new_limit')\n smin = 0\n smax = row[limit_index] \n if pd.isna(smax):\n return [''] * len(row)\n \n rng = smax - smin\n norm = colors.Normalize(smin, smax)\n rgbas = plt.cm.get_cmap(cmap)(norm(row.to_numpy(dtype=float)))\n\n def relative_luminance(rgba):\n r, g, b = (\n x / 12.92 if x <= 0.03928 else ((x + 0.055) / 1.055 ** 2.4)\n for x in rgba[:3]\n )\n return 0.2126 * r + 0.7152 * g + 0.0722 * b\n\n def css(rgba):\n dark = relative_luminance(rgba) < text_color_threshold\n text_color = \"#f1f1f1\" if dark else \"#000000\"\n return f\"background-color: {colors.rgb2hex(rgba)};color: {text_color};\"\n \n return [css(rgba) for rgba in rgbas[0:-1]] + ['']\n\n\ndef build_header_style(col_groups):\n start = 0\n styles = []\n palette = ['#f6f6f6', '#eae9e9', '#d4d7dd', '#f6f6f6', '#eae9e9', '#d4d7dd', '#f6f6f6', '#eae9e9', '#d4d7dd']\n for i,group in enumerate(itertools.groupby(col_groups, lambda c:c[0])):\n styles.append({'selector': f'.col_heading.level0.col{start}', 'props': [('background-color', palette[i])]})\n group_len = len(tuple(group[1]))\n for j in range(group_len):\n styles.append({'selector': f'.col_heading.level1.col{start + j}', 'props': [('background-color', palette[i])]})\n start += group_len\n return styles\n\n\ndata_group_names = [x for x in full_data.columns.get_level_values(0).unique() if x not in ('Resource', 'AzMeta', 'Advisor')]\nnum_mask = [x[0] in data_group_names for x in full_data.columns.to_flat_index()]\nstyler = full_data.style.hide_index() \\\n .set_properties(**{'font-weight': 'bold'}, subset=[('Resource', 'resource_name')]) \\\n .format('{:.1f}', subset=num_mask, na_rep='N/A') \\\n .format('${:.2f}', subset=[('AzMeta', 'annual_savings')], na_rep='N/A') \\\n .set_table_styles(build_header_style(full_data.columns))\nfor data_group in data_group_names:\n mask = [x == data_group for x in full_data.columns.get_level_values(0)]\n styler = styler.apply(background_limit_coloring, axis=1, subset=mask)\nstyler",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d09959b5c3d8aa89eb0800dd1e7137cd3c2b2310 | 212,545 | ipynb | Jupyter Notebook | docs/source/examples/Propagation using Cowell's formulation.ipynb | nikita-astronaut/poliastro | 7f675d76da413618f3bcc25317de750d74ea667e | [
"MIT"
] | 1 | 2019-02-05T06:19:59.000Z | 2019-02-05T06:19:59.000Z | docs/source/examples/Propagation using Cowell's formulation.ipynb | AlexS12/poliastro | 70f20d20ea8746e581876b27b51504dd05d3d8fc | [
"MIT"
] | null | null | null | docs/source/examples/Propagation using Cowell's formulation.ipynb | AlexS12/poliastro | 70f20d20ea8746e581876b27b51504dd05d3d8fc | [
"MIT"
] | null | null | null | 61.041068 | 53,374 | 0.671053 | [
[
[
"# Cowell's formulation\n\nFor cases where we only study the gravitational forces, solving the Kepler's equation is enough to propagate the orbit forward in time. However, when we want to take perturbations that deviate from Keplerian forces into account, we need a more complex method to solve our initial value problem: one of them is **Cowell's formulation**.\n\nIn this formulation we write the two body differential equation separating the Keplerian and the perturbation accelerations:\n\n$$\\ddot{\\mathbb{r}} = -\\frac{\\mu}{|\\mathbb{r}|^3} \\mathbb{r} + \\mathbb{a}_d$$",
"_____no_output_____"
],
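To make the equation above concrete, here is a minimal NumPy sketch of that right-hand side for a state vector `[x, y, z, vx, vy, vz]` in km and km/s. This is not poliastro's actual `cowell` implementation, just the differential equation written out, and the Earth gravitational parameter is assumed to be the standard 398600.4418 km³/s².

```python
import numpy as np

k_earth = 398600.4418  # km^3 / s^2, assumed standard Earth GM

def cowell_rhs(t, u, ad):
    """du/dt for u = [x, y, z, vx, vy, vz]: Keplerian acceleration plus the perturbation ad(t, u)."""
    r, v = u[:3], u[3:]
    r_norm = np.linalg.norm(r)
    a_kepler = -k_earth * r / r_norm**3
    return np.concatenate((v, a_kepler + ad(t, u)))

# Pure two-body motion: the perturbation is identically zero
u0 = np.array([6878.137, 0.0, 0.0, 0.0, 7.6127, 0.0])
print(cowell_rhs(0.0, u0, lambda t, u: np.zeros(3)))
```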
[
"<div class=\"alert alert-info\">For an in-depth exploration of this topic, still to be integrated in poliastro, check out https://github.com/Juanlu001/pfc-uc3m</div>",
"_____no_output_____"
],
[
"<div class=\"alert alert-info\">An earlier version of this notebook allowed for more flexibility and interactivity, but was considerably more complex. Future versions of poliastro and plotly might bring back part of that functionality, depending on user feedback. You can still download the older version <a href=\"https://github.com/poliastro/poliastro/blob/0.8.x/docs/source/examples/Propagation%20using%20Cowell's%20formulation.ipynb\">here</a>.</div>",
"_____no_output_____"
],
[
"## First example\n\nLet's setup a very simple example with constant acceleration to visualize the effects on the orbit.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom astropy import units as u\n\nfrom matplotlib import pyplot as plt\nplt.ion()\n\nfrom poliastro.bodies import Earth\nfrom poliastro.twobody import Orbit\nfrom poliastro.examples import iss\n\nfrom poliastro.twobody.propagation import cowell\nfrom poliastro.plotting import OrbitPlotter3D\nfrom poliastro.util import norm\n\nfrom plotly.offline import init_notebook_mode\ninit_notebook_mode(connected=True)",
"_____no_output_____"
]
],
[
[
"To provide an acceleration depending on an extra parameter, we can use **closures** like this one:",
"_____no_output_____"
]
],
[
[
"accel = 2e-5",
"_____no_output_____"
],
[
"def constant_accel_factory(accel):\n def constant_accel(t0, u, k):\n v = u[3:]\n norm_v = (v[0]**2 + v[1]**2 + v[2]**2)**.5\n return accel * v / norm_v\n\n return constant_accel",
"_____no_output_____"
],
[
"def custom_propagator(orbit, tof, rtol, accel=accel):\n # Workaround for https://github.com/poliastro/poliastro/issues/328\n if tof == 0:\n return orbit.r.to(u.km).value, orbit.v.to(u.km / u.s).value\n else:\n # Use our custom perturbation acceleration\n return cowell(orbit, tof, rtol, ad=constant_accel_factory(accel))",
"_____no_output_____"
],
[
"times = np.linspace(0, 10 * iss.period, 500)\ntimes",
"_____no_output_____"
],
[
"times, positions = iss.sample(times, method=custom_propagator)",
"_____no_output_____"
]
],
[
[
"And we plot the results:",
"_____no_output_____"
]
],
[
[
"frame = OrbitPlotter3D()\n\nframe.set_attractor(Earth)\nframe.plot_trajectory(positions, label=\"ISS\")\n\nframe.show()",
"_____no_output_____"
]
],
[
[
"## Error checking",
"_____no_output_____"
]
],
[
[
"def state_to_vector(ss):\n r, v = ss.rv()\n x, y, z = r.to(u.km).value\n vx, vy, vz = v.to(u.km / u.s).value\n return np.array([x, y, z, vx, vy, vz])",
"_____no_output_____"
],
[
"k = Earth.k.to(u.km**3 / u.s**2).value",
"_____no_output_____"
],
[
"rtol = 1e-13\nfull_periods = 2",
"_____no_output_____"
],
[
"u0 = state_to_vector(iss)\ntf = ((2 * full_periods + 1) * iss.period / 2).to(u.s).value\n\nu0, tf",
"_____no_output_____"
],
[
"iss_f_kep = iss.propagate(tf * u.s, rtol=1e-18)",
"_____no_output_____"
],
[
"r, v = cowell(iss, tf, rtol=rtol)\n\niss_f_num = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, iss.epoch + tf * u.s)",
"_____no_output_____"
],
[
"iss_f_num.r, iss_f_kep.r",
"_____no_output_____"
],
[
"assert np.allclose(iss_f_num.r, iss_f_kep.r, rtol=rtol, atol=1e-08 * u.km)\nassert np.allclose(iss_f_num.v, iss_f_kep.v, rtol=rtol, atol=1e-08 * u.km / u.s)",
"_____no_output_____"
],
[
"assert np.allclose(iss_f_num.a, iss_f_kep.a, rtol=rtol, atol=1e-08 * u.km)\nassert np.allclose(iss_f_num.ecc, iss_f_kep.ecc, rtol=rtol)\nassert np.allclose(iss_f_num.inc, iss_f_kep.inc, rtol=rtol, atol=1e-08 * u.rad)\nassert np.allclose(iss_f_num.raan, iss_f_kep.raan, rtol=rtol, atol=1e-08 * u.rad)\nassert np.allclose(iss_f_num.argp, iss_f_kep.argp, rtol=rtol, atol=1e-08 * u.rad)\nassert np.allclose(iss_f_num.nu, iss_f_kep.nu, rtol=rtol, atol=1e-08 * u.rad)",
"_____no_output_____"
]
],
[
[
"## Numerical validation\n\nAccording to [Edelbaum, 1961], a coplanar, semimajor axis change with tangent thrust is defined by:\n\n$$\\frac{\\operatorname{d}\\!a}{a_0} = 2 \\frac{F}{m V_0}\\operatorname{d}\\!t, \\qquad \\frac{\\Delta{V}}{V_0} = \\frac{1}{2} \\frac{\\Delta{a}}{a_0}$$\n\nSo let's create a new circular orbit and perform the necessary checks, assuming constant mass and thrust (i.e. constant acceleration):",
"_____no_output_____"
]
],
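Before running the propagation, the Edelbaum relation can be evaluated directly for the setup used below (a 500 km circular orbit, constant tangential acceleration of 1e-7 km/s², 20 periods). The Earth constants in this sketch are assumed standard values, so the numbers are only a rough cross-check of the order of magnitude.

```python
import numpy as np

mu = 398600.4418           # km^3 / s^2, assumed Earth GM
r0 = 6378.137 + 500.0      # km, 500 km circular orbit over an assumed equatorial radius
a_d = 1e-7                 # km / s^2, constant tangential acceleration

v0 = np.sqrt(mu / r0)                        # circular speed V0
tof = 20 * 2 * np.pi * np.sqrt(r0**3 / mu)   # 20 orbital periods, in seconds

da_a0 = 2 * (a_d / v0) * tof                 # da/a0 = 2 F/(m V0) dt integrated over tof
dv_v0 = 0.5 * da_a0                          # Delta V / V0 = (1/2) Delta a / a0
print(da_a0, dv_v0)
```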
[
[
"ss = Orbit.circular(Earth, 500 * u.km)\ntof = 20 * ss.period\n\nad = constant_accel_factory(1e-7)\n\nr, v = cowell(ss, tof.to(u.s).value, ad=ad)\n\nss_final = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, ss.epoch + tof)",
"_____no_output_____"
],
[
"da_a0 = (ss_final.a - ss.a) / ss.a\nda_a0",
"_____no_output_____"
],
[
"dv_v0 = abs(norm(ss_final.v) - norm(ss.v)) / norm(ss.v)\n2 * dv_v0",
"_____no_output_____"
],
[
"np.allclose(da_a0, 2 * dv_v0, rtol=1e-2)",
"_____no_output_____"
]
],
[
[
"This means **we successfully validated the model against an extremely simple orbit transfer with approximate analytical solution**. Notice that the final eccentricity, as originally noticed by Edelbaum, is nonzero:",
"_____no_output_____"
]
],
[
[
"ss_final.ecc",
"_____no_output_____"
]
],
[
[
"## References\n\n* [Edelbaum, 1961] \"Propulsion requirements for controllable satellites\"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d099619f93964d15e9fab84157fea845f2aa4dc9 | 54,279 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Comp_Scatt_NeuralNetwork_Classification_noposition-Copy1-checkpoint.ipynb | chiarabadiali/comp_scatt_ML | 9a87dbcdff34e63e81439483e529b9404e5ff125 | [
"MIT"
] | null | null | null | 74.151639 | 9,732 | 0.702887 | [
[
[
"import pandas as pd \nimport numpy as np\nimport math\nimport keras\nimport tensorflow as tf\nimport progressbar\nimport os\nfrom os import listdir",
"_____no_output_____"
]
],
[
[
"## Print Dependencies\n\n\n\nDependences are fundamental to record the computational environment.",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n\n# python, ipython, packages, and machine characteristics\n%watermark -v -m -p pandas,keras,numpy,math,tensorflow,matplotlib,h5py\n\n# date\nprint (\" \")\n%watermark -u -n -t -z",
"Python implementation: CPython\nPython version : 3.8.8\nIPython version : 7.22.0\n\npandas : 1.2.3\nkeras : 2.4.3\nnumpy : 1.19.5\nmath : unknown\ntensorflow: 2.4.1\nmatplotlib: 3.4.0\nh5py : 2.10.0\n\nCompiler : Clang 12.0.0 (clang-1200.0.32.29)\nOS : Darwin\nRelease : 19.6.0\nMachine : x86_64\nProcessor : i386\nCPU cores : 8\nArchitecture: 64bit\n\n \nLast updated: Wed Apr 14 2021 11:56:48CEST\n\n"
]
],
[
[
"## Load of the data",
"_____no_output_____"
]
],
[
[
"from process import loaddata\nclass_data0 = loaddata(\"../data/{}.csv\".format('low_ene'))",
"_____no_output_____"
],
[
"class_data0 = class_data0[class_data0[:,0] > 0.001]",
"_____no_output_____"
],
[
"class_data0.shape",
"_____no_output_____"
],
[
"y0 = class_data0[:,0]\nA0 = class_data0\nA0[:,9] = A0[:,13]\nx0 = class_data0[:,1:10]",
"_____no_output_____"
]
],
[
[
"## Check to see if the data are balanced now",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot\ny0 = np.array(y0)\nbins = np.linspace(0, 0.55, 50)\nn, edges, _ = pyplot.hist(y0, bins, color = 'indianred', alpha=0.5, label='Osiris')\n#pyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN')\npyplot.legend(loc='upper right')\npyplot.xlabel('Probability')\npyplot.yscale('log')\npyplot.title('Trained on ($p_e$, $p_{\\gamma}$, $\\omega_e$, $\\omega_{\\gamma}$, n)')\npyplot.show()",
"_____no_output_____"
],
[
"def balance_data(class_data, nbins):\n\n from matplotlib import pyplot as plt\n y = class_data[:,0]\n n, edges, _ = plt.hist(y, nbins, color = 'indianred', alpha=0.5, label='Osiris')\n n_max = n.max()\n data = []\n\n for class_ in class_data:\n for i in range(len(n)):\n edges_min = edges[i]\n edges_max = edges[i+1]\n if class_[0] > edges_min and class_[0] < edges_max:\n for j in range(int(n_max/n[i])):\n data.append(class_)\n break\n\n return np.array(data)",
"_____no_output_____"
],
[
"class_data = balance_data(class_data0, 100)",
"_____no_output_____"
],
[
"np.random.shuffle(class_data)\ny = class_data[:,0]\nA = class_data\nprint(A[0])\nA[:,9] = A[:,13]\nprint(A[0])\nx = class_data[:,1:10]\nprint(x[0])\nprint(x.shape)",
"[ 1.90684774e-01 -1.46123555e-01 1.77115860e-01 -7.74108079e-02\n -1.68575786e-01 1.85246188e-01 1.83115385e-01 8.62400000e+07\n 8.59200000e+07 9.37569433e-06 4.04303363e-01 -4.13583152e-01\n 2.14926385e-01 9.37569433e-06]\n[ 1.90684774e-01 -1.46123555e-01 1.77115860e-01 -7.74108079e-02\n -1.68575786e-01 1.85246188e-01 1.83115385e-01 8.62400000e+07\n 8.59200000e+07 9.37569433e-06 4.04303363e-01 -4.13583152e-01\n 2.14926385e-01 9.37569433e-06]\n[-1.46123555e-01 1.77115860e-01 -7.74108079e-02 -1.68575786e-01\n 1.85246188e-01 1.83115385e-01 8.62400000e+07 8.59200000e+07\n 9.37569433e-06]\n(38154174, 9)\n"
],
[
"from matplotlib import pyplot\ny0 = np.array(y0)\nbins = np.linspace(0, 0.55, 100)\npyplot.hist(y, bins, color = 'indianred', alpha=0.5, label='Osiris')\n#pyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN')\npyplot.legend(loc='upper right')\npyplot.xlabel('Probability')\npyplot.yscale('log')\npyplot.title('Trained on ($p_e$, $p_{\\gamma}$, $\\omega_e$, $\\omega_{\\gamma}$, n)')\npyplot.show()",
"_____no_output_____"
],
[
"train_split = 0.75\ntrain_limit = int(len(y)*train_split)\nprint(\"Training sample: {0} \\nValuation sample: {1}\".format(train_limit, len(y)-train_limit))",
"Training sample: 28615630 \nValuation sample: 9538544\n"
],
[
"x_train = x[:train_limit]\nx_val = x[train_limit:]\n\ny_train = y[:train_limit]\ny_val = y[train_limit:]",
"_____no_output_____"
]
],
[
[
"## Model Build",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers.core import Dense\nimport keras.backend as K\nfrom keras import optimizers\nfrom keras import models\nfrom keras import layers\nfrom keras.layers.normalization import BatchNormalization",
"_____no_output_____"
],
[
"def build_model() :\n model = models.Sequential()\n model.add (BatchNormalization(input_dim = 9))\n model.add (layers.Dense (12 , activation = \"sigmoid\"))\n model.add (layers.Dense (9 , activation = \"relu\"))\n model.add (layers.Dense (1 , activation = \"sigmoid\"))\n model.compile(optimizer = \"adam\" , loss = 'mae' , metrics = [\"mape\"])\n return model",
"_____no_output_____"
],
[
"model = build_model ()\nhistory = model.fit ( x_train, y_train, epochs = 1000, batch_size = 10000 , validation_data = (x_val, y_val) )\nmodel.save(\"../models/classifier/{}_noposition2.h5\".format('probability'))",
"Epoch 1/1000\n1668/1668 [==============================] - 14s 8ms/step - loss: 0.1329 - mape: 540.4114 - val_loss: 0.0794 - val_mape: 153.3372\nEpoch 2/1000\n1668/1668 [==============================] - 7s 4ms/step - loss: 0.0753 - mape: 119.3088 - val_loss: 0.0605 - val_mape: 78.1843\nEpoch 3/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0610 - mape: 73.5968 - val_loss: 0.0490 - val_mape: 63.1923\nEpoch 4/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0518 - mape: 61.9396 - val_loss: 0.0461 - val_mape: 59.9148\nEpoch 5/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0479 - mape: 58.6098 - val_loss: 0.0400 - val_mape: 54.7601\nEpoch 6/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0449 - mape: 56.7082 - val_loss: 0.0378 - val_mape: 52.3916\nEpoch 7/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0431 - mape: 54.5080 - val_loss: 0.0378 - val_mape: 52.2395\nEpoch 8/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0419 - mape: 52.4003 - val_loss: 0.0360 - val_mape: 48.8127\nEpoch 9/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0412 - mape: 50.8756 - val_loss: 0.0345 - val_mape: 47.0829\nEpoch 10/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0399 - mape: 49.7018 - val_loss: 0.0339 - val_mape: 47.9915\nEpoch 11/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0389 - mape: 49.9091 - val_loss: 0.0334 - val_mape: 47.6032\nEpoch 12/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0375 - mape: 48.7020 - val_loss: 0.0316 - val_mape: 45.1653\nEpoch 13/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0363 - mape: 46.9142 - val_loss: 0.0312 - val_mape: 42.9836\nEpoch 14/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0353 - mape: 45.0681 - val_loss: 0.0291 - val_mape: 41.10030.0354 - mape\nEpoch 15/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0333 - mape: 42.5617 - val_loss: 0.0281 - val_mape: 40.3332\nEpoch 16/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0326 - mape: 41.5307 - val_loss: 0.0268 - val_mape: 38.6455\nEpoch 17/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0318 - mape: 41.1388 - val_loss: 0.0271 - val_mape: 39.1501\nEpoch 18/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0311 - mape: 40.5582 - val_loss: 0.0257 - val_mape: 38.1473\nEpoch 19/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0303 - mape: 40.4007 - val_loss: 0.0272 - val_mape: 40.0439\nEpoch 20/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0302 - mape: 40.5599 - val_loss: 0.0247 - val_mape: 38.7125\nEpoch 21/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0297 - mape: 40.4191 - val_loss: 0.0245 - val_mape: 38.5639\nEpoch 22/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0291 - mape: 39.8814 - val_loss: 0.0240 - val_mape: 38.2065\nEpoch 23/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0293 - mape: 39.8657 - val_loss: 0.0258 - val_mape: 39.3967\nEpoch 24/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0289 - mape: 39.4856 - val_loss: 0.0234 - val_mape: 36.9538\nEpoch 25/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0283 - mape: 38.9562 - 
val_loss: 0.0239 - val_mape: 37.8122\nEpoch 26/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0285 - mape: 39.2230 - val_loss: 0.0236 - val_mape: 36.6988\nEpoch 27/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0281 - mape: 38.8895 - val_loss: 0.0250 - val_mape: 38.0366\nEpoch 28/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0278 - mape: 38.8051 - val_loss: 0.0252 - val_mape: 36.2744\nEpoch 29/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0276 - mape: 38.3461 - val_loss: 0.0231 - val_mape: 37.6435\nEpoch 30/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0272 - mape: 38.5847 - val_loss: 0.0230 - val_mape: 37.4969\nEpoch 31/1000\n1668/1668 [==============================] - 7s 4ms/step - loss: 0.0276 - mape: 38.6638 - val_loss: 0.0224 - val_mape: 36.8575\nEpoch 32/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0272 - mape: 38.4316 - val_loss: 0.0231 - val_mape: 36.8778\nEpoch 33/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0271 - mape: 38.5521 - val_loss: 0.0244 - val_mape: 37.4241\nEpoch 34/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0269 - mape: 38.2031 - val_loss: 0.0236 - val_mape: 37.1017\nEpoch 35/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0269 - mape: 38.2873 - val_loss: 0.0215 - val_mape: 36.4097\nEpoch 36/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0269 - mape: 38.6758 - val_loss: 0.0225 - val_mape: 36.4239TA\nEpoch 37/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0266 - mape: 38.6122 - val_loss: 0.0224 - val_mape: 35.9039\nEpoch 38/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0265 - mape: 38.6147 - val_loss: 0.0216 - val_mape: 35.6083\nEpoch 39/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0265 - mape: 38.1206 - val_loss: 0.0219 - val_mape: 36.2718\nEpoch 40/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0263 - mape: 38.0552 - val_loss: 0.0233 - val_mape: 36.4668\nEpoch 41/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0263 - mape: 37.9702 - val_loss: 0.0218 - val_mape: 35.2546\nEpoch 42/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0267 - mape: 38.4897 - val_loss: 0.0224 - val_mape: 36.1998\nEpoch 43/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0263 - mape: 37.9351 - val_loss: 0.0228 - val_mape: 35.24763 - mape: 37\nEpoch 44/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0263 - mape: 37.8394 - val_loss: 0.0221 - val_mape: 35.6225\nEpoch 45/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0259 - mape: 37.3755 - val_loss: 0.0222 - val_mape: 36.2098\nEpoch 46/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0261 - mape: 37.6104 - val_loss: 0.0224 - val_mape: 35.6982\nEpoch 47/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0259 - mape: 37.3128 - val_loss: 0.0208 - val_mape: 34.6768\nEpoch 48/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0258 - mape: 36.9036 - val_loss: 0.0208 - val_mape: 34.2785\nEpoch 49/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0259 - mape: 36.9692 - val_loss: 0.0213 - val_mape: 34.3795\nEpoch 50/1000\n1668/1668 [==============================] - 6s 
4ms/step - loss: 0.0255 - mape: 36.6320 - val_loss: 0.0214 - val_mape: 34.2269\nEpoch 51/1000\n1668/1668 [==============================] - 7s 4ms/step - loss: 0.0260 - mape: 36.8380 - val_loss: 0.0218 - val_mape: 34.6632\nEpoch 52/1000\n1668/1668 [==============================] - 7s 4ms/step - loss: 0.0257 - mape: 36.3892 - val_loss: 0.0212 - val_mape: 34.3034\nEpoch 53/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0258 - mape: 36.6680 - val_loss: 0.0209 - val_mape: 33.9893\nEpoch 54/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0258 - mape: 36.3683 - val_loss: 0.0203 - val_mape: 33.2743\nEpoch 55/1000\n1668/1668 [==============================] - 6s 3ms/step - loss: 0.0258 - mape: 36.1640 - val_loss: 0.0219 - val_mape: 34.2993\nEpoch 56/1000\n1668/1668 [==============================] - 6s 4ms/step - loss: 0.0260 - mape: 36.3291 - val_loss: 0.0211 - val_mape: 33.0683\nEpoch 57/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0258 - mape: 35.9737 - val_loss: 0.0211 - val_mape: 32.9233\nEpoch 58/1000\n1668/1668 [==============================] - 5s 3ms/step - loss: 0.0257 - mape: 35.7136 - val_loss: 0.0209 - val_mape: 33.2303\n"
],
[
"model.summary()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\naccuracy = history.history['mape']\nval_accuracy = history.history['val_mape']\n\n\nepochs = range(1, len(loss) + 1)\nfig, ax1 = plt.subplots()\n\nl1 = ax1.plot(epochs, loss, 'bo', label='Training loss')\nvl1 = ax1.plot(epochs, val_loss, 'b', label='Validation loss')\nax1.set_title('Training and validation loss')\nax1.set_xlabel('Epochs')\nax1.set_ylabel('Loss (mae))')\n\nax2 = ax1.twinx()\nac2= ax2.plot(epochs, accuracy, 'o', c=\"red\", label='Training acc')\nvac2= ax2.plot(epochs, val_accuracy, 'r', label='Validation acc')\nax2.set_ylabel('mape')\n\nlns = l1 + vl1 + ac2 + vac2\nlabs = [l.get_label() for l in lns]\nax2.legend(lns, labs, loc=\"center right\")\nfig.tight_layout()\n#fig.savefig(\"acc+loss_drop.pdf\")\nfig.show()",
"_____no_output_____"
]
],
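[
[
"As a quick check (not part of the original training run), we can read off the epoch with the lowest validation loss from the same `history` object before moving on to the probability density comparison.",
"_____no_output_____"
]
],
[
[
"# Quick sanity check: report the epoch with the lowest validation loss.\n# Assumes the `history` object returned by model.fit above is still in scope.\nimport numpy as np\n\nval_loss_hist = np.array(history.history['val_loss'])\nbest_epoch = int(np.argmin(val_loss_hist)) + 1  # epochs are 1-based in the plot above\nprint('Best epoch by validation loss: {} (val_loss = {:.4f})'.format(best_epoch, val_loss_hist.min()))",
"_____no_output_____"
]
],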
[
[
"## Probability density distribution",
"_____no_output_____"
]
],
[
[
"y0 = class_data0[:,0]\nA0 = class_data0\nA0[:,9] = A0[:,13]\nx0 = class_data0[:,1:10]",
"_____no_output_____"
],
[
"y_pred = model.predict(x0)",
"_____no_output_____"
],
[
"y_pred",
"_____no_output_____"
],
[
"from matplotlib import pyplot\ny = np.array(y)\nbins = np.linspace(0, 0.8, 100)\npyplot.hist(y0, bins, color = 'indianred', alpha=0.5, label='Osiris')\npyplot.hist(y_pred, bins, color = 'mediumslateblue', alpha=0.5, label='NN')\npyplot.legend(loc='upper right')\npyplot.xlabel('Probability')\npyplot.yscale('log')\npyplot.title('Trained on ($p_e$, $p_{\\gamma}$, $\\omega_e$, $\\omega_{\\gamma}$, n)')\npyplot.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d09965051cf54340e8afe324ffa5b7ede8db0ad5 | 251,014 | ipynb | Jupyter Notebook | Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb | JD-14/grill-spice | 1705e250d5b52a6bfbd22fa2b2a4993c540eeeb2 | [
"MIT"
] | null | null | null | Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb | JD-14/grill-spice | 1705e250d5b52a6bfbd22fa2b2a4993c540eeeb2 | [
"MIT"
] | 1 | 2020-11-08T05:13:40.000Z | 2020-11-08T05:13:40.000Z | Ngspice/Recreating_Terrys_Notebook_with_spice_Part_1.ipynb | JD-14/grill-spice | 1705e250d5b52a6bfbd22fa2b2a4993c540eeeb2 | [
"MIT"
] | 4 | 2020-08-13T08:51:22.000Z | 2020-09-08T21:19:18.000Z | 427.621806 | 132,936 | 0.934406 | [
[
[
"## Recreation of Terry's Notebook with NgSpice\n\nIn this experiment we are going to recreate Terry's notebook with NgSpice simulation backend.",
"_____no_output_____"
],
[
"## Step 1: Set up Python3 and NgSpice",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# check if ngspice can be found from python\nfrom ctypes.util import find_library\nngspice_lib_filename = find_library('libngspice')\nprint(ngspice_lib_filename) ## if the result is none, make sure that libngspice is installed\n\nimport PySpice.Logging.Logging as Logging\nlogger = Logging.setup_logging()\nfrom PySpice.Spice.NgSpice.Shared import NgSpiceShared\nngspice = NgSpiceShared.new_instance()\nprint(ngspice.exec_command('version -f'))\n\nimport nengo\nimport numpy as np",
"/usr/local/lib/libngspice.dylib\n******\n** ngspice-32 : Circuit level simulation program\n** The U. C. Berkeley CAD Group\n** Copyright 1985-1994, Regents of the University of California.\n** Copyright 2001-2020, The ngspice team.\n** Please get your ngspice manual from http://ngspice.sourceforge.net/docs.html\n** Please file your bug-reports at http://ngspice.sourceforge.net/bugrep.html\n**\n** CIDER 1.b1 (CODECS simulator) included\n** XSPICE extensions included\n** Relevant compilation options (refer to user's manual):\n** X11 interface not compiled into ngspice\n**\n******\n"
]
],
[
[
"## Step 2: Define a single neuron",
"_____no_output_____"
],
[
"Let's start with the subcircuit of a single neuron. We are going to use voltage amplifier leaky-integrate and fire neurons discussed in Section 3.3 of Indiveri et al.(May 2011).",
"_____no_output_____"
]
],
[
[
"neuron_model = '''\n.subckt my_neuron Vmem out cvar=100p vsupply=1.8 vtau=0.4 vthr=0.2 vb=1\n\nV1 Vdd 0 {vsupply}\nV6 Vtau 0 {vtau}\nV2 Vthr 0 {vthr}\nV3 Vb1 0 {vb}\nC1 Vmem 0 {cvar}\n\n\nM5 N001 N001 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4\nM6 N002 N001 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4\nM8 N001 Vmem N004 N004 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nM9 N002 Vthr N004 N004 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nM10 N004 Vb1 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nMreset Vmem out 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nM7 N003 N002 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nM18 out N003 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\nM19 N003 N002 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4\nM20 out N003 Vdd Vdd pmos l=0.5 w=1.2 ad=1.2 as=1.2 pd=4.4 ps=4.4\nMleak Vmem Vtau 0 0 nmos l=0.5 w=0.6 ad=0.6 as=0.6 pd=3.2 ps=3.2\n\n\n.ends my_neuron\n'''",
"_____no_output_____"
]
],
[
[
"Create the neuron's netlist",
"_____no_output_____"
]
],
[
[
"def create_neuron_netlist(N):\n # N is the number of neurons\n netlist = ''\n for i in range(N):\n netlist += 'x'+str(i)+' Vmem'+str(i)+' out'+str(i)+' my_neuron vsupply={vsource} cvar=150p vthr=0.25 \\n'\n netlist += 'Rload'+str(i)+' out'+str(i)+ ' 0 100k\\n'\n return netlist\n\nnetlist_neurons = create_neuron_netlist(1)",
"_____no_output_____"
]
],
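[
[
"To make the generated netlist concrete, here is a quick look at what `create_neuron_netlist` produces for two neurons (purely illustrative; the element values come from the defaults chosen above).",
"_____no_output_____"
]
],
[
[
"# Illustrative only: inspect the netlist text produced for two neurons.\nprint(create_neuron_netlist(2))",
"_____no_output_____"
]
],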
[
[
"## Step 3: Generate the input",
"_____no_output_____"
],
[
"Now, let's generate some input and see what it does. We are going to use the WhiteSignal that Terry used; however,we are going to shink the signal in amplitude (since this would be a current signal in the circuit) and also increase the frequency of the signal.",
"_____no_output_____"
]
],
[
[
"stim = nengo.processes.WhiteSignal(period=10, high=5, seed=1).run(1, dt=0.001)\ninput_signal = [[i*1e-6, J[0]*10e-6] for i, J in enumerate(stim)] #scaling",
"_____no_output_____"
]
],
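[
[
"Before wiring this signal into the circuit, it is worth plotting the rescaled samples to confirm the amplitude (tens of microamps) and the microsecond time base. This is only a sanity check and was not part of Terry's original notebook.",
"_____no_output_____"
]
],
[
[
"# Sanity check of the rescaled input: the time column is in seconds, the current column in amps.\ninput_arr = np.array(input_signal)\nplt.plot(input_arr[:, 0] * 1e6, input_arr[:, 1] * 1e6)\nplt.xlabel('time (us)')\nplt.ylabel('input current (uA)')\nplt.show()",
"_____no_output_____"
]
],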
[
[
"Lets convert this signal to a current source.",
"_____no_output_____"
]
],
[
[
"def pwl_conv(signal):\n # signal should be a list of lists where wach sublist has this form [time_value, current_value]\n pwl_string = ''\n for i in signal:\n pwl_string += str(i[0]) + ' ' + str(i[1]) + ' '\n return pwl_string",
"_____no_output_____"
]
],
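[
[
"To see what the PWL specification looks like, here is the start of the string produced for the first few samples (illustrative only).",
"_____no_output_____"
]
],
[
[
"# Illustrative: show the PWL string for the first three samples of the input signal.\nprint(pwl_conv(input_signal[:3]))",
"_____no_output_____"
]
],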
[
[
"## Step 4: Generate remaining parts of the Spice Netlist ",
"_____no_output_____"
]
],
[
[
"netlist_input = 'Iin0 Vdd Vmem0 PWL(' + pwl_conv(input_signal) +')\\n' # Converting the input to a current source",
"_____no_output_____"
],
[
"## other setup parameters\nargs= {}\nargs['simulation_time'] = '1m'\nargs['simulation_step'] = '1u'\nargs['simulation_lib'] = '180nm.lib'\n\n\nnetlist_top= '''*Sample SPICE file\n.include {simulation_lib}\n.option scale=1u\n.OPTIONS ABSTOL=1N VNTOL=1M.\n.options savecurrents\n.tran {simulation_step} {simulation_time} UIC\n'''.format(**args)\n\nnetlist_bottom = '''\n.end'''",
"_____no_output_____"
],
[
"## define the sources\nnetlist_source = '''\n.param vsource = 1.8\nVdd Vdd 0 {vsource}\n'''",
"_____no_output_____"
],
[
"netlist = netlist_top + netlist_source + neuron_model + netlist_input+ netlist_neurons+ netlist_bottom",
"_____no_output_____"
]
],
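[
[
"As a sanity check, we can print the top of the assembled netlist to confirm that the simulation options, the supply, and the neuron subcircuit are stitched together in the expected order (the PWL current source appears further down and is very long, so only the first lines are shown).",
"_____no_output_____"
]
],
[
[
"# Sanity check: print the first few lines of the assembled netlist.\nfor line in netlist.splitlines()[:12]:\n    print(line)",
"_____no_output_____"
]
],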
[
[
"## Step 5: Simulate the netlist",
"_____no_output_____"
]
],
[
[
"def simulate(circuit):\n ngspice.load_circuit(circuit)\n ngspice.run()\n print('Plots:', ngspice.plot_names)\n plot = ngspice.plot(simulation=None, plot_name=ngspice.last_plot)\n return plot",
"_____no_output_____"
],
[
"out=simulate(netlist)",
"2020-09-08 17:06:14,902 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin0: no DC value, transient time 0 value used\nPlots: ['tran1', 'const']\n"
],
[
"plt.plot(out['time']._data,out['@rload0[i]']._data, label='output_current')\nplt.plot(out['time']._data,out['@iin0[current]']._data, label = 'input_current')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"Great! We have a system that does some sort of nonlinearity. Now let's create a feedforward system with a bunch of neurons and see if the system can be used for approximating a function. ",
"_____no_output_____"
],
[
"## Step 6: Function approximation with a feedforward network",
"_____no_output_____"
]
],
[
[
"N = 50 # how many neurons there are\nE = np.random.normal(size=(N, 1))\nB = np.random.normal(size=(N))*0.1\nnetlist_neurons = create_neuron_netlist(N)",
"_____no_output_____"
]
],
[
[
"*Now let's feed that same stimulus to all the neurons and see how they behave.*",
"_____no_output_____"
]
],
[
[
"def create_neuron_current_netlist(E,B,stim,N):\n # take the A matrix and the number of neurons\n # refactor \n netlist_input='\\n'\n signal = np.zeros((len(stim), N))\n for i, J in enumerate(stim):\n Js = np.dot(E, J)\n for k, JJ in enumerate(Js):\n signal[i][k] = JJ+B[k]\n \n for k in range(N):\n input_signal = [[i*1e-6, J*10e-6] for i, J in enumerate(signal[:,k])]\n netlist_input += 'Iin'+str(k)+' Vdd Vmem'+str(k)+' PWL(' + pwl_conv(input_signal) +')\\n\\n'\n \n return netlist_input\n\nnetlist_inputs = create_neuron_current_netlist(E,B,stim,N)",
"_____no_output_____"
],
[
"netlist = netlist_top + netlist_source + neuron_model + netlist_inputs+ netlist_neurons+ netlist_bottom\nout=simulate(netlist)",
"2020-09-08 17:06:15,916 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin49: no DC value, transient time 0 value used\n2020-09-08 17:06:15,917 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin48: no DC value, transient time 0 value used\n2020-09-08 17:06:15,919 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin47: no DC value, transient time 0 value used\n2020-09-08 17:06:15,920 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin46: no DC value, transient time 0 value used\n2020-09-08 17:06:15,921 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin45: no DC value, transient time 0 value used\n2020-09-08 17:06:15,924 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin44: no DC value, transient time 0 value used\n2020-09-08 17:06:15,926 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin43: no DC value, transient time 0 value used\n2020-09-08 17:06:15,928 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin42: no DC value, transient time 0 value used\n2020-09-08 17:06:15,929 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin41: no DC value, transient time 0 value used\n2020-09-08 17:06:15,937 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin40: no DC value, transient time 0 value used\n2020-09-08 17:06:15,938 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin39: no DC value, transient time 0 value used\n2020-09-08 17:06:15,943 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin38: no DC value, transient time 0 value used\n2020-09-08 17:06:15,949 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin37: no DC value, transient time 0 value used\n2020-09-08 17:06:15,951 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin36: no DC value, transient time 0 value used\n2020-09-08 17:06:15,952 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin35: no DC value, transient time 0 value used\n2020-09-08 17:06:15,954 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin34: no DC value, transient time 0 value used\n2020-09-08 17:06:15,959 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin33: no DC value, transient time 0 value used\n2020-09-08 17:06:15,964 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin32: no DC value, transient time 0 value used\n2020-09-08 17:06:15,969 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin31: no DC value, transient time 0 value used\n2020-09-08 17:06:15,970 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin30: no DC value, transient time 0 value used\n2020-09-08 17:06:15,976 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin29: no DC value, transient time 0 value used\n2020-09-08 17:06:15,979 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin28: no DC value, transient time 0 value used\n2020-09-08 17:06:15,983 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin27: no DC value, transient time 0 value used\n2020-09-08 17:06:15,986 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin26: no DC value, transient time 0 value used\n2020-09-08 17:06:15,987 - 
PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin25: no DC value, transient time 0 value used\n2020-09-08 17:06:15,995 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin24: no DC value, transient time 0 value used\n2020-09-08 17:06:15,997 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin23: no DC value, transient time 0 value used\n2020-09-08 17:06:15,998 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin22: no DC value, transient time 0 value used\n2020-09-08 17:06:15,999 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin21: no DC value, transient time 0 value used\n2020-09-08 17:06:16,001 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin20: no DC value, transient time 0 value used\n2020-09-08 17:06:16,002 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin19: no DC value, transient time 0 value used\n2020-09-08 17:06:16,004 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin18: no DC value, transient time 0 value used\n2020-09-08 17:06:16,008 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin17: no DC value, transient time 0 value used\n2020-09-08 17:06:16,014 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin16: no DC value, transient time 0 value used\n2020-09-08 17:06:16,016 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin15: no DC value, transient time 0 value used\n2020-09-08 17:06:16,017 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin14: no DC value, transient time 0 value used\n2020-09-08 17:06:16,020 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin13: no DC value, transient time 0 value used\n2020-09-08 17:06:16,023 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin12: no DC value, transient time 0 value used\n2020-09-08 17:06:16,028 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin11: no DC value, transient time 0 value used\n2020-09-08 17:06:16,031 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin10: no DC value, transient time 0 value used\n2020-09-08 17:06:16,032 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin9: no DC value, transient time 0 value used\n2020-09-08 17:06:16,034 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin8: no DC value, transient time 0 value used\n2020-09-08 17:06:16,036 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin7: no DC value, transient time 0 value used\n2020-09-08 17:06:16,037 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin6: no DC value, transient time 0 value used\n2020-09-08 17:06:16,042 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin5: no DC value, transient time 0 value used\n2020-09-08 17:06:16,045 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin4: no DC value, transient time 0 value used\n2020-09-08 17:06:16,046 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin3: no DC value, transient time 0 value used\n2020-09-08 17:06:16,048 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin2: no DC value, transient time 0 value used\n2020-09-08 17:06:16,049 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - 
Shared.WARNING - Warning: iin1: no DC value, transient time 0 value used\n2020-09-08 17:06:16,051 - PySpice.Spice.NgSpice.Shared.NgSpiceShared - Shared.WARNING - Warning: iin0: no DC value, transient time 0 value used\nPlots: ['tran2', 'tran1', 'const']\n"
]
],
[
[
"So it seems we have some output from the ensemble. Lets convert this output to get the A matrix",
"_____no_output_____"
]
],
[
[
"def extract_A_matrix(result, N, stim):\n t = np.linspace(min(out['time']._data), max(out['time']._data), len(stim))\n temp_time = out['time']._data\n inpterpolated_result = np.zeros((len(stim), N))\n A = np.zeros((len(stim), N))\n for j in range(N):\n temp_str = '@rload'+str(j)+'[i]'\n temp_out = result[temp_str]._data\n inpterpolated_result[:,j] = np.interp(t, temp_time, temp_out)\n A[:,j] = inpterpolated_result[:,j] > max(inpterpolated_result[:,j])/2\n return A\nA_from_spice = extract_A_matrix(out, N, stim)\nplt.figure(figsize=(12,6))\nplt.imshow(A_from_spice.T, aspect='auto', cmap='gray_r')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Cool! This is similar to the A matrix we got from Terry's notebook. We can also calculate the D matrix from this output and approximate the y(t)=x(t) function.",
"_____no_output_____"
]
],
[
[
"target = stim\nD_from_spice, info = nengo.solvers.LstsqL2()(A_from_spice, target)\n\nplt.plot(A_from_spice.dot(D_from_spice), label='output')\nplt.plot(target, lw=3, label='target')\nplt.legend()\nplt.show()\nprint('RMSE:', info['rmses'])",
"_____no_output_____"
]
],
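[
[
"For reference, `nengo.solvers.LstsqL2` is essentially regularized least squares. A minimal hand-rolled version of the same idea is sketched below; the regularization constant `lam` is hand-picked and the scaling is a simplification of Nengo's, so the decoders will differ slightly from `D_from_spice`.",
"_____no_output_____"
]
],
[
[
"# Minimal ridge-regression sketch of decoder solving (simplified; not Nengo's exact scaling).\nlam = 0.1  # hand-picked regularization strength (assumption)\nG = A_from_spice.T.dot(A_from_spice) + lam * np.eye(A_from_spice.shape[1])\nD_manual = np.linalg.solve(G, A_from_spice.T.dot(target))\n\nplt.plot(A_from_spice.dot(D_manual), label='output (manual ridge)')\nplt.plot(target, lw=3, label='target')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],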
[
[
"*With spiking neuron models, it's very common to have a low-pass filter (i.e. a synapse) after the spike. Let's see what our output looks like with a low-pass filter applied.*",
"_____no_output_____"
]
],
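[
[
"Before using Nengo's `Lowpass` synapse in the next cell, here is a minimal sketch of what such a first-order low-pass filter does in discrete time. It uses a simple Euler update with an assumed time step of `dt = 0.001` (matching the stimulus) rather than Nengo's exact discretization, and is included only to make the filtering step concrete.",
"_____no_output_____"
]
],
[
[
"# Minimal discrete first-order low-pass sketch (assumes dt = 0.001 s, tau = 0.01 s).\n# y[t] = y[t-1] + (dt / tau) * (x[t] - y[t-1])\ndef simple_lowpass(x, tau=0.01, dt=0.001):\n    y = np.zeros_like(x, dtype=float)\n    for t in range(1, len(x)):\n        y[t] = y[t - 1] + (dt / tau) * (x[t] - y[t - 1])\n    return y\n\n# Roughly comparable to the nengo-filtered output computed in the next cell.\nfiltered_manual = simple_lowpass(A_from_spice.dot(D_from_spice))",
"_____no_output_____"
]
],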
[
[
"filt = nengo.synapses.Lowpass(0.01) #need to implement synapses in circuit\nplt.plot(filt.filt(A_from_spice.dot(D_from_spice)), label='output (filtered)')\nplt.plot(target, lw=3, label='target')\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d099662b65a33b5bf2be8d0862669c4564d6c4da | 291,535 | ipynb | Jupyter Notebook | JupyterWorkflow2.ipynb | Andycappfi/JupyterWorkflow2 | 7f910ce08ca3692268c5cabc1ffa79e99b791a1f | [
"MIT"
] | null | null | null | JupyterWorkflow2.ipynb | Andycappfi/JupyterWorkflow2 | 7f910ce08ca3692268c5cabc1ffa79e99b791a1f | [
"MIT"
] | null | null | null | JupyterWorkflow2.ipynb | Andycappfi/JupyterWorkflow2 | 7f910ce08ca3692268c5cabc1ffa79e99b791a1f | [
"MIT"
] | null | null | null | 718.066502 | 115,910 | 0.933812 | [
[
[
"# Python in Jupyter\n### Fremont bicycle data demo\n#### Antero Kangas 3.6.2018",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn')",
"_____no_output_____"
],
[
"from jupyterworkflow2.data import get_fremont_data",
"_____no_output_____"
],
[
"data = get_fremont_data()\ndata.head()",
"_____no_output_____"
],
[
"data.resample(\"W\").sum().plot()",
"_____no_output_____"
],
[
"ax = data.resample(\"D\").sum().rolling(365).sum().plot()\nax.set_ylim(0, None)",
"_____no_output_____"
],
[
"data.groupby(data.index.time).mean().plot()",
"_____no_output_____"
],
[
"pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)\npivoted.iloc[:5, :7]",
"_____no_output_____"
],
[
"pivoted.plot(legend=False, alpha=0.01)",
"_____no_output_____"
],
[
"get_fremont_data??",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0996a95864f562346a6579bc2ea137dba0d8ba8 | 220,358 | ipynb | Jupyter Notebook | NeuralNet-Classification-REST/NeuralNet.ipynb | rphillip/Case-Studies | aeb107875ca1ca69d554693b674523a2f331d632 | [
"MIT"
] | null | null | null | NeuralNet-Classification-REST/NeuralNet.ipynb | rphillip/Case-Studies | aeb107875ca1ca69d554693b674523a2f331d632 | [
"MIT"
] | null | null | null | NeuralNet-Classification-REST/NeuralNet.ipynb | rphillip/Case-Studies | aeb107875ca1ca69d554693b674523a2f331d632 | [
"MIT"
] | null | null | null | 57.504697 | 21,340 | 0.521315 | [
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.utils.data import Dataset, DataLoader\n\nfrom sklearn.preprocessing import StandardScaler \nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, classification_report",
"_____no_output_____"
],
[
"df = pd.read_excel(\"default of credit card clients.xls\", header=1, index_col=0)\ndf",
"_____no_output_____"
],
[
"mindf = df.min(skipna = False)\nmindf",
"_____no_output_____"
],
[
"maxdf = df.max(skipna = False)\nmaxdf",
"_____no_output_____"
],
[
"sns.countplot(x = 'default payment next month', data=df)",
"_____no_output_____"
],
[
"df.shape\ndf.iloc[:,:23]",
"_____no_output_____"
],
[
"from imblearn.combine import SMOTEENN\nsmote_enn = SMOTEENN(random_state=0)\nX_resampled, y_resampled = smote_enn.fit_resample(df.iloc[:,:23], df['default payment next month'])",
"_____no_output_____"
],
[
"sns.barplot(x=[0,1],y=np.bincount(y_resampled))",
"_____no_output_____"
],
[
"X_resampled, y_resampled = smote_enn.fit_resample(df.iloc[:,:23], df['default payment next month'])",
"_____no_output_____"
],
[
"X_train1, X_test1, y_train1, y_test1 = train_test_split(X_resampled, y_resampled, test_size=0.33, random_state=43)",
"_____no_output_____"
],
[
"from imblearn.over_sampling import SMOTENC\nsm = SMOTENC(random_state=42, categorical_features=[1,2,3,5,6,7,8,9,10])\nX_res, y_res = sm.fit_resample(df.iloc[:,:23], df['default payment next month'])",
"_____no_output_____"
],
[
"sns.barplot(x=[0,1],y=np.bincount(y_res))",
"_____no_output_____"
]
],
[
[
"# Scale everything",
"_____no_output_____"
]
],
[
[
"scaler = StandardScaler()\nX_scale = scaler.fit_transform(X_res)\n",
"_____no_output_____"
],
[
"X_scale_torch = torch.FloatTensor(X_scale)\ny_scale_torch = torch.FloatTensor(y_res)\ny_scale_torch",
"_____no_output_____"
],
[
"from skorch import NeuralNetBinaryClassifier\nfrom classes import MyModule\nclass toTensor(BaseEstimator, TransformerMixin):\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return torch.FloatTensor(X)\nclass MyModule(nn.Module):\n def __init__(self, num_units=128, dropoutrate = 0.5):\n super(MyModule, self).__init__()\n self.dropoutrate = dropoutrate\n self.layer1 = nn.Linear(23, num_units)\n self.nonlin = nn.ReLU()\n self.dropout1 = nn.Dropout(self.dropoutrate)\n self.dropout2 = nn.Dropout(self.dropoutrate)\n self.layer2 = nn.Linear(num_units, num_units)\n self.output = nn.Linear(num_units,1)\n self.batchnorm1 = nn.BatchNorm1d(128)\n self.batchnorm2 = nn.BatchNorm1d(128)\n\n def forward(self, X, **kwargs):\n X = self.nonlin(self.layer1(X))\n X = self.batchnorm1(X)\n X = self.dropout1(X)\n X = self.nonlin(self.layer2(X))\n X = self.batchnorm2(X)\n X = self.dropout2(X)\n X = self.output(X)\n return X\n\nmodel = NeuralNetBinaryClassifier(\n MyModule(dropoutrate = 0.2),\n max_epochs=40,\n lr=0.01,\n batch_size=128,\n # Shuffle training data on each epoch\n iterator_train__shuffle=True,\n)\n",
"_____no_output_____"
],
[
"model.fit(X_scale_torch, y_scale_torch)",
"Re-initializing module.\nRe-initializing criterion.\nRe-initializing optimizer.\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.5481\u001b[0m \u001b[32m0.6897\u001b[0m \u001b[35m0.5846\u001b[0m 1.6928\n 2 0.5493 0.6884 0.5853 1.5468\n 3 \u001b[36m0.5474\u001b[0m 0.6857 0.5868 1.5793\n 4 0.5481 \u001b[32m0.6899\u001b[0m \u001b[35m0.5827\u001b[0m 1.8644\n 5 0.5481 \u001b[32m0.6930\u001b[0m \u001b[35m0.5826\u001b[0m 1.8235\n 6 0.5475 0.6849 0.5842 1.8484\n 7 \u001b[36m0.5448\u001b[0m 0.6930 \u001b[35m0.5825\u001b[0m 1.7220\n 8 0.5470 0.6921 0.5852 1.7885\n 9 \u001b[36m0.5430\u001b[0m 0.6868 0.5868 1.6701\n 10 0.5454 0.6913 0.5839 1.7286\n 11 0.5441 0.6882 0.5863 1.7842\n 12 0.5450 0.6854 0.5829 1.8867\n 13 0.5448 0.6895 0.5841 1.7885\n 14 0.5441 \u001b[32m0.6940\u001b[0m 0.5830 1.7386\n 15 \u001b[36m0.5429\u001b[0m 0.6892 0.5869 1.5763\n 16 0.5431 0.6915 0.5865 2.2282\n 17 \u001b[36m0.5428\u001b[0m 0.6876 0.5879 2.1400\n 18 \u001b[36m0.5422\u001b[0m 0.6885 0.5847 2.0586\n 19 0.5437 0.6834 0.5859 2.2294\n 20 \u001b[36m0.5418\u001b[0m 0.6886 0.5852 1.7133\n 21 0.5419 0.6898 0.5830 2.1115\n 22 0.5436 0.6910 0.5828 1.9653\n 23 \u001b[36m0.5411\u001b[0m 0.6895 0.5866 1.9435\n 24 \u001b[36m0.5409\u001b[0m 0.6872 0.5861 1.8671\n 25 \u001b[36m0.5398\u001b[0m 0.6866 0.5864 1.7620\n 26 0.5402 0.6872 0.5867 1.9456\n 27 \u001b[36m0.5390\u001b[0m 0.6899 0.5843 1.7428\n 28 0.5392 0.6874 0.5856 1.5178\n 29 0.5398 0.6903 0.5866 1.6236\n 30 0.5403 0.6860 0.5879 1.5265\n 31 \u001b[36m0.5389\u001b[0m 0.6929 0.5840 1.5700\n 32 0.5390 0.6876 0.5862 1.5404\n 33 0.5399 0.6861 0.5883 1.4063\n 34 0.5390 \u001b[32m0.6975\u001b[0m \u001b[35m0.5770\u001b[0m 1.4633\n 35 0.5402 0.6899 0.5867 1.5340\n 36 \u001b[36m0.5376\u001b[0m 0.6816 0.5931 1.3643\n 37 \u001b[36m0.5375\u001b[0m 0.6957 0.5815 1.5661\n 38 \u001b[36m0.5365\u001b[0m 0.6853 0.5851 1.5214\n 39 0.5365 0.6817 0.5869 1.4918\n 40 \u001b[36m0.5360\u001b[0m 0.6850 0.5900 1.4744\n"
],
[
"val_loss = []\ntrain_loss = []\nepochs = range(1,41)\nfor i in range(40):\n val_loss.append(model.history[i]['valid_loss'])\n train_loss.append(model.history[i]['train_loss'])\ndfloss = (pd.DataFrame({'epoch': epochs, 'val_loss': val_loss, 'train_loss': train_loss}, \n columns=['epoch', 'val_loss', 'train_loss']).set_index('epoch'))\nsns.lineplot(data=dfloss)\n ",
"_____no_output_____"
],
[
"from skorch.helper import SliceDataset\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import cross_validate\n\ntrain_slice = SliceDataset(X_scale_torch)\ny_slice = SliceDataset(y_scale_torch)\nscores = cross_validate(model, X_scale_torch, y_scale_torch, scoring='accuracy', cv=4)\n",
" epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.7422\u001b[0m \u001b[32m0.5981\u001b[0m \u001b[35m0.6681\u001b[0m 0.8009\n 2 \u001b[36m0.7018\u001b[0m \u001b[32m0.6262\u001b[0m \u001b[35m0.6462\u001b[0m 0.8059\n 3 \u001b[36m0.6801\u001b[0m \u001b[32m0.6437\u001b[0m \u001b[35m0.6344\u001b[0m 0.7908\n 4 \u001b[36m0.6680\u001b[0m 0.6437 \u001b[35m0.6290\u001b[0m 0.7710\n 5 \u001b[36m0.6595\u001b[0m \u001b[32m0.6555\u001b[0m \u001b[35m0.6214\u001b[0m 0.7588\n 6 \u001b[36m0.6514\u001b[0m 0.6536 \u001b[35m0.6189\u001b[0m 0.7778\n 7 \u001b[36m0.6447\u001b[0m \u001b[32m0.6563\u001b[0m \u001b[35m0.6153\u001b[0m 0.7794\n 8 \u001b[36m0.6411\u001b[0m \u001b[32m0.6585\u001b[0m \u001b[35m0.6121\u001b[0m 0.7884\n 9 \u001b[36m0.6363\u001b[0m \u001b[32m0.6602\u001b[0m \u001b[35m0.6110\u001b[0m 0.7841\n 10 \u001b[36m0.6346\u001b[0m \u001b[32m0.6656\u001b[0m \u001b[35m0.6090\u001b[0m 0.7597\n 11 \u001b[36m0.6308\u001b[0m 0.6628 \u001b[35m0.6084\u001b[0m 0.7602\n 12 \u001b[36m0.6289\u001b[0m 0.6626 0.6086 0.7605\n 13 \u001b[36m0.6263\u001b[0m 0.6646 \u001b[35m0.6060\u001b[0m 0.7634\n 14 \u001b[36m0.6229\u001b[0m 0.6643 0.6061 0.7471\n 15 \u001b[36m0.6227\u001b[0m 0.6622 0.6068 0.7732\n 16 0.6227 \u001b[32m0.6665\u001b[0m \u001b[35m0.6045\u001b[0m 0.7986\n 17 \u001b[36m0.6201\u001b[0m 0.6625 0.6066 0.7884\n 18 \u001b[36m0.6191\u001b[0m 0.6602 0.6078 0.7631\n 19 \u001b[36m0.6184\u001b[0m 0.6655 \u001b[35m0.6019\u001b[0m 0.8011\n 20 \u001b[36m0.6175\u001b[0m 0.6665 \u001b[35m0.6017\u001b[0m 0.8340\n 21 \u001b[36m0.6159\u001b[0m \u001b[32m0.6705\u001b[0m 0.6017 0.8640\n 22 \u001b[36m0.6156\u001b[0m 0.6645 \u001b[35m0.6012\u001b[0m 0.8956\n 23 \u001b[36m0.6142\u001b[0m 0.6626 0.6047 0.8541\n 24 \u001b[36m0.6127\u001b[0m 0.6692 \u001b[35m0.5995\u001b[0m 0.7387\n 25 \u001b[36m0.6112\u001b[0m 0.6662 0.6013 0.8709\n 26 \u001b[36m0.6085\u001b[0m 0.6676 0.6005 0.9085\n 27 0.6109 0.6652 \u001b[35m0.5993\u001b[0m 0.8738\n 28 0.6095 0.6672 \u001b[35m0.5987\u001b[0m 0.8432\n 29 \u001b[36m0.6079\u001b[0m 0.6685 0.6003 0.9241\n 30 \u001b[36m0.6072\u001b[0m 0.6673 \u001b[35m0.5978\u001b[0m 0.9492\n 31 0.6095 0.6693 0.6028 0.9233\n 32 0.6075 0.6685 \u001b[35m0.5968\u001b[0m 0.8054\n 33 \u001b[36m0.6057\u001b[0m 0.6670 0.5970 0.8271\n 34 0.6080 0.6693 \u001b[35m0.5962\u001b[0m 0.8135\n 35 \u001b[36m0.6038\u001b[0m 0.6665 0.5994 0.8080\n 36 0.6046 \u001b[32m0.6745\u001b[0m \u001b[35m0.5954\u001b[0m 0.7959\n 37 0.6042 \u001b[32m0.6748\u001b[0m 0.5995 0.7966\n 38 0.6046 0.6736 0.5970 0.7870\n 39 \u001b[36m0.6024\u001b[0m 0.6648 0.5960 0.8368\n 40 0.6035 0.6736 \u001b[35m0.5949\u001b[0m 0.7449\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.7240\u001b[0m \u001b[32m0.6098\u001b[0m \u001b[35m0.6616\u001b[0m 0.7022\n 2 \u001b[36m0.6868\u001b[0m \u001b[32m0.6231\u001b[0m \u001b[35m0.6447\u001b[0m 0.7312\n 3 \u001b[36m0.6668\u001b[0m \u001b[32m0.6391\u001b[0m \u001b[35m0.6362\u001b[0m 0.7035\n 4 \u001b[36m0.6536\u001b[0m \u001b[32m0.6482\u001b[0m \u001b[35m0.6298\u001b[0m 0.7796\n 5 \u001b[36m0.6464\u001b[0m \u001b[32m0.6561\u001b[0m \u001b[35m0.6272\u001b[0m 0.8602\n 6 \u001b[36m0.6427\u001b[0m \u001b[32m0.6585\u001b[0m \u001b[35m0.6228\u001b[0m 0.9531\n 7 \u001b[36m0.6377\u001b[0m \u001b[32m0.6623\u001b[0m \u001b[35m0.6211\u001b[0m 0.9218\n 8 \u001b[36m0.6353\u001b[0m \u001b[32m0.6642\u001b[0m \u001b[35m0.6185\u001b[0m 0.8628\n 9 \u001b[36m0.6335\u001b[0m 0.6642 \u001b[35m0.6181\u001b[0m 0.8745\n 10 
\u001b[36m0.6279\u001b[0m 0.6632 \u001b[35m0.6171\u001b[0m 0.9107\n 11 \u001b[36m0.6237\u001b[0m \u001b[32m0.6743\u001b[0m \u001b[35m0.6151\u001b[0m 0.8516\n 12 \u001b[36m0.6236\u001b[0m 0.6742 \u001b[35m0.6145\u001b[0m 0.8030\n 13 \u001b[36m0.6219\u001b[0m 0.6632 0.6151 0.8129\n 14 \u001b[36m0.6184\u001b[0m \u001b[32m0.6805\u001b[0m \u001b[35m0.6130\u001b[0m 0.7993\n 15 0.6192 0.6752 \u001b[35m0.6121\u001b[0m 0.8758\n 16 \u001b[36m0.6160\u001b[0m 0.6738 0.6130 0.8165\n 17 \u001b[36m0.6145\u001b[0m \u001b[32m0.6807\u001b[0m \u001b[35m0.6107\u001b[0m 0.8668\n 18 \u001b[36m0.6142\u001b[0m 0.6797 0.6111 0.9158\n 19 0.6157 0.6780 0.6109 0.8874\n 20 0.6147 \u001b[32m0.6817\u001b[0m \u001b[35m0.6103\u001b[0m 0.7932\n 21 \u001b[36m0.6137\u001b[0m \u001b[32m0.6829\u001b[0m \u001b[35m0.6099\u001b[0m 1.1629\n 22 \u001b[36m0.6095\u001b[0m \u001b[32m0.6840\u001b[0m 0.6111 0.8394\n 23 \u001b[36m0.6081\u001b[0m 0.6779 0.6104 0.7875\n 24 0.6097 0.6760 0.6101 0.7587\n 25 0.6091 \u001b[32m0.6843\u001b[0m 0.6105 0.7656\n 26 \u001b[36m0.6072\u001b[0m 0.6820 0.6107 0.7538\n 27 \u001b[36m0.6069\u001b[0m 0.6777 \u001b[35m0.6095\u001b[0m 0.7461\n 28 0.6089 0.6725 0.6100 0.7482\n 29 \u001b[36m0.6064\u001b[0m 0.6817 \u001b[35m0.6076\u001b[0m 0.7581\n 30 \u001b[36m0.6063\u001b[0m 0.6807 0.6079 0.7499\n 31 \u001b[36m0.6039\u001b[0m 0.6799 0.6088 0.7717\n 32 0.6071 0.6782 0.6090 0.8257\n 33 0.6043 0.6812 0.6081 0.7591\n 34 0.6043 0.6840 0.6094 0.7547\n 35 0.6047 0.6809 0.6081 0.7363\n 36 \u001b[36m0.6028\u001b[0m 0.6792 0.6081 0.7522\n 37 \u001b[36m0.6016\u001b[0m 0.6793 0.6080 0.7416\n 38 0.6038 \u001b[32m0.6853\u001b[0m \u001b[35m0.6075\u001b[0m 0.7449\n 39 0.6026 0.6766 0.6078 0.7514\n 40 0.6021 0.6786 \u001b[35m0.6071\u001b[0m 0.7429\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.7363\u001b[0m \u001b[32m0.5979\u001b[0m \u001b[35m0.6626\u001b[0m 0.7429\n 2 \u001b[36m0.6836\u001b[0m \u001b[32m0.6214\u001b[0m \u001b[35m0.6446\u001b[0m 0.7643\n 3 \u001b[36m0.6628\u001b[0m \u001b[32m0.6358\u001b[0m \u001b[35m0.6367\u001b[0m 0.7566\n 4 \u001b[36m0.6558\u001b[0m \u001b[32m0.6389\u001b[0m \u001b[35m0.6300\u001b[0m 0.7597\n 5 \u001b[36m0.6449\u001b[0m \u001b[32m0.6506\u001b[0m \u001b[35m0.6254\u001b[0m 0.7447\n 6 \u001b[36m0.6414\u001b[0m \u001b[32m0.6526\u001b[0m \u001b[35m0.6226\u001b[0m 0.7568\n 7 \u001b[36m0.6358\u001b[0m \u001b[32m0.6539\u001b[0m \u001b[35m0.6213\u001b[0m 0.7429\n 8 \u001b[36m0.6335\u001b[0m \u001b[32m0.6568\u001b[0m \u001b[35m0.6190\u001b[0m 0.7495\n 9 \u001b[36m0.6323\u001b[0m \u001b[32m0.6585\u001b[0m \u001b[35m0.6171\u001b[0m 0.7485\n 10 \u001b[36m0.6258\u001b[0m \u001b[32m0.6598\u001b[0m 0.6176 0.7524\n 11 \u001b[36m0.6219\u001b[0m \u001b[32m0.6648\u001b[0m \u001b[35m0.6158\u001b[0m 0.7740\n 12 \u001b[36m0.6199\u001b[0m \u001b[32m0.6655\u001b[0m \u001b[35m0.6152\u001b[0m 0.9099\n 13 0.6218 \u001b[32m0.6678\u001b[0m \u001b[35m0.6145\u001b[0m 0.8077\n 14 0.6201 0.6628 0.6151 0.8408\n 15 \u001b[36m0.6170\u001b[0m \u001b[32m0.6709\u001b[0m \u001b[35m0.6121\u001b[0m 0.9577\n 16 \u001b[36m0.6157\u001b[0m 0.6669 0.6132 0.8927\n 17 0.6195 0.6609 0.6147 0.8456\n 18 \u001b[36m0.6125\u001b[0m 0.6696 \u001b[35m0.6118\u001b[0m 0.7558\n 19 0.6137 0.6706 0.6119 0.7619\n 20 0.6135 0.6690 0.6123 0.8154\n 21 \u001b[36m0.6098\u001b[0m 0.6689 \u001b[35m0.6117\u001b[0m 0.8306\n 22 0.6109 \u001b[32m0.6799\u001b[0m \u001b[35m0.6103\u001b[0m 0.8831\n 23 0.6120 0.6718 0.6107 0.8229\n 24 \u001b[36m0.6091\u001b[0m 0.6689 0.6112 0.7922\n 25 
\u001b[36m0.6088\u001b[0m 0.6725 0.6103 0.7907\n 26 \u001b[36m0.6082\u001b[0m 0.6712 0.6105 0.8171\n 27 0.6085 0.6783 \u001b[35m0.6088\u001b[0m 0.8175\n 28 \u001b[36m0.6074\u001b[0m 0.6777 0.6094 0.8614\n 29 \u001b[36m0.6069\u001b[0m 0.6762 0.6090 0.9208\n"
],
[
"import functools as f\nprint('validation accuracy for each fold: {}'.format(scores))\n#print('avg validation accuracy: {:.3f}'.format(scores.mean()))\n#loop through the dictionary\nfor key,value in scores.items(): \n #use reduce to calculate the avg\n print(f\"Average {key}\", f.reduce(lambda x, y: x + y, scores[key]) / len(scores[key]))",
"validation accuracy for each fold: {'fit_time': array([32.64935684, 32.56045794, 33.10055208, 30.3942039 ]), 'score_time': array([0.13864493, 0.15048194, 0.19447684, 0.137707 ]), 'test_score': array([0.68130457, 0.67582606, 0.67762369, 0.69748331])}\nAverage fit_time 32.17614269256592\nAverage score_time 0.1553276777267456\nAverage test_score 0.6830594076356789\n"
],
[
"from sklearn.model_selection import GridSearchCV\nparams = {\n 'lr': [0.01, 0.001],\n 'module__dropoutrate': [0.2, 0.5]\n\n}\nmodel.module",
"_____no_output_____"
],
[
"gs = GridSearchCV(model, params, refit=False, cv=4, scoring='accuracy', verbose=2)\ngs_results = gs.fit(X_scale_torch,y_scale_torch )",
"Fitting 4 folds for each of 4 candidates, totalling 16 fits\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.6192\u001b[0m \u001b[32m0.6536\u001b[0m \u001b[35m0.6164\u001b[0m 0.7819\n 2 \u001b[36m0.5903\u001b[0m \u001b[32m0.6765\u001b[0m \u001b[35m0.6009\u001b[0m 0.7738\n 3 \u001b[36m0.5852\u001b[0m 0.6759 \u001b[35m0.5906\u001b[0m 0.8328\n 4 \u001b[36m0.5790\u001b[0m \u001b[32m0.6870\u001b[0m \u001b[35m0.5871\u001b[0m 0.7694\n 5 \u001b[36m0.5756\u001b[0m 0.6670 0.5945 0.7599\n 6 \u001b[36m0.5733\u001b[0m 0.6799 0.5873 0.7709\n 7 \u001b[36m0.5706\u001b[0m 0.6779 0.5879 0.7603\n 8 \u001b[36m0.5687\u001b[0m \u001b[32m0.6889\u001b[0m \u001b[35m0.5838\u001b[0m 0.7594\n 9 \u001b[36m0.5649\u001b[0m \u001b[32m0.6926\u001b[0m \u001b[35m0.5802\u001b[0m 0.7511\n 10 \u001b[36m0.5629\u001b[0m 0.6880 0.5808 0.7578\n 11 0.5634 0.6899 0.5818 0.7553\n 12 \u001b[36m0.5617\u001b[0m 0.6923 0.5816 0.7562\n 13 \u001b[36m0.5601\u001b[0m 0.6867 0.5829 0.7679\n 14 \u001b[36m0.5577\u001b[0m 0.6832 0.5830 0.7449\n 15 \u001b[36m0.5560\u001b[0m 0.6907 \u001b[35m0.5794\u001b[0m 0.7540\n 16 0.5564 0.6922 \u001b[35m0.5773\u001b[0m 0.7526\n 17 \u001b[36m0.5553\u001b[0m \u001b[32m0.6942\u001b[0m \u001b[35m0.5764\u001b[0m 0.7570\n 18 \u001b[36m0.5553\u001b[0m 0.6924 \u001b[35m0.5762\u001b[0m 0.7473\n 19 \u001b[36m0.5524\u001b[0m 0.6904 0.5783 0.7534\n 20 \u001b[36m0.5519\u001b[0m \u001b[32m0.6981\u001b[0m \u001b[35m0.5736\u001b[0m 0.7792\n 21 \u001b[36m0.5513\u001b[0m 0.6922 0.5748 0.7866\n 22 \u001b[36m0.5497\u001b[0m 0.6916 0.5785 0.7597\n 23 0.5502 0.6867 0.5795 0.7584\n 24 \u001b[36m0.5484\u001b[0m 0.6835 0.5812 0.7562\n 25 \u001b[36m0.5473\u001b[0m 0.6847 0.5806 0.7536\n 26 \u001b[36m0.5471\u001b[0m 0.6951 \u001b[35m0.5735\u001b[0m 0.7638\n 27 \u001b[36m0.5452\u001b[0m 0.6795 0.5841 0.7492\n 28 0.5470 \u001b[32m0.6997\u001b[0m \u001b[35m0.5705\u001b[0m 0.7528\n 29 \u001b[36m0.5444\u001b[0m 0.6879 0.5778 0.7600\n 30 \u001b[36m0.5435\u001b[0m 0.6902 0.5764 0.7554\n 31 \u001b[36m0.5420\u001b[0m 0.6953 0.5721 0.7582\n 32 \u001b[36m0.5414\u001b[0m 0.6990 0.5733 0.7745\n 33 0.5427 0.6950 0.5710 0.7585\n 34 0.5430 0.6760 0.5899 0.7603\n 35 \u001b[36m0.5391\u001b[0m 0.6912 0.5729 0.7587\n 36 0.5409 0.6927 0.5748 0.7599\n 37 0.5415 0.6971 \u001b[35m0.5694\u001b[0m 0.7615\n 38 \u001b[36m0.5387\u001b[0m 0.6863 0.5789 0.7548\n 39 0.5406 0.6883 0.5797 0.7492\n 40 0.5402 0.6976 0.5730 0.7543\n[CV] END ...................lr=0.01, module__dropoutrate=0.2; total time= 30.8s\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.6258\u001b[0m \u001b[32m0.6598\u001b[0m \u001b[35m0.6221\u001b[0m 0.7502\n 2 \u001b[36m0.5981\u001b[0m 0.6598 \u001b[35m0.6139\u001b[0m 0.7545\n 3 \u001b[36m0.5898\u001b[0m \u001b[32m0.6669\u001b[0m \u001b[35m0.6076\u001b[0m 0.7680\n 4 \u001b[36m0.5849\u001b[0m \u001b[32m0.6748\u001b[0m \u001b[35m0.6060\u001b[0m 0.8768\n 5 \u001b[36m0.5785\u001b[0m 0.6740 \u001b[35m0.6026\u001b[0m 0.7876\n 6 \u001b[36m0.5751\u001b[0m 0.6715 0.6056 0.7524\n 7 \u001b[36m0.5731\u001b[0m 0.6732 0.6058 0.7543\n 8 \u001b[36m0.5714\u001b[0m \u001b[32m0.6835\u001b[0m \u001b[35m0.6015\u001b[0m 0.7667\n 9 \u001b[36m0.5701\u001b[0m 0.6755 0.6019 0.7826\n 10 \u001b[36m0.5684\u001b[0m 0.6793 \u001b[35m0.5996\u001b[0m 0.7592\n 11 \u001b[36m0.5655\u001b[0m 0.6770 \u001b[35m0.5982\u001b[0m 0.7496\n 12 \u001b[36m0.5639\u001b[0m 0.6820 \u001b[35m0.5977\u001b[0m 0.7574\n 13 \u001b[36m0.5627\u001b[0m 0.6742 0.6014 0.7962\n 
14 \u001b[36m0.5605\u001b[0m 0.6829 \u001b[35m0.5975\u001b[0m 0.7472\n 15 0.5615 0.6767 \u001b[35m0.5962\u001b[0m 0.7388\n 16 \u001b[36m0.5601\u001b[0m 0.6833 0.5986 0.7460\n 17 \u001b[36m0.5578\u001b[0m \u001b[32m0.6836\u001b[0m 0.5973 0.7403\n 18 0.5598 \u001b[32m0.6842\u001b[0m \u001b[35m0.5962\u001b[0m 0.7425\n 19 0.5579 0.6819 0.5972 0.7660\n 20 \u001b[36m0.5551\u001b[0m 0.6805 0.5973 0.7485\n 21 \u001b[36m0.5551\u001b[0m 0.6802 0.6009 0.7573\n 22 0.5553 0.6779 0.5981 0.7599\n 23 \u001b[36m0.5531\u001b[0m \u001b[32m0.6853\u001b[0m \u001b[35m0.5916\u001b[0m 0.7571\n 24 0.5537 0.6732 0.5979 0.7541\n 25 0.5547 0.6753 0.5978 0.7614\n 26 \u001b[36m0.5508\u001b[0m 0.6849 0.5965 0.7556\n 27 0.5529 \u001b[32m0.6863\u001b[0m 0.5969 0.7541\n 28 \u001b[36m0.5499\u001b[0m 0.6756 0.5953 0.7587\n 29 0.5504 0.6748 0.5963 0.7789\n 30 0.5511 \u001b[32m0.6880\u001b[0m 0.5958 0.7702\n 31 0.5517 0.6862 0.5940 0.7557\n 32 \u001b[36m0.5497\u001b[0m 0.6853 0.5928 0.7497\n 33 \u001b[36m0.5482\u001b[0m 0.6797 0.5970 0.7533\n 34 \u001b[36m0.5476\u001b[0m 0.6819 0.5967 0.7550\n 35 \u001b[36m0.5472\u001b[0m 0.6819 0.5959 0.7517\n 36 0.5479 0.6816 0.5954 0.7615\n 37 \u001b[36m0.5460\u001b[0m 0.6695 0.6031 0.7539\n 38 0.5469 0.6846 0.5935 0.7546\n 39 0.5463 0.6723 0.6010 0.7582\n 40 0.5469 0.6837 0.5924 0.7601\n[CV] END ...................lr=0.01, module__dropoutrate=0.2; total time= 30.8s\n epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.6254\u001b[0m \u001b[32m0.6757\u001b[0m \u001b[35m0.6117\u001b[0m 0.7631\n 2 \u001b[36m0.5934\u001b[0m 0.6718 \u001b[35m0.6071\u001b[0m 0.7558\n 3 \u001b[36m0.5866\u001b[0m 0.6738 \u001b[35m0.6049\u001b[0m 0.7514\n 4 \u001b[36m0.5818\u001b[0m 0.6748 \u001b[35m0.6036\u001b[0m 0.7457\n 5 \u001b[36m0.5799\u001b[0m \u001b[32m0.6835\u001b[0m \u001b[35m0.6022\u001b[0m 0.8746\n 6 \u001b[36m0.5787\u001b[0m 0.6767 \u001b[35m0.6022\u001b[0m 0.9506\n 7 \u001b[36m0.5762\u001b[0m 0.6766 \u001b[35m0.6011\u001b[0m 0.9422\n 8 \u001b[36m0.5739\u001b[0m 0.6735 0.6025 0.9419\n 9 \u001b[36m0.5733\u001b[0m 0.6749 \u001b[35m0.6000\u001b[0m 0.7980\n 10 \u001b[36m0.5732\u001b[0m 0.6746 0.6004 0.7875\n 11 \u001b[36m0.5695\u001b[0m 0.6796 0.6000 0.7909\n 12 0.5696 0.6805 0.6011 0.7816\n 13 \u001b[36m0.5669\u001b[0m \u001b[32m0.6853\u001b[0m 0.6002 0.7936\n 14 \u001b[36m0.5651\u001b[0m 0.6766 \u001b[35m0.5972\u001b[0m 0.7910\n 15 0.5651 0.6812 0.6001 0.7912\n 16 \u001b[36m0.5648\u001b[0m 0.6680 0.5976 0.8696\n 17 \u001b[36m0.5616\u001b[0m 0.6786 \u001b[35m0.5941\u001b[0m 0.7909\n 18 0.5637 0.6796 0.5974 0.8131\n 19 0.5635 0.6787 0.5982 0.8024\n 20 \u001b[36m0.5616\u001b[0m 0.6750 0.5965 0.8006\n 21 \u001b[36m0.5615\u001b[0m 0.6790 0.5979 0.7962\n 22 \u001b[36m0.5605\u001b[0m 0.6685 0.5955 0.7885\n 23 0.5608 0.6806 0.5951 0.8134\n 24 \u001b[36m0.5598\u001b[0m 0.6822 0.5966 0.7875\n 25 \u001b[36m0.5581\u001b[0m 0.6732 0.5952 0.7948\n 26 \u001b[36m0.5573\u001b[0m 0.6790 0.5967 0.8010\n 27 0.5574 0.6762 0.5957 0.7960\n 28 \u001b[36m0.5556\u001b[0m 0.6812 \u001b[35m0.5937\u001b[0m 0.8154\n 29 0.5559 0.6836 0.5960 0.7986\n 30 0.5564 0.6842 0.5957 0.7905\n 31 0.5570 0.6773 0.5959 0.7997\n 32 \u001b[36m0.5547\u001b[0m 0.6765 \u001b[35m0.5936\u001b[0m 0.7936\n 33 0.5560 0.6785 \u001b[35m0.5924\u001b[0m 0.7890\n"
],
[
"for key in gs.cv_results_.keys():\n print(key, gs.cv_results_[key])",
"mean_fit_time [31.88682443 30.51245248 30.57055247 31.74798542]\nstd_fit_time [1.33337528 0.20746012 0.19286978 1.00132016]\nmean_score_time [0.16935319 0.15622282 0.15137154 0.15740955]\nstd_score_time [0.00573338 0.00570575 0.00165977 0.00264987]\nparam_lr [0.01 0.01 0.001 0.001]\nparam_module__dropoutrate [0.2 0.5 0.2 0.5]\nparams [{'lr': 0.01, 'module__dropoutrate': 0.2}, {'lr': 0.01, 'module__dropoutrate': 0.5}, {'lr': 0.001, 'module__dropoutrate': 0.2}, {'lr': 0.001, 'module__dropoutrate': 0.5}]\nsplit0_test_score [0.68113337 0.67599726 0.67531245 0.67719569]\nsplit1_test_score [0.70330423 0.69149118 0.68156138 0.68130457]\nsplit2_test_score [0.71409005 0.69003595 0.68515665 0.68224619]\nsplit3_test_score [0.71391885 0.69928095 0.69910974 0.69637048]\nmean_test_score [0.70311162 0.68920134 0.68528505 0.68427923]\nstd_test_score [0.01342016 0.00839473 0.00872435 0.00723458]\nrank_test_score [1 2 3 4]\n"
],
[
"import pickle\nwith open('model1.pkl', 'wb') as f:\n pickle.dump(model, f)",
"_____no_output_____"
],
[
"model.save_params(\n f_params='model.pkl', f_optimizer='opt.pkl', f_history='history.json')\n",
"_____no_output_____"
],
[
"from sklearn.pipeline import Pipeline\nfrom sklearn.base import TransformerMixin, BaseEstimator\nfrom classes import toTensor\npipeline = Pipeline([\n ('scale', StandardScaler()),\n ('tensor',toTensor()),\n ('classification',model)\n ])\npipeline.fit(X_res, torch.FloatTensor(y_res))",
" epoch train_loss valid_acc valid_loss dur\n------- ------------ ----------- ------------ ------\n 1 \u001b[36m0.6173\u001b[0m \u001b[32m0.6842\u001b[0m \u001b[35m0.5973\u001b[0m 1.0533\n 2 \u001b[36m0.5909\u001b[0m \u001b[32m0.6884\u001b[0m \u001b[35m0.5961\u001b[0m 1.0349\n 3 \u001b[36m0.5853\u001b[0m \u001b[32m0.6924\u001b[0m \u001b[35m0.5916\u001b[0m 1.0474\n 4 \u001b[36m0.5807\u001b[0m 0.6900 \u001b[35m0.5912\u001b[0m 1.0496\n 5 \u001b[36m0.5785\u001b[0m 0.6871 0.5928 1.0142\n 6 \u001b[36m0.5760\u001b[0m 0.6895 \u001b[35m0.5890\u001b[0m 1.0178\n 7 \u001b[36m0.5734\u001b[0m 0.6848 \u001b[35m0.5882\u001b[0m 1.0188\n 8 \u001b[36m0.5733\u001b[0m 0.6831 0.5899 1.0135\n 9 \u001b[36m0.5710\u001b[0m 0.6816 0.5954 1.0644\n 10 \u001b[36m0.5689\u001b[0m 0.6903 \u001b[35m0.5866\u001b[0m 1.0293\n 11 \u001b[36m0.5670\u001b[0m 0.6863 \u001b[35m0.5860\u001b[0m 1.0264\n 12 0.5673 0.6886 0.5867 1.0984\n 13 \u001b[36m0.5660\u001b[0m 0.6865 \u001b[35m0.5859\u001b[0m 1.0589\n 14 \u001b[36m0.5658\u001b[0m 0.6847 0.5887 1.0605\n 15 \u001b[36m0.5648\u001b[0m 0.6859 \u001b[35m0.5846\u001b[0m 1.0302\n 16 \u001b[36m0.5642\u001b[0m 0.6878 0.5856 1.0346\n 17 \u001b[36m0.5618\u001b[0m 0.6875 0.5858 1.0358\n 18 \u001b[36m0.5604\u001b[0m 0.6865 0.5894 1.0314\n 19 0.5621 0.6818 0.5859 1.0423\n 20 \u001b[36m0.5598\u001b[0m 0.6788 0.5900 1.0218\n 21 \u001b[36m0.5576\u001b[0m 0.6890 0.5894 1.0408\n 22 0.5589 0.6813 0.5880 1.0328\n 23 0.5579 0.6849 0.5878 1.0323\n 24 0.5581 0.6868 0.5865 1.0221\n 25 \u001b[36m0.5567\u001b[0m \u001b[32m0.6942\u001b[0m \u001b[35m0.5837\u001b[0m 1.0206\n 26 \u001b[36m0.5554\u001b[0m 0.6891 0.5850 1.0167\n 27 \u001b[36m0.5547\u001b[0m 0.6920 0.5843 0.9964\n 28 0.5550 0.6898 0.5862 1.0285\n 29 \u001b[36m0.5522\u001b[0m 0.6921 \u001b[35m0.5804\u001b[0m 1.2718\n 30 0.5529 0.6901 0.5868 1.3067\n 31 0.5524 0.6852 0.5891 1.2900\n 32 0.5528 0.6880 0.5844 1.2534\n 33 \u001b[36m0.5514\u001b[0m 0.6923 0.5830 1.0566\n 34 0.5524 0.6884 0.5846 1.0673\n 35 \u001b[36m0.5497\u001b[0m 0.6891 0.5818 1.0503\n 36 0.5509 0.6891 0.5854 1.0943\n 37 \u001b[36m0.5488\u001b[0m 0.6889 0.5866 1.0429\n 38 0.5490 0.6897 0.5850 1.1093\n 39 0.5502 0.6921 0.5835 1.1217\n 40 0.5500 0.6866 0.5860 1.1302\n"
],
[
"import joblib\n\nwith open('model1.pkl', 'wb') as f:\n joblib.dump(pipeline,f)",
"_____no_output_____"
],
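[
"# Illustrative round trip (assumption: model1.pkl was just written by the cell above).\n# Reload the full pipeline and score one raw, unscaled row of the resampled data.\npipeline_loaded = joblib.load('model1.pkl')\nprint(pipeline_loaded.predict(X_res.iloc[[15]]))",
"_____no_output_____"
],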
[
"jinput = X_res.iloc[15].to_json()\njinput",
"_____no_output_____"
],
[
"{\n \"LIMIT_BAL\": 20000,\n \"SEX\": 2,\n \"EDUCATION\": 2,\n \"MARRIAGE\": 1,\n \"AGE\": 24,\n \"PAY_0\": 2,\n \"PAY_2\": -1,\n \"PAY_3\": -1,\n \"PAY_4\": -1,\n \"PAY_5\": -2,\n \"PAY_6\": -2,\n \"BILL_AMT1\": 3913,\n \"BILL_AMT2\": 3102,\n \"BILL_AMT3\": 689,\n \"BILL_AMT4\": 0,\n \"BILL_AMT5\": 0,\n \"BILL_AMT6\": 0,\n \"PAY_AMT1\": 0,\n \"PAY_AMT2\": 689,\n \"PAY_AMT3\": 0,\n \"PAY_AMT4\": 0,\n \"PAY_AMT5\": 0,\n \"PAY_AMT6\": 0\n}",
"_____no_output_____"
],
[
"import requests\nbashCommand = f\"\"\"curl -X 'POST' 'http://127.0.0.1:8000/predict' -H 'accept: application/json' -H 'Content-Type: application/json' -d {jinput}\"\"\"\nheaders = {\n \n}\nres = requests.post('http://127.0.0.1:8000/predict', data=jinput, headers=headers)",
"_____no_output_____"
],
[
"res.text",
"_____no_output_____"
],
[
"%%timeit\nres = requests.post('http://127.0.0.1:8000/predict', data=jinput, headers=headers)",
"10.4 ms ± 538 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
]
],
[
[
"# Embeds",
"_____no_output_____"
]
],
[
[
"df.columns",
"_____no_output_____"
],
[
"from sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler\nlincolumns = (['LIMIT_BAL','AGE', 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6',\n 'PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6'])\nct = ColumnTransformer([\n ('scalethis', StandardScaler(), lincolumns)\n ], remainder='passthrough')\n\nct2 = ct.fit_transform(df.iloc[:,:23])",
"_____no_output_____"
],
[
"dfct2 = pd.DataFrame(ct2)\ndfct2",
"_____no_output_____"
],
[
"df_numeric = dfct2.iloc[:,:14]\ndf_cat = dfct2.iloc[:,14:]\ndf_cat1 = df_cat.iloc[:,0]\ndf_cat2 = df_cat.iloc[:,1]\ndf_cat3 = df_cat.iloc[:,2]\ndf_cat4 = df_cat.iloc[:,3:]\ndf_cat4",
"_____no_output_____"
],
[
"def emb_sz_rule(n_cat): \n return min(600, round(1.6 * n_cat**0.56))\nembed = nn.Embedding(2, emb_sz_rule(2))\n\nembed(torch.tensor(df_cat1.values).to(torch.int64))",
"_____no_output_____"
],
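[
"# Illustrative check (not in the original notebook): the embedding widths that\n# emb_sz_rule assigns to the categorical cardinalities used below.\nfor n_cat in [2, 4, 7, 11]:\n    print('cardinality', n_cat, '-> embedding size', emb_sz_rule(n_cat))",
"_____no_output_____"
],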
[
"def emb_sz_rule(n_cat): \n return min(600, round(1.6 * n_cat**0.56))\nclass MyModule(nn.Module):\n def __init__(self, num_inputs=23, num_units_d1=128, num_units_d2=128)):\n super(MyModule, self).__init__()\n self.dense0 = nn.Linear(20, num_units)\n self.nonlin = nonlin\n self.dropout = nn.Dropout(0.5)\n self.dense1 = nn.Linear(num_units, num_units)\n self.output = nn.Linear(num_units, 2)\n self.softmax = nn.Softmax(dim=-1)\n self.embed1 = nn.Embedding(2, emb_sz_rule(2))\n self.embed2 = nn.Embedding(7, emb_sz_rule(7))\n self.embed3 = nn.Embedding(4, emb_sz_rule(4))\n self.embed4 = nn.Embedding(11, emb_sz_rule(11))\n def forward(self, X, cat1, cat2, cat3, cat4):\n x1 = self.embed1(cat1)\n x2 = self.embed2(cat2)\n x3 = self.embed3(cat3)\n x4 = self.embed4(cat4)\n X = torch.cat((X,x1,x2,x3,x4), dim=1)\n X = self.nonlin(self.dense0(X))\n X = self.dropout(X)\n X = self.nonlin(self.dense1(X))\n X = self.softmax(self.output(X))\n return X\n\nmodel = NeuralNetBinaryClassifier(\n MyModule,\n max_epochs=40,\n lr=0.001,\n\n # Shuffle training data on each epoch\n iterator_train__shuffle=True,\n)",
"_____no_output_____"
],
[
"EPOCHS = 50\nBATCH_SIZE = 64\nLEARNING_RATE = 0.001\n",
"_____no_output_____"
],
[
"class BinaryClassification(nn.Module):\n def __init__(self):\n super(BinaryClassification, self).__init__()\n self.layer_1 = nn.Linear(23, 64) \n self.layer_2 = nn.Linear(64, 64)\n self.layer_out = nn.Linear(64, 1) \n \n self.relu = nn.ReLU()\n self.dropout = nn.Dropout(p=0.1)\n self.batchnorm1 = nn.BatchNorm1d(64)\n self.batchnorm2 = nn.BatchNorm1d(64)\n \n def forward(self, inputs):\n x = self.relu(self.layer_1(inputs))\n x = self.batchnorm1(x)\n x = self.relu(self.layer_2(x))\n x = self.batchnorm2(x)\n x = self.dropout(x)\n x = self.layer_out(x)\n \n return x\n\nclass MyModule(nn.Module):\n def __init__(self, num_inputs=23, num_units_d1=128, num_units_d2=128)):\n super(MyModule, self).__init__()\n self.dense0 = nn.Linear(20, num_units)\n self.nonlin = nonlin\n self.dropout = nn.Dropout(0.5)\n self.dense1 = nn.Linear(num_units, num_units)\n self.output = nn.Linear(num_units, 2)\n self.softmax = nn.Softmax(dim=-1)\n\n def forward(self, X, **kwargs):\n X = self.nonlin(self.dense0(X))\n X = self.dropout(X)\n X = self.nonlin(self.dense1(X))\n X = self.softmax(self.output(X))\n return X\n\nmodel = NeuralNetBinaryClassifier(\n MyModule,\n max_epochs=40,\n lr=0.001,\n\n # Shuffle training data on each epoch\n iterator_train__shuffle=True,\n)",
"_____no_output_____"
]
],
[
[
"# Py Torch",
"_____no_output_____"
]
],
[
[
"## train data\nclass TrainData(Dataset):\n \n def __init__(self, X_data, y_data):\n self.X_data = X_data\n self.y_data = y_data\n \n def __getitem__(self, index):\n return self.X_data[index], self.y_data[index]\n \n def __len__ (self):\n return len(self.X_data)\n\n\ntrain_data = TrainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train.to_numpy(dtype=np.float64)))\n## test data \nclass TestData(Dataset):\n \n def __init__(self, X_data):\n self.X_data = X_data\n \n def __getitem__(self, index):\n return self.X_data[index]\n \n def __len__ (self):\n return len(self.X_data)\n \n\ntest_data = TestData(torch.FloatTensor(X_test))",
"_____no_output_____"
],
[
"train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)\ntest_loader = DataLoader(dataset=test_data, batch_size=1)",
"_____no_output_____"
],
[
"class BinaryClassification(nn.Module):\n def __init__(self):\n super(BinaryClassification, self).__init__()\n self.layer_1 = nn.Linear(23, 64) \n self.layer_2 = nn.Linear(64, 64)\n self.layer_out = nn.Linear(64, 1) \n \n self.relu = nn.ReLU()\n self.dropout = nn.Dropout(p=0.1)\n self.batchnorm1 = nn.BatchNorm1d(64)\n self.batchnorm2 = nn.BatchNorm1d(64)\n \n def forward(self, inputs):\n x = self.relu(self.layer_1(inputs))\n x = self.batchnorm1(x)\n x = self.relu(self.layer_2(x))\n x = self.batchnorm2(x)\n x = self.dropout(x)\n x = self.layer_out(x)\n \n return x",
"_____no_output_____"
],
[
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nprint(device)",
"cpu\n"
],
[
"model = BinaryClassification()\nmodel.to(device)\nprint(model)\ncriterion = nn.BCEWithLogitsLoss()\noptimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)",
"BinaryClassification(\n (layer_1): Linear(in_features=23, out_features=64, bias=True)\n (layer_2): Linear(in_features=64, out_features=64, bias=True)\n (layer_out): Linear(in_features=64, out_features=1, bias=True)\n (relu): ReLU()\n (dropout): Dropout(p=0.1, inplace=False)\n (batchnorm1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (batchnorm2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\n"
],
[
"def binary_acc(y_pred, y_test):\n y_pred_tag = torch.round(torch.sigmoid(y_pred))\n\n correct_results_sum = (y_pred_tag == y_test).sum().float()\n acc = correct_results_sum/y_test.shape[0]\n acc = torch.round(acc * 100)\n \n return acc",
"_____no_output_____"
],
[
"model.train()\nfor e in range(1, EPOCHS+1):\n epoch_loss = 0\n epoch_acc = 0\n for X_batch, y_batch in train_loader:\n X_batch, y_batch = X_batch.to(device), y_batch.to(device)\n optimizer.zero_grad()\n \n y_pred = model(X_batch)\n \n loss = criterion(y_pred, y_batch.unsqueeze(1))\n acc = binary_acc(y_pred, y_batch.unsqueeze(1))\n \n loss.backward()\n optimizer.step()\n \n epoch_loss += loss.item()\n epoch_acc += acc.item()\n \n\n print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f}')",
"Epoch 001: | Loss: 0.45479 | Acc: 78.439\nEpoch 002: | Loss: 0.41233 | Acc: 80.850\nEpoch 003: | Loss: 0.40462 | Acc: 81.488\nEpoch 004: | Loss: 0.39199 | Acc: 82.028\nEpoch 005: | Loss: 0.38622 | Acc: 82.199\nEpoch 006: | Loss: 0.38073 | Acc: 82.523\nEpoch 007: | Loss: 0.37937 | Acc: 82.760\nEpoch 008: | Loss: 0.37373 | Acc: 83.192\nEpoch 009: | Loss: 0.36585 | Acc: 83.111\nEpoch 010: | Loss: 0.36803 | Acc: 83.328\nEpoch 011: | Loss: 0.36370 | Acc: 83.610\nEpoch 012: | Loss: 0.35545 | Acc: 83.979\nEpoch 013: | Loss: 0.35516 | Acc: 84.031\nEpoch 014: | Loss: 0.35149 | Acc: 84.261\nEpoch 015: | Loss: 0.35078 | Acc: 84.237\nEpoch 016: | Loss: 0.34535 | Acc: 84.415\nEpoch 017: | Loss: 0.34704 | Acc: 84.495\nEpoch 018: | Loss: 0.34282 | Acc: 84.679\nEpoch 019: | Loss: 0.34274 | Acc: 84.491\nEpoch 020: | Loss: 0.33811 | Acc: 85.070\nEpoch 021: | Loss: 0.33696 | Acc: 84.927\nEpoch 022: | Loss: 0.33600 | Acc: 84.868\nEpoch 023: | Loss: 0.33675 | Acc: 85.223\nEpoch 024: | Loss: 0.33064 | Acc: 85.334\nEpoch 025: | Loss: 0.33340 | Acc: 84.714\nEpoch 026: | Loss: 0.33138 | Acc: 85.220\nEpoch 027: | Loss: 0.32413 | Acc: 85.585\nEpoch 028: | Loss: 0.32957 | Acc: 85.425\nEpoch 029: | Loss: 0.32780 | Acc: 85.213\nEpoch 030: | Loss: 0.32576 | Acc: 85.533\nEpoch 031: | Loss: 0.32255 | Acc: 85.749\nEpoch 032: | Loss: 0.32203 | Acc: 85.659\nEpoch 033: | Loss: 0.32118 | Acc: 85.648\nEpoch 034: | Loss: 0.31853 | Acc: 85.878\nEpoch 035: | Loss: 0.31647 | Acc: 85.854\nEpoch 036: | Loss: 0.31892 | Acc: 85.728\nEpoch 037: | Loss: 0.31600 | Acc: 85.885\nEpoch 038: | Loss: 0.31326 | Acc: 86.160\nEpoch 039: | Loss: 0.31420 | Acc: 85.972\nEpoch 040: | Loss: 0.31226 | Acc: 86.307\nEpoch 041: | Loss: 0.31323 | Acc: 86.073\nEpoch 042: | Loss: 0.31081 | Acc: 86.052\nEpoch 043: | Loss: 0.31061 | Acc: 86.254\nEpoch 044: | Loss: 0.30963 | Acc: 86.328\nEpoch 045: | Loss: 0.31058 | Acc: 86.348\nEpoch 046: | Loss: 0.31165 | Acc: 86.324\nEpoch 047: | Loss: 0.30692 | Acc: 86.446\nEpoch 048: | Loss: 0.30742 | Acc: 86.554\nEpoch 049: | Loss: 0.30156 | Acc: 86.568\nEpoch 050: | Loss: 0.30101 | Acc: 86.826\n"
],
[
"y_pred_list = []\nmodel.eval()\nwith torch.no_grad():\n for X_batch in test_loader:\n X_batch = X_batch.to(device)\n y_test_pred = model(X_batch)\n y_test_pred = torch.sigmoid(y_test_pred)\n y_pred_tag = torch.round(y_test_pred)\n y_pred_list.append(y_pred_tag.cpu().numpy())\n\ny_pred_list = [a.squeeze().tolist() for a in y_pred_list]",
"_____no_output_____"
],
[
"confusion_matrix(y_test, y_pred_list)",
"_____no_output_____"
],
[
"print(classification_report(y_test, y_pred_list))\n",
" precision recall f1-score support\n\n 0 0.81 0.73 0.77 3457\n 1 0.84 0.90 0.87 5590\n\n accuracy 0.83 9047\n macro avg 0.83 0.81 0.82 9047\nweighted avg 0.83 0.83 0.83 9047\n\n"
],
[
"# use the original data",
"_____no_output_____"
],
[
"scaler = StandardScaler()\nX_og = scaler.fit_transform(df.iloc[:,:23])\nX_og",
"_____no_output_____"
],
[
"og_data = TestData(torch.FloatTensor(X_og))\nog_loader = DataLoader(dataset=og_data, batch_size=1)\nog_y_pred_list = []\nmodel.eval()\nwith torch.no_grad():\n for X_batch in og_loader:\n X_batch = X_batch.to(device)\n y_test_pred = model(X_batch)\n y_test_pred = torch.sigmoid(y_test_pred)\n y_pred_tag = torch.round(y_test_pred)\n og_y_pred_list.append(y_pred_tag.cpu().numpy())\n\nog_y_pred_list = [a.squeeze().tolist() for a in og_y_pred_list]",
"_____no_output_____"
],
[
"confusion_matrix(df['default payment next month'].to_numpy(dtype=np.float64), og_y_pred_list)",
"_____no_output_____"
],
[
"print(classification_report(df['default payment next month'].to_numpy(dtype=np.float64), og_y_pred_list))",
" precision recall f1-score support\n\n 0.0 0.92 0.33 0.49 23364\n 1.0 0.28 0.90 0.42 6636\n\n accuracy 0.46 30000\n macro avg 0.60 0.62 0.45 30000\nweighted avg 0.78 0.46 0.47 30000\n\n"
],
[
"torch.save(model.state_dict(), \"model1.pt\")",
"_____no_output_____"
],
[
"https://towardsdatascience.com/pytorch-tabular-binary-classification-a0368da5bb89",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
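The embedding-size heuristic defined in the notebook above, `emb_sz_rule(n_cat) = min(600, round(1.6 * n_cat**0.56))`, follows the fastai-style rule of thumb for choosing `nn.Embedding` widths. As a minimal sketch (the printed values are computed from the formula itself, not taken from the notebook's recorded output), these are the sizes it yields for the four categorical cardinalities used there:

```python
def emb_sz_rule(n_cat):
    # cap at 600, grow sub-linearly with the number of categories
    return min(600, round(1.6 * n_cat ** 0.56))

for n_cat in (2, 7, 4, 11):
    print(n_cat, "->", emb_sz_rule(n_cat))
# 2 -> 2, 7 -> 5, 4 -> 3, 11 -> 6
# so the concatenated embeddings add 2 + 5 + 3 + 6 = 16 columns
# to whatever continuous features reach the first dense layer
```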
d09972f5b0077969d45431ecb8af9f1f1966592a | 14,082 | ipynb | Jupyter Notebook | notebooks/Therese/TrimSamBySlidingWindowDepth_Mac_EV_L_TotalsRNA_pysam.ipynb | VCMason/PyGenToolbox | 3367a9b3df3bdb0223dd9671e9d355b81455fe2f | [
"MIT"
] | null | null | null | notebooks/Therese/TrimSamBySlidingWindowDepth_Mac_EV_L_TotalsRNA_pysam.ipynb | VCMason/PyGenToolbox | 3367a9b3df3bdb0223dd9671e9d355b81455fe2f | [
"MIT"
] | null | null | null | notebooks/Therese/TrimSamBySlidingWindowDepth_Mac_EV_L_TotalsRNA_pysam.ipynb | VCMason/PyGenToolbox | 3367a9b3df3bdb0223dd9671e9d355b81455fe2f | [
"MIT"
] | null | null | null | 161.862069 | 11,267 | 0.821119 | [
[
[
"%load_ext autoreload\n%autoreload 2\nimport datetime\nprint(datetime.datetime.now())\n\nfrom pygentoolbox.TrimSamBySlidingWindowsMeanDepthWithPysam import main\n#dir(pygentoolbox.Tools)\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"2022-03-24 12:22:18.581116\n"
],
[
"filelist = ['/media/sf_LinuxShare/Projects/Theresa/Hisat2/EV_Total_sRNA/Late/Pt_51_Mac/fastp/hisat2/190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_23bp.trim.Pt_51_Mac.sort.bam']\n# filelist = ['D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_Mac\\\\52_Late.Uniq.23M.sort.sam', 'D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_Mac\\\\53_Later.Uniq.23M.sort.sam']\n# filelist_f = ['D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_MacAndIES\\\\52_Late.Uniq.23M.F.sort.sam', 'D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_MacAndIES\\\\53_Later.Uniq.23M.F.sort.sam']\n# filelist_r = ['D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_MacAndIES\\\\52_Late.Uniq.23M.R.sort.sam', 'D:\\\\LinuxShare\\\\Projects\\\\Theresa\\\\Hisat2\\\\FlagHA_Pt08\\\\Pt_51_MacAndIES\\\\53_Later.Uniq.23M.R.sort.sam']\ngenomefile='/media/sf_LinuxShare/Ciliates/Genomes/Seqs/ptetraurelia_mac_51.fa'\nstep = 50\nwinsize = 100\ncutoff = 50\n\nmain(filelist, genomefile, step, winsize, cutoff)",
"step: 50, window size: 100, cutoff: 50\nCounting lengths of all scaffolds\nscaffold51_9, scaffold51_221, scaffold51_186, scaffold51_366, scaffold51_491, scaffold51_207, scaffold51_440, scaffold51_283, scaffold51_281, scaffold51_513, scaffold51_166, scaffold51_89, scaffold51_533, scaffold51_111, scaffold51_592, scaffold51_661, scaffold51_637, scaffold51_330, scaffold51_220, scaffold51_667, scaffold51_630, scaffold51_246, scaffold51_147, scaffold51_251, scaffold51_594, scaffold51_642, scaffold51_538, scaffold51_394, scaffold51_371, scaffold51_373, scaffold51_569, scaffold51_250, scaffold51_258, scaffold51_1, scaffold51_315, scaffold51_543, scaffold51_388, scaffold51_322, scaffold51_669, scaffold51_554, scaffold51_231, scaffold51_293, scaffold51_297, scaffold51_279, scaffold51_420, scaffold51_412, scaffold51_605, scaffold51_327, scaffold51_151, scaffold51_185, scaffold51_241, scaffold51_188, scaffold51_25, scaffold51_426, scaffold51_332, scaffold51_103, scaffold51_219, scaffold51_683, scaffold51_487, scaffold51_382, scaffold51_177, scaffold51_489, scaffold51_160, scaffold51_375, scaffold51_348, scaffold51_432, scaffold51_433, scaffold51_175, scaffold51_176, scaffold51_194, scaffold51_648, scaffold51_77, scaffold51_353, scaffold51_145, scaffold51_229, scaffold51_431, scaffold51_74, scaffold51_232, scaffold51_355, scaffold51_206, scaffold51_504, scaffold51_364, scaffold51_589, scaffold51_419, scaffold51_205, scaffold51_37, scaffold51_421, scaffold51_131, scaffold51_417, scaffold51_466, scaffold51_625, scaffold51_384, scaffold51_459, scaffold51_680, scaffold51_15, scaffold51_61, scaffold51_237, scaffold51_167, scaffold51_312, scaffold51_139, scaffold51_169, scaffold51_483, scaffold51_17, scaffold51_653, scaffold51_60, scaffold51_441, scaffold51_280, scaffold51_423, scaffold51_272, scaffold51_174, scaffold51_579, scaffold51_578, scaffold51_529, scaffold51_326, scaffold51_53, scaffold51_140, scaffold51_192, scaffold51_369, scaffold51_510, scaffold51_73, scaffold51_516, scaffold51_418, scaffold51_128, scaffold51_102, scaffold51_475, scaffold51_350, scaffold51_584, scaffold51_328, scaffold51_189, scaffold51_541, scaffold51_42, scaffold51_568, scaffold51_291, scaffold51_379, scaffold51_621, scaffold51_212, scaffold51_180, scaffold51_252, scaffold51_170, scaffold51_313, scaffold51_234, scaffold51_68, scaffold51_674, scaffold51_547, scaffold51_628, scaffold51_159, scaffold51_509, scaffold51_643, scaffold51_276, scaffold51_502, scaffold51_123, scaffold51_240, scaffold51_154, scaffold51_120, scaffold51_521, scaffold51_358, scaffold51_287, scaffold51_689, scaffold51_155, scaffold51_277, scaffold51_47, scaffold51_633, scaffold51_401, scaffold51_255, scaffold51_649, scaffold51_202, scaffold51_640, scaffold51_453, scaffold51_290, scaffold51_470, scaffold51_82, scaffold51_171, scaffold51_372, scaffold51_359, scaffold51_97, scaffold51_673, scaffold51_664, scaffold51_96, scaffold51_362, scaffold51_65, scaffold51_93, scaffold51_301, scaffold51_392, scaffold51_115, scaffold51_41, scaffold51_63, scaffold51_321, scaffold51_627, scaffold51_342, scaffold51_477, scaffold51_304, scaffold51_2, scaffold51_130, scaffold51_570, scaffold51_298, scaffold51_525, scaffold51_460, scaffold51_106, scaffold51_165, scaffold51_557, scaffold51_598, scaffold51_407, scaffold51_659, scaffold51_604, scaffold51_593, scaffold51_210, scaffold51_34, scaffold51_199, scaffold51_308, scaffold51_560, scaffold51_404, scaffold51_562, scaffold51_239, scaffold51_346, scaffold51_201, scaffold51_561, scaffold51_519, scaffold51_337, scaffold51_270, 
scaffold51_599, scaffold51_493, scaffold51_197, scaffold51_203, scaffold51_49, scaffold51_235, scaffold51_143, scaffold51_623, scaffold51_334, scaffold51_335, scaffold51_646, scaffold51_299, scaffold51_85, scaffold51_66, scaffold51_45, scaffold51_294, scaffold51_31, scaffold51_24, scaffold51_456, scaffold51_296, scaffold51_132, scaffold51_162, scaffold51_125, scaffold51_118, scaffold51_32, scaffold51_697, scaffold51_30, scaffold51_153, scaffold51_474, scaffold51_662, scaffold51_500, scaffold51_485, scaffold51_319, scaffold51_69, scaffold51_284, scaffold51_463, scaffold51_19, scaffold51_44, scaffold51_390, scaffold51_84, scaffold51_78, scaffold51_26, scaffold51_70, scaffold51_28, scaffold51_11, scaffold51_99, scaffold51_134, scaffold51_329, scaffold51_127, scaffold51_105, scaffold51_27, scaffold51_409, scaffold51_23, scaffold51_14, scaffold51_114, scaffold51_395, scaffold51_149, scaffold51_230, scaffold51_222, scaffold51_261, scaffold51_566, scaffold51_540, scaffold51_681, scaffold51_546, scaffold51_666, scaffold51_438, scaffold51_524, scaffold51_600, scaffold51_58, scaffold51_292, scaffold51_267, scaffold51_181, scaffold51_602, scaffold51_467, scaffold51_50, scaffold51_626, scaffold51_480, scaffold51_374, scaffold51_377, scaffold51_682, scaffold51_478, scaffold51_430, scaffold51_405, scaffold51_216, scaffold51_668, scaffold51_582, scaffold51_555, scaffold51_228, scaffold51_323, scaffold51_161, scaffold51_507, scaffold51_585, scaffold51_349, scaffold51_577, scaffold51_552, scaffold51_211, scaffold51_264, scaffold51_425, scaffold51_365, scaffold51_129, scaffold51_387, scaffold51_79, scaffold51_486, scaffold51_135, scaffold51_508, scaffold51_33, scaffold51_393, scaffold51_86, scaffold51_124, scaffold51_696, scaffold51_94, scaffold51_110, scaffold51_295, scaffold51_136, scaffold51_104, scaffold51_18, scaffold51_259, scaffold51_62, scaffold51_101, scaffold51_54, scaffold51_462, scaffold51_35, scaffold51_4, scaffold51_48, scaffold51_164, scaffold51_564, scaffold51_354, scaffold51_303, scaffold51_39, scaffold51_316, scaffold51_386, scaffold51_20, scaffold51_59, scaffold51_126, scaffold51_389, scaffold51_396, scaffold51_428, scaffold51_333, scaffold51_146, scaffold51_652, scaffold51_588, scaffold51_238, scaffold51_422, scaffold51_157, scaffold51_631, scaffold51_195, scaffold51_415, scaffold51_320, scaffold51_632, scaffold51_548, scaffold51_472, scaffold51_83, scaffold51_339, scaffold51_447, scaffold51_302, scaffold51_282, scaffold51_408, scaffold51_198, scaffold51_670, scaffold51_444, scaffold51_686, scaffold51_410, scaffold51_690, scaffold51_606, scaffold51_484, scaffold51_357, scaffold51_590, scaffold51_306, scaffold51_448, scaffold51_325, scaffold51_403, scaffold51_274, scaffold51_644, scaffold51_331, scaffold51_218, scaffold51_676, scaffold51_352, scaffold51_383, scaffold51_178, scaffold51_271, scaffold51_455, scaffold51_236, scaffold51_399, scaffold51_117, scaffold51_107, scaffold51_694, scaffold51_619, scaffold51_601, scaffold51_51, scaffold51_288, scaffold51_586, scaffold51_629, scaffold51_615, scaffold51_678, scaffold51_645, scaffold51_614, scaffold51_40, scaffold51_591, scaffold51_576, scaffold51_574, scaffold51_90, scaffold51_341, scaffold51_193, scaffold51_636, scaffold51_618, scaffold51_183, scaffold51_479, scaffold51_580, scaffold51_190, scaffold51_608, scaffold51_553, scaffold51_616, scaffold51_691, scaffold51_539, scaffold51_217, scaffold51_607, scaffold51_473, scaffold51_91, scaffold51_367, scaffold51_515, scaffold51_449, scaffold51_262, scaffold51_273, scaffold51_465, 
scaffold51_361, scaffold51_464, scaffold51_397, scaffold51_655, scaffold51_443, scaffold51_638, scaffold51_597, scaffold51_196, scaffold51_603, scaffold51_514, scaffold51_672, scaffold51_531, scaffold51_573, scaffold51_565, scaffold51_150, scaffold51_116, scaffold51_247, scaffold51_360, scaffold51_451, scaffold51_184, scaffold51_36, scaffold51_450, scaffold51_476, scaffold51_138, scaffold51_142, scaffold51_52, scaffold51_278, scaffold51_269, scaffold51_406, scaffold51_253, scaffold51_182, scaffold51_527, scaffold51_496, scaffold51_243, scaffold51_677, scaffold51_522, scaffold51_257, scaffold51_611, scaffold51_437, scaffold51_314, scaffold51_660, scaffold51_693, scaffold51_347, scaffold51_227, scaffold51_260, scaffold51_191, scaffold51_442, scaffold51_225, scaffold51_163, scaffold51_256, scaffold51_208, scaffold51_215, scaffold51_209, scaffold51_16, scaffold51_113, scaffold51_121, scaffold51_242, scaffold51_424, scaffold51_64, scaffold51_363, scaffold51_204, scaffold51_233, scaffold51_55, scaffold51_286, scaffold51_87, scaffold51_12, scaffold51_7, scaffold51_542, scaffold51_266, scaffold51_92, scaffold51_141, scaffold51_76, scaffold51_651, scaffold51_402, scaffold51_108, scaffold51_67, scaffold51_46, scaffold51_137, scaffold51_634, scaffold51_380, scaffold51_468, scaffold51_265, scaffold51_385, scaffold51_503, scaffold51_249, scaffold51_650, scaffold51_695, scaffold51_665, scaffold51_624, scaffold51_534, scaffold51_133, scaffold51_609, scaffold51_595, scaffold51_445, scaffold51_656, scaffold51_528, scaffold51_400, scaffold51_72, scaffold51_148, scaffold51_29, scaffold51_411, scaffold51_536, scaffold51_692, scaffold51_10, scaffold51_275, scaffold51_684, scaffold51_658, scaffold51_351, scaffold51_498, scaffold51_248, scaffold51_457, scaffold51_398, scaffold51_254, scaffold51_200, scaffold51_310, scaffold51_336, scaffold51_452, scaffold51_6, scaffold51_490, scaffold51_112, scaffold51_98, scaffold51_13, scaffold51_88, scaffold51_223, scaffold51_268, scaffold51_492, scaffold51_214, scaffold51_311, scaffold51_559, scaffold51_226, scaffold51_613, scaffold51_488, scaffold51_391, scaffold51_687, scaffold51_345, scaffold51_381, scaffold51_497, scaffold51_495, scaffold51_458, scaffold51_435, scaffold51_307, scaffold51_119, scaffold51_526, scaffold51_22, scaffold51_499, scaffold51_530, scaffold51_187, scaffold51_581, scaffold51_172, scaffold51_685, scaffold51_263, scaffold51_617, scaffold51_446, scaffold51_305, scaffold51_563, scaffold51_122, scaffold51_285, scaffold51_338, scaffold51_318, scaffold51_224, scaffold51_344, scaffold51_340, scaffold51_572, scaffold51_482, scaffold51_179, scaffold51_587, scaffold51_289, scaffold51_244, scaffold51_612, scaffold51_376, scaffold51_671, scaffold51_416, scaffold51_439, scaffold51_641, scaffold51_520, scaffold51_647, scaffold51_434, scaffold51_517, scaffold51_56, scaffold51_21, scaffold51_80, scaffold51_71, scaffold51_152, scaffold51_501, scaffold51_429, scaffold51_81, scaffold51_511, scaffold51_688, scaffold51_596, scaffold51_663, scaffold51_620, scaffold51_532, scaffold51_571, scaffold51_471, scaffold51_368, scaffold51_461, scaffold51_575, scaffold51_654, scaffold51_469, scaffold51_414, scaffold51_378, scaffold51_535, scaffold51_481, scaffold51_309, scaffold51_356, scaffold51_494, scaffold51_657, scaffold51_38, scaffold51_551, scaffold51_3, scaffold51_158, scaffold51_57, scaffold51_518, scaffold51_43, scaffold51_436, scaffold51_324, scaffold51_109, scaffold51_639, scaffold51_558, scaffold51_505, scaffold51_300, scaffold51_343, scaffold51_567, scaffold51_550, 
scaffold51_679, scaffold51_100, scaffold51_413, scaffold51_370, scaffold51_635, scaffold51_545, scaffold51_610, scaffold51_512, scaffold51_454, scaffold51_427, scaffold51_245, scaffold51_506, scaffold51_675, scaffold51_549, scaffold51_583, scaffold51_622, scaffold51_317, scaffold51_95, scaffold51_75, scaffold51_156, scaffold51_8, scaffold51_144, scaffold51_537, scaffold51_168, scaffold51_5, scaffold51_173, scaffold51_213, scaffold51_523, scaffold51_544, scaffold51_556, Writing windows to: /media/sf_LinuxShare/Projects/Theresa/Hisat2/EV_Total_sRNA/Late/Pt_51_Mac/fastp/hisat2/190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_23bp.trim.Pt_51_Mac.sort.windows.s50.w100.d50.pileup.bed\nWriting windows to: /media/sf_LinuxShare/Projects/Theresa/Hisat2/EV_Total_sRNA/Late/Pt_51_Mac/fastp/hisat2/190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_23bp.trim.Pt_51_Mac.sort.collapsedwindows.s50.w100.d50.pileup.bed\n"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
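The `main` function imported from pygentoolbox performs the window scan internally; the sketch below only illustrates what the `step=50`, `winsize=100`, `cutoff=50` parameters control (a sliding window of mean per-base depth), and is not the package's actual implementation — the function name and toy data are invented for the example.

```python
import numpy as np

def windows_above_cutoff(depth, winsize=100, step=50, cutoff=50):
    """Return (start, end, mean_depth) for windows whose mean depth >= cutoff.

    depth: 1-D per-base coverage for one scaffold (e.g. built from a pysam pileup).
    """
    kept = []
    for start in range(0, max(1, len(depth) - winsize + 1), step):
        window = depth[start:start + winsize]
        mean_depth = float(window.mean())
        if mean_depth >= cutoff:
            kept.append((start, start + len(window), mean_depth))
    return kept

# toy scaffold: 500 bp with a well-covered island in the middle
toy_depth = np.zeros(500)
toy_depth[150:300] = 80.0
print(windows_above_cutoff(toy_depth))
```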
d099799a96b628cc0a573078c095c04846ebc6bb | 9,419 | ipynb | Jupyter Notebook | PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb | peternewman22/Python_Courses | 07a798b6f264fc6069eb1205c9d429f00fb54bc5 | [
"MIT"
] | null | null | null | PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb | peternewman22/Python_Courses | 07a798b6f264fc6069eb1205c9d429f00fb54bc5 | [
"MIT"
] | null | null | null | PlotlyandPython/Lessons/(01) Intro to Python/Notebooks/(12) For Loops (2) - Looping through the items in a sequence.ipynb | peternewman22/Python_Courses | 07a798b6f264fc6069eb1205c9d429f00fb54bc5 | [
"MIT"
] | null | null | null | 24.592689 | 498 | 0.546236 | [
[
[
"# For Loops (2) - Looping through the items in a sequence",
"_____no_output_____"
],
[
"In the last lesson we introduced the concept of a For loop and learnt how we can use them to repeat a section of code. We learnt how to write a For loop that repeats a piece of code a specific number of times using the <code>range()</code> function, and saw that we have to create a variable to keep track of our position in the loop (conventionally called <code>i</code>). We also found out how to implement if-else statements within our loop to change which code is run inside the loop.\n\nAs well as writing a loop which runs a specific number of times, we can also create a loop which acts upon each item in a sequence. In this lesson we'll learn how to implement this functionality and find out how to use this knowledge to help us make charts with Plotly.\n\n## Looping through each item in a sequence\n\nBeing able to access each item in turn in a sequence is a really useful ability and one which we'll use often in this course. The syntax is very similar to that which we use to loop through the numbers in a range:\n```` python\nfor <variable name> in <sequence>:\n <code to run>\n````\n\nThe difference here is that the variable which keeps track of our position in the loop does not increment by 1 each time the loop is run. Instead, the variable takes the value of each item in the sequence in turn:",
"_____no_output_____"
]
],
[
[
"list1 = ['a', 'b', 'c', 'd', 'e']\n\nfor item in list1:\n print(item)",
"a\nb\nc\nd\ne\n"
]
],
[
[
"It's not important what we call this variable:",
"_____no_output_____"
]
],
[
[
"for banana in list1:\n print(banana)",
"a\nb\nc\nd\ne\n"
]
],
[
[
"But it's probably a good idea to call the variable something meaningful:",
"_____no_output_____"
]
],
[
[
"data = [20, 50, 10, 67]\n\nfor d in data:\n print(d)",
"20\n50\n10\n67\n"
]
],
[
[
"## Using these loops\n\nWe can use these loops in conjunction with other concepts we have already learnt. For example, imagine that you had a list of proportions stored as decimals, but that you needed to create a new list to store them as whole numbers.\n\nWe can use <code>list.append()</code> with a for loop to create this new list. First, we have to create an empty list to which we'll append the percentages:",
"_____no_output_____"
]
],
[
[
"proportions = [0.3, 0.45, 0.99, 0.23, 0.46]\n\npercentages = []",
"_____no_output_____"
]
],
[
[
"Next, we'll loop through each item in proportions, multiply it by 100 and append it to percentages:",
"_____no_output_____"
]
],
[
[
"for prop in proportions:\n percentages.append(prop * 100)\n \nprint(percentages)",
"[30.0, 45.0, 99.0, 23.0, 46.0]\n"
]
],
[
[
"## Using for loops with dictionaries\n\nWe've seen how to loop through each item in a list. We will also make great use of the ability to loop through the keys and values in a dictionary.\n\nIf you remember from the dictionaries lessons, we can get the keys and values in a dictionary by using <code>dict.items()</code>. We can use this in conjunction with a for loop to manipulate each item in a dictionary. This is something which we'll use often; we'll often have data for several years stored in a dictionary; looping through these items will let us plot the data really easily.\n\nIn the cell below, I've created a simple data structure which we'll access using a for loop. Imagine that this data contains sales figures for the 4 quarters in a year:",
"_____no_output_____"
]
],
[
[
"data = {2009 : [10,20,30,40],\n 2010 : [15,30,45,60],\n 2011 : [7,14,21,28],\n 2012 : [5,10,15,20]}",
"_____no_output_____"
]
],
[
[
"We can loop through the keys by using <code>dict.keys()</code>:",
"_____no_output_____"
]
],
[
[
"for k in data.keys():\n print(k)",
"2009\n2010\n2011\n2012\n"
]
],
[
[
"And we can loop through the values (which are lists):",
"_____no_output_____"
]
],
[
[
"for v in data.values():\n print(v)",
"[10, 20, 30, 40]\n[15, 30, 45, 60]\n[7, 14, 21, 28]\n[5, 10, 15, 20]\n"
]
],
[
[
"We can loop through them both together:",
"_____no_output_____"
]
],
[
[
"for k, v in data.items():\n print(k, v)",
"2009 [10, 20, 30, 40]\n2010 [15, 30, 45, 60]\n2011 [7, 14, 21, 28]\n2012 [5, 10, 15, 20]\n"
]
],
[
[
"Having the data available to compare each year is really handy, but it might also be helpful to store them as one long list so we can plot the data and see trends over time. \n\nFirst, we'll make a new list to store all of the data items:",
"_____no_output_____"
]
],
[
[
"allYears = []",
"_____no_output_____"
]
],
[
[
"And then we'll loop through the dictionary and concatenate each year's data to the <code>allYears</code> list:",
"_____no_output_____"
]
],
[
[
"for v in data.values():\n allYears = allYears + v\n \nprint(allYears)",
"[10, 20, 30, 40, 15, 30, 45, 60, 7, 14, 21, 28, 5, 10, 15, 20]\n"
]
],
[
[
"### What have we learnt this lesson?",
"_____no_output_____"
],
[
"In this lesson we've seen how to access each item in a sequence. We've learnt that the variable that keeps track of our position in the loop stores each value in the sequence in turn. We've seen how to apply this knowledge to loop through a dictionary of data and concatenate data for several years into one long list.",
"_____no_output_____"
],
[
"If you have any questions, please ask in the comments section or email <a href=\"mailto:[email protected]\">[email protected]</a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0997b4ddb05a6bd98b9c428e248e9b0c483dafc | 1,495 | ipynb | Jupyter Notebook | post-processing.ipynb | Sustainable-Science/northbay | 41190a369cdd9d474f31933f92bdc7764763bed4 | [
"MIT"
] | null | null | null | post-processing.ipynb | Sustainable-Science/northbay | 41190a369cdd9d474f31933f92bdc7764763bed4 | [
"MIT"
] | null | null | null | post-processing.ipynb | Sustainable-Science/northbay | 41190a369cdd9d474f31933f92bdc7764763bed4 | [
"MIT"
] | null | null | null | 21.985294 | 91 | 0.573244 | [
[
[
"### Import modules",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy import interpolate\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D \nfrom matplotlib import cm\nimport os\nimport sys\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning) ",
"_____no_output_____"
],
[
"sys.path.append(os.path.abspath(os.path.join('lib', 'xbeach-toolbox', 'scripts')))\n\nfrom xbeachtools import xb_read_output\nplt.style.use(os.path.join('lib', 'xbeach-toolbox', 'scripts', 'xb.mplstyle'))\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
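The post-processing notebook above only sets up imports and the xbeach-toolbox path; the actual reading of XBeach output goes through `xb_read_output`, whose interface is not shown here. Purely as a hedged illustration of what the `scipy.interpolate` / `Axes3D` / `cm` imports are typically used for — regridding scattered model output onto a regular mesh and plotting it as a surface — a self-contained sketch with synthetic data might look like the following (all variable names and the synthetic field are assumptions, not taken from the notebook):

```python
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

# synthetic scattered samples standing in for model output (x, y, z)
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 500)
y = rng.uniform(0, 50, 500)
z = np.sin(x / 20.0) * np.cos(y / 10.0)

# regrid the scattered points onto a regular mesh
xi, yi = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 50, 51))
zi = interpolate.griddata((x, y), z, (xi, yi), method="linear", fill_value=0.0)

# 3-D surface plot of the regridded field
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(xi, yi, zi, cmap=cm.viridis)
ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_zlabel("z")
plt.show()
```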