hexsha (string, 40) | size (int64, 6-14.9M) | ext (string, 1 class) | lang (string, 1 class) | max_stars_repo_path (string, 6-260) | max_stars_repo_name (string, 6-119) | max_stars_repo_head_hexsha (string, 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64, 1-191k, nullable) | max_stars_repo_stars_event_min_datetime (string, 24, nullable) | max_stars_repo_stars_event_max_datetime (string, 24, nullable) | max_issues_repo_path (string, 6-260) | max_issues_repo_name (string, 6-119) | max_issues_repo_head_hexsha (string, 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64, 1-67k, nullable) | max_issues_repo_issues_event_min_datetime (string, 24, nullable) | max_issues_repo_issues_event_max_datetime (string, 24, nullable) | max_forks_repo_path (string, 6-260) | max_forks_repo_name (string, 6-119) | max_forks_repo_head_hexsha (string, 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64, 1-105k, nullable) | max_forks_repo_forks_event_min_datetime (string, 24, nullable) | max_forks_repo_forks_event_max_datetime (string, 24, nullable) | avg_line_length (float64, 2-1.04M) | max_line_length (int64, 2-11.2M) | alphanum_fraction (float64, 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d0af21f60a365b0644d5a82ee24b9af8d5446a1f | 3,058 | ipynb | Jupyter Notebook | tests/nb_export_builds/nb_water_export/BA.00-References.ipynb | rmsrosa/nbjoint | 7019ff336e4a7bb1f6ed20da5fd12b9f702c424a | [
"MIT"
] | null | null | null | tests/nb_export_builds/nb_water_export/BA.00-References.ipynb | rmsrosa/nbjoint | 7019ff336e4a7bb1f6ed20da5fd12b9f702c424a | [
"MIT"
] | null | null | null | tests/nb_export_builds/nb_water_export/BA.00-References.ipynb | rmsrosa/nbjoint | 7019ff336e4a7bb1f6ed20da5fd12b9f702c424a | [
"MIT"
] | null | null | null | 29.403846 | 521 | 0.592871 | [
[
[
"<!--HEADER-->\n[*NBJoint test on a collection of notebooks about some thermodynamic properperties of water*](https://github.com/rmsrosa/nbjoint)",
"_____no_output_____"
],
[
"<!--BADGES-->\n<a href=\"https://nbviewer.jupyter.org/github/rmsrosa/nbjoint/blob/master/tests/nb_export_builds/nb_water_md/BA.00-References.md\"><img align=\"left\" src=\"https://img.shields.io/badge/view-markdown-orange\" alt=\"View Markdown\" title=\"View Markdown\"></a><a href=\"https://nbviewer.jupyter.org/github/rmsrosa/nbjoint/blob/master/tests/nb_export_builds/nb_water_pdf/BA.00-References.pdf\"><img align=\"left\" src=\"https://img.shields.io/badge/view-pdf-blueviolet\" alt=\"View PDF\" title=\"View PDF\"></a> ",
"_____no_output_____"
],
[
"<!--NAVIGATOR-->\n[<- Choosing the Best Fit with AIC](05.00-Best_AIC_Fitting.ipynb) | [Water Contents](00.00-Water_Contents.ipynb) | [References](BA.00-References.ipynb) \n\n---\n",
"_____no_output_____"
],
[
"# References",
"_____no_output_____"
],
[
"- G. K. Batchelor (2000); \"An Introduction to Fluid Dynamics\"; Cambridge University Press, UK.\n- E. A. Bender (2000), \"An Introduction to Mathematical Modeling\"; (Dover Books on Computer Science) Dover Publications; 1st edition.\n- K. P. Burnham, D. R. Anderson (2002), \"Model Selection and Multimodel Inference: A practical information-theoretic approach\"; Springer-Verlag; 2nd edition.\n- G. H. Golub and C. F. Van Loan (1996), \"Matrix Computations\", Johns Hopkins University Press, 3rd edition.\n- L. N. Trefethen and D. Bau III (1997); \"Numerical Linear Algebra\"; SIAM: Society for Industrial and Applied Mathematics; 1st edition.",
"_____no_output_____"
],
[
"<!--NAVIGATOR-->\n\n---\n[<- Choosing the Best Fit with AIC](05.00-Best_AIC_Fitting.ipynb) | [Water Contents](00.00-Water_Contents.ipynb) | [References](BA.00-References.ipynb) ",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0af223db49112f27303f9e5bbd0b1710bf453b8 | 307,527 | ipynb | Jupyter Notebook | _notebooks/2020-12-10-Tears-In-Rain.ipynb | bailey-deep-learning/im_sorry_dave | 61392cb99c3aeeec84af2738f34cd7f06eac9e12 | [
"Apache-2.0"
] | null | null | null | _notebooks/2020-12-10-Tears-In-Rain.ipynb | bailey-deep-learning/im_sorry_dave | 61392cb99c3aeeec84af2738f34cd7f06eac9e12 | [
"Apache-2.0"
] | 1 | 2022-02-26T10:18:01.000Z | 2022-02-26T10:18:01.000Z | _notebooks/2020-12-10-Tears-In-Rain.ipynb | bailey-deep-learning/im_sorry_dave | 61392cb99c3aeeec84af2738f34cd7f06eac9e12 | [
"Apache-2.0"
] | null | null | null | 58.056825 | 17,248 | 0.719703 | [
[
[
"# All those moments will be los(s)t in time, like tears in rain.\n> I am completely operational, and all my circuits are functioning perfectly.\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [jupyter]\n- image: images/posts/2020-12-10-Tears-In-Rain/Tears-In-Rain.jpg",
"_____no_output_____"
]
],
[
[
"#hide\n!pip install -Uqq fastbook\nimport fastbook\nfastbook.setup_book()",
"_____no_output_____"
],
[
"#hide\nfrom fastai.vision.all import *\nfrom fastbook import *\n\nmatplotlib.rc('image', cmap='Greys')",
"_____no_output_____"
]
],
[
[
"# Under the Hood: Training a Digit Classifier",
"_____no_output_____"
],
[
"This one is the big one. Now we have made it past the introduction of the course it is time to get under the hood and start implementing some of the functionality for ourselves. I am going to be taking quite thorough notes as I want to make sure I understand everything before I move onto Chapter 5.",
"_____no_output_____"
],
[
"So far all the heavy lifting has been done for us, the fastai library is a nice high level API written on top of PyTorch and abstracted in such a way that the \"magic\" / \"obfuscation\" is a little hard to determine what is actually happening. So in this chapter we will be making a hand written digit classifier, a simple classifier that can determine whether or not a 28px * 28px image of a hand written digit is either a **3** or a **7**. We will try to figure out a good baseline from which we can assess our model and then proceed to write out each element of the model in simple python, before seeing it all wrapped up in the nice and tidy fastai API.",
"_____no_output_____"
],
[
"## Pixels: The Foundations of Computer Vision",
"_____no_output_____"
],
[
"Here we are going to take a look at what we are actually dealing with when it comes to our hand written digits. Thankfully there is a great training set put together by Yann LeCun called the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) or Modified National Institute of Standards and Technology database. It contains thousands of individual hand written digits that have been collated as 28px * 28px grayscale images. This is the data we will be using to build our classifier. ",
"_____no_output_____"
],
[
"The team at fastai have made it simple to download and unzip the data we are going to use for this lesson. Instead of having to manually go to the [fastai datasets](https://course.fast.ai/datasets) documentation page, download the tar file, unzip it in a location accessible to your notebooks they have a handy [utility](https://github.com/fastai/fastai/blob/715c027b0ad8586feda09e29fb2b483dfe30c910/fastai/data/external.py) that will do all of that in single line of code:",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_SAMPLE)",
"_____no_output_____"
],
[
"#hide\nPath.BASE_PATH = path",
"_____no_output_____"
]
],
[
[
"The have also added the UNIX [ls()](https://en.wikipedia.org/wiki/Ls) function to Python [path](https://docs.python.org/3/library/pathlib.html) to give a handy way to see what is in our directory. Here you can see we have our training and validation directory along with a label.csv file. ",
"_____no_output_____"
]
],
[
[
"path.ls()",
"_____no_output_____"
]
],
[
[
"Inside each of the training and validation directories we have a folder which contains all of our **3's** and **7's** respectively",
"_____no_output_____"
]
],
[
[
"(path/'train').ls()",
"_____no_output_____"
]
],
[
[
"Now we can create a list of the file paths and store them in variables",
"_____no_output_____"
]
],
[
[
"threes = (path/'train'/'3').ls().sorted()\nsevens = (path/'train'/'7').ls().sorted()\nthrees",
"_____no_output_____"
]
],
[
[
"Let's take a look at what we are working with. By indexing into the above **threes** list we can retrive the first path in the list. Using [PIL](https://pillow.readthedocs.io/en/stable/) the Python Imaging Library, we can display the data at that path.",
"_____no_output_____"
]
],
[
[
"im3_path = threes[1]\nim3 = Image.open(im3_path)\nim3",
"_____no_output_____"
]
],
[
[
"At this point it may not be entirely intuative what image information is. It is just an array of values for 0 to 255 for an 8-bit grayscale image like we have above. By casting into a Numpy array and taking a slice we can se what that might look like.",
"_____no_output_____"
]
],
[
[
"array(im3)[4:10,4:10]",
"_____no_output_____"
]
],
[
[
"As we have seen earlier in the course the PyTorch tensors have very similar functionality to Numpy arrays, but have the added benefit of being pushed to the GPU, this is a optimization and can be used in replacement of standard Numpy arrays. As a huge fan of Numpy my natural propensity is to use them all the time. But I think I need to reconsider . . .",
"_____no_output_____"
]
],
[
[
"tensor(im3)[4:10,4:10]",
"_____no_output_____"
]
],
[
[
"By loading the tensor into a Pandas Dataframe we are able to see what the pixel values look like by using the .background_gradients method",
"_____no_output_____"
]
],
[
[
"im3_t = tensor(im3)\ndf = pd.DataFrame(im3_t[4:15,4:22])\ndf.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"## First Try: Pixel Similarity",
"_____no_output_____"
],
[
"Before we start building our model we should consider if there are feasible alternatives to our problem. Working with Deep Learning is like weilding a very large hammer and every problem can appear to us as a nail. But for small self contained problem that do not require scale, simple alternatives are preferable. Here we are exploring an alternative to determine a baseline. What results can we achieve with a naive approach, is this sufficient to solve our problem and can they be improved by the use of a deep learning model. This is an important step, making sure that you are making headway on your problem is more important than having a shiny new model.",
"_____no_output_____"
],
[
"### Problem Statement",
"_____no_output_____"
],
[
"Can we create a simple baseline application that can classify an unseen 28px * 28px image of a **3** or a **7**. Theoretically, this is a simpler task than differentiating between a **3** and an **8** for example because both have similar curves at the top of the number and the bottom which might be difficult to disambiguate. If we try making a `mean` image from all the data in the **threes** folder and another for the **sevens** and compare the `pixel similarity` of an unseen image with this `mean` image we might be able to classify it with some manner of accuracy ",
"_____no_output_____"
],
[
"### Process",
"_____no_output_____"
],
[
"Let's create two more arrays that store a tensor of each image in our **threes** and **sevens** folders.",
"_____no_output_____"
]
],
[
[
"seven_tensors = [tensor(Image.open(o)) for o in sevens]\nthree_tensors = [tensor(Image.open(o)) for o in threes]\nlen(three_tensors),len(seven_tensors)",
"_____no_output_____"
]
],
[
[
"In a similar way to the method above to cast our image to an array or tensor, we can use the fastai `show_image` method to take a tensor and display it as an image, this will be useful for debugging",
"_____no_output_____"
]
],
[
[
"show_image(three_tensors[1]);",
"_____no_output_____"
]
],
[
[
"Our next operation is to normalize the values of the images between 0 and 1, first we stack the images on top of one another using `torch.stack`, convert the integer values in the tensor stack to a float to ensure that values aren't rounded after our division operation, and then we divide the image by 255. ",
"_____no_output_____"
],
[
"By looking at the shape of the torch tensor stack we can see that we have 6131 image tensors that have 28 rows and 28 columns",
"_____no_output_____"
]
],
[
[
"stacked_sevens = torch.stack(seven_tensors).float()/255\nstacked_threes = torch.stack(three_tensors).float()/255\nstacked_threes.shape",
"_____no_output_____"
]
],
[
[
"The length of the tensor.shape is the same as the tensor rank or number of dimensions, we can see that below by using the `ndim` method. This is a fast way of checking that your tensors have the correct rank before moving forward building your model or baseline ",
"_____no_output_____"
]
],
[
[
"len(stacked_threes.shape)",
"_____no_output_____"
],
[
"stacked_threes.ndim",
"_____no_output_____"
]
],
[
[
"Now that we have all of our image tensors stack on top of one another and normalized, we can use the `mean` method to find the average. By passing in the argument 0, we are telling the operation to take the `mean` across the first axis. In this example this is the `mean` of the 6131 images.",
"_____no_output_____"
]
],
[
[
"mean3 = stacked_threes.mean(0)\nshow_image(mean3);",
"_____no_output_____"
],
[
"mean7 = stacked_sevens.mean(0)\nshow_image(mean7);",
"_____no_output_____"
]
],
[
[
"Now that we have our `mean` images, we can compare one of the images against them to see what the `pixel similarity` is and determine if the image is either a **3** or a **7**",
"_____no_output_____"
]
],
[
[
"a_3 = stacked_threes[1]\nshow_image(a_3);",
"_____no_output_____"
]
],
[
[
"### L1 and L2 norm",
"_____no_output_____"
],
[
"- Take the mean of the absolute value of differences (absolute value is the function that replaces negative values with positive values). This is called the mean absolute difference or L1 norm\n- Take the mean of the square of differences (which makes everything positive) and then take the square root (which undoes the squaring). This is called the root mean squared error (RMSE) or L2 norm.",
"_____no_output_____"
],
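[
"As formulas, writing $a_i$ and $b_i$ for corresponding pixels of the two images (my own summary of the two definitions above):\n\n$$\\mathrm{L1}(a,b) = \\frac{1}{n}\\sum_{i=1}^{n} \\lvert a_i - b_i \\rvert, \\qquad \\mathrm{L2}(a,b) = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} (a_i - b_i)^2}$$",
"_____no_output_____"
],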
[
"To determine the `pixel similarity` of our test image and our mean image we are going to use the `mean absolte difference` or the `L1 Norm` and the `root mean squared error` or the `L2 Norm`. ",
"_____no_output_____"
],
[
"If we were simply going to subtract one for the other we could end up with negative values, which once averaged would negate positive values and give us inconcluse information. By comparing the test image againt the mean of the **3's** and the mean of the **7's** we can see that error is lower when comparing to the mean of the **3's** giving us a classification that this image is of a **3**",
"_____no_output_____"
]
],
[
[
"dist_3_abs = (a_3 - mean3).abs().mean()\ndist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()\ndist_3_abs,dist_3_sqr",
"_____no_output_____"
],
[
"dist_7_abs = (a_3 - mean7).abs().mean()\ndist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()\ndist_7_abs,dist_7_sqr",
"_____no_output_____"
]
],
[
[
"Here we can see by using the `l1_loss` and the `mse_loss` methods we are getting the same answers our implementations above",
"_____no_output_____"
]
],
[
[
"F.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt()",
"_____no_output_____"
]
],
[
[
"## Computing Metrics Using Broadcasting",
"_____no_output_____"
],
[
"At this point we could simply use a loop and iterate through each of the images in the validation set, calculate the `L1` loss for each image and determine whether or not our baseline application determines the number to be a **3** or a **7**, check the prediction against and determine an accuracy. This would take a very long time and wouldn't make use of the GPU accelleration needed for Deep Learning.",
"_____no_output_____"
],
[
"Here we are going to look at the secret sauce that makes Python (an interpreted language) powerful enough to be used in Deep Learning applications. What is that secret sauce you ask? `Broadcasting`",
"_____no_output_____"
],
[
"So lets start by stacking the images from our validation directories in the same way we did with our training images. We will normalize them and check the shape of the tensor stack",
"_____no_output_____"
]
],
[
[
"valid_3_tens = torch.stack([tensor(Image.open(o)) \n for o in (path/'valid'/'3').ls()])\nvalid_3_tens = valid_3_tens.float()/255\nvalid_7_tens = torch.stack([tensor(Image.open(o)) \n for o in (path/'valid'/'7').ls()])\nvalid_7_tens = valid_7_tens.float()/255\nvalid_3_tens.shape,valid_7_tens.shape",
"_____no_output_____"
]
],
[
[
"Here we are going to write our `L1 Norm` in the form of a function, here taking the mean across the rows and columns of the absolute difference of the tow images.",
"_____no_output_____"
]
],
[
[
"def mnist_distance(a,b): return (a-b).abs().mean((-1,-2))\nmnist_distance(a_3, mean3)",
"_____no_output_____"
]
],
[
[
"But by using `Broadcasting`, we can instead use our entire tensor stack and compare it against the `mean` image of the **3** all at once. What is happening under the hood is that PyTorch is making a \"virtual\" copy of the mean image tensor for every image tensor in the validation stack so it can determine the `pixel similarity` for all of them at once. What it returns is a tensor of all of the results. ",
"_____no_output_____"
]
],
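[
[
"A minimal sketch of the shape mechanics (my own example, not from the notebook): subtracting a rank-2 image from a rank-3 stack broadcasts the image across every item in the stack.\n\n```python\nimport torch\nstack = torch.ones(5, 28, 28)  # stand-in for a validation stack\nimg = torch.zeros(28, 28)      # stand-in for a mean image\nprint((stack - img).shape)     # torch.Size([5, 28, 28])\n```",
"_____no_output_____"
]
],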
[
[
"valid_3_dist = mnist_distance(valid_3_tens, mean3)\nvalid_3_dist, valid_3_dist.shape",
"_____no_output_____"
]
],
[
[
"Now lets create a simple definition to determine if the prediction is a **3** if the result is `False` then we assume that the image is classified as a **7**",
"_____no_output_____"
],
[
"If the `mnist_distance` measured againt the `mean` **3** is lower than the `mean` **7** then the function will return `True`",
"_____no_output_____"
]
],
[
[
"def is_3(x): return mnist_distance(x,mean3) < mnist_distance(x,mean7)",
"_____no_output_____"
]
],
[
[
"Here by passing in our test image we can see that it returns `True`, but by casting it to a float we can get a value instead",
"_____no_output_____"
]
],
[
[
"is_3(a_3), is_3(a_3).float()",
"_____no_output_____"
]
],
[
[
"Taking advantage of `Broadcasting` we can pass the entire tensor stack to the function and we get back an array of predictions",
"_____no_output_____"
]
],
[
[
"is_3(valid_3_tens)",
"_____no_output_____"
]
],
[
[
"Lets check the accuracy of our classification application. We are expecting every image tensor in the `valid_3_tens` tensor to return true and all the image tensors in the `valid_7_tens` tensor to return false. We can convert them to a floating point value and take the `mean` to determine the accuracy",
"_____no_output_____"
]
],
[
[
"accuracy_3s = is_3(valid_3_tens).float() .mean()\naccuracy_7s = (1 - is_3(valid_7_tens).float()).mean()\n\naccuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2",
"_____no_output_____"
]
],
[
[
"Wow! It looks like our naive `pixel similarity` application gives us a **95%** accuracy on this task! ",
"_____no_output_____"
],
[
"We have only used PyTorch for its power tensor operations in this task and proven that with a simple baseline we can determine a highly accurate classifier. Let's see if making a Deep Learning model we can improve the accuracy even further!",
"_____no_output_____"
],
[
"## Stochastic Gradient Descent (SGD)",
"_____no_output_____"
],
[
"Up until this point we have a classifier but it really does follow the description by Arthur Samuel ",
"_____no_output_____"
],
[
"**Suppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would \"learn\" from its experience.**",
"_____no_output_____"
],
[
"To turn our function into a machine learning classifier we will need:",
"_____no_output_____"
],
[
"- Initialize the weights.\n- For each image, use these weights to predict whether it appears to be a 3 or a 7.\n- Based on these predictions, calculate how good the model is (its loss).\n- Calculate the gradient, which measures for each weight, how changing that weight would change the loss\n- Step (that is, change) all the weights based on that calculation.\n- Go back to the step 2, and repeat the process.\n- Iterate until you decide to stop the training process (for instance, because the model is good enough or you don't want to wait any longer).",
"_____no_output_____"
]
],
[
[
"gv('''\ninit->predict->loss->gradient->step->stop\nstep->predict[label=repeat]\n''')",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"To understand `SGD` a little better lets start with a simpler example using this quadratic function:",
"_____no_output_____"
]
],
[
[
"def f(x): return x**2",
"_____no_output_____"
]
],
[
[
"Let's plot what that function looks like:",
"_____no_output_____"
]
],
[
[
"plot_function(f, 'x', 'x**2')",
"/opt/conda/envs/fastai/lib/python3.8/site-packages/fastbook/__init__.py:73: UserWarning: Not providing a value for linspace's steps is deprecated and will throw a runtime error in a future release. This warning will appear only once per process. (Triggered internally at /opt/conda/conda-bld/pytorch_1603729096996/work/aten/src/ATen/native/RangeFactories.cpp:23.)\n x = torch.linspace(min,max)\n"
]
],
[
[
"The sequence of steps we described earlier starts by picking some random value for a parameter, and calculating the value of the loss:",
"_____no_output_____"
]
],
[
[
"plot_function(f, 'x', 'x**2')\nplt.scatter(-1.5, f(-1.5), color='red');",
"_____no_output_____"
]
],
[
[
"Now we look to see what would happen if we increased or decreased our parameter by a little bit—the adjustment. This is simply the slope at a particular point:",
"_____no_output_____"
],
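[
"As a rough numeric check (my own sketch, not part of the original text), the slope at a point can be estimated with a finite difference:\n\n```python\ndef f(x): return x**2\n\nx, eps = -1.5, 1e-6\nslope = (f(x + eps) - f(x - eps)) / (2 * eps)\nprint(slope)  # approximately -3.0, i.e. 2*x\n```",
"_____no_output_____"
],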
[
"",
"_____no_output_____"
],
[
"We can change our weight by a little in the direction of the slope, calculate our loss and adjustment again, and repeat this a few times. Eventually, we will get to the lowest point on our curve:",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"This basic idea goes all the way back to Isaac Newton, who pointed out that we can optimize arbitrary functions in this way",
"_____no_output_____"
],
[
"### Calculating Gradients",
"_____no_output_____"
],
[
"*This is a lot of text from this portion of the book, its **really** important and I getting it right is paramount*",
"_____no_output_____"
],
[
"\"The one magic step is the bit where we calculate the gradients. As we mentioned, we use calculus as a performance optimization; it allows us to more quickly calculate whether our loss will go up or down when we adjust our parameters up or down. In other words, the gradients will tell us how much we have to change each weight to make our model better.\"",
"_____no_output_____"
],
[
"Thankfully PyTorch is a very powerful auto-differential library. It utilises the [Chain Rule](https://medium.com/machine-learning-and-math/deep-learning-and-chain-rule-of-calculus-80896a1e91f9) to [calculate the derivative](https://pytorch.org/docs/stable/notes/autograd.html) of our functions.",
"_____no_output_____"
],
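[
"For reference, the chain rule says that if $y = f(u)$ and $u = g(x)$, then\n\n$$\\frac{dy}{dx} = \\frac{dy}{du} \\cdot \\frac{du}{dx},$$\n\nwhich is exactly the rule backpropagation applies through each step of a computation.",
"_____no_output_____"
],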
[
"First, let's pick a tensor value which we want gradients at:",
"_____no_output_____"
]
],
[
[
"xt = tensor(3.).requires_grad_()",
"_____no_output_____"
]
],
[
[
"Notice the special method requires_grad_? That's the magical incantation we use to tell PyTorch that we want to calculate gradients with respect to that variable at that value. It is essentially tagging the variable, so PyTorch will remember to keep track of how to compute gradients of the other, direct calculations on it that you will ask for.",
"_____no_output_____"
],
[
"This API might throw you off if you're coming from math or physics. In those contexts the \"gradient\" of a function is just another function (i.e., its derivative), so you might expect gradient-related APIs to give you a new function. **But in deep learning, \"gradients\" usually means the value of a function's derivative at a particular argument value**. The PyTorch API also puts the focus on the argument, not the function you're actually computing the gradients of. It may feel backwards at first, but it's just a different perspective.",
"_____no_output_____"
],
[
"Now we calculate our function with that value. Notice how PyTorch prints not just the value calculated, but also a note that it has a gradient function it'll be using to calculate our gradients when needed:",
"_____no_output_____"
]
],
[
[
"yt = f(xt)\nyt",
"_____no_output_____"
]
],
[
[
"Calculating the derivative value of our function at this input tensor is simple, we just call the `backward` method.",
"_____no_output_____"
]
],
[
[
"yt.backward()",
"_____no_output_____"
]
],
[
[
"The \"backward\" here refers to backpropagation, which is the name given to the process of calculating the derivative of each layer.",
"_____no_output_____"
],
[
"We can now view the gradients by checking the grad attribute of our tensor:",
"_____no_output_____"
]
],
[
[
"xt.grad",
"_____no_output_____"
]
],
[
[
"If you remember your high school calculus rules, the derivative of x***2 is 2*x, and we have x=3, so the gradients should be 2*3=6, which is what PyTorch calculated for us!\n\nNow we'll repeat the preceding steps, but with a vector argument for our function:",
"_____no_output_____"
]
],
[
[
"xt = tensor([3.,4.,10.]).requires_grad_()\nxt",
"_____no_output_____"
]
],
[
[
"And we'll add sum to our function so it can take a vector (i.e., a rank-1 tensor), and return a scalar (i.e., a rank-0 tensor):",
"_____no_output_____"
]
],
[
[
"def f(x): return (x**2).sum()\n\nyt = f(xt)\nyt",
"_____no_output_____"
]
],
[
[
"Our gradients are 2*xt, as we'd expect!",
"_____no_output_____"
]
],
[
[
"yt.backward()\nxt.grad",
"_____no_output_____"
]
],
[
[
"The gradients only tell us the slope of our function, they don't actually tell us exactly how far to adjust the parameters. But it gives us some idea of how far; if the slope is very large, then that may suggest that we have more adjustments to do, whereas if the slope is very small, that may suggest that we are close to the optimal value.",
"_____no_output_____"
],
[
"### Stepping With a Learning Rate",
"_____no_output_____"
],
[
"Because our gradient only shows the slope, or the direction, in which we need to change our parameters. We will need to figure out how much we move in this direction. This is where we introduce the learning rate. The learning rate is often a number between 0.001 and 0.1, although it could be anything. ",
"_____no_output_____"
],
[
"Once you've picked a learning rate, you can adjust your parameters using this simple function:",
"_____no_output_____"
]
],
[
[
"w -= gradient(w) * lr",
"_____no_output_____"
]
],
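[
[
"A worked number (my own example): for $f(w) = w^2$ at $w = 3$, the gradient is $2w = 6$; with a learning rate of $0.1$ the update is $w \\leftarrow 3 - 0.1 \\cdot 6 = 2.4$, a step toward the minimum at $w = 0$.",
"_____no_output_____"
]
],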
[
[
"If you pick a learning rate that's too low, it can mean having to do a lot of steps",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"But picking a learning rate that's too high is even worse—it can actually result in the loss getting worse",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"If the learning rate is too high, it may also \"bounce\" around, rather than actually diverging",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### An End-to-End SGD Example",
"_____no_output_____"
],
[
"Let's work through a practical example from end-to-end. First we will make a float tensor that represents our time variable. Each index is a time in seconds",
"_____no_output_____"
]
],
[
[
"time = torch.arange(0,20).float(); time",
"_____no_output_____"
]
],
[
[
"Here we create a quadratic function, a function of the form a*(time***2)+(b*time)+c, and add some noiose to simulate realworld measurments",
"_____no_output_____"
]
],
[
[
"speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1\nplt.scatter(time,speed);",
"_____no_output_____"
]
],
[
[
"We want to distinguish clearly between the function's input (the time when we are measuring the coaster's speed) and its parameters (the values that define which quadratic we're trying). So, let's collect the parameters in one argument and thus separate the input, t, and the parameters, params, in the function's signature:",
"_____no_output_____"
]
],
[
[
"def f(t, params):\n a,b,c = params\n return a*(t**2) + (b*t) + c",
"_____no_output_____"
]
],
[
[
"We need to determine which loss function we would like to use. For continuous data, it's common to use mean squared error:",
"_____no_output_____"
]
],
[
[
"def mse(preds, targets): return ((preds-targets)**2).mean().sqrt()",
"_____no_output_____"
]
],
[
[
"#### Step 1: Initialize the parameters",
"_____no_output_____"
],
[
"First thing we want to do is create a tensor for our parameters and to tell PyTorch that this tensor `requires_grad_()`.",
"_____no_output_____"
]
],
[
[
"params = torch.randn(3).requires_grad_()",
"_____no_output_____"
],
[
"#hide\norig_params = params.clone()",
"_____no_output_____"
]
],
[
[
"#### Step 2: Calculate the predictions",
"_____no_output_____"
],
[
"Lets pass our time tensor and our params into our function",
"_____no_output_____"
]
],
[
[
"preds = f(time, params)",
"_____no_output_____"
]
],
[
[
"Here we are creating a small matplotlib function to show a comparison between our predictions and our observations",
"_____no_output_____"
]
],
[
[
"def show_preds(preds, ax=None):\n if ax is None: ax=plt.subplots()[1]\n ax.scatter(time, speed)\n ax.scatter(time, to_np(preds), color='red')\n ax.set_ylim(-300,100)",
"_____no_output_____"
],
[
"show_preds(preds)",
"_____no_output_____"
]
],
[
[
"#### Step 3: Calculate the loss",
"_____no_output_____"
],
[
"Now we can use our `L2` or `RMSE` function to determine our loss",
"_____no_output_____"
]
],
[
[
"loss = mse(preds, speed)\nloss",
"_____no_output_____"
]
],
[
[
"#### Step 4: Calculate the gradients",
"_____no_output_____"
],
[
"By performing `Back Propogation` with our `backward()` method on the loss, we can calculate the slope of our gradient and which direction the we need to move in to improve our loss",
"_____no_output_____"
]
],
[
[
"loss.backward()\nparams.grad",
"_____no_output_____"
]
],
[
[
"Here we use a small learning rate and multiply that to our parameter's gradient to make sure we move towards the optimal solution over each iteration",
"_____no_output_____"
]
],
[
[
"params.grad * 1e-5",
"_____no_output_____"
],
[
"params",
"_____no_output_____"
]
],
[
[
"#### Step 5: Step the weights. ",
"_____no_output_____"
]
],
[
[
"lr = 1e-5\nparams.data -= lr * params.grad.data\nparams.grad = None",
"_____no_output_____"
]
],
[
[
"**Understanding this bit depends on remembering recent history. To calculate the gradients we call backward on the loss. But this loss was itself calculated by mse, which in turn took preds as an input, which was calculated using f taking as an input params, which was the object on which we originally called required_grads_—which is the original call that now allows us to call backward on loss. This chain of function calls represents the mathematical composition of functions, which enables PyTorch to use calculus's chain rule under the hood to calculate these gradients.**",
"_____no_output_____"
]
],
[
[
"preds = f(time,params)\nmse(preds, speed)",
"_____no_output_____"
],
[
"show_preds(preds)",
"_____no_output_____"
]
],
[
[
"Here we simply wrap our previous steps into a function",
"_____no_output_____"
],
[
"- Make a prediction\n- Calculate the loss\n- Perform back propogation on the loss, see above for details\n- Apply the learning rate \n- Zero our gradients\n- Return our predictions",
"_____no_output_____"
]
],
[
[
"def apply_step(params, prn=True):\n preds = f(time, params)\n loss = mse(preds, speed)\n loss.backward()\n params.data -= lr * params.grad.data\n params.grad = None\n if prn: print(loss.item())\n return preds",
"_____no_output_____"
]
],
[
[
"#### Step 6: Repeat the process ",
"_____no_output_____"
],
[
"Now we simply call our `apply_step` function a number of times, as we can see our loss improves each iteration",
"_____no_output_____"
]
],
[
[
"for i in range(10): apply_step(params)",
"155.75035095214844\n155.4757537841797\n155.20118713378906\n154.92662048339844\n154.65211486816406\n154.37762451171875\n154.1031494140625\n153.82872009277344\n153.55430603027344\n153.27992248535156\n"
],
[
"#hide\nparams = orig_params.detach().requires_grad_()",
"_____no_output_____"
],
[
"_,axs = plt.subplots(1,4,figsize=(12,3))\nfor ax in axs: show_preds(apply_step(params, False), ax)\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"#### Step 7: stop",
"_____no_output_____"
],
[
"Either we can stop after a certain accuracy or simply after a number of iterations",
"_____no_output_____"
],
[
"### Summarizing Gradient Descent",
"_____no_output_____"
]
],
[
[
"gv('''\ninit->predict->loss->gradient->step->stop\nstep->predict[label=repeat]\n''')",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"- Initialize the weights.\n- For each image, use these weights to predict whether it appears to be a 3 or a 7.\n- Based on these predictions, calculate how good the model is (its loss).\n- Calculate the gradient, which measures for each weight, how changing that weight would change the loss\n- Step (that is, change) all the weights based on that calculation.\n- Go back to the step 2, and repeat the process.\n- Iterate until you decide to stop the training process (for instance, because the model is good enough or you don't want to wait any longer).",
"_____no_output_____"
],
[
"To summarize, at the beginning, the weights of our model can be random (training from scratch) or come from a pretrained model (transfer learning). In the first case, the output we will get from our inputs won't have anything to do with what we want, and even in the second case, it's very likely the pretrained model won't be very good at the specific task we are targeting. So the model will need to learn better weights.\n\nWe begin by comparing the outputs the model gives us with our targets (we have labeled data, so we know what result the model should give) using a loss function, which returns a number that we want to make as low as possible by improving our weights. To do this, we take a few data items (such as images) from the training set and feed them to our model. We compare the corresponding targets using our loss function, and the score we get tells us how wrong our predictions were. We then change the weights a little bit to make it slightly better.\n\nTo find how to change the weights to make the loss a bit better, we use calculus to calculate the gradients. (Actually, we let PyTorch do it for us!) Let's consider an analogy. Imagine you are lost in the mountains with your car parked at the lowest point. To find your way back to it, you might wander in a random direction, but that probably wouldn't help much. Since you know your vehicle is at the lowest point, you would be better off going downhill. By always taking a step in the direction of the steepest downward slope, you should eventually arrive at your destination. We use the magnitude of the gradient (i.e., the steepness of the slope) to tell us how big a step to take; specifically, we multiply the gradient by a number we choose called the learning rate to decide on the step size. We then iterate until we have reached the lowest point, which will be our parking lot, then we can stop.",
"_____no_output_____"
],
[
"## The MNIST Loss Function",
"_____no_output_____"
],
[
"Now we have seen how the mothod works on a simple function, we can now put this into practice on our MNIST **3's** and **7's** problem. As we have our dependent variable x:",
"_____no_output_____"
],
[
"Remember:\n- **Dependent Variable** == input variables\n- **Independent Varibale** == output variables",
"_____no_output_____"
],
[
"Our dependent variable in this is example are the images themselves. Here we concatinate the stacked image tensors of the **3's** and the **7's**. Having the image as Matrix is irrelevant we can use the Pytorch `view` method to reshape every tensor to rank 1,",
"_____no_output_____"
]
],
[
[
"train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)",
"_____no_output_____"
]
],
[
[
"We also need a label for each of the images, here we can simply create a tensor by combining an array of 1's of length of the number of **3's** and an array of 0's the length of the number of **7's**. We use the PyTorch function `unsqueeze` to transpose the tensor from a vector with 123396 elements into a tensor with 12396 rows and a single column",
"_____no_output_____"
]
],
[
[
"train_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1)\ntrain_x.shape,train_y.shape",
"_____no_output_____"
]
],
[
[
"Now we need to create a dataset, a dataset needs to be able to be indexable, and at that index we expect a tuple (data, label). Here we are using the python `zip` function to take the `train_x` and `train_y` varibles and combine them into a tuple at each index as described",
"_____no_output_____"
]
],
[
[
"dset = list(zip(train_x,train_y))\nx,y = dset[0]\nx.shape,y",
"_____no_output_____"
]
],
[
[
"We need to do the same operations above for our validation set as well\n- Concatinate the images and reshape\n- Create labels and unsqueeze\n- Zip data and label into dataset",
"_____no_output_____"
]
],
[
[
"valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)\nvalid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)\nvalid_dset = list(zip(valid_x,valid_y))",
"_____no_output_____"
]
],
[
[
"Here is a simple function that will initialize our parameters with random values. PyTorch `randn` returns a tensor the shape and size of its argument with normalized values between 0 and 1. We can use a variance argument here to scale the random values if necessary, but not in this example. We want to be able to calculate the gradient of this tensor, so we use the `requires_grad` method ",
"_____no_output_____"
]
],
[
[
"def init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_()",
"_____no_output_____"
]
],
[
[
"We create and initialize our weights and bias variables",
"_____no_output_____"
]
],
[
[
"weights = init_params((28*28,1))",
"_____no_output_____"
],
[
"bias = init_params(1)",
"_____no_output_____"
]
],
[
[
"In neural networks, the w in the equation y=w*x+b is called the weights, and the b is called the bias. Together, the weights and bias make up the parameters.",
"_____no_output_____"
],
[
"We can now calculate a prediction for one image:",
"_____no_output_____"
]
],
[
[
"(train_x[0]*weights.T).sum() + bias",
"_____no_output_____"
]
],
[
[
"While we could use a Python for loop to calculate the prediction for each image, that would be very slow. Because Python loops don't run on the GPU, and because Python is a slow language for loops in general, we need to represent as much of the computation in a model as possible using higher-level functions.\n\nIn this case, there's an extremely convenient mathematical operation that calculates w*x for every row of a matrix—it's called matrix multiplication.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"In Python, matrix multiplication is represented with the @ operator. Let's try it:",
"_____no_output_____"
]
],
[
[
"def linear1(xb): return xb@weights + bias\npreds = linear1(train_x)\npreds",
"_____no_output_____"
]
],
[
[
"Lets check how good our random initialization is. Here is make a determination that if a prediction is over 0 it's a **3** else its a **7** and we compare it against our labels",
"_____no_output_____"
]
],
[
[
"corrects = (preds>0.0).float() == train_y\ncorrects",
"_____no_output_____"
]
],
[
[
"As you can see, and could have predicted, a random initilization is correct roughly 50% of the time.",
"_____no_output_____"
]
],
[
[
"corrects.float().mean().item()",
"_____no_output_____"
]
],
[
[
"So lets change one of our parameters a little bit and see if that changes our result",
"_____no_output_____"
]
],
[
[
"weights[0] *= 1.0001",
"_____no_output_____"
],
[
"preds = linear1(train_x)\n((preds>0.0).float() == train_y).float().mean().item()",
"_____no_output_____"
]
],
[
[
"As we can see above changing one parameter a little has absolutely no affect on the results of our network. What does this mean practically?\n\nBy changing this pixel by some small amount is not sufficient in itself to change the prediction of an image from a **3** to a **7**\n\nBecaue there is no change we cannot calculate a gradient and we cannot make a step to improve our predictions. This is because of the thresholding in our determining our correctness.",
"_____no_output_____"
],
[
"**In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5), so its derivative is nil almost everywhere (and infinity at the threshold). This then gives gradients that are 0 or infinite, which are useless for updating the model.**",
"_____no_output_____"
],
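[
"A tiny demonstration of the problem (my own sketch, not from the book): thresholding cuts the computation graph, while a smooth function like sigmoid gives a usable gradient.\n\n```python\nimport torch\n\nw = torch.tensor(0.5, requires_grad=True)\npred = w * 2.0\nhard = (pred > 0.0).float()   # thresholding: the graph is cut here\nprint(hard.requires_grad)     # False -> no gradient can flow back to w\n\nsoft = 1 - pred.sigmoid()     # a smooth loss instead\nsoft.backward()\nprint(w.grad)                 # a nonzero gradient we can step with\n```",
"_____no_output_____"
],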
[
"So how do we fix this?",
"_____no_output_____"
]
],
[
[
"trgts = tensor([1,0,1])\nprds = tensor([0.9, 0.4, 0.2])",
"_____no_output_____"
]
],
[
[
"Here's a first try at a loss function that measures the distance between predictions and targets:",
"_____no_output_____"
]
],
[
[
"def mnist_loss(predictions, targets):\n return torch.where(targets==1, 1-predictions, predictions).mean()",
"_____no_output_____"
]
],
[
[
"**Read the Docs: It's important to learn about PyTorch functions like this, because looping over tensors in Python performs at Python speed, not C/CUDA speed! Try running help(torch.where) now to read the docs for this function, or, better still, look it up on the PyTorch documentation site.**",
"_____no_output_____"
]
],
[
[
"torch.where(trgts==1, 1-prds, prds)",
"_____no_output_____"
]
],
[
[
"You can see that this function returns a lower number when predictions are more accurate, when accurate predictions are more confident (higher absolute values), and when inaccurate predictions are less confident. In PyTorch, we always assume that a lower value of a loss function is better. Since we need a scalar for the final loss, mnist_loss takes the mean of the previous tensor:",
"_____no_output_____"
]
],
[
[
"mnist_loss(prds,trgts)",
"_____no_output_____"
]
],
[
[
"For instance, if we change our prediction for the one \"false\" target from 0.2 to 0.8 the loss will go down, indicating that this is a better prediction:",
"_____no_output_____"
]
],
[
[
"mnist_loss(tensor([0.9, 0.4, 0.8]),trgts)",
"_____no_output_____"
]
],
[
[
"One problem with mnist_loss as currently defined is that it assumes that predictions are always between 0 and 1. We need to ensure, then, that this is actually the case! As it happens, there is a function that does exactly that—let's take a look.",
"_____no_output_____"
],
[
"### Sigmoid",
"_____no_output_____"
],
[
"The sigmoid function always outputs a number between 0 and 1. It's defined as follows:",
"_____no_output_____"
]
],
[
[
"def sigmoid(x): return 1/(1+torch.exp(-x))",
"_____no_output_____"
],
[
"plot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)",
"_____no_output_____"
]
],
[
[
"As you can see, it takes any input value, positive or negative, and smooshes it onto an output value between 0 and 1. It's also a smooth curve that only goes up, which makes it easier for SGD to find meaningful gradients.\n\nLet's update mnist_loss to first apply sigmoid to the inputs:",
"_____no_output_____"
]
],
[
[
"def mnist_loss(predictions, targets):\n predictions = predictions.sigmoid()\n return torch.where(targets==1, 1-predictions, predictions).mean()",
"_____no_output_____"
]
],
[
[
"### SGD and Mini-Batches",
"_____no_output_____"
],
[
"In the context of SGD, \"Minibatch\" means that the gradient is calculated across the entire batch before updating weights. If you are not using a \"minibatch\", every training example in a \"batch\" updates the learning algorithm's parameters independently.",
"_____no_output_____"
]
],
[
[
"coll = range(15)\ndl = DataLoader(coll, batch_size=5, shuffle=True)\nlist(dl)",
"_____no_output_____"
],
[
"ds = L(enumerate(string.ascii_lowercase))\nds",
"_____no_output_____"
],
[
"dl = DataLoader(ds, batch_size=6, shuffle=True)\nlist(dl)",
"_____no_output_____"
]
],
[
[
"## Putting It All Together",
"_____no_output_____"
],
[
"First, let's re-initialize our parameters:",
"_____no_output_____"
]
],
[
[
"weights = init_params((28*28,1))\nbias = init_params(1)",
"_____no_output_____"
]
],
[
[
"A DataLoader can be created from a Dataset and we can set our bacth size here:",
"_____no_output_____"
]
],
[
[
"dl = DataLoader(dset, batch_size=256)\nxb,yb = first(dl)\nxb.shape,yb.shape",
"_____no_output_____"
]
],
[
[
"We'll do the same for the validation set:",
"_____no_output_____"
]
],
[
[
"valid_dl = DataLoader(valid_dset, batch_size=256)",
"_____no_output_____"
]
],
[
[
"Let's create a mini-batch of size 4 for testing:",
"_____no_output_____"
]
],
[
[
"batch = train_x[:4]\nbatch.shape",
"_____no_output_____"
],
[
"preds = linear1(batch)\npreds",
"_____no_output_____"
],
[
"loss = mnist_loss(preds, train_y[:4])\nloss",
"_____no_output_____"
]
],
[
[
"Now we can calculate the gradients:",
"_____no_output_____"
]
],
[
[
"loss.backward()\nweights.grad.shape,weights.grad.mean(),bias.grad",
"_____no_output_____"
]
],
[
[
"Let's put this into a function:",
"_____no_output_____"
]
],
[
[
"def calc_grad(xb, yb, model):\n preds = model(xb)\n loss = mnist_loss(preds, yb)\n loss.backward()",
"_____no_output_____"
],
[
"calc_grad(batch, train_y[:4], linear1)\nweights.grad.mean(),bias.grad",
"_____no_output_____"
]
],
[
[
"But look what happens if we call it twice:",
"_____no_output_____"
]
],
[
[
"calc_grad(batch, train_y[:4], linear1)\nweights.grad.mean(),bias.grad",
"_____no_output_____"
]
],
[
[
"The gradients have changed! The reason for this is that loss.backward actually adds the gradients of loss to any gradients that are currently stored. So, we have to set the current gradients to 0 first:",
"_____no_output_____"
]
],
[
[
"weights.grad.zero_()\nbias.grad.zero_();",
"_____no_output_____"
]
],
[
[
"**Inplace Operations: Methods in PyTorch whose names end in an underscore modify their objects in place. For instance, bias.zero_() sets all elements of the tensor bias to 0.**",
"_____no_output_____"
],
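[
"A quick illustration of the trailing-underscore convention (my own sketch):\n\n```python\nimport torch\n\nt = torch.ones(3)\nt.zero_()        # in-place: modifies t itself\nprint(t)         # tensor([0., 0., 0.])\nprint(t.add(1))  # no underscore: returns a new tensor\nprint(t)         # t is unchanged: tensor([0., 0., 0.])\n```",
"_____no_output_____"
],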
[
"Our only remaining step is to update the weights and biases based on the gradient and learning rate. When we do so, we have to tell PyTorch not to take the gradient of this step too—otherwise things will get very confusing when we try to compute the derivative at the next batch! \n\n**If we assign to the data attribute of a tensor then PyTorch will not take the gradient of that step. Here's our basic training loop for an epoch:**",
"_____no_output_____"
]
],
[
[
"def train_epoch(model, lr, params):\n for xb,yb in dl:\n calc_grad(xb, yb, model)\n for p in params:\n p.data -= p.grad*lr\n p.grad.zero_()",
"_____no_output_____"
]
],
[
[
"We also want to check how we're doing, by looking at the accuracy of the validation set. To decide if an output represents a 3 or a 7, we can just check whether it's greater than 0. So our accuracy for each item can be calculated (using broadcasting, so no loops!) with:",
"_____no_output_____"
]
],
[
[
"(preds>0.0).float() == train_y[:4]",
"_____no_output_____"
]
],
[
[
"That gives us this function to calculate our validation accuracy:",
"_____no_output_____"
]
],
[
[
"def batch_accuracy(xb, yb):\n preds = xb.sigmoid()\n correct = (preds>0.5) == yb\n return correct.float().mean()",
"_____no_output_____"
],
[
"batch_accuracy(linear1(batch), train_y[:4])",
"_____no_output_____"
]
],
[
[
"and then put the batches together:",
"_____no_output_____"
]
],
[
[
"def validate_epoch(model):\n accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl]\n return round(torch.stack(accs).mean().item(), 4)",
"_____no_output_____"
],
[
"validate_epoch(linear1)",
"_____no_output_____"
]
],
[
[
"That's our starting point. Let's train for one epoch, and see if the accuracy improves:",
"_____no_output_____"
]
],
[
[
"lr = 1.\nparams = weights,bias\ntrain_epoch(linear1, lr, params)\nvalidate_epoch(linear1)",
"_____no_output_____"
]
],
[
[
"Then do a few more:",
"_____no_output_____"
]
],
[
[
"for i in range(20):\n train_epoch(linear1, lr, params)\n print(validate_epoch(linear1), end=' ')",
"0.8265 0.89 0.9183 0.9276 0.9398 0.9467 0.9506 0.9525 0.9559 0.9579 0.9598 0.9608 0.9613 0.9618 0.9633 0.9637 0.9647 0.9657 0.9672 0.9677 "
]
],
[
[
"Looking good! We're already about at the same accuracy as our \"pixel similarity\" approach, and we've created a general-purpose foundation we can build on. Our next step will be to create an object that will handle the SGD step for us. In PyTorch, it's called an optimizer.",
"_____no_output_____"
],
[
"### Creating an Optimizer",
"_____no_output_____"
],
[
"PyTorch's `nn.Linear` does the same thing as our init_params and linear together. It contains both the weights and biases in a single class. Here's how we replicate our model from the previous section:",
"_____no_output_____"
]
],
[
[
"linear_model = nn.Linear(28*28,1)",
"_____no_output_____"
]
],
[
[
"Every PyTorch module knows what parameters it has that can be trained; they are available through the parameters method:",
"_____no_output_____"
]
],
[
[
"w,b = linear_model.parameters()\nw.shape,b.shape",
"_____no_output_____"
]
],
[
[
"Now, let's create an optimizer:",
"_____no_output_____"
]
],
[
[
"class BasicOptim:\n def __init__(self,params,lr): self.params,self.lr = list(params),lr\n\n def step(self, *args, **kwargs):\n for p in self.params: p.data -= p.grad.data * self.lr\n\n def zero_grad(self, *args, **kwargs):\n for p in self.params: p.grad = None",
"_____no_output_____"
]
],
[
[
"We can create our optimizer by passing in the model's parameters:",
"_____no_output_____"
]
],
[
[
"opt = BasicOptim(linear_model.parameters(), lr)",
"_____no_output_____"
],
[
"def train_epoch(model):\n for xb,yb in dl:\n calc_grad(xb, yb, model)\n opt.step()\n opt.zero_grad()",
"_____no_output_____"
],
[
"validate_epoch(linear_model)",
"_____no_output_____"
],
[
"def train_model(model, epochs):\n for i in range(epochs):\n train_epoch(model)\n print(validate_epoch(model), end=' ')",
"_____no_output_____"
],
[
"train_model(linear_model, 20)",
"0.4932 0.7685 0.8554 0.9136 0.9346 0.9482 0.957 0.9634 0.9658 0.9678 0.9697 0.9717 0.9736 0.9746 0.9761 0.977 0.9775 0.9775 0.978 0.9785 "
]
],
[
[
"fastai provides the SGD class which, by default, does the same thing as our BasicOptim:",
"_____no_output_____"
]
],
[
[
"linear_model = nn.Linear(28*28,1)\nopt = SGD(linear_model.parameters(), lr)\ntrain_model(linear_model, 20)",
"0.4932 0.8179 0.8496 0.914 0.9346 0.9482 0.957 0.9619 0.9658 0.9673 0.9692 0.9712 0.9741 0.9751 0.9761 0.9775 0.9775 0.978 0.9785 0.979 "
]
],
[
[
"fastai also provides Learner.fit, which we can use instead of train_model. To create a Learner we first need to create a DataLoaders, by passing in our training and validation DataLoaders:",
"_____no_output_____"
]
],
[
[
"dls = DataLoaders(dl, valid_dl)",
"_____no_output_____"
]
],
[
[
"To create a Learner without using an application (such as cnn_learner) we need to pass in all the elements that we've created in this chapter: the DataLoaders, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print:",
"_____no_output_____"
]
],
[
[
"learn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD,\n loss_func=mnist_loss, metrics=batch_accuracy)",
"_____no_output_____"
],
[
"learn.fit(10, lr=lr)",
"_____no_output_____"
]
],
[
[
"## Adding a Nonlinearity",
"_____no_output_____"
],
[
"So far we have a general procedure for optimizing the parameters of a function, and we have tried it out on a very boring function: a simple linear classifier. A linear classifier is very constrained in terms of what it can do. To make it a bit more complex (and able to handle more tasks), we need to add something nonlinear between two linear classifiers—this is what gives us a neural network.\n\nHere is the entire definition of a basic neural network:",
"_____no_output_____"
]
],
[
[
"def simple_net(xb): \n res = xb@w1 + b1\n res = res.max(tensor(0.0))\n res = res@w2 + b2\n return res",
"_____no_output_____"
]
],
[
[
"That's it! All we have in simple_net is two linear classifiers with a max function between them.\n\nHere, w1 and w2 are weight tensors, and b1 and b2 are bias tensors; that is, parameters that are initially randomly initialized, just like we did in the previous section:",
"_____no_output_____"
]
],
[
[
"w1 = init_params((28*28,30))\nb1 = init_params(30)\nw2 = init_params((30,1))\nb2 = init_params(1)",
"_____no_output_____"
]
],
[
[
"The key point about this is that w1 has 30 output activations (which means that w2 must have 30 input activations, so they match). That means that the first layer can construct 30 different features, each representing some different mix of pixels. You can change that 30 to anything you like, to make the model more or less complex.\n\nThat little function res.max(tensor(0.0)) is called a rectified linear unit, also known as ReLU. We think we can all agree that rectified linear unit sounds pretty fancy and complicated... But actually, there's nothing more to it than res.max(tensor(0.0))—in other words, replace every negative number with a zero. This tiny function is also available in PyTorch as F.relu:",
"_____no_output_____"
]
],
[
[
"plot_function(F.relu)",
"_____no_output_____"
]
],
[
[
"**Mathematically, we say the composition of two linear functions is another linear function. So, we can stack as many linear classifiers as we want on top of each other, and without nonlinear functions between them, it will just be the same as one linear classifier.**",
"_____no_output_____"
]
],
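[
[
"In symbols (a quick check of that claim): composing $f_1(x) = W_1 x + b_1$ with $f_2(x) = W_2 x + b_2$ gives\n\n$$f_2(f_1(x)) = W_2 (W_1 x + b_1) + b_2 = (W_2 W_1)\\,x + (W_2 b_1 + b_2),$$\n\nwhich is again a single linear function, so the nonlinearity in between is what adds expressive power.",
"_____no_output_____"
]
],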
[
[
"simple_net = nn.Sequential(\n nn.Linear(28*28,30),\n nn.ReLU(),\n nn.Linear(30,1)\n)",
"_____no_output_____"
]
],
[
[
"nn.Sequential creates a module that will call each of the listed layers or functions in turn.\n\nnn.ReLU is a PyTorch module that does exactly the same thing as the F.relu function. Most functions that can appear in a model also have identical forms that are modules. Generally, it's just a case of replacing F with nn and changing the capitalization. When using nn.Sequential, PyTorch requires us to use the module version. Since modules are classes, we have to instantiate them, which is why you see nn.ReLU() in this example.\n\nBecause nn.Sequential is a module, we can get its parameters, which will return a list of all the parameters of all the modules it contains. Let's try it out! As this is a deeper model, we'll use a lower learning rate and a few more epochs.",
"_____no_output_____"
]
],
[
[
"learn = Learner(dls, simple_net, opt_func=SGD,\n loss_func=mnist_loss, metrics=batch_accuracy)",
"_____no_output_____"
],
[
"learn.fit(40, 0.1)",
"_____no_output_____"
],
[
"plt.plot(L(learn.recorder.values).itemgot(2));",
"_____no_output_____"
]
],
[
[
"And we can view the final accuracy:",
"_____no_output_____"
]
],
[
[
"learn.recorder.values[-1][2]",
"_____no_output_____"
]
],
[
[
"At this point we have something that is rather magical:\n\nA function that can solve any problem to any level of accuracy (the neural network) given the correct set of parameters\nA way to find the best set of parameters for any function (stochastic gradient descent)",
"_____no_output_____"
],
[
"### Going Deeper",
"_____no_output_____"
],
[
"Here what happens when we train an 18-layer model . . .",
"_____no_output_____"
]
],
[
[
"dls = ImageDataLoaders.from_folder(path)\nlearn = cnn_learner(dls, resnet18, pretrained=False,\n loss_func=F.cross_entropy, metrics=accuracy)\nlearn.fit_one_cycle(1, 0.1)",
"_____no_output_____"
]
],
[
[
"Almost 100% accuracy!",
"_____no_output_____"
],
[
"## Jargon Recap",
"_____no_output_____"
],
[
"- **ReLU :** Function that returns 0 for negative numbers and doesn't change positive numbers.\n- **Mini-batch :** A small group of inputs and labels gathered together in two arrays. A gradient descent step is updated on this batch (rather than a whole epoch).\n- **Forward pass :** Applying the model to some input and computing the predictions.\n- **Loss :** A value that represents how well (or badly) our model is doing.\n- **Gradient :** The derivative of the loss with respect to some parameter of the model.\n- **Backward pass :** Computing the gradients of the loss with respect to all model parameters.\n- **Gradient descent :** Taking a step in the directions opposite to the gradients to make the model parameters a little bit better.\n- **Learning rate :** The size of the step we take when applying SGD to update the parameters of the model.",
"_____no_output_____"
],
[
"## Questionnaire",
"_____no_output_____"
],
[
"**How is a grayscale image represented on a computer? How about a color image?**",
"_____no_output_____"
],
[
"Images are represented by arrays with pixel values representing the content of the image. For greyscale images, a 2-dimensional array is used with the pixels representing the greyscale values, with a range of 256 integers. A value of 0 would represent white, and a value of 255 represents black, and different shades of greyscale in between. For color images, three color channels (red, green, blue) are typicall used, with a separate 256-range 2D array used for each channel. A pixel value of 0 again represents white, with 255 representing solid red, green, or blue. The three 2-D arrays form a final 3-D array (rank 3 tensor) representing the color image.",
"_____no_output_____"
],
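[
"A minimal illustration of the shapes involved (a sketch using NumPy, not tied to any particular image file):\n\n```python\nimport numpy as np\n\ngray = np.zeros((28, 28), dtype=np.uint8)      # rank-2 array: one value per pixel\ncolor = np.zeros((3, 28, 28), dtype=np.uint8)  # rank-3 array: one 28x28 array per RGB channel\nprint(gray.shape, color.shape)                 # (28, 28) (3, 28, 28)\n```",
"_____no_output_____"
],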
[
"**How are the files and folders in the `MNIST_SAMPLE` dataset structured? Why?**",
"_____no_output_____"
],
[
"There are two subfolders, train and valid, the former contains the data for model training, the latter contains the data for validating model performance after each training step. Evaluating the model on the validation set serves two purposes: a) to report a human-interpretable metric such as accuracy (in contrast to the often abstract loss functions used for training), b) to facilitate the detection of overfitting by evaluating the model on a dataset it hasn’t been trained on (in short, an overfitting model performs increasingly well on the training set but decreasingly so on the validation set). Of course, every practicioner could generate their own train/validation-split of the data. Public datasets are usually pre-split to simplifiy comparing results between implementations/publications.\n\nEach subfolder has two subsubfolders 3 and 7 which contain the .jpg files for the respective class of images. This is a common way of organizing datasets comprised of pictures. For the full MNIST dataset there are 10 subsubfolders, one for the images for each digit.",
"_____no_output_____"
],
[
"**Explain how the \"pixel similarity\" approach to classifying digits works.**",
"_____no_output_____"
],
[
"In the “pixel similarity” approach, we generate an archetype for each class we want to identify. In our case, we want to distinguish images of 3’s from images of 7’s. We define the archetypical 3 as the pixel-wise mean value of all 3’s in the training set. Analoguously for the 7’s. You can visualize the two archetypes and see that they are in fact blurred versions of the numbers they represent.\nIn order to tell if a previously unseen image is a 3 or a 7, we calculate its distance to the two archetypes (here: mean pixel-wise absolute difference). We say the new image is a 3 if its distance to the archetypical 3 is lower than two the archetypical 7.",
"_____no_output_____"
],
[
"**What is a list comprehension? Create one now that selects odd numbers from a list and doubles them.**",
"_____no_output_____"
],
[
"Lists (arrays in other programming languages) are often generated using a for-loop. A list comprehension is a Pythonic way of condensing the creation of a list using a for-loop into a single expression. List comprehensions will also often include if clauses for filtering.",
"_____no_output_____"
]
],
[
[
"lst_in = range(10)\nlst_out = [2*el for el in lst_in if el%2==1]\n# is equivalent to:\nlst_out = []\nfor el in lst_in:\n if el%2==1:\n lst_out.append(2*el)",
"_____no_output_____"
]
],
[
[
"**What is a \"rank-3 tensor\"?**",
"_____no_output_____"
],
[
"The rank of a tensor is the number of dimensions it has. An easy way to identify the rank is the number of indices you would need to reference a number within a tensor. A scalar can be represented as a tensor of rank 0 (no index), a vector can be represented as a tensor of rank 1 (one index, e.g., v[i]), a matrix can be represented as a tensor of rank 2 (two indices, e.g., a[i,j]), and a tensor of rank 3 is a cuboid or a “stack of matrices” (three indices, e.g., b[i,j,k]). In particular, the rank of a tensor is independent of its shape or dimensionality, e.g., a tensor of shape 2x2x2 and a tensor of shape 3x5x7 both have rank 3.\nNote that the term “rank” has different meanings in the context of tensors and matrices (where it refers to the number of linearly independent column vectors).",
"_____no_output_____"
],
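[
"A small illustration (a sketch assuming PyTorch):\n\n```python\nimport torch\n\nscalar = torch.tensor(1.)        # rank 0\nvector = torch.tensor([1., 2.])  # rank 1\nmatrix = torch.ones(2, 2)        # rank 2\ncube = torch.ones(3, 5, 7)       # rank 3, regardless of the sizes of its axes\nprint(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)  # 0 1 2 3\n```",
"_____no_output_____"
],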
[
"**What is the difference between tensor rank and shape? How do you get the rank from the shape?**",
"_____no_output_____"
],
[
"Rank is the number of axes or dimensions in a tensor; shape is the size of each axis of a tensor.",
"_____no_output_____"
],
[
"**How do you get the rank from the shape?**",
"_____no_output_____"
],
[
"The length of a tensor’s shape is its rank.\n\nSo if we have the images of the 3 folder from the MINST_SAMPLE dataset in a tensor called stacked_threes and we find its shape like this.",
"_____no_output_____"
]
],
[
[
"stacked_threes.shape",
"_____no_output_____"
]
],
[
[
"torch.Size([6131, 28, 28])\n\nWe just need to find its length to know its rank. This is done as follows.",
"_____no_output_____"
]
],
[
[
"len(stacked_threes.shape)",
"_____no_output_____"
]
],
[
[
"3\n\nYou can also get a tensor’s rank directly with ndim .",
"_____no_output_____"
]
],
[
[
"stacked_threes.ndim",
"_____no_output_____"
]
],
[
[
"3 ",
"_____no_output_____"
],
[
"**What are RMSE and L1 norm?**",
"_____no_output_____"
],
[
"Root mean square error (RMSE), also called the L2 norm, and mean absolute difference (MAE), also called the L1 norm, are two commonly used methods of measuring “distance”. Simple differences do not work because some difference are positive and others are negative, canceling each other out. Therefore, a function that focuses on the magnitudes of the differences is needed to properly measure distances. The simplest would be to add the absolute values of the differences, which is what MAE is. RMSE takes the mean of the square (makes everything positive) and then takes the square root (undoes squaring).",
"_____no_output_____"
],
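[
"Both distances in a few lines (an illustrative sketch):\n\n```python\nimport torch\n\na = torch.tensor([1., 2., 3.])\nb = torch.tensor([1., 2., 5.])\nl1 = (a - b).abs().mean()          # mean absolute difference (L1 norm)\nrmse = ((a - b)**2).mean().sqrt()  # root mean squared error (L2 norm)\nprint(l1, rmse)                    # tensor(0.6667) tensor(1.1547)\n```",
"_____no_output_____"
],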
[
"**How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?**",
"_____no_output_____"
],
[
"As loops are very slow in Python, it is best to represent the operations as array operations rather than looping through individual elements. If this can be done, then using NumPy or PyTorch will be thousands of times faster, as they use underlying C code which is much faster than pure Python. Even better, PyTorch allows you to run operations on GPU, which will have significant speedup if there are parallel operations that can be done.",
"_____no_output_____"
],
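[
"A rough micro-benchmark sketch (exact timings vary by machine):\n\n```python\nimport numpy as np\nimport timeit\n\nx = np.random.rand(1_000_000)\n\nloop_time = timeit.timeit(lambda: sum(v * 2 for v in x), number=1)  # pure-Python loop\nvec_time = timeit.timeit(lambda: x * 2, number=1)                   # vectorized NumPy\nprint(loop_time, vec_time)  # the vectorized version is typically orders of magnitude faster\n```",
"_____no_output_____"
],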
[
"**Create a 3×3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers.**",
"_____no_output_____"
]
],
[
[
"a = torch.Tensor(list(range(1,10))).view(3,3); print(a)",
"_____no_output_____"
]
],
[
[
"tensor([[1., 2., 3.],\n [4., 5., 6.],\n [7., 8., 9.]])",
"_____no_output_____"
]
],
[
[
"b = 2*a; \nprint(b)",
"_____no_output_____"
]
],
[
[
"tensor([[ 2., 4., 6.],\n [ 8., 10., 12.],\n [14., 16., 18.]])",
"_____no_output_____"
]
],
[
[
"b[1:,1:]",
"_____no_output_____"
]
],
[
[
"tensor([[10., 12.],\n [16., 18.]])",
"_____no_output_____"
],
[
"**What is broadcasting?**",
"_____no_output_____"
],
[
"Scientific/numerical Python packages like NumPy and PyTorch will often implement broadcasting that often makes code easier to write. In the case of PyTorch, tensors with smaller rank are expanded to have the same size as the larger rank tensor. In this way, operations can be performed between tensors with different rank.",
"_____no_output_____"
],
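[
"For example (an illustrative sketch):\n\n```python\nimport torch\n\nm = torch.ones(3, 3)\nv = torch.tensor([1., 2., 3.])\nprint(m + v)  # v is (virtually) expanded to shape (3, 3) and added to each row of m\n```",
"_____no_output_____"
],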
[
"**Are metrics generally calculated using the training set, or the validation set? Why?**",
"_____no_output_____"
],
[
"Metrics are generally calculated on a validation set. As the validation set is unseen data for the model, evaluating the metrics on the validation set is better in order to determine if there is any overfitting and how well the model might generalize if given similar data.",
"_____no_output_____"
],
[
"**What is SGD?**",
"_____no_output_____"
],
[
"SGD, or stochastic gradient descent, is an optimization algorithm. Specifically, SGD is an algorithm that will update the parameters of a model in order to minimize a given loss function that was evaluated on the predictions and target. The key idea behind SGD (and many optimization algorithms, for that matter) is that the gradient of the loss function provides an indication of how that loss function changes in the parameter space, which we can use to determine how best to update the parameters in order to minimize the loss function. This is what SGD does.",
"_____no_output_____"
],
[
"**Why does SGD use mini-batches?**",
"_____no_output_____"
],
[
"We need to calculate our loss function (and our gradient) on one or more data points. We cannot calculate on the whole datasets due to compute limitations and time constraints. If we iterated through each data point, however, the gradient will be unstable and imprecise, and is not suitable for training. As a compromise, we calculate the average loss for a small subset of the dataset at a time. This subset is called a mini-batch. Using mini-batches are also more computationally efficient than single items on a GPU.",
"_____no_output_____"
],
[
"**What are the seven steps in SGD for machine learning?**",
"_____no_output_____"
],
[
"- Initialize the weights.\n- For each image, use these weights to predict whether it appears to be a 3 or a 7.\n- Based on these predictions, calculate how good the model is (its loss).\n- Calculate the gradient, which measures for each weight, how changing that weight would change the loss\n- Step (that is, change) all the weights based on that calculation.\n- Go back to the step 2, and repeat the process.\n- Iterate until you decide to stop the training process (for instance, because the model is good enough or you don't want to wait any longer).",
"_____no_output_____"
],
[
"**How do we initialize the weights in a model?**",
"_____no_output_____"
],
[
"Random weights work pretty well.",
"_____no_output_____"
],
[
"**What is \"loss\"?**",
"_____no_output_____"
],
[
"The loss function will return a value based on the given predictions and targets, where lower values correspond to better model predictions.",
"_____no_output_____"
],
[
"**Why can't we always use a high learning rate?**",
"_____no_output_____"
],
[
"The loss may “bounce” around (oscillate) or even diverge, as the optimizer is taking steps that are too large, and updating the parameters faster than it should be.",
"_____no_output_____"
],
[
"**What is a \"gradient\"?**",
"_____no_output_____"
],
[
"The gradients tell us how much we have to change each weight to make our model better. It is essentially a measure of how the loss function changes with changes of the weights of the model (the derivative).",
"_____no_output_____"
],
[
"**Do you need to know how to calculate gradients yourself?**",
"_____no_output_____"
],
[
"Manual calculation of the gradients are not required, as deep learning libraries will automatically calculate the gradients for you. This feature is known as automatic differentiation. In PyTorch, if requires_grad=True, the gradients can be returned by calling the backward method: a.backward()",
"_____no_output_____"
],
[
"**Why can't we use accuracy as a loss function?**",
"_____no_output_____"
],
[
"A loss function needs to change as the weights are being adjusted. Accuracy only changes if the predictions of the model change. So if there are slight changes to the model that, say, improves confidence in a prediction, but does not change the prediction, the accuracy will still not change. Therefore, the gradients will be zero everywhere except when the actual predictions change. The model therefore cannot learn from the gradients equal to zero, and the model’s weights will not update and will not train. A good loss function gives a slightly better loss when the model gives slightly better predictions. Slightly better predictions mean if the model is more confident about the correct prediction. For example, predicting 0.9 vs 0.7 for probability that a MNIST image is a 3 would be slightly better prediction. The loss function needs to reflect that.",
"_____no_output_____"
],
[
"**Draw the sigmoid function. What is special about its shape?**",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"Sigmoid function is a smooth curve that squishes all values into values between 0 and 1. Most loss functions assume that the model is outputting some form of a probability or confidence level between 0 and 1 so we use a sigmoid function at the end of the model in order to do this.",
"_____no_output_____"
],
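[
"The definition is a one-liner (a sketch; PyTorch also provides it as `torch.sigmoid`):\n\n```python\nimport torch\n\ndef sigmoid(x): return 1 / (1 + torch.exp(-x))\n\nprint(sigmoid(torch.tensor([-2., 0., 2.])))  # all outputs lie strictly between 0 and 1\n```",
"_____no_output_____"
],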
[
"**What is the difference between a loss function and a metric?**",
"_____no_output_____"
],
[
"The key difference is that metrics drive human understanding and losses drive automated learning. In order for loss to be useful for training, it needs to have a meaningful derivative. Many metrics, like accuracy are not like that. Metrics instead are the numbers that humans care about, that reflect the performance of the model.",
"_____no_output_____"
],
[
"**What is the function to calculate new weights using a learning rate?**",
"_____no_output_____"
],
[
"The optimizer step function",
"_____no_output_____"
],
[
"**What does the `DataLoader` class do?**",
"_____no_output_____"
],
[
"The DataLoader class can take any Python collection and turn it into an iterator over many batches.",
"_____no_output_____"
],
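[
"For example (a sketch assuming the fastai environment used in this chapter):\n\n```python\ncoll = range(15)\ndl = DataLoader(coll, batch_size=5, shuffle=True)\nlist(dl)  # three shuffled batches of five elements each\n```",
"_____no_output_____"
],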
[
"**Write pseudocode showing the basic steps taken in each epoch for SGD.**",
"_____no_output_____"
]
],
[
[
"for x,y in dl:\n pred = model(x)\n loss = loss_func(pred, y)\n loss.backward()\n parameters -= parameters.grad * lr",
"_____no_output_____"
]
],
[
[
"**Create a function that, if passed two arguments `[1,2,3,4]` and `'abcd'`, returns `[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]`. What is special about that output data structure?**",
"_____no_output_____"
]
],
[
[
"def func(a,b): return list(zip(a,b))",
"_____no_output_____"
]
],
[
[
"This data structure is useful for machine learning models when you need lists of tuples where each tuple would contain input data and a label.",
"_____no_output_____"
],
[
"**What does `view` do in PyTorch?**",
"_____no_output_____"
],
[
"It changes the shape of a Tensor without changing its contents.",
"_____no_output_____"
],
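[
"For example (illustrative):\n\n```python\nimport torch\n\nt = torch.arange(6)\nprint(t.view(2, 3))  # same six elements, now laid out as a 2x3 matrix\n```",
"_____no_output_____"
],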
[
"**What are the \"bias\" parameters in a neural network? Why do we need them?**",
"_____no_output_____"
],
[
"Without the bias parameters, if the input is zero, the output will always be zero. Therefore, using bias parameters adds additional flexibility to the model.",
"_____no_output_____"
],
[
"**What does the `@` operator do in Python?**",
"_____no_output_____"
],
[
"This is the matrix multiplication operator.",
"_____no_output_____"
],
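[
"For example (illustrative):\n\n```python\nimport torch\n\na = torch.ones(2, 3)\nb = torch.ones(3, 4)\nprint((a @ b).shape)  # torch.Size([2, 4]); equivalent to a.matmul(b)\n```",
"_____no_output_____"
],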
[
"**What does the `backward` method do?**",
"_____no_output_____"
],
[
"This method returns the current gradients.",
"_____no_output_____"
],
[
"**Why do we have to zero the gradients?**",
"_____no_output_____"
],
[
"PyTorch will add the gradients of a variable to any previously stored gradients. If the training loop function is called multiple times, without zeroing the gradients, the gradient of current loss would be added to the previously stored gradient value.",
"_____no_output_____"
],
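[
"A small demonstration of the accumulation (illustrative):\n\n```python\nimport torch\n\nw = torch.tensor(2., requires_grad=True)\nfor _ in range(2):\n    (3 * w).backward()\n    print(w.grad)  # tensor(3.), then tensor(6.) -- gradients accumulate\nw.grad.zero_()\nprint(w.grad)      # tensor(0.)\n```",
"_____no_output_____"
],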
[
"**What information do we have to pass to `Learner`?**",
"_____no_output_____"
],
[
"We need to pass in the DataLoaders, the model, the optimization function, the loss function, and optionally any metrics to print.",
"_____no_output_____"
],
[
"**Show Python or pseudocode for the basic steps of a training loop.**",
"_____no_output_____"
]
],
[
[
"def train_epoch(model, lr, params):\n for xb,yb in dl:\n calc_grad(xb, yb, model)\n for p in params:\n p.data -= p.grad*lr\n p.grad.zero_()\n\nfor i in range(20):\n train_epoch(model, lr, params)",
"_____no_output_____"
]
],
[
[
"**What is \"ReLU\"? Draw a plot of it for values from `-2` to `+2`.**",
"_____no_output_____"
],
[
"ReLU just means “replace any negative numbers with zero”. It is a commonly used activation function.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"**What is an \"activation function\"?**",
"_____no_output_____"
],
[
"The activation function is another function that is part of the neural network, which has the purpose of providing non-linearity to the model. The idea is that without an activation function, we just have multiple linear functions of the form y=mx+b. However, a series of linear layers is equivalent to a single linear layer, so our model can only fit a line to the data. By introducing a non-linearity in between the linear layers, this is no longer true. Each layer is somewhat decoupled from the rest of the layers, and the model can now fit much more complex functions. In fact, it can be mathematically proven that such a model can solve any computable problem to an arbitrarily high accuracy, if the model is large enough with the correct weights. This is known as the universal approximation theorem.",
"_____no_output_____"
],
[
"**What's the difference between `F.relu` and `nn.ReLU`?**",
"_____no_output_____"
],
[
"F.relu is a Python function for the relu activation function. On the other hand, nn.ReLU is a PyTorch module. This means that it is a Python class that can be called as a function in the same way as F.relu.",
"_____no_output_____"
],
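[
"Both give identical results (illustrative):\n\n```python\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nx = torch.tensor([-1., 0., 2.])\nprint(F.relu(x), nn.ReLU()(x))  # tensor([0., 0., 2.]) twice\n```",
"_____no_output_____"
],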
[
"**The universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more?**",
"_____no_output_____"
],
[
"There are practical performance benefits to using more than one nonlinearity. We can use a deeper model with less number of parameters, better performance, faster training, and less compute/memory requirements.",
"_____no_output_____"
],
[
"### Further Research",
"_____no_output_____"
],
[
"1. Create your own implementation of `Learner` from scratch, based on the training loop shown in this chapter.\n1. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0af280c5bd662018d79480f05713523e4cdcb15 | 35,454 | ipynb | Jupyter Notebook | notebooks/08-common-problems.ipynb | markovmodel/pyemma_tutorials | 6b9183686d2238d4f60c752a73e9b710c667ec10 | [
"CC-BY-4.0"
] | 49 | 2018-05-18T13:01:28.000Z | 2022-03-26T07:16:06.000Z | notebooks/08-common-problems.ipynb | Jimmy-INL/pyemma_tutorials | 6b9183686d2238d4f60c752a73e9b710c667ec10 | [
"CC-BY-4.0"
] | 149 | 2018-05-08T13:48:49.000Z | 2021-12-10T08:28:30.000Z | notebooks/08-common-problems.ipynb | Jimmy-INL/pyemma_tutorials | 6b9183686d2238d4f60c752a73e9b710c667ec10 | [
"CC-BY-4.0"
] | 33 | 2018-05-17T15:15:56.000Z | 2022-02-03T21:33:20.000Z | 38.245955 | 316 | 0.62591 | [
[
[
"# 08 - Common problems & bad data situations\n\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Creative Commons Licence\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\" title='This work is licensed under a Creative Commons Attribution 4.0 International License.' align=\"right\"/></a>\n\nIn this notebook, we will revise common problems that might come up when dealing with real-world data.\n\nMaintainers: [@thempel](https://github.com/thempel), [@cwehmeyer](https://github.com/cwehmeyer), [@marscher](https://github.com/marscher), [@psolsson](https://github.com/psolsson)\n\n**Remember**:\n- to run the currently highlighted cell, hold <kbd>⇧ Shift</kbd> and press <kbd>⏎ Enter</kbd>;\n- to get help for a specific function, place the cursor within the function's brackets, hold <kbd>⇧ Shift</kbd>, and press <kbd>⇥ Tab</kbd>;\n- you can find the full documentation at [PyEMMA.org](http://www.pyemma.org).\n\n---\n\nMost problems in Markov modeling of MD data arise from bad sampling combined with a poor discretization.\nFor estimating a Markov model, it is required to have a connected data set,\ni.e., we must have observed each process we want to describe in both directions.\nPyEMMA checks if this requirement is fulfilled but, however, in certain situations this might be less obvious.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport mdshare\nimport pyemma",
"_____no_output_____"
]
],
[
[
"## Case 1: preprocessed, two-dimensional data (toy model)\n\n### well-sampled double-well potential\n\nLet's again have a look at the double-well potential.\nSince we are only interested in the problematic situations here,\nwe will simplify our data a bit and work with a 1D projection.",
"_____no_output_____"
]
],
[
[
"file = mdshare.fetch('hmm-doublewell-2d-100k.npz', working_directory='data')\nwith np.load(file) as fh:\n data = [fh['trajectory'][:, 1]]",
"_____no_output_____"
]
],
[
[
"Since this particular example is simple enough, we can define a plotting function that combines histograms with trajectory data:",
"_____no_output_____"
]
],
[
[
"def plot_1D_histogram_trajectories(data, cluster=None, max_traj_length=200, ax=None):\n if ax is None:\n fig, ax = plt.subplots()\n for n, _traj in enumerate(data):\n ax.hist(_traj, bins=30, alpha=.33, density=True, color='C{}'.format(n));\n ylims = ax.get_ylim()\n xlims = ax.get_xlim()\n for n, _traj in enumerate(data):\n ax.plot(\n _traj[:min(len(_traj), max_traj_length)], \n np.linspace(*ylims, min(len(_traj), max_traj_length)), \n alpha=0.6, color='C{}'.format(n), label='traj {}'.format(n))\n if cluster is not None:\n ax.plot(\n cluster.clustercenters[cluster.dtrajs[n][:min(len(_traj), max_traj_length)], 0], \n np.linspace(*ylims, min(len(_traj), max_traj_length)), \n '.-', alpha=.6, label='dtraj {}'.format(n), linewidth=.3)\n ax.annotate(\n '', xy=(0.8500001 * xlims[1], 0.7 * ylims[1]), xytext=(0.85 * xlims[1], 0.3 * ylims[1]),\n arrowprops=dict(fc='C0', ec='None', alpha=0.6, width=2))\n ax.text(0.86 * xlims[1], 0.5 * ylims[1], '$x(time)$', ha='left', va='center', rotation=90)\n ax.set_xlabel('TICA coordinate')\n ax.set_ylabel('histogram counts & trajectory time')\n ax.legend(loc=2)",
"_____no_output_____"
]
],
[
[
"As a reference, we visualize the histogram of this well-sampled trajectory along with the first $200$ steps (left panel) and the MSM implied timescales (right panel):",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(1, 2, figsize=(10, 4))\n\ncluster = pyemma.coordinates.cluster_regspace(data, dmin=0.05)\n\nplot_1D_histogram_trajectories(data, cluster=cluster, ax=axes[0])\n\nlags = [i + 1 for i in range(10)]\nits = pyemma.msm.its(cluster.dtrajs, lags=lags)\npyemma.plots.plot_implied_timescales(its, marker='o', ax=axes[1], nits=4)\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"We see a nice, reversibly connected trajectory.\nThat means we have sampled transitions between the basins in both directions that are correctly resolved by the discretization.\nAs we see from the almost perfect overlay of discrete and continuous trajectory, nearly no discretization error is made. \n\n### irreversibly connected double-well trajectories\n\nIn MD simulations, we often face the problem that a process is sampled only in one direction.\nFor example, consider protein-protein binding.\nThe unbinding might take on the order of seconds to minutes and is thus difficult to sample.\nWe will have a look what happens with the MSM in this case. \n\nOur example are two trajectories sampled from a double-well potential, each started in a different basin.\nThey will be color coded.",
"_____no_output_____"
]
],
[
[
"file = mdshare.fetch('doublewell_oneway.npy', working_directory='data')\ndata = [trj for trj in np.load(file)]\n\nplot_1D_histogram_trajectories(data, max_traj_length=data[0].shape[0])",
"_____no_output_____"
]
],
[
[
"We note that the orange trajectory does not leave its potential well while the blue trajectory does overcome the barrier exactly once.\n\n⚠️ Even though we have sampled one direction of the process,\nwe do not sample the way out of one of the potential wells, thus effectively finding a sink state in our data. \n\nLet's have a look at the MSM.\nSince in higher dimensions, we often face the problem of poor discretization,\nwe will simulate this situation by using too few cluster centers.",
"_____no_output_____"
]
],
[
[
"cluster_fine = pyemma.coordinates.cluster_regspace(data, dmin=0.1)\ncluster_poor = pyemma.coordinates.cluster_regspace(data, dmin=0.7)\nprint(cluster_fine.n_clusters, cluster_poor.n_clusters)",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(2, 2, figsize=(10, 8), sharey='col')\nfor cluster, ax in zip([cluster_poor, cluster_fine], axes):\n plot_1D_histogram_trajectories(data, cluster=cluster, max_traj_length=data[0].shape[0], ax=ax[0])\n its = pyemma.msm.its(cluster.dtrajs, lags=[1, 10, 100, 200, 300, 500, 800, 1000])\n pyemma.plots.plot_implied_timescales(its, marker='o', ax=ax[1], nits=4)\naxes[0, 0].set_title('poor discretization')\naxes[1, 0].set_title('fine discretization')\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"#### What do we see? \n\n1) We observe implied timescales that even look converged in the fine discretization case. \n\n2) With poor clustering, the process cannot be resolved any more, i.e., the ITS does not convergence before the lag time exceeds the implied time scale. \n\nThe obvious question is, what is the process that can be observed in the fine discretization case?\nPyEMMA checks for disconnectivity and thus should not find the process between the two wells.\nWe follow this question by taking a look at the first eigenvector, which corresponds to that process.",
"_____no_output_____"
]
],
[
[
"msm = pyemma.msm.estimate_markov_model(cluster_fine.dtrajs, 200)\nfig, ax = plt.subplots()\nax.plot(\n cluster_fine.clustercenters[msm.active_set, 0],\n msm.eigenvectors_right()[:, 1],\n 'o:',\n label='first eigvec')\ntx = ax.twinx()\ntx.hist(np.concatenate(data), bins=30, alpha=0.33)\ntx.set_yticklabels([])\ntx.set_yticks([])\nfig.legend()\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"We observe a process which is entirely taking place in the left potential well.\nHow come?\nPyEMMA estimates MSMs only on the largest connected set because they are only defined on connected sets.\nIn this particular example, the largest connected set is the microstates in the left potential well.\nThat means that we find a transition between the right and the left side of this well.\nThis is not wrong, it might just be non-informative or even irrelevant. \n\nThe set of microstates which is used for the MSM estimation is stored in the MSM object `msm` and can be retrieved via `.active_set`.",
"_____no_output_____"
]
],
[
[
"print('Active set: {}'.format(msm.active_set))\nprint('Active state fraction: {:.2}'.format(msm.active_state_fraction))",
"_____no_output_____"
]
],
[
[
"In this example we clearly see that some states are missing.\n\n### disconnected double-well trajectories with cross-overs\n\nThis example covers the worst-case scenario.\nWe have two trajectories that live in two separated wells and never transition to the other one.\nDue to a very bad clustering, we believe that the data is connected.\nThis can happen if we cluster a large dataset in very high dimensions where it is especially difficult to debug. ",
"_____no_output_____"
]
],
[
[
"file = mdshare.fetch('doublewell_disconnected.npy', working_directory='data')\ndata = [trj for trj in np.load(file)]\n\nplot_1D_histogram_trajectories(data, max_traj_length=data[0].shape[0])",
"_____no_output_____"
]
],
[
[
"We, again, compare a reasonable to a deliberately poor discretization:",
"_____no_output_____"
]
],
[
[
"cluster_fine = pyemma.coordinates.cluster_regspace(data, dmin=0.1)\ncluster_poor = pyemma.coordinates.cluster_regspace(data, dmin=0.7)\nprint(cluster_fine.n_clusters, cluster_poor.n_clusters)",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(2, 2, figsize=(10, 8), sharey='col')\nfor cluster, ax in zip([cluster_poor, cluster_fine], axes):\n plot_1D_histogram_trajectories(data, cluster=cluster, max_traj_length=data[0].shape[0], ax=ax[0])\n its = pyemma.msm.its(cluster.dtrajs, lags=[1, 10, 100, 200, 300, 500, 800, 1000])\n pyemma.plots.plot_implied_timescales(its, marker='o', ax=ax[1], nits=4)\naxes[0, 0].set_title('poor discretization')\naxes[1, 0].set_title('fine discretization')\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"#### What do we see?\n\n1) With the fine discretization, we observe some timescales that are converged. These are most probably processes within one of the wells, similar to the ones we saw before.\n\n2) The poor discretization induces a large error and describes artificial short visits to the other basin.\n\n3) The timescales in the poor discretization are much higher but not converged. \n\nThe reason for the high timescales in 3) are in fact the artificial cross-over events created by the poor discretization.\nThis process was not actually sampled and is an artifact of bad clustering.\nLet's look at it in more detail and see what happens if we estimate an MSM and even compute metastable states with PCCA++.",
"_____no_output_____"
]
],
[
[
"msm = pyemma.msm.estimate_markov_model(cluster_poor.dtrajs, 200)\n\nnstates = 2\nmsm.pcca(nstates)\n\nindex_order = np.argsort(cluster_poor.clustercenters[:, 0])\n\nfig, axes = plt.subplots(1, 3, figsize=(12, 3))\naxes[0].plot(\n cluster_poor.clustercenters[index_order, 0],\n msm.eigenvectors_right()[index_order, 1],\n 'o:',\n label='1st eigvec')\naxes[0].set_title('first eigenvector')\nfor n, metastable_distribution in enumerate(msm.metastable_distributions):\n axes[1].step(\n cluster_poor.clustercenters[index_order, 0],\n metastable_distribution[index_order],\n ':', \n label='md state {}'.format(n + 1),\n where='mid')\naxes[1].set_title('metastable distributions (md)')\naxes[2].step(\n cluster_poor.clustercenters[index_order, 0],\n msm.pi[index_order],\n 'k--',\n label='$\\pi$',\n where='mid')\naxes[2].set_title('stationary distribution $\\pi$')\nfor ax in axes:\n tx = ax.twinx()\n tx.hist(np.concatenate(data), bins=30, alpha=0.33)\n tx.set_yticklabels([])\n tx.set_yticks([])\nfig.legend(loc=7)\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"We observe that the first eigenvector represents a process that does not exist, i.e., is an artifact.\nNevertheless, the PCCA++ algorithm can separate metastable states in a way we would expect.\nIt finds the two disconnected states. However, the stationary distribution yields arbitrary results. \n\n#### How to detect disconnectivity?\n\nGenerally, hidden Markov models (HMMs) are much more reliable because they come with an additional layer of hidden states.\nCross-over events are thus unlikely to be counted as \"real\" transitions.\nThus, it is a good idea to estimate an HMM.\nWhat happens if we try to estimate a two state HMM on the same, poorly discretized data? \n\n⚠️ It is important to note that the HMM estimation is initialized from the PCCA++ metastable states that we already analyzed.",
"_____no_output_____"
]
],
[
[
"hmm = pyemma.msm.estimate_hidden_markov_model(cluster_poor.dtrajs, nstates, msm.lag)",
"_____no_output_____"
]
],
[
[
"We are getting an error message which already explains what is going wrong, i.e.,\nthat the (macro-) states are not connected and thus no unique stationary distribution can be estimated.\nThis is equivalent to having two eigenvalues of magnitude 1 or an implied timescale of infinity which is what we observe in the implied timescales plot.",
"_____no_output_____"
]
],
[
[
"its = pyemma.msm.timescales_hmsm(cluster_poor.dtrajs, nstates, lags=[1, 3, 4, 10, 100])\npyemma.plots.plot_implied_timescales(its, marker='o', ylog=True);",
"_____no_output_____"
]
],
[
[
"As we see, the requested timescales above $4$ steps could not be computed because the underlying HMM is disconnected,\ni.e., the corresponding timescales are infinity.\nThe implied timescales that could be computed are most likely the same process that we observed from the fine clustering before, i.e., jumps within one basin.\n\nIn general, it is a non-trivial problem to show that processes were not sampled reversibly.\nIn our experience, HMMs are a good choice here, even though situations can occur where they might not detect the problem as easily as in this example. \n\n<a id=\"poorly_sampled_dw\"></a>\n### poorly sampled double-well trajectories\n\nLet's now assume that everything worked out fine but our sampling is somewhat poor.\nThis is a realistic scenario when dealing with large systems that were well-sampled but still contain only few events of interest.\nWe expect that our trajectories are just long enough to sample a certain process but are too short to capture them with a large lag time.\nTo rule out discretization issues and to make the example clear, we use the full data set for discretization.",
"_____no_output_____"
]
],
[
[
"file = mdshare.fetch('hmm-doublewell-2d-100k.npz', working_directory='data')\nwith np.load(file) as fh:\n data = [fh['trajectory'][:, 1]]\ncluster = pyemma.coordinates.cluster_regspace(data, dmin=0.05)",
"_____no_output_____"
]
],
[
[
"We want to simulate a process that happens on a timescale that is on the order of magnitude of the trajectory length.\nTo do so, we choose `n_trajs` chunks from the full data set that contain `traj_length` steps by splitting the original trajectory:",
"_____no_output_____"
]
],
[
[
"traj_length = 10\nn_trajs = 50\n\ndata_short_trajs = list(data[0].reshape((data[0].shape[0] // traj_length, traj_length)))[:n_trajs]\ndtrajs_short = list(cluster.dtrajs[0].reshape((data[0].shape[0] // traj_length, traj_length)))[:n_trajs]",
"_____no_output_____"
]
],
[
[
"Now, let's plot the trajectories (left panel) and estimate implied timescales (right panel) as above.\nSince we know the true ITS of this process, we visualize it as a dotted line.",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(1, 2, figsize=(10, 4))\n\nfor n, _traj in enumerate(data_short_trajs):\n axes[0].plot(_traj, np.linspace(0, 1, _traj.shape[0]) + n)\n\nlags = [i + 1 for i in range(9)]\n\nits = pyemma.msm.its(dtrajs_short, lags=lags)\npyemma.plots.plot_implied_timescales(its, marker='o', ax=axes[1], nits=1)\nits_reference = pyemma.msm.its(cluster.dtrajs, lags=lags)\npyemma.plots.plot_implied_timescales(its_reference, linestyle=':', ax=axes[1], nits=1)\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"We note that the slowest process is clearly contained in the data chunks and is reversibly sampled (left panel, short trajectory pieces color coded and stacked).\nDue to very short trajectories, we find that this process can only be captured at a very short MSM lag time (right panel).\nAbove that interval, the slowest timescale diverges.\nLuckily, here we know that it is already converged at $\\tau = 1$, so we estimate an MSM:",
"_____no_output_____"
]
],
[
[
"msm_short_trajectories = pyemma.msm.estimate_markov_model(dtrajs_short, 1)",
"_____no_output_____"
]
],
[
[
"Let's now have a look at the CK-test:",
"_____no_output_____"
]
],
[
[
"pyemma.plots.plot_cktest(msm_short_trajectories.cktest(2), marker='.');",
"_____no_output_____"
]
],
[
[
"As already discussed, we cannot expect new estimates above a certain lag time to agree with the model prediction due to too short trajectories.\nIndeed, we find that new estimates and model predictions diverge at very high lag times.\nThis does not necessarily mean that the model at $\\tau=1$ is wrong and in this particular case,\nwe can even explain the divergence and find that it fits to the implied timescales divergence. \n\nThis example mirrors another incarnation of the sampling problem: Working with large systems,\nwe often have comparably short trajectories with few rare events.\nThus, implied timescales convergence can often be achieved only in a certain interval and CK-tests will not converge up to arbitrary multiples of the lag time.\nIt is the responsibility of the modeler to interpret these results and to ensure that a valid model can be obtained from the data.\n\nPlease note that this is only a special case of a failed CK test.\nMore general information about CK tests and what it means if it fails are explained in\n[Notebook 03 ➜ 📓](03-msm-estimation-and-validation.ipynb).\n\n## Case 2: low-dimensional molecular dynamics data (alanine dipeptide)\n\nIn this example, we will show how an ill-conducted TICA analysis can yield results that look metastable in the 2D histogram,\nbut in fact are not describing the slow dynamics.\nPlease note that this was deliberately broken with a nonsensical TICA-lagtime of almost trajectory length, which is 250 ns.\n\nWe start off with adding all atom coordinates.\nThat is a non-optimal choice because it artificially blows up the dimensionality,\nbut might still be a reasonable choice depending on the problem.\nA well-conducted TICA projection can extract the slow coordinates, as we will see at the end of this example.",
"_____no_output_____"
]
],
[
[
"pdb = mdshare.fetch('alanine-dipeptide-nowater.pdb', working_directory='data')\nfiles = mdshare.fetch('alanine-dipeptide-*-250ns-nowater.xtc', working_directory='data')\nfeat = pyemma.coordinates.featurizer(pdb)\n\nfeat.add_all()\ndata = pyemma.coordinates.load(files, features=feat)",
"_____no_output_____"
]
],
[
[
"TICA analysis is conducted with an extremely high lag time of almost $249.9$ ns. We map down to two dimensions.",
"_____no_output_____"
]
],
[
[
"tica = pyemma.coordinates.tica(data, lag=data[0].shape[0] - 100, dim=2)\ntica_output = tica.get_output()\n\npyemma.plots.plot_free_energy(*np.concatenate(tica_output).T, legacy=False);",
"_____no_output_____"
]
],
[
[
"In the free energy plot, we recognize two defined basins that are nicely separated by the first TICA component. We thus continue with a discretization of this space and estimate MSM implied timescales.",
"_____no_output_____"
]
],
[
[
"cluster = pyemma.coordinates.cluster_kmeans(tica_output, k=200, max_iter=30, stride=100)",
"_____no_output_____"
],
[
"its = pyemma.msm.its(cluster.dtrajs, lags=[1, 5, 10, 20, 30, 50])\npyemma.plots.plot_implied_timescales(its, marker='o', units='ps', nits=3);",
"_____no_output_____"
]
],
[
[
"Indeed, we observe a converged implied timescale.\nIn this example we already know that it is way lower than expected,\nbut in the general case we are unaware of the real dynamics of the system. \n\nThus, we estimate an MSM at lag time $20 $ ps.\nCoarse graining and validation will be done with $2$ metastable states since we found $2$ basins in the free energy landscape and have one slow process in the ITS plot.",
"_____no_output_____"
]
],
[
[
"msm = pyemma.msm.estimate_markov_model(cluster.dtrajs, 20)\n\nnstates = 2\nmsm.pcca(nstates);",
"_____no_output_____"
],
[
"stride = 10\nmetastable_trajs_strided = [msm.metastable_assignments[dtrj[::stride]] for dtrj in cluster.dtrajs]\ntica_output_strided = [i[::stride] for i in tica_output]\n_, _, misc = pyemma.plots.plot_state_map(*np.concatenate(tica_output_strided).T, \n np.concatenate(metastable_trajs_strided));\nmisc['cbar'].set_ticklabels(range(1, nstates + 1)) # set state numbers 1 ... nstates",
"_____no_output_____"
]
],
[
[
"As we see, the PCCA++ algorithm is perfectly able to separate the two basins.\nLet's go on with a Chapman-Kolmogorow validation.",
"_____no_output_____"
]
],
[
[
"pyemma.plots.plot_cktest(msm.cktest(nstates), units='ps');",
"_____no_output_____"
]
],
[
[
"Congratulations, we have estimated a well-validated MSM.\nThe only question remaining is: What does it actually describe?\nFor this, we usually extract representative structures as described in [Notebook 00 ➜ 📓](00-pentapeptide-showcase.ipynb).\nWe will not do this here but look at the metastable trajectories instead.\n\n#### What could be wrong about it?\n\nLet's have a look at the trajectories as assigned to PCCA++ metastable states.\nWe have already computed them before but not looked at their time dependence.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 1, figsize=(15, 6), sharey=True, sharex=True)\nax_yticks_labels = []\nfor n, pcca_traj in enumerate(metastable_trajs_strided):\n ax.plot(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, color='k', linewidth=0.3)\n ax.scatter(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, c=pcca_traj, s=0.1)\n ax_yticks_labels.append(((msm.n_metastable * (2 * n + 1) - 1) / 2, n + 1))\nax.set_yticks([l[0] for l in ax_yticks_labels])\nax.set_yticklabels([str(l[1]) for l in ax_yticks_labels])\nax.set_ylabel('Trajectory #')\nax.set_xlabel('time / {} ps'.format(stride))\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"#### What do we see?\nThe above figure shows the metastable states visited by the trajectory over time.\nEach metastable state is color-coded, the trajectory is shown by the black line.\nThis is clearly not a metastable trajectory as we would have expected. \n\nWhat did we do wrong?\nLet's have a look at the TICA trajectories, not only the histogram!",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(2, 3, figsize=(12, 6), sharex=True, sharey='row')\n\nfor n, trj in enumerate(tica_output):\n for dim, traj1d in enumerate(trj.T):\n axes[dim, n].plot(traj1d[::stride], linewidth=.5)\nfor ax in axes[1]:\n ax.set_xlabel('time / {} ps'.format(stride))\nfor dim, ax in enumerate(axes[:, 0]):\n ax.set_ylabel('IC {}'.format(dim + 1))\nfor n, ax in enumerate(axes[0]):\n ax.set_title('Trajectory # {}'.format(n + 1))\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"This is essentially noise, so it is not surprising that the metastable trajectories do not show significant metastability.\nThe MSM nevertheless found a process in the above TICA components which, however,\ndoes not seem to describe any of the slow dynamics.\nThus, the model is not wrong, it is just not informative. \n\nAs we see in this example, it can be instructive to keep the trajectories in mind and not to rely on the histograms alone.\n\n⚠️ Histograms are no proof of metastability,\nthey can only give us a hint towards defined states in a multi-dimensional state space which can be metastable.\n\n#### How to fix it?\n\nIn this particular example, we already know the issue:\nthe TICA lag time was deliberately chosen way too high.\nThat's easy to fix.\n\nLet's now have a look at how the metastable trajectories should look for a decent model such as the one estimated in [Notebook 05 ➜ 📓](05-pcca-tpt.ipynb).\nWe will take the same input data,\ndo a TICA transform with a realistic lag time of $10$ ps,\nand coarse grain into $2$ metastable states in order to compare with the example above.",
"_____no_output_____"
]
],
[
[
"tica = pyemma.coordinates.tica(data, lag=10, dim=2)\ntica_output = tica.get_output()\ncluster = pyemma.coordinates.cluster_kmeans(tica_output, k=200, max_iter=30, stride=100)\n\npyemma.plots.plot_free_energy(*np.concatenate(tica_output).T, legacy=False);",
"_____no_output_____"
]
],
[
[
"As wee see, TICA yields a very nice state separation.\nWe will see that these states are in fact metastable.",
"_____no_output_____"
]
],
[
[
"msm = pyemma.msm.estimate_markov_model(cluster.dtrajs, lag=20)\nmsm.pcca(nstates);",
"_____no_output_____"
],
[
"metastable_trajs_strided = [msm.metastable_assignments[dtrj[::stride]] for dtrj in cluster.dtrajs]",
"_____no_output_____"
],
[
"stride = 10\ntica_output_strided = [i[::stride] for i in tica_output]\n_, _, misc = pyemma.plots.plot_state_map(*np.concatenate(tica_output_strided).T, \n np.concatenate(metastable_trajs_strided));\nmisc['cbar'].set_ticklabels(range(1, nstates + 1)) # set state numbers 1 ... nstates",
"_____no_output_____"
]
],
[
[
"We note that PCCA++ separates the two basins of the free energy plot.\nLet's have a look at the metastable trajectories:",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 1, figsize=(12, 6), sharey=True, sharex=True)\nax_yticks_labels = []\nfor n, pcca_traj in enumerate(metastable_trajs_strided):\n ax.plot(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, color='k', linewidth=0.3)\n ax.scatter(range(len(pcca_traj)), msm.n_metastable * n + pcca_traj, c=pcca_traj, s=0.1)\n ax_yticks_labels.append(((msm.n_metastable * (2 * n + 1) - 1) / 2, n + 1))\nax.set_yticks([l[0] for l in ax_yticks_labels])\nax.set_yticklabels([str(l[1]) for l in ax_yticks_labels])\nax.set_ylabel('Trajectory #')\nax.set_xlabel('time / {} ps'.format(stride))\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"These trajectories show the expected behavior of a metastable trajectory,\ni.e., it does not quickly jump back and forth between the states.\n\n## Wrapping up\n\nIn this notebook, we have learned about some problems that can arise when estimating MSMs with \"real world\" data at simple examples.\nIn detail, we have seen\n- irreversibly connected dynamics and what it means for MSM estimation,\n- fully disconnected trajectories and how to identify them,\n- connected but poorly sampled trajectories and how convergence looks in this case,\n- ill-conducted TICA analysis and what it yields.\n\nThe most important lesson from this tutorial is that histograms, which are usually calculated in a projected space, are not a sufficient means of identifying metastability or connectedness.\nIt is crucial to remember that the underlying trajectories play the role of ground truth for the model. \nUltimately, histograms only help us to understand this ground truth but cannot provide a complete picture.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0af2eaa1cc6e14ae1fe1cc4f03e421e0541a756 | 17,296 | ipynb | Jupyter Notebook | Research_env/Perceptron_implementation.ipynb | sharad28/Perceptron_implementation | 5ff7070c3102e6c6aed27318dc8118978ea142ab | [
"MIT"
] | 1 | 2022-02-02T06:10:57.000Z | 2022-02-02T06:10:57.000Z | Research_env/Perceptron_implementation.ipynb | sharad28/Perceptron_implementation | 5ff7070c3102e6c6aed27318dc8118978ea142ab | [
"MIT"
] | null | null | null | Research_env/Perceptron_implementation.ipynb | sharad28/Perceptron_implementation | 5ff7070c3102e6c6aed27318dc8118978ea142ab | [
"MIT"
] | null | null | null | 25.96997 | 103 | 0.401943 | [
[
[
"# pip install joblib\n",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport joblib\nplt.style.use('fivethirtyeight')",
"_____no_output_____"
],
[
"class Perceptron:\n def __init__(self,eta: float=None, epochs: int=None):\n self.weights = np.random.randn(3)*1e-4\n self.eta=eta #learning rate\n self.epochs = epochs # number of iteration\n def _z_outcome(self, inputs, weights):\n return np.dot(inputs,weights)\n def activation_function(self,z):\n return np.where(z>0,1,0)\n def fit(self, X, y):\n self.X = X\n self.y = y\n \n X_with_bias=np.c_[self.X,-np.ones(((len(self.X)),1))]\n print(X_with_bias)\n# self.z=_z_cal()\n for epoch in range(self.epochs):\n print(\"__\"*10)\n print(f'for epoch: {epoch+1}')\n z = self._z_outcome(X_with_bias,self.weights)\n y_hat = self.activation_function(z)\n \n print(f'predicted value after forward pass: \\n{y_hat}')\n \n self.error = self.y-y_hat\n print(f'error : \\n{self.error}')\n \n self.weights = self.weights + self.eta*np.dot(X_with_bias.T,self.error)\n print(f'Updated weights after epoch : {epoch +1}/{self.epochs} \\n{self.weights}')\n def predict(self,X):\n X_with_bais = np.c_[X,-np.ones(((len(X)),1))]\n z=self._z_outcome(X_with_bais,self.weights)\n return self.activation_function(z)\n ",
"_____no_output_____"
],
[
"obj= Perceptron(eta=.2,epochs=10)",
"_____no_output_____"
],
[
"OR ={\n \"x1\":[0,0,1,1],\n \"x2\":[0,1,0,1],\n \"y\":[0,1,1,1]\n}\ndf_or = pd.DataFrame(OR)\ndf_or",
"_____no_output_____"
],
[
"AND ={\n \"x1\":[0,0,1,1],\n \"x2\":[0,1,0,1],\n \"y\":[0,0,0,1]\n}\ndf_and = pd.DataFrame(AND)\ndf_and",
"_____no_output_____"
],
[
"def prepare_data(df, target_col='y'):\n X = df.drop(target_col,axis=1)\n y = df[target_col]\n return X, y",
"_____no_output_____"
],
[
"X,y = prepare_data(df_and)\nETA= 0.1\nEPOCHS = 10\nmodel_and = Perceptron(eta=ETA, epochs=EPOCHS)\nmodel_and.fit(X,y)",
"[[ 0. 0. -1.]\n [ 0. 1. -1.]\n [ 1. 0. -1.]\n [ 1. 1. -1.]]\n____________________\nfor epoch: 1\npredicted value after forward pass: \n[1 1 0 0]\nerror : \n0 -1\n1 -1\n2 0\n3 1\nName: y, dtype: int64\nUpdated weights after epoch : 1/10 \n[9.98616217e-02 4.88378852e-05 9.99127134e-02]\n____________________\nfor epoch: 2\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 0\n2 0\n3 1\nName: y, dtype: int64\nUpdated weights after epoch : 2/10 \n[ 1.99861622e-01 1.00048838e-01 -8.72865745e-05]\n____________________\nfor epoch: 3\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 -1\n2 -1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 3/10 \n[9.98616217e-02 4.88378852e-05 2.99912713e-01]\n____________________\nfor epoch: 4\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 0\n2 0\n3 1\nName: y, dtype: int64\nUpdated weights after epoch : 4/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 5\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 5/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 6\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 6/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 7\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 7/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 8\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 8/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 9\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 9/10 \n[0.19986162 0.10004884 0.19991271]\n____________________\nfor epoch: 10\npredicted value after forward pass: \n[0 0 0 1]\nerror : \n0 0\n1 0\n2 0\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 10/10 \n[0.19986162 0.10004884 0.19991271]\n"
],
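[
"# (added sketch) quick sanity check on the trained AND perceptron;\n# assumes `model_and`, `prepare_data` and `df_and` from the cells above\nX_and, y_and = prepare_data(df_and)\nprint('predictions:', model_and.predict(X_and))  # expected: [0 0 0 1]\nprint('targets    :', y_and.values)",
"_____no_output_____"
],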
[
"xor ={\n \"x1\":[0,0,1,1],\n \"x2\":[0,1,0,1],\n \"y\":[0,1,1,0]\n}\ndf_xor = pd.DataFrame(xor)\ndf_xor",
"_____no_output_____"
],
[
"X,y = prepare_data(df_xor)\nETA= 0.1\nEPOCHS = 10\nmodel_xor = Perceptron(eta=ETA, epochs=EPOCHS)\nmodel_xor.fit(X,y)",
"[[ 0. 0. -1.]\n [ 0. 1. -1.]\n [ 1. 0. -1.]\n [ 1. 1. -1.]]\n____________________\nfor epoch: 1\npredicted value after forward pass: \n[0 1 0 0]\nerror : \n0 0\n1 0\n2 1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 1/10 \n[ 0.0999219 0.000151 -0.09989023]\n____________________\nfor epoch: 2\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 0\n2 0\n3 -1\nName: y, dtype: int64\nUpdated weights after epoch : 2/10 \n[-7.81023823e-05 -9.98489975e-02 1.00109772e-01]\n____________________\nfor epoch: 3\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 1\n2 1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 3/10 \n[ 0.0999219 0.000151 -0.09989023]\n____________________\nfor epoch: 4\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 0\n2 0\n3 -1\nName: y, dtype: int64\nUpdated weights after epoch : 4/10 \n[-7.81023823e-05 -9.98489975e-02 1.00109772e-01]\n____________________\nfor epoch: 5\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 1\n2 1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 5/10 \n[ 0.0999219 0.000151 -0.09989023]\n____________________\nfor epoch: 6\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 0\n2 0\n3 -1\nName: y, dtype: int64\nUpdated weights after epoch : 6/10 \n[-7.81023823e-05 -9.98489975e-02 1.00109772e-01]\n____________________\nfor epoch: 7\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 1\n2 1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 7/10 \n[ 0.0999219 0.000151 -0.09989023]\n____________________\nfor epoch: 8\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 0\n2 0\n3 -1\nName: y, dtype: int64\nUpdated weights after epoch : 8/10 \n[-7.81023823e-05 -9.98489975e-02 1.00109772e-01]\n____________________\nfor epoch: 9\npredicted value after forward pass: \n[0 0 0 0]\nerror : \n0 0\n1 1\n2 1\n3 0\nName: y, dtype: int64\nUpdated weights after epoch : 9/10 \n[ 0.0999219 0.000151 -0.09989023]\n____________________\nfor epoch: 10\npredicted value after forward pass: \n[1 1 1 1]\nerror : \n0 -1\n1 0\n2 0\n3 -1\nName: y, dtype: int64\nUpdated weights after epoch : 10/10 \n[-7.81023823e-05 -9.98489975e-02 1.00109772e-01]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0af3cdd3ccdd72a22e32058b47207f042795e12 | 232,615 | ipynb | Jupyter Notebook | 2. Define the Network Architecture.ipynb | thanhthtran/Facial_keypoint_cnv | 0328bd096246db28e2349f54aa93f0fff6bac3b1 | [
"MIT"
] | null | null | null | 2. Define the Network Architecture.ipynb | thanhthtran/Facial_keypoint_cnv | 0328bd096246db28e2349f54aa93f0fff6bac3b1 | [
"MIT"
] | 1 | 2018-12-04T13:38:48.000Z | 2019-02-26T14:47:17.000Z | 2. Define the Network Architecture.ipynb | thanhthtran/Facial_keypoint_cnv | 0328bd096246db28e2349f54aa93f0fff6bac3b1 | [
"MIT"
] | null | null | null | 223.238964 | 28,828 | 0.901485 | [
[
[
"## Define the Convolutional Neural Network\n\nAfter you've looked at the data you're working with and, in this case, know the shapes of the images and of the keypoints, you are ready to define a convolutional neural network that can *learn* from this data.\n\nIn this notebook and in `models.py`, you will:\n1. Define a CNN with images as input and keypoints as output\n2. Construct the transformed FaceKeypointsDataset, just as before\n3. Train the CNN on the training data, tracking loss\n4. See how the trained model performs on test data\n5. If necessary, modify the CNN structure and model hyperparameters, so that it performs *well* **\\***\n\n**\\*** What does *well* mean?\n\n\"Well\" means that the model's loss decreases during training **and**, when applied to test image data, the model produces keypoints that closely match the true keypoints of each face. And you'll see examples of this later in the notebook.\n\n---\n",
"_____no_output_____"
],
[
"## CNN Architecture\n\nRecall that CNN's are defined by a few types of layers:\n* Convolutional layers\n* Maxpooling layers\n* Fully-connected layers\n\nYou are required to use the above layers and encouraged to add multiple convolutional layers and things like dropout layers that may prevent overfitting. You are also encouraged to look at literature on keypoint detection, such as [this paper](https://arxiv.org/pdf/1710.00977.pdf), to help you determine the structure of your network.\n\n\n### TODO: Define your model in the provided file `models.py` file\n\nThis file is mostly empty but contains the expected name and some TODO's for creating your model.\n\n---",
"_____no_output_____"
],
[
"## PyTorch Neural Nets\n\nTo define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in.\n\nNote: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network.\n\n#### Define the Layers in ` __init__`\nAs a reminder, a conv/pool layer may be defined like this (in `__init__`):\n```\n# 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernel\nself.conv1 = nn.Conv2d(1, 32, 3)\n\n# maxpool that uses a square window of kernel_size=2, stride=2\nself.pool = nn.MaxPool2d(2, 2) \n```\n\n#### Refer to Layers in `forward`\nThen referred to in the `forward` function like this, in which the conv1 layer has a ReLu activation applied to it before maxpooling is applied:\n```\nx = self.pool(F.relu(self.conv1(x)))\n```\n\nBest practice is to place any layers whose weights will change during the training process in `__init__` and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, should appear *only* in the `forward` function.",
"_____no_output_____"
],
[
"#### Why models.py\n\nYou are tasked with defining the network in the `models.py` file so that any models you define can be saved and loaded by name in different notebooks in this project directory. For example, by defining a CNN class called `Net` in `models.py`, you can then create that same architecture in this and other notebooks by simply importing the class and instantiating a model:\n```\n from models import Net\n net = Net()\n```",
"_____no_output_____"
]
],
[
[
"# import the usual resources\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# watch for any changes in model.py, if itchanges, re-load it automatically\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"## TODO: Define the Net in models.py\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n## TODO: Once you've define the network, you can instantiate it\n# one example conv layer has been provided for you\nfrom models import Net\n\nnet = Net()\nprint(net)",
"Net(\n (conv1): Conv2d(1, 32, kernel_size=(4, 4), stride=(1, 1))\n (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))\n (conv3): Conv2d(64, 128, kernel_size=(2, 2), stride=(1, 1))\n (conv4): Conv2d(128, 256, kernel_size=(2, 2), stride=(1, 1))\n (drop_out1): Dropout(p=0.1)\n (drop_out2): Dropout(p=0.1)\n (drop_out3): Dropout(p=0.2)\n (drop_out4): Dropout(p=0.2)\n (drop_out5): Dropout(p=0.5)\n (drop_out6): Dropout(p=0.5)\n (fc1): Linear(in_features=36864, out_features=3200, bias=True)\n (fc2): Linear(in_features=3200, out_features=1600, bias=True)\n (fc3): Linear(in_features=1600, out_features=136, bias=True)\n (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n)\n"
]
],
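[
[
"# (added sketch) optional sanity check, assuming `net` was instantiated above:\n# count the trainable parameters to get a feel for the model's size\ntotal_params = sum(p.numel() for p in net.parameters() if p.requires_grad)\nprint('trainable parameters:', total_params)",
"_____no_output_____"
]
],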
[
[
"## Transform the dataset \n\nTo prepare for training, create a transformed dataset of images and keypoints.\n\n### TODO: Define a data transform\n\nIn PyTorch, a convolutional neural network expects a torch image of a consistent size as input. For efficient training, and so your model's loss does not blow up during training, it is also suggested that you normalize the input images and keypoints. The necessary transforms have been defined in `data_load.py` and you **do not** need to modify these; take a look at this file (you'll see the same transforms that were defined and applied in Notebook 1).\n\nTo define the data transform below, use a [composition](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#compose-transforms) of:\n1. Rescaling and/or cropping the data, such that you are left with a square image (the suggested size is 224x224px)\n2. Normalizing the images and keypoints; turning each RGB image into a grayscale image with a color range of [0, 1] and transforming the given keypoints into a range of [-1, 1]\n3. Turning these images and keypoints into Tensors\n\nThese transformations have been defined in `data_load.py`, but it's up to you to call them and create a `data_transform` below. **This transform will be applied to the training data and, later, the test data**. It will change how you go about displaying these images and keypoints, but these steps are essential for efficient training.\n\nAs a note, should you want to perform data augmentation (which is optional in this project), and randomly rotate or shift these images, a square image size will be useful; rotating a 224x224 image by 90 degrees will result in the same shape of output.",
"_____no_output_____"
]
],
[
[
"from torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms, utils\n\n# the dataset we created in Notebook 1 is copied in the helper file `data_load.py`\nfrom data_load import FacialKeypointsDataset\n# the transforms we defined in Notebook 1 are in the helper file `data_load.py`\nfrom data_load import Rescale, RandomCrop, Normalize, ToTensor\n\n\n## TODO: define the data_transform using transforms.Compose([all tx's, . , .])\n# order matters! i.e. rescaling should come before a smaller crop\ndata_transform = transforms.Compose([Rescale(256), RandomCrop(224),Normalize(), ToTensor()])\n#data_transform = transforms.Compose([Rescale((224,224)),Normalize(), ToTensor()])\n\n# testing that you've defined a transform\nassert(data_transform is not None), 'Define a data_transform'",
"_____no_output_____"
],
[
"# create the transformed dataset\ntransformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',\n root_dir='data/training/',\n transform=data_transform)\n\n\nprint('Number of images: ', len(transformed_dataset))\n\n# iterate through the transformed dataset and print some stats about the first few samples\nfor i in range(4):\n sample = transformed_dataset[i]\n print(i, sample['image'].size(), sample['keypoints'].size())\n \n\n",
"Number of images: 3462\n0 torch.Size([1, 224, 224]) torch.Size([68, 2])\n1 torch.Size([1, 224, 224]) torch.Size([68, 2])\n2 torch.Size([1, 224, 224]) torch.Size([68, 2])\n3 torch.Size([1, 224, 224]) torch.Size([68, 2])\n"
]
],
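[
[
"# (added sketch) verify the transform contract described above, assuming\n# `transformed_dataset` exists: 224x224 image tensors with values in [0, 1],\n# and keypoints roughly in the [-1, 1] range\nsample = transformed_dataset[0]\nimg, pts = sample['image'], sample['keypoints']\nprint(img.shape, float(img.min()), float(img.max()))\nprint(pts.shape, float(pts.min()), float(pts.max()))",
"_____no_output_____"
]
],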
[
[
"## Batching and loading data\n\nNext, having defined the transformed dataset, we can use PyTorch's DataLoader class to load the training data in batches of whatever size as well as to shuffle the data for training the model. You can read more about the parameters of the DataLoader, in [this documentation](http://pytorch.org/docs/master/data.html).\n\n#### Batch size\nDecide on a good batch size for training your model. Try both small and large batch sizes and note how the loss decreases as the model trains.\n\n**Note for Windows users**: Please change the `num_workers` to 0 or you may face some issues with your DataLoader failing.",
"_____no_output_____"
]
],
[
[
"# load training data in batches\nbatch_size = 128\n\ntrain_loader = DataLoader(transformed_dataset, \n batch_size=batch_size,\n shuffle=True, \n num_workers=4)\n",
"_____no_output_____"
]
],
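[
[
"# (added sketch) peek at a single batch to confirm the DataLoader output shapes,\n# assuming `train_loader` was created in the cell above\nbatch = next(iter(train_loader))\nprint(batch['image'].shape)      # expected: [batch_size, 1, 224, 224]\nprint(batch['keypoints'].shape)  # expected: [batch_size, 68, 2]",
"_____no_output_____"
]
],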
[
[
"## Before training\n\nTake a look at how this model performs before it trains. You should see that the keypoints it predicts start off in one spot and don't match the keypoints on a face at all! It's interesting to visualize this behavior so that you can compare it to the model after training and see how the model has improved.\n\n#### Load in the test dataset\n\nThe test dataset is one that this model has *not* seen before, meaning it has not trained with these images. We'll load in this test data and before and after training, see how your model performs on this set!\n\nTo visualize this test data, we have to go through some un-transformation steps to turn our images into python images from tensors and to turn our keypoints back into a recognizable range. ",
"_____no_output_____"
]
],
[
[
"# load in the test data, using the dataset class\n# AND apply the data_transform you defined above\n\n# create the test dataset\ntest_dataset = FacialKeypointsDataset(csv_file='data/test_frames_keypoints.csv',\n root_dir='data/test/',\n transform=data_transform)\n\n",
"_____no_output_____"
],
[
"# load test data in batches\nbatch_size = 128\n\ntest_loader = DataLoader(test_dataset, \n batch_size=batch_size,\n shuffle=True, \n num_workers=4)",
"_____no_output_____"
]
],
[
[
"## Apply the model on a test sample\n\nTo test the model on a test sample of data, you have to follow these steps:\n1. Extract the image and ground truth keypoints from a sample\n2. Make sure the image is a FloatTensor, which the model expects.\n3. Forward pass the image through the net to get the predicted, output keypoints.\n\nThis function test how the network performs on the first batch of test data. It returns the images, the transformed images, the predicted keypoints (produced by the model), and the ground truth keypoints.",
"_____no_output_____"
]
],
[
[
"# test the model on a batch of test images\n\ndef net_sample_output():\n \n # iterate through the test dataset\n for i, sample in enumerate(test_loader):\n \n # get sample data: images and ground truth keypoints\n images = sample['image']\n key_pts = sample['keypoints']\n\n # convert images to FloatTensors\n images = images.type(torch.FloatTensor)\n\n # forward pass to get net output\n output_pts = net(images)\n \n # reshape to batch_size x 68 x 2 pts\n output_pts = output_pts.view(output_pts.size()[0], 68, -1)\n \n # break after first image is tested\n if i == 0:\n return images, output_pts, key_pts\n ",
"_____no_output_____"
]
],
[
[
"#### Debugging tips\n\nIf you get a size or dimension error here, make sure that your network outputs the expected number of keypoints! Or if you get a Tensor type error, look into changing the above code that casts the data into float types: `images = images.type(torch.FloatTensor)`.",
"_____no_output_____"
]
],
[
[
"# call the above function\n# returns: test images, test predicted keypoints, test ground truth keypoints\ntest_images, test_outputs, gt_pts = net_sample_output()\n\n# print out the dimensions of the data to see if they make sense\nprint(test_images.data.size())\nprint(test_outputs.data.size())\nprint(gt_pts.size())",
"_____no_output_____"
]
],
[
[
"## Visualize the predicted keypoints\n\nOnce we've had the model produce some predicted output keypoints, we can visualize these points in a way that's similar to how we've displayed this data before, only this time, we have to \"un-transform\" the image/keypoint data to display it.\n\nNote that I've defined a *new* function, `show_all_keypoints` that displays a grayscale image, its predicted keypoints and its ground truth keypoints (if provided).",
"_____no_output_____"
]
],
[
[
"def show_all_keypoints(image, predicted_key_pts, gt_pts=None):\n \"\"\"Show image with predicted keypoints\"\"\"\n # image is grayscale\n plt.imshow(image, cmap='gray')\n plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')\n # plot ground truth points as green pts\n if gt_pts is not None:\n plt.scatter(gt_pts[:, 0], gt_pts[:, 1], s=20, marker='.', c='g')\n",
"_____no_output_____"
]
],
[
[
"#### Un-transformation\n\nNext, you'll see a helper function. `visualize_output` that takes in a batch of images, predicted keypoints, and ground truth keypoints and displays a set of those images and their true/predicted keypoints.\n\nThis function's main role is to take batches of image and keypoint data (the input and output of your CNN), and transform them into numpy images and un-normalized keypoints (x, y) for normal display. The un-transformation process turns keypoints and images into numpy arrays from Tensors *and* it undoes the keypoint normalization done in the Normalize() transform; it's assumed that you applied these transformations when you loaded your test data.",
"_____no_output_____"
]
],
[
[
"# visualize the output\n# by default this shows a batch of 10 images\ndef visualize_output(test_images, test_outputs, gt_pts=None, batch_size=10):\n\n for i in range(batch_size):\n plt.figure(figsize=(20,10))\n ax = plt.subplot(1, batch_size, i+1)\n\n # un-transform the image data\n image = test_images[i].data # get the image from it's wrapper\n image = image.numpy() # convert to numpy array from a Tensor\n image = np.transpose(image, (1, 2, 0)) # transpose to go from torch to numpy image\n\n # un-transform the predicted key_pts data\n predicted_key_pts = test_outputs[i].data\n predicted_key_pts = predicted_key_pts.numpy()\n # undo normalization of keypoints \n predicted_key_pts = predicted_key_pts*50.0+100\n \n # plot ground truth points for comparison, if they exist\n ground_truth_pts = None\n if gt_pts is not None:\n ground_truth_pts = gt_pts[i] \n ground_truth_pts = ground_truth_pts*50.0+100\n \n # call show_all_keypoints\n show_all_keypoints(np.squeeze(image), predicted_key_pts, ground_truth_pts)\n \n plt.axis('off')\n\n plt.show()\n \n# call it\n#visualize_output(test_images, test_outputs, gt_pts)",
"_____no_output_____"
]
],
[
[
"## Training\n\n#### Loss function\nTraining a network to predict keypoints is different than training a network to predict a class; instead of outputting a distribution of classes and using cross entropy loss, you may want to choose a loss function that is suited for regression, which directly compares a predicted value and target value. Read about the various kinds of loss functions (like MSE or L1/SmoothL1 loss) in [this documentation](http://pytorch.org/docs/master/_modules/torch/nn/modules/loss.html).\n\n### TODO: Define the loss and optimization\n\nNext, you'll define how the model will train by deciding on the loss function and optimizer.\n\n---",
"_____no_output_____"
]
],
[
[
"## TODO: Define the loss and optimization\nimport torch.optim as optim\n\n#criterion = nn.MSELoss()\ncriterion = nn.SmoothL1Loss()\n\noptimizer = optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.999))\n#optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)",
"_____no_output_____"
]
],
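[
[
"# (added sketch) toy comparison of the two candidate losses on the same errors,\n# illustrating why SmoothL1 is less sensitive to outliers; the numbers below are\n# made up purely for illustration\npred = torch.tensor([0.0, 0.0, 0.0])\ntarget = torch.tensor([0.5, 1.0, 5.0])  # the last entry acts as an outlier\nprint('MSE     :', nn.MSELoss()(pred, target).item())\nprint('SmoothL1:', nn.SmoothL1Loss()(pred, target).item())",
"_____no_output_____"
]
],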
[
[
"## Training and Initial Observation\n\nNow, you'll train on your batched training data from `train_loader` for a number of epochs. \n\nTo quickly observe how your model is training and decide on whether or not you should modify it's structure or hyperparameters, you're encouraged to start off with just one or two epochs at first. As you train, note how your the model's loss behaves over time: does it decrease quickly at first and then slow down? Does it take a while to decrease in the first place? What happens if you change the batch size of your training data or modify your loss function? etc. \n\nUse these initial observations to make changes to your model and decide on the best architecture before you train for many epochs and create a final model.",
"_____no_output_____"
]
],
[
[
"from livelossplot import PlotLosses\n\ndef train_net(n_epochs):\n \n # prepare the net for training\n net.train()\n liveloss = PlotLosses()\n\n\n for epoch in range(n_epochs): # loop over the dataset multiple times\n epoch_loss = 0.0\n epoch_correct = 0\n epoch_loss_val = 0.0\n epoch_correct_val = 0\n running_loss = 0.0\n\n # train on batches of data, assumes you already have train_loader\n for batch_i, data in enumerate(train_loader):\n # get the input images and their corresponding labels\n images = data['image']\n key_pts = data['keypoints']\n\n # flatten pts\n key_pts = key_pts.view(key_pts.size(0), -1)\n\n # convert variables to floats for regression loss\n key_pts = key_pts.type(torch.FloatTensor)\n images = images.type(torch.FloatTensor)\n\n # forward pass to get outputs\n output_pts = net(images)\n\n # calculate the loss between predicted and target keypoints\n loss = criterion(output_pts, key_pts)\n\n # zero the parameter (weight) gradients\n optimizer.zero_grad()\n \n # backward pass to calculate the weight gradients\n loss.backward()\n\n # update the weights\n optimizer.step()\n \n epoch_loss += loss.data[0]\n epoch_correct += (output_pts.max(1)[1] == key_pts).sum().data[0]\n\n # print loss statistics\n # to convert loss into a scalar and add it to the running_loss, use .item()\n running_loss += loss.item()\n if batch_i % 10 == 9: # print every 10 batches\n print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/1000))\n running_loss = 0.0\n avg_loss = epoch_loss / len(train_loader.dataset)\n avg_accuracy = epoch_correct / len(train_loader.dataset)\n liveloss.update({\n 'log loss': avg_loss,\n 'val_log loss': avg_loss_val,\n 'accuracy': avg_accuracy,\n 'val_accuracy': avg_accuracy_val})\n liveloss.draw()\n \n print('Finished Training')\n",
"_____no_output_____"
],
[
"def train_net (n_epochs):\n\n # prepare the net for training\n net.train()\n\n for epoch in range(n_epochs): # loop over the dataset multiple times\n \n running_loss = 0.0\n\n # train on batches of data, assumes you already have train_loader\n for batch_i, data in enumerate(train_loader):\n # get the input images and their corresponding labels\n images = data['image']\n key_pts = data['keypoints']\n\n # flatten pts\n key_pts = key_pts.view(key_pts.size(0), -1)\n\n # convert variables to floats for regression loss\n key_pts = key_pts.type(torch.FloatTensor)\n images = images.type(torch.FloatTensor)\n\n # forward pass to get outputs\n output_pts = net(images)\n\n # calculate the loss between predicted and target keypoints\n loss = criterion(output_pts, key_pts)\n\n # zero the parameter (weight) gradients\n optimizer.zero_grad()\n \n # backward pass to calculate the weight gradients\n loss.backward()\n\n # update the weights\n optimizer.step()\n\n # print loss statistics\n # to convert loss into a scalar and add it to the running_loss, use .item()\n running_loss += loss.item()\n if batch_i % 10 == 9: # print every 10 batches\n print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/1000))\n running_loss = 0.0\n model_dir = 'saved_models/'\n model_name = 'model_3200_1600_smoothL1.pt'\n\n # after training, save your model parameters in the dir 'saved_models'\n #torch.save(net.state_dict(), model_dir+model_name)\n\n print('Finished Training')",
"_____no_output_____"
],
[
"net.load_state_dict(torch.load('saved_models/model_3200_1600_smoothL1.pt'))\n",
"_____no_output_____"
],
[
"# train your network\nn_epochs = 5 # start small, and increase when you've decided on your model structure and hyperparams\n\ntrain_net(n_epochs)",
"Epoch: 1, Batch: 10, Avg. Loss: 0.0005024574361741543\nEpoch: 1, Batch: 20, Avg. Loss: 0.00045902186259627343\nEpoch: 2, Batch: 10, Avg. Loss: 0.0004713563397526741\nEpoch: 2, Batch: 20, Avg. Loss: 0.000502167847007513\nEpoch: 3, Batch: 10, Avg. Loss: 0.0005260687507688999\nEpoch: 3, Batch: 20, Avg. Loss: 0.00044130625203251837\n"
]
],
[
[
"## Test data\n\nSee how your model performs on previously unseen, test data. We've already loaded and transformed this data, similar to the training data. Next, run your trained model on these images to see what kind of keypoints are produced. You should be able to see if your model is fitting each new face it sees, if the points are distributed randomly, or if the points have actually overfitted the training data and do not generalize.",
"_____no_output_____"
]
],
[
[
"# get a sample of test data again\ntest_images, test_outputs, gt_pts = net_sample_output()\ntorch.cuda.get_device_name(0)\nprint(test_images.data.size())\nprint(test_outputs.data.size())\nprint(gt_pts.size())",
"torch.Size([128, 1, 224, 224])\ntorch.Size([128, 68, 2])\ntorch.Size([128, 68, 2])\n"
],
[
"## TODO: visualize your test output\n# you can use the same function as before, by un-commenting the line below:\n\nvisualize_output(test_images, test_outputs, gt_pts)\n",
"_____no_output_____"
]
],
[
[
"Once you've found a good model (or two), save your model so you can load it and use it later!",
"_____no_output_____"
]
],
[
[
"## TODO: change the name to something uniqe for each new model\nmodel_dir = 'saved_models/'\nmodel_name = 'model_3200_1600_smoothL1.pt'\n\n\n# after training, save your model parameters in the dir 'saved_models'\ntorch.save(net.state_dict(), model_dir+model_name)",
"_____no_output_____"
]
],
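[
[
"# (added sketch) to reuse the saved weights later, instantiate the same class,\n# load the state dict, and call eval() to disable dropout for inference;\n# assumes `Net`, `model_dir` and `model_name` from the cells above\nnet_reloaded = Net()\nnet_reloaded.load_state_dict(torch.load(model_dir + model_name))\nnet_reloaded.eval()",
"_____no_output_____"
]
],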
[
[
"After you've trained a well-performing model, answer the following questions so that we have some insight into your training and architecture selection process. Answering all questions is required to pass this project.",
"_____no_output_____"
],
[
"### Question 1: What optimization and loss functions did you choose and why?\n",
"_____no_output_____"
],
[
"**Answer**: write your answer here (double click to edit this cell)\nI use the ADAM optimization and MSE loss functions. I choose it because the network's output error should be as close as the desire value ( not classification). Therefore the ADAM and MSE works well. ",
"_____no_output_____"
],
[
"### Question 2: What kind of network architecture did you start with and how did it change as you tried different architectures? Did you decide to add more convolutional layers or any layers to avoid overfitting the data?",
"_____no_output_____"
],
[
"**Answer**: write your answer here\n\nFirstly, i try to apply LeNet architecture because it simple and fast. However, the result looks not good. Therefore, i change to the NaimishNet. This CNN apply many dropout layers to avoid overfitting the data.",
"_____no_output_____"
],
[
"### Question 3: How did you decide on the number of epochs and batch_size to train your model?",
"_____no_output_____"
],
[
"**Answer**: write your answer here\n•\tEpoch: 30. The bigger the epoch, the better model is. Because there mare many dropout() layer, the network does not overfit the data with 30 epoch.\n•\tBatch size: 10 . The bigger the batch size, the faster model is.\n",
"_____no_output_____"
],
[
"## Feature Visualization\n\nSometimes, neural networks are thought of as a black box, given some input, they learn to produce some output. CNN's are actually learning to recognize a variety of spatial patterns and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying those one at a time to a sample image. This technique is called feature visualization and it's useful for understanding the inner workings of a CNN.",
"_____no_output_____"
],
[
"In the cell below, you can see how to extract a single filter (by index) from your first convolutional layer. The filter should appear as a grayscale grid.",
"_____no_output_____"
]
],
[
[
"# Get the weights in the first conv layer, \"conv1\"\n# if necessary, change this to reflect the name of your first conv layer\nweights1 = net.conv1.weight.data\n\nw = weights1.numpy()\n\nfilter_index = 0\n\nprint(w[filter_index][0])\nprint(w[filter_index][0].shape)\n\n# display the filter weights\nplt.imshow(w[filter_index][0], cmap='gray')\n",
"_____no_output_____"
]
],
[
[
"## Feature maps\n\nEach CNN has at least one convolutional layer that is composed of stacked filters (also known as convolutional kernels). As a CNN trains, it learns what weights to include in it's convolutional kernels and when these kernels are applied to some input image, they produce a set of **feature maps**. So, feature maps are just sets of filtered images; they are the images produced by applying a convolutional kernel to an input image. These maps show us the features that the different layers of the neural network learn to extract. For example, you might imagine a convolutional kernel that detects the vertical edges of a face or another one that detects the corners of eyes. You can see what kind of features each of these kernels detects by applying them to an image. One such example is shown below; from the way it brings out the lines in an the image, you might characterize this as an edge detection filter.\n\n<img src='images/feature_map_ex.png' width=50% height=50%/>\n\n\nNext, choose a test image and filter it with one of the convolutional kernels in your trained CNN; look at the filtered output to get an idea what that particular kernel detects.\n\n### TODO: Filter an image to see the effect of a convolutional kernel\n---",
"_____no_output_____"
]
],
[
[
"##TODO: load in and display any image from the transformed test dataset\n\n## TODO: Using cv's filter2D function,\n## apply a specific set of filter weights (like the one displayed above) to the test image\n\n\nfig = plt.figure(figsize=(120, 4))\nfor idx in np.arange(4):\n ax = fig.add_subplot(2, 128/2, idx+1, xticks=[], yticks=[])\n image = test_dataset[idx]['image'].numpy()\n ax.imshow(np.squeeze(image), cmap='gray')\n\n ",
"_____no_output_____"
],
[
"import cv2\nnum = 3\nimg = np.squeeze(test_dataset[num]['image'].numpy())\nplt.imshow(img, cmap='gray')\nweights = net.conv1.weight.data\nw = weights.numpy()\nfig=plt.figure(figsize=(30, 10))\ncolumns = 5*2\nrows = 2\nfor i in range(0, columns*rows):\n fig.add_subplot(rows, columns, i+1)\n if ((i%2)==0):\n plt.imshow(w[int(i/2)][0], cmap='gray')\n else:\n c = cv2.filter2D(img, -1, w[int((i-1)/2)][0])\n plt.imshow(c, cmap='gray')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Question 4: Choose one filter from your trained CNN and apply it to a test image; what purpose do you think it plays? What kind of feature do you think it detects?\n",
"_____no_output_____"
],
[
"**Answer**: (does it detect vertical lines or does it blur out noise, etc.) write your answer here",
"_____no_output_____"
],
[
"---\n## Moving on!\n\nNow that you've defined and trained your model (and saved the best model), you are ready to move on to the last notebook, which combines a face detector with your saved model to create a facial keypoint detection system that can predict the keypoints on *any* face in an image!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0af42fefbd582389797b41d70f3f8069f023dac | 619,255 | ipynb | Jupyter Notebook | Neural Style Transfer with TensorFlow-Copy1.ipynb | shashwatsaket46/Deep-Learning-Projects | 0f72555633d0515f98b8fc8040cfc62eb756830c | [
"Apache-2.0"
] | 2 | 2020-10-31T21:02:34.000Z | 2021-07-29T13:02:37.000Z | Neural Style Transfer with TensorFlow-Copy1.ipynb | shashwatsaket46/Deep-Learning-Projects | 0f72555633d0515f98b8fc8040cfc62eb756830c | [
"Apache-2.0"
] | null | null | null | Neural Style Transfer with TensorFlow-Copy1.ipynb | shashwatsaket46/Deep-Learning-Projects | 0f72555633d0515f98b8fc8040cfc62eb756830c | [
"Apache-2.0"
] | null | null | null | 1,651.346667 | 455,256 | 0.960399 | [
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import tensorflow as tf\nfrom tensorflow.python.keras.applications.vgg19 import VGG19",
"_____no_output_____"
],
[
"model=VGG19(\n include_top=False,\n weights='imagenet'\n)\nmodel.trainable=False\nmodel.summary()",
"Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5\n80142336/80134624 [==============================] - 36s 0us/step\nModel: \"vgg19\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, None, None, 3)] 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, None, None, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, None, None, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, None, None, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, None, None, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, None, None, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, None, None, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, None, None, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, None, None, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, None, None, 256) 590080 \n_________________________________________________________________\nblock3_conv4 (Conv2D) (None, None, None, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, None, None, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, None, None, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock4_conv4 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, None, None, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock5_conv4 (Conv2D) (None, None, None, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, None, None, 512) 0 \n=================================================================\nTotal params: 20,024,384\nTrainable params: 0\nNon-trainable params: 20,024,384\n_________________________________________________________________\n"
],
[
"from tensorflow.python.keras.preprocessing.image import load_img, img_to_array\nfrom tensorflow.python.keras.applications.vgg19 import preprocess_input\nfrom tensorflow.python.keras.models import Model\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"def load_and_process_image(image_path):\n img=load_img(image_path)\n img=img_to_array(img)\n img=preprocess_input(img)\n img=np.expand_dims(img,axis=0)\n return img",
"_____no_output_____"
],
[
"def deprocess(x):\n x[:,:,0]+=103.939\n x[:,:,1]+=116.779\n x[:,:,2]+=123.68\n x=x[:,:,::-1]\n \n x=np.clip(x,0,255).astype('uint8')\n return x\ndef display_image(image):\n if len(image.shape)==4:\n img=np.squeeze(image,axis=0)\n img=deprocess(img)\n \n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n plt.imshow(img)\n return",
"_____no_output_____"
],
[
"display_image(load_and_process_image('style.jpg'))",
"_____no_output_____"
],
[
"style_layers = [\n 'block1_conv1', \n 'block3_conv1', \n 'block5_conv1'\n]\n\ncontent_layer = 'block5_conv2'\n\n# intermediate models\ncontent_model = Model(\n inputs = model.input, \n outputs = model.get_layer(content_layer).output\n)\n\nstyle_models = [Model(inputs = model.input, \n outputs = model.get_layer(layer).output) for layer in style_layers]",
"_____no_output_____"
],
[
"# Content Cost\ndef content_cost(content, generated):\n a_C = content_model(content)\n a_G = content_model(generated)\n cost = tf.reduce_mean(tf.square(a_C - a_G))\n return cost",
"_____no_output_____"
],
[
"def gram_matrix(A):\n channels = int(A.shape[-1])\n a = tf.reshape(A, [-1, channels])\n n = tf.shape(a)[0]\n gram = tf.matmul(a, a, transpose_a = True)\n return gram / tf.cast(n, tf.float32)",
"_____no_output_____"
],
[
"lam = 1. / len(style_models)\n\ndef style_cost(style, generated):\n J_style = 0\n \n for style_model in style_models:\n a_S = style_model(style)\n a_G = style_model(generated)\n GS = gram_matrix(a_S)\n GG = gram_matrix(a_G)\n current_cost = tf.reduce_mean(tf.square(GS - GG))\n J_style += current_cost * lam\n \n return J_style",
"_____no_output_____"
],
[
"import time\n\ngenerated_images = []\n\ndef training_loop(content_path, style_path, iterations = 20, a = 10., b = 20.):\n # initialise\n content = load_and_process_image(content_path)\n style = load_and_process_image(style_path)\n generated = tf.Variable(content, dtype = tf.float32)\n \n opt = tf.optimizers.Adam(learning_rate = 7.)\n \n best_cost = 1e12+0.1\n best_image = None\n \n start_time = time.time()\n \n for i in range(iterations):\n \n with tf.GradientTape() as tape:\n J_content = content_cost(content, generated)\n J_style = style_cost(style, generated)\n J_total = a * J_content + b * J_style\n \n grads = tape.gradient(J_total, generated)\n opt.apply_gradients([(grads, generated)])\n \n if J_total < best_cost:\n best_cost = J_total\n best_image = generated.numpy()\n \n if i % int(iterations/10) == 0:\n time_taken = time.time() - start_time\n print('Cost at {}: {}. Time elapsed: {}'.format(i, J_total, time_taken))\n generated_images.append(generated.numpy())\n \n return best_image",
"_____no_output_____"
],
[
"final = training_loop('content.jpg','style.jpg')",
"Cost at 0: 6672084992.0. Time elapsed: 3.4547882080078125\nCost at 2: 1479381760.0. Time elapsed: 10.296310186386108\nCost at 4: 863365824.0. Time elapsed: 17.23086667060852\nCost at 6: 594945280.0. Time elapsed: 24.286461353302002\nCost at 8: 454304224.0. Time elapsed: 31.091986417770386\nCost at 10: 369753824.0. Time elapsed: 37.903504848480225\nCost at 12: 308917152.0. Time elapsed: 44.70805072784424\nCost at 14: 260217664.0. Time elapsed: 51.643970251083374\nCost at 16: 220326704.0. Time elapsed: 58.485512256622314\nCost at 18: 188584880.0. Time elapsed: 65.35505986213684\n"
],
[
"plt.figure(figsize = (12, 12))\n\nfor i in range(10):\n plt.subplot(5, 2, i + 1)\n display_image(generated_images[i])\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0af469ba84364eca885686e1e7726d64ee296e1 | 38,587 | ipynb | Jupyter Notebook | Wine.ipynb | solweigH/Naive-Bayes-Classifier | 847f967b8f54356710066218fd06121270d3f641 | [
"MIT"
] | null | null | null | Wine.ipynb | solweigH/Naive-Bayes-Classifier | 847f967b8f54356710066218fd06121270d3f641 | [
"MIT"
] | null | null | null | Wine.ipynb | solweigH/Naive-Bayes-Classifier | 847f967b8f54356710066218fd06121270d3f641 | [
"MIT"
] | null | null | null | 81.235789 | 23,252 | 0.774665 | [
[
[
"from sklearn import datasets\nwine = datasets.load_wine()\nprint(wine.DESCR)",
".. _wine_dataset:\n\nWine recognition dataset\n------------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 178 (50 in each of three classes)\n :Number of Attributes: 13 numeric, predictive attributes and the class\n :Attribute Information:\n \t\t- Alcohol\n \t\t- Malic acid\n \t\t- Ash\n\t\t- Alcalinity of ash \n \t\t- Magnesium\n\t\t- Total phenols\n \t\t- Flavanoids\n \t\t- Nonflavanoid phenols\n \t\t- Proanthocyanins\n\t\t- Color intensity\n \t\t- Hue\n \t\t- OD280/OD315 of diluted wines\n \t\t- Proline\n\n - class:\n - class_0\n - class_1\n - class_2\n\t\t\n :Summary Statistics:\n \n ============================= ==== ===== ======= =====\n Min Max Mean SD\n ============================= ==== ===== ======= =====\n Alcohol: 11.0 14.8 13.0 0.8\n Malic Acid: 0.74 5.80 2.34 1.12\n Ash: 1.36 3.23 2.36 0.27\n Alcalinity of Ash: 10.6 30.0 19.5 3.3\n Magnesium: 70.0 162.0 99.7 14.3\n Total Phenols: 0.98 3.88 2.29 0.63\n Flavanoids: 0.34 5.08 2.03 1.00\n Nonflavanoid Phenols: 0.13 0.66 0.36 0.12\n Proanthocyanins: 0.41 3.58 1.59 0.57\n Colour Intensity: 1.3 13.0 5.1 2.3\n Hue: 0.48 1.71 0.96 0.23\n OD280/OD315 of diluted wines: 1.27 4.00 2.61 0.71\n Proline: 278 1680 746 315\n ============================= ==== ===== ======= =====\n\n :Missing Attribute Values: None\n :Class Distribution: class_0 (59), class_1 (71), class_2 (48)\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThis is a copy of UCI ML Wine recognition datasets.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data\n\nThe data is the results of a chemical analysis of wines grown in the same\nregion in Italy by three different cultivators. There are thirteen different\nmeasurements taken for different constituents found in the three types of\nwine.\n\nOriginal Owners: \n\nForina, M. et al, PARVUS - \nAn Extendible Package for Data Exploration, Classification and Correlation. \nInstitute of Pharmaceutical and Food Analysis and Technologies,\nVia Brigata Salerno, 16147 Genoa, Italy.\n\nCitation:\n\nLichman, M. (2013). UCI Machine Learning Repository\n[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,\nSchool of Information and Computer Science. \n\n.. topic:: References\n\n (1) S. Aeberhard, D. Coomans and O. de Vel, \n Comparison of Classifiers in High Dimensional Settings, \n Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of \n Mathematics and Statistics, James Cook University of North Queensland. \n (Also submitted to Technometrics). \n\n The data was used with many others for comparing various \n classifiers. The classes are separable, though only RDA \n has achieved 100% correct classification. \n (RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data)) \n (All results using the leave-one-out technique) \n\n (2) S. Aeberhard, D. Coomans and O. de Vel, \n \"THE CLASSIFICATION PERFORMANCE OF RDA\" \n Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of \n Mathematics and Statistics, James Cook University of North Queensland. \n (Also submitted to Journal of Chemometrics).\n\n"
],
[
"print('Features: ', wine.feature_names)\nprint('Labels: ', wine.target_names)",
"Features: ['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium', 'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity', 'hue', 'od280/od315_of_diluted_wines', 'proline']\nLabels: ['class_0' 'class_1' 'class_2']\n"
],
[
"#wine=wine.sample(frac=1)",
"_____no_output_____"
],
[
"data = wine.data\ntarget = wine.target",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(data, target, test_size=.3, random_state=109)",
"_____no_output_____"
],
[
"print('Xtrain : ',X_train.shape)\nprint('Xtest : ',X_test.shape)\nprint('Ytrain : ',y_train.shape)\nprint('Ytest : ',y_test.shape)",
"Xtrain : (124, 13)\nXtest : (54, 13)\nYtrain : (124,)\nYtest : (54,)\n"
],
[
"print('Xtrain : ',X_train[:5])\nprint('Xtest : ',X_test[:5])\nprint('Ytrain : ',y_train[:5])\nprint('Ytest : ',y_test[:5])",
"Xtrain : [[1.323e+01 3.300e+00 2.280e+00 1.850e+01 9.800e+01 1.800e+00 8.300e-01\n 6.100e-01 1.870e+00 1.052e+01 5.600e-01 1.510e+00 6.750e+02]\n [1.384e+01 4.120e+00 2.380e+00 1.950e+01 8.900e+01 1.800e+00 8.300e-01\n 4.800e-01 1.560e+00 9.010e+00 5.700e-01 1.640e+00 4.800e+02]\n [1.220e+01 3.030e+00 2.320e+00 1.900e+01 9.600e+01 1.250e+00 4.900e-01\n 4.000e-01 7.300e-01 5.500e+00 6.600e-01 1.830e+00 5.100e+02]\n [1.288e+01 2.990e+00 2.400e+00 2.000e+01 1.040e+02 1.300e+00 1.220e+00\n 2.400e-01 8.300e-01 5.400e+00 7.400e-01 1.420e+00 5.300e+02]\n [1.305e+01 3.860e+00 2.320e+00 2.250e+01 8.500e+01 1.650e+00 1.590e+00\n 6.100e-01 1.620e+00 4.800e+00 8.400e-01 2.010e+00 5.150e+02]]\nXtest : [[1.330e+01 1.720e+00 2.140e+00 1.700e+01 9.400e+01 2.400e+00 2.190e+00\n 2.700e-01 1.350e+00 3.950e+00 1.020e+00 2.770e+00 1.285e+03]\n [1.293e+01 3.800e+00 2.650e+00 1.860e+01 1.020e+02 2.410e+00 2.410e+00\n 2.500e-01 1.980e+00 4.500e+00 1.030e+00 3.520e+00 7.700e+02]\n [1.221e+01 1.190e+00 1.750e+00 1.680e+01 1.510e+02 1.850e+00 1.280e+00\n 1.400e-01 2.500e+00 2.850e+00 1.280e+00 3.070e+00 7.180e+02]\n [1.253e+01 5.510e+00 2.640e+00 2.500e+01 9.600e+01 1.790e+00 6.000e-01\n 6.300e-01 1.100e+00 5.000e+00 8.200e-01 1.690e+00 5.150e+02]\n [1.421e+01 4.040e+00 2.440e+00 1.890e+01 1.110e+02 2.850e+00 2.650e+00\n 3.000e-01 1.250e+00 5.240e+00 8.700e-01 3.330e+00 1.080e+03]]\nYtrain : [2 2 2 2 1]\nYtest : [0 0 1 2 0]\n"
],
[
"from sklearn.naive_bayes import GaussianNB\nnb = GaussianNB()\nnb.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_pred = nb.predict(X_test)",
"_____no_output_____"
],
[
"from sklearn import metrics\nscores = metrics.accuracy_score(y_test, y_pred)\nprint('Accuracy: ','{:2.2%}'.format(scores))",
"Accuracy: 90.74%\n"
],
[
"cm = metrics.confusion_matrix(y_test, y_pred)\nprint(cm)",
"[[20 1 0]\n [ 2 15 2]\n [ 0 0 14]]\n"
],
[
"from sklearn.metrics import classification_report\nprint(classification_report(y_test, y_pred))",
" precision recall f1-score support\n\n 0 0.91 0.95 0.93 21\n 1 0.94 0.79 0.86 19\n 2 0.88 1.00 0.93 14\n\n accuracy 0.91 54\n macro avg 0.91 0.91 0.91 54\nweighted avg 0.91 0.91 0.91 54\n\n"
],
[
"import numpy as np\nprint(np.sum(np.diag(cm)/np.sum(cm)))",
"0.9074074074074074\n"
],
[
"import itertools\nimport matplotlib.pyplot as plt\n%matplotlib inline\ndef plot_confusion_matrix(cm,\n target_names,\n title='Confusion matrix',\n cmap=None,\n normalize=True):\n \"\"\"\n given a sklearn confusion matrix (cm), make a nice plot\n\n Arguments\n ---------\n cm: confusion matrix from sklearn.metrics.confusion_matrix\n\n target_names: given classification classes such as [0, 1, 2]\n the class names, for example: ['high', 'medium', 'low']\n\n title: the text to display at the top of the matrix\n\n cmap: the gradient of the values displayed from matplotlib.pyplot.cm\n see http://matplotlib.org/examples/color/colormaps_reference.html\n plt.get_cmap('jet') or plt.cm.Blues\n\n normalize: If False, plot the raw numbers\n If True, plot the proportions\n\n Usage\n -----\n plot_confusion_matrix(cm = cm, # confusion matrix created by\n # sklearn.metrics.confusion_matrix\n normalize = True, # show proportions\n target_names = y_labels_vals, # list of names of the classes\n title = best_estimator_name) # title of graph\n\n Citiation\n ---------\n https://www.kaggle.com/grfiv4/plot-a-confusion-matrix\n \n \"\"\"\n\n\n accuracy = np.trace(cm) / float(np.sum(cm))\n misclass = 1 - accuracy\n\n if cmap is None:\n cmap = plt.get_cmap('Blues')\n\n plt.figure(figsize=(8, 6))\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n\n if target_names is not None:\n tick_marks = np.arange(len(target_names))\n plt.xticks(tick_marks, target_names, rotation=45)\n plt.yticks(tick_marks, target_names)\n\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n\n\n thresh = cm.max() / 1.5 if normalize else cm.max() / 2\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n if normalize:\n plt.text(j, i, \"{:0.4f}\".format(cm[i, j]),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n else:\n plt.text(j, i, \"{:,}\".format(cm[i, j]),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label\\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))\n plt.show()",
"_____no_output_____"
],
[
"plot_confusion_matrix(cm,wine.target_names)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0af4a155fe0799929a866ed8ed9aa66eec369db | 25,061 | ipynb | Jupyter Notebook | cheat-sheets/05-Lists Sets and Dictionaries.ipynb | nborwankar/python-fundamentals | 65ffb7c98fa5834a5ea8bd1f635f91553f062e75 | [
"Apache-2.0"
] | 4 | 2015-05-15T05:39:28.000Z | 2017-07-05T10:47:21.000Z | cheat-sheets/05-Lists Sets and Dictionaries.ipynb | zalzala/python-fundamentals | b818e5e7030b223ce86788760320c8c716e7a463 | [
"Apache-2.0"
] | null | null | null | cheat-sheets/05-Lists Sets and Dictionaries.ipynb | zalzala/python-fundamentals | b818e5e7030b223ce86788760320c8c716e7a463 | [
"Apache-2.0"
] | 4 | 2015-09-25T17:22:27.000Z | 2018-10-10T18:07:30.000Z | 24.354713 | 496 | 0.476517 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0af64015702fb600139868cbe54ea9bf530ae1b | 58,112 | ipynb | Jupyter Notebook | notebooks/amplicon_samplesheet_generator.ipynb | antgonza/metagenomics_pooling_notebook | 431e1856ec4028087ffb3964b1988ac043ae5945 | [
"MIT"
] | 8 | 2021-08-30T03:54:39.000Z | 2022-01-17T18:07:35.000Z | notebooks/amplicon_samplesheet_generator.ipynb | antgonza/metagenomics_pooling_notebook | 431e1856ec4028087ffb3964b1988ac043ae5945 | [
"MIT"
] | 57 | 2021-08-19T20:45:16.000Z | 2022-03-24T19:24:49.000Z | notebooks/amplicon_samplesheet_generator.ipynb | antgonza/metagenomics_pooling_notebook | 431e1856ec4028087ffb3964b1988ac043ae5945 | [
"MIT"
] | 11 | 2021-09-03T17:48:26.000Z | 2021-12-13T20:21:51.000Z | 37.395109 | 502 | 0.474584 | [
[
[
"%reload_ext watermark\n%matplotlib inline\nfrom os.path import exists\n\nfrom metapool.metapool import *\nfrom metapool import (validate_plate_metadata, assign_emp_index, make_sample_sheet, KLSampleSheet, parse_prep, validate_and_scrub_sample_sheet, generate_qiita_prep_file)\n%watermark -i -v -iv -m -h -p metapool,sample_sheet,openpyxl -u",
"Last updated: 2021-12-15T17:15:09.841175-06:00\n\nPython implementation: CPython\nPython version : 3.9.7\nIPython version : 7.30.1\n\nmetapool : 0+untagged.112.g8fed443.dirty\nsample_sheet: 0.12.0\nopenpyxl : 3.0.9\n\nCompiler : Clang 10.0.0 \nOS : Darwin\nRelease : 20.6.0\nMachine : x86_64\nProcessor : i386\nCPU cores : 12\nArchitecture: 64bit\n\nHostname: Kelly-Fogelsons-MacBook-Pro.local\n\nseaborn : 0.11.2\nmatplotlib: 3.5.0\nre : 2.2.1\npandas : 1.3.4\nnumpy : 1.21.2\n\n"
]
],
[
[
"# Knight Lab Amplicon Sample Sheet and Mapping (preparation) File Generator \n\n### What is it?\n\nThis Jupyter Notebook allows you to automatically generate sample sheets for amplicon sequencing. \n\n\n### Here's how it should work.\n\nYou'll start out with a **basic plate map** (platemap.tsv) , which just links each sample to it's approprite row and column.\n\nYou can use this google sheet template to generate your plate map:\n\nhttps://docs.google.com/spreadsheets/d/1xPjB6iR3brGeG4bm2un4ISSsTDxFw5yME09bKqz0XNk/edit?usp=sharing\n\nNext you'll automatically assign EMP barcodes in order to produce a **sample sheet** (samplesheet.csv) that can be used in combination with the rest of the sequence processing pipeline. \n\n**Please designate what kind of amplicon sequencing you want to perform:**",
"_____no_output_____"
]
],
[
[
"seq_type = '16S'\n#options are ['16S', '18S', 'ITS']",
"_____no_output_____"
]
],
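[
[
"# (added sketch) guard against typos in seq_type before continuing; the valid\n# options are the ones listed in the cell above\nvalid_seq_types = ['16S', '18S', 'ITS']\nassert seq_type in valid_seq_types, 'seq_type must be one of %s' % valid_seq_types",
"_____no_output_____"
]
],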
[
[
"## Step 1: read in plate map\n\n**Enter the correct path to the plate map file**. This will serve as the plate map for relating all subsequent information.",
"_____no_output_____"
]
],
[
[
"plate_map_fp = './test_data/amplicon/compressed-map.tsv'\n\nif not exists(plate_map_fp):\n print(\"Error: %s is not a path to a valid file\" % plate_map_fp)",
"_____no_output_____"
]
],
[
[
"**Read in the plate map**. It should look something like this:\n\n```\nSample\tRow\tCol\tBlank\nGLY_01_012\tA\t1\tFalse\nGLY_14_034\tB\t1\tFalse\nGLY_11_007\tC\t1\tFalse\nGLY_28_018\tD\t1\tFalse\nGLY_25_003\tE\t1\tFalse\nGLY_06_106\tF\t1\tFalse\nGLY_07_011\tG\t1\tFalse\nGLY_18_043\tH\t1\tFalse\nGLY_28_004\tI\t1\tFalse\n```\n\n**Make sure there a no duplicate IDs.** If each sample doesn't have a different name, an error will be thrown and you won't be able to generate a sample sheet.",
"_____no_output_____"
]
],
[
[
"plate_df = read_plate_map_csv(open(plate_map_fp,'r'))\n\nplate_df.head()",
"_____no_output_____"
]
],
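[
[
"# (added sketch) explicitly confirm there are no duplicate sample IDs before\n# moving on; this assumes the column is named 'Sample' as in the template above\ndupes = plate_df['Sample'].duplicated()\nassert not dupes.any(), plate_df.loc[dupes, 'Sample'].tolist()",
"_____no_output_____"
]
],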
[
[
"# Assign barcodes according to primer plate\n\nThis portion of the notebook will assign a barcode to each sample according to the primer plate number.\n\nAs inputs, it requires:\n1. A plate map dataframe (from previous step)\n2. Preparation metadata for the plates, importantly we need the Primer Plate # so we know what **EMP barcodes** to assign to each plate.\n\nThe workflow then:\n1. Joins the preparation metadata with the plate metadata.\n2. Assigns indices per sample",
"_____no_output_____"
],
[
"## Enter and validate the plating metadata\n\n- In general you will want to update all the fields, but the most important ones are the `Primer Plate #` and the `Plate Position`. `Primer Plate #` determines which EMP barcodes will be used for this plate. `Plate Position` determines the physical location of the plate.\n- If you are plating less than four plates, then remove the metadata for that plate by deleting the text between the curly braces.\n- For missing fields, write NA between the single quotes for example `'NA'`.\n- To enter a plate copy and paste the contents from the plates below.",
"_____no_output_____"
]
],
[
[
"_metadata = [\n {\n # top left plate\n 'Plate Position': '1',\n 'Primer Plate #': '1',\n \n 'Sample Plate': 'THDMI_UK_Plate_2',\n 'Project_Name': 'THDMI UK',\n\n 'Plating': 'SF',\n 'Extraction Kit Lot': '166032128',\n 'Extraction Robot': 'Carmen_HOWE_KF3',\n 'TM1000 8 Tool': '109379Z',\n 'Primer Date': '2021-08-17', # yyyy-mm-dd\n 'MasterMix Lot': '978215',\n 'Water Lot': 'RNBJ0628',\n 'Processing Robot': 'Echo550',\n 'Original Name': ''\n },\n {\n # top right plate\n 'Plate Position': '2',\n 'Primer Plate #': '2',\n \n 'Sample Plate': 'THDMI_UK_Plate_3',\n 'Project_Name': 'THDMI UK',\n\n 'Plating':'AS',\n 'Extraction Kit Lot': '166032128',\n 'Extraction Robot': 'Carmen_HOWE_KF4',\n 'TM1000 8 Tool': '109379Z',\n 'Primer Date': '2021-08-17', # yyyy-mm-dd\n 'MasterMix Lot': '978215',\n 'Water Lot': 'RNBJ0628',\n 'Processing Robot': 'Echo550',\n 'Original Name': ''\n },\n {\n # bottom left plate\n 'Plate Position': '3',\n 'Primer Plate #': '3',\n \n 'Sample Plate': 'THDMI_UK_Plate_4',\n 'Project_Name': 'THDMI UK',\n\n 'Plating':'MB_SF',\n 'Extraction Kit Lot': '166032128',\n 'Extraction Robot': 'Carmen_HOWE_KF3',\n 'TM1000 8 Tool': '109379Z',\n 'Primer Date': '2021-08-17', # yyyy-mm-dd\n 'MasterMix Lot': '978215',\n 'Water Lot': 'RNBJ0628',\n 'Processing Robot': 'Echo550',\n 'Original Name': ''\n },\n {\n # bottom right plate\n 'Plate Position': '4',\n 'Primer Plate #': '4',\n \n 'Sample Plate': 'THDMI_US_Plate_6',\n 'Project_Name': 'THDMI US',\n\n 'Plating':'AS',\n 'Extraction Kit Lot': '166032128',\n 'Extraction Robot': 'Carmen_HOWE_KF4',\n 'TM1000 8 Tool': '109379Z',\n 'Primer Date': '2021-08-17', # yyyy-mm-dd\n 'MasterMix Lot': '978215',\n 'Water Lot': 'RNBJ0628',\n 'Processing Robot': 'Echo550',\n 'Original Name': ''\n },\n]\n\nplate_metadata = validate_plate_metadata(_metadata)\nplate_metadata",
"_____no_output_____"
]
],
[
[
"The `Plate Position` and `Primer Plate #` allow us to figure out which wells are associated with each of the EMP barcodes.",
"_____no_output_____"
]
],
[
[
"if plate_metadata is not None:\n plate_df = assign_emp_index(plate_df, plate_metadata, seq_type).reset_index()\n\n plate_df.head()\nelse:\n print('Error: Please fix the errors in the previous cell')",
"_____no_output_____"
]
],
[
[
"As you can see in the table above, the resulting table is now associated with the corresponding EMP barcodes (`Golay Barcode`, `Forward Primer Linker`, etc), and the plating metadata (`Primer Plate #`, `Primer Date`, `Water Lot`, etc).",
"_____no_output_____"
]
],
[
[
"plate_df.head()",
"_____no_output_____"
]
],
[
[
"# Combine plates (optional)\n\nIf you would like to combine existing plates with these samples, enter the path to their corresponding sample sheets and mapping (preparation) files below. Otherwise you can skip to the next section.\n\n- sample sheet and mapping (preparation)",
"_____no_output_____"
]
],
[
[
"files = [\n # uncomment the line below and point to the correct filepaths to combine with previous plates\n # ['test_output/amplicon/2021_08_17_THDMI-4-6_samplesheet.csv', 'test_output/amplicon/2021-08-01-515f806r_prep.tsv'],\n]\nsheets, preps = [], []\n\nfor sheet, prep in files:\n sheets.append(KLSampleSheet(sheet))\n preps.append(parse_prep(prep))\n \nif len(files):\n print('%d pair of files loaded' % len(files))",
"_____no_output_____"
]
],
[
[
"# Make Sample Sheet\n\nThis workflow takes the pooled sample information and writes an Illumina sample sheet that can be given directly to the sequencing center or processing pipeline. Note that as of writing `bcl2fastq` does not support error-correction in Golay barcodes so the sample sheet is used to generate a mapping (preparation) file but not to demultiplex sequences. Demultiplexing takes place in [Qiita](https://qiita.ucsd.edu).\n\nAs inputs, this notebook requires:\n1. A plate map DataFrame (from previous step)\n\nThe workflow:\n1. formats sample names as bcl2fastq-compatible\n2. formats sample data\n3. sets values for sample sheet fields and formats sample sheet.\n4. writes the sample sheet to a file",
"_____no_output_____"
],
[
"## Step 1: Format sample names to be bcl2fastq-compatible\n\nbcl2fastq requires *only* alphanumeric, hyphens, and underscore characters. We'll replace all non-those characters\nwith underscores and add the bcl2fastq-compatible names to the DataFrame.",
"_____no_output_____"
]
],
[
[
"plate_df['sample sheet Sample_ID'] = plate_df['Sample'].map(bcl_scrub_name)\n\nplate_df.head()",
"_____no_output_____"
]
],
[
[
"## Format the sample sheet data\n\nThis step formats the data columns appropriately for the sample sheet, using the values we've calculated previously.\n\nThe newly-created `bcl2fastq`-compatible names will be in the `Sample ID` and `Sample Name` columns. The original sample names will be in the Description column.\n\nModify lanes to indicate which lanes this pool will be sequenced on.\n\nThe `Project Name` and `Project Plate` columns will be placed in the `Sample_Project` and `Sample_Name` columns, respectively.\n\nsequencer is important for making sure the i5 index is in the correct orientation for demultiplexing. `HiSeq4000`, `HiSeq3000`, `NextSeq`, and `MiniSeq` all require reverse-complemented i5 index sequences. If you enter one of these exact strings in for sequencer, it will revcomp the i5 sequence for you.\n\n`HiSeq2500`, `MiSeq`, and `NovaSeq` will not revcomp the i5 sequence.",
"_____no_output_____"
]
],
[
[
"sequencer = 'HiSeq4000'\nlanes = [1]\n\nmetadata = {\n 'Bioinformatics': [\n {\n 'Sample_Project': 'THDMI_10317',\n 'QiitaID': '10317',\n 'BarcodesAreRC': 'False',\n 'ForwardAdapter': '',\n 'ReverseAdapter': '',\n 'HumanFiltering': 'True',\n 'library_construction_protocol': 'Illumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4',\n 'experiment_design_description': 'Equipment',\n },\n ],\n 'Contact': [\n {\n 'Sample_Project': 'THDMI_10317',\n # non-admin contacts who want to know when the sequences\n # are available in Qiita\n 'Email': '[email protected],[email protected]'\n },\n ],\n 'Chemistry': 'Amplicon',\n 'Assay': 'TruSeq HT',\n}\n\nsheet = make_sample_sheet(metadata, plate_df, sequencer, lanes)\n\n\nsheet.Settings['Adapter'] = 'AGATCGGAAGAGCACACGTCTGAACTCCAGTCA'\nsheet.Settings['AdapterRead2'] = 'AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT'",
"/Users/kfolelso/Documents/metagenomics_pooling_notebook_troubleshooting/metapool/sample_sheet.py:473: UserWarning: The column I5_Index_ID in the sample sheet is empty\n warnings.warn('The column %s in the sample sheet is empty' %\n/Users/kfolelso/Documents/metagenomics_pooling_notebook_troubleshooting/metapool/sample_sheet.py:473: UserWarning: The column index2 in the sample sheet is empty\n warnings.warn('The column %s in the sample sheet is empty' %\n"
]
],
[
[
"Check for any possible errors in the sample sheet",
"_____no_output_____"
]
],
[
[
"sheet = validate_and_scrub_sample_sheet(sheet)",
"_____no_output_____"
]
],
[
[
"Add the other sample sheets",
"_____no_output_____"
]
],
[
[
"if len(sheets):\n sheet.merge(sheets)",
"_____no_output_____"
]
],
[
[
"## Step 3: Write the sample sheet to file",
"_____no_output_____"
]
],
[
[
"# write sample sheet as .csv\nsample_sheet_fp = './test_output/amplicon/2021_08_17_THDMI-4-6_samplesheet16S.csv'\n\nif exists(sample_sheet_fp):\n print(\"Warning! This file exists already.\")",
"_____no_output_____"
],
[
"with open(sample_sheet_fp,'w') as f:\n sheet.write(f)\n \n!head -n 30 {sample_sheet_fp}\n!echo ...\n!tail -n 15 {sample_sheet_fp}",
"[Header],,,,,,,,,,\nIEMFileVersion,4,,,,,,,,,\nDate,2021-12-15,,,,,,,,,\nWorkflow,GenerateFASTQ,,,,,,,,,\nApplication,FASTQ Only,,,,,,,,,\nAssay,TruSeq HT,,,,,,,,,\nDescription,,,,,,,,,,\nChemistry,Amplicon,,,,,,,,,\n,,,,,,,,,,\n[Reads],,,,,,,,,,\n151,,,,,,,,,,\n151,,,,,,,,,,\n,,,,,,,,,,\n[Settings],,,,,,,,,,\nReverseComplement,0,,,,,,,,,\nAdapter,AGATCGGAAGAGCACACGTCTGAACTCCAGTCA,,,,,,,,,\nAdapterRead2,AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT,,,,,,,,,\n,,,,,,,,,,\n[Data],,,,,,,,,,\nSample_ID,Sample_Name,Sample_Plate,Sample_Well,I7_Index_ID,index,Sample_Project,Well_description,I5_Index_ID,index2,Lane\nX00180471,X00180471,THDMI_10317_PUK2,A1,515rcbc0,AGCCTTCGTCGC,THDMI_10317,X00180471,,,1\nX00180199,X00180199,THDMI_10317_PUK2,C1,515rcbc12,CGTATAAATGCG,THDMI_10317,X00180199,,,1\nX00179789,X00179789,THDMI_10317_PUK2,E1,515rcbc24,TGACTAATGGCC,THDMI_10317,X00179789,,,1\nX00180201,X00180201,THDMI_10317_PUK2,G1,515rcbc36,GTGGAGTCTCAT,THDMI_10317,X00180201,,,1\nX00180464,X00180464,THDMI_10317_PUK2,I1,515rcbc48,TGATGTGCTAAG,THDMI_10317,X00180464,,,1\nX00179796,X00179796,THDMI_10317_PUK2,K1,515rcbc60,TGTGCACGCCAT,THDMI_10317,X00179796,,,1\nX00179888,X00179888,THDMI_10317_PUK2,M1,515rcbc72,GGTGAGCAAGCA,THDMI_10317,X00179888,,,1\nX00179969,X00179969,THDMI_10317_PUK2,O1,515rcbc84,CTATGTATTAGT,THDMI_10317,X00179969,,,1\nBLANK2_2A,BLANK2.2A,THDMI_10317_PUK2,A3,515rcbc1,TCCATACCGGAA,THDMI_10317,BLANK2.2A,,,1\nBLANK2_2B,BLANK2.2B,THDMI_10317_PUK2,C3,515rcbc13,ATGCTGCAACAC,THDMI_10317,BLANK2.2B,,,1\n...\nX00179670,X00179670,THDMI_10317_PUS6,F24,515rcbc323,GTCAGTATGGCT,THDMI_10317,X00179670,,,1\nX00179548,X00179548,THDMI_10317_PUS6,H24,515rcbc335,GTCCTCGCGACT,THDMI_10317,X00179548,,,1\nX00179326,X00179326,THDMI_10317_PUS6,J24,515rcbc347,CGTTCGCTAGCC,THDMI_10317,X00179326,,,1\nX00179165,X00179165,THDMI_10317_PUS6,L24,515rcbc359,TGCCTGCTCGAC,THDMI_10317,X00179165,,,1\nX00179035,X00179035,THDMI_10317_PUS6,N24,515rcbc371,TCTTACCCATAA,THDMI_10317,X00179035,,,1\nX00179260,X00179260,THDMI_10317_PUS6,P24,515rcbc383,TGTGCTTGTAGG,THDMI_10317,X00179260,,,1\n,,,,,,,,,,\n[Bioinformatics],,,,,,,,,,\nSample_Project,QiitaID,BarcodesAreRC,ForwardAdapter,ReverseAdapter,HumanFiltering,library_construction_protocol,experiment_design_description,,,\nTHDMI_10317,10317,False,,,True,\"Illumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4\",Equipment,,,\n,,,,,,,,,,\n[Contact],,,,,,,,,,\nSample_Project,Email,,,,,,,,,\nTHDMI_10317,\"[email protected],[email protected]\",,,,,,,,,\n,,,,,,,,,,\n"
]
],
[
[
"# Create a mapping (preparation) file for Qiita",
"_____no_output_____"
]
],
[
[
"output_filename = 'test_output/amplicon/2021-08-01-515f806r_prep.tsv'",
"_____no_output_____"
],
[
"qiita_df = generate_qiita_prep_file(plate_df, seq_type)\n\nqiita_df.head()",
"_____no_output_____"
],
[
"qiita_df.set_index('sample_name', verify_integrity=True).to_csv(output_filename, sep='\\t')",
"_____no_output_____"
]
],
[
[
"Add the previous sample sheets",
"_____no_output_____"
]
],
[
[
"if len(preps):\n prep = prep.append(preps, ignore_index=True)",
"_____no_output_____"
],
[
"!head -n 5 {output_filename}",
"sample_name\tbarcode\tprimer\tprimer_plate\twell_id\tplating\textractionkit_lot\textraction_robot\ttm1000_8_tool\tprimer_date\tmastermix_lot\twater_lot\tprocessing_robot\ttm300_8_tool\ttm50_8_tool\tsample_plate\tproject_name\torig_name\twell_description\texperiment_design_description\tlibrary_construction_protocol\tlinker\tplatform\trun_center\trun_date\trun_prefix\tpcr_primers\tsequencing_meth\ttarget_gene\ttarget_subfragment\tcenter_name\tcenter_project_name\tinstrument_model\trunid\r\nX00180471\tAGCCTTCGTCGC\tGTGYCAGCMGCCGCGGTAA\t1\tA1\tSF\t166032128\tCarmen_HOWE_KF3\t109379Z\t2021-08-17\t978215\tRNBJ0628\tEcho550\t\t\tTHDMI_UK_Plate_2\tTHDMI_10317\tX00180471\tTHDMI_UK_Plate_2.X00180471.A1\t\tIllumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4\tGT\tIllumina\tUCSDMI\t\t\tFWD:GTGYCAGCMGCCGCGGTAA; REV:GGACTACNVGGGTWTCTAAT\tSequencing by synthesis\t16S rRNA\tV4\tUCSDMI\t\t\t\r\nX00180199\tCGTATAAATGCG\tGTGYCAGCMGCCGCGGTAA\t1\tC1\tSF\t166032128\tCarmen_HOWE_KF3\t109379Z\t2021-08-17\t978215\tRNBJ0628\tEcho550\t\t\tTHDMI_UK_Plate_2\tTHDMI_10317\tX00180199\tTHDMI_UK_Plate_2.X00180199.C1\t\tIllumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4\tGT\tIllumina\tUCSDMI\t\t\tFWD:GTGYCAGCMGCCGCGGTAA; REV:GGACTACNVGGGTWTCTAAT\tSequencing by synthesis\t16S rRNA\tV4\tUCSDMI\t\t\t\r\nX00179789\tTGACTAATGGCC\tGTGYCAGCMGCCGCGGTAA\t1\tE1\tSF\t166032128\tCarmen_HOWE_KF3\t109379Z\t2021-08-17\t978215\tRNBJ0628\tEcho550\t\t\tTHDMI_UK_Plate_2\tTHDMI_10317\tX00179789\tTHDMI_UK_Plate_2.X00179789.E1\t\tIllumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4\tGT\tIllumina\tUCSDMI\t\t\tFWD:GTGYCAGCMGCCGCGGTAA; REV:GGACTACNVGGGTWTCTAAT\tSequencing by synthesis\t16S rRNA\tV4\tUCSDMI\t\t\t\r\nX00180201\tGTGGAGTCTCAT\tGTGYCAGCMGCCGCGGTAA\t1\tG1\tSF\t166032128\tCarmen_HOWE_KF3\t109379Z\t2021-08-17\t978215\tRNBJ0628\tEcho550\t\t\tTHDMI_UK_Plate_2\tTHDMI_10317\tX00180201\tTHDMI_UK_Plate_2.X00180201.G1\t\tIllumina EMP protocol 515fbc, 806r amplification of 16S rRNA V4\tGT\tIllumina\tUCSDMI\t\t\tFWD:GTGYCAGCMGCCGCGGTAA; REV:GGACTACNVGGGTWTCTAAT\tSequencing by synthesis\t16S rRNA\tV4\tUCSDMI\t\t\t\r\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0af74626d0a23e4376d5c4fccc96187364e91aa | 5,646 | ipynb | Jupyter Notebook | Task_1.ipynb | ezekielonaloye-gmit/Machine-learning-and-statistics--Assessment-2020 | a81a96797d294310b0328ef387707953199a9172 | [
"Apache-2.0"
] | null | null | null | Task_1.ipynb | ezekielonaloye-gmit/Machine-learning-and-statistics--Assessment-2020 | a81a96797d294310b0328ef387707953199a9172 | [
"Apache-2.0"
] | null | null | null | Task_1.ipynb | ezekielonaloye-gmit/Machine-learning-and-statistics--Assessment-2020 | a81a96797d294310b0328ef387707953199a9172 | [
"Apache-2.0"
] | null | null | null | 28.23 | 234 | 0.558094 | [
[
[
"<p align=\"center\">\n <h1 align=\"center\">Machine Learning and Statistics Tasks 2020</h1>\n <h1 align=\"center\"> Task 1: Python function sqrt2</h1>\n <h2 align=\"center\"> Author: Ezekiel Onaloye</h2>\n <h2 align=\"center\"> Created: November 2020 </h2> \n</p>",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### Task 1\nWrite a Python function called sqrt2 that calculates and prints to the screen the square root of 2 to 100 decimal places. ",
"_____no_output_____"
],
[
"### Introduction\n\nA square root of a number is a value that gives the original number when multiplied by itself. For example, 2 x 2 = 4, so a square root of 4 is 2. -2 x -2 is 4 too, so -2 is also a square root of 4.\n\nA square root is like asking ourselves, \"what value can we multiply by itself to get this outcome?\". That multiplication by itself, is also called squaring. So, in other words, 3 squared is 9, and so the square root of 9 is 3.",
"_____no_output_____"
],
[
"### Simple Python function sqrt2",
"_____no_output_____"
],
[
"<h4 align=\"left\"> Algorithm </h4>\n\n1. Take the input from the user\n \n2. Create a function that takes one argument\n \n3. Then, if the number is negative, return nothing \n \n4. As a square root of a number is simply the number raised to the power 0.5, raise the given number to the power of 0.5\n\n5. This will give us the square root of the number; return it\n \n6. Print out the result to the user",
"_____no_output_____"
]
],
[
[
"# Python function called sqrt2 \n# Adapted from https://www.educative.io/edpresso/how-to-take-the-square-root-of-a-number-in-python\n# Function takes input,calculates and prints to the screen the square root of 2 to 100 decimal places\n\nn = int(input(\"Please input the base value: \"))\n \ndef sqrt(n):\n if n < 0:\n return\n else:\n return n**0.5\n \nprint(\"The square root of\", n, \"is\", format(sqrt(n),'.100f'))",
"Please input the base value: 2\nThe square root of 2 is 1.4142135623730951454746218587388284504413604736328125000000000000000000000000000000000000000000000000\n"
]
],
[
[
"### Python function sqrt2",
"_____no_output_____"
],
[
"<h4 align=\"left\"> Algorithm </h4> \n\n1. Let n be the given number\n2. create function sqrt2(takes a parameter)\n3. Given number stored as variable x\n4. y equate given number + 1 divided by 2\n5. while y is less than x\n6. then x = y\n7. y store the value of x+n divided by itself and overall by 2\n8. return y as the square root",
"_____no_output_____"
]
],
[
[
"# Python function called sqrt2 \n# Adapted from \n# Function takes input,calculates and prints to the screen the square root of 2 to 100 decimal places\n\n# user asked to enter a number \nprint(\"\")\nn = int(input(\"Please enter any number to find square root: \"))\n\n# function declared\ndef sqrt2(n):\n # number entered stored as x, x is a variable carrying entered number\n x = n\n # y store x + 1 divided by 2 based on \n y = (x + 1) / 2\n while y < x:\n x = y\n y = (x + n / x) / 2\n return y;\nprint(\"Square root of\", n, \"the given number is %.100f\" % sqrt2(n))",
"\nPlease enter any number to find square root: 2\nSquare root of 2 the given number is 1.4142135623730949234300169337075203657150268554687500000000000000000000000000000000000000000000000000\n"
]
],
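[
[
"### A note on precision\n\nPython floats are IEEE-754 doubles, so only the first 15-17 significant digits printed above are exact; the remaining decimal places are artifacts of binary floating point. A minimal sketch with the standard-library `decimal` module (an illustrative addition, not part of the original task statement) computes all 100 decimal places correctly:",
"_____no_output_____"
]
],
[
[
"# Sketch: square root of 2 to 100 correct decimal places using the decimal module\nfrom decimal import Decimal, getcontext\n\ngetcontext().prec = 101  # 1 integer digit + 100 decimal digits\nprint(Decimal(2).sqrt())",
"_____no_output_____"
]
],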
[
[
"### References",
"_____no_output_____"
],
[
"[1] https://www.educative.io/edpresso/how-to-take-the-square-root-of-a-number-in-python\n\n[2] https://www.codegrepper.com/code-examples/python/python+print+upto+two+decimal+places \n\n[3] https://kodify.net/python/math/square-root/\n\n[4] https://www.educba.com/square-root-in-python/ ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
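[
"markdown"
],
[
"code"
],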
[
"markdown",
"markdown"
]
] |
d0af79242f136c2e437d810835e871d4b0ec4e5b | 753,218 | ipynb | Jupyter Notebook | notebooks/02.0-make-syllable_df/3-create-swamp-sparrow-df.ipynb | xingjeffrey/avgn_paper | 412e95dabc7b7b13a434b85cc54a21c06efe4e2b | [
"MIT"
] | null | null | null | notebooks/02.0-make-syllable_df/3-create-swamp-sparrow-df.ipynb | xingjeffrey/avgn_paper | 412e95dabc7b7b13a434b85cc54a21c06efe4e2b | [
"MIT"
] | null | null | null | notebooks/02.0-make-syllable_df/3-create-swamp-sparrow-df.ipynb | xingjeffrey/avgn_paper | 412e95dabc7b7b13a434b85cc54a21c06efe4e2b | [
"MIT"
] | null | null | null | 469.002491 | 302,320 | 0.91805 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom tqdm.autonotebook import tqdm\nfrom joblib import Parallel, delayed\nimport umap\nimport pandas as pd",
"_____no_output_____"
],
[
"from avgn.utils.paths import DATA_DIR, most_recent_subdirectory, ensure_dir",
"_____no_output_____"
],
[
"DATASET_ID = 'swamp_sparrow'",
"_____no_output_____"
],
[
"from avgn.utils.hparams import HParams\nfrom avgn.dataset import DataSet",
"_____no_output_____"
],
[
"from avgn.signalprocessing.create_spectrogram_dataset import prepare_wav, create_label_df, get_row_audio",
"_____no_output_____"
]
],
[
[
"### create dataset",
"_____no_output_____"
]
],
[
[
"hparams = HParams(\n num_mel_bins = 32,\n mel_lower_edge_hertz=100,\n mel_upper_edge_hertz=22000,\n butter_lowcut = 100,\n butter_highcut = 22000,\n ref_level_db = 25,\n min_level_db = -50,\n mask_spec = True,\n win_length_ms = 5,\n hop_length_ms = .5,\n mask_spec_kwargs = {\"spec_thresh\": 0.9, \"offset\": 1e-10}\n)",
"_____no_output_____"
],
[
"# create a dataset object\ndataset = DataSet(DATASET_ID, hparams = hparams)",
"_____no_output_____"
],
[
"dataset.sample_json",
"_____no_output_____"
]
],
[
[
"#### Create dataset based upon JSON",
"_____no_output_____"
]
],
[
[
"from joblib import Parallel, delayed\nn_jobs = -1; verbosity = 10",
"_____no_output_____"
],
[
"with Parallel(n_jobs=n_jobs, verbose=verbosity) as parallel:\n syllable_dfs = parallel(\n delayed(create_label_df)(\n dataset.data_files[key].data,\n hparams=dataset.hparams,\n labels_to_retain=[\"syllable\", \"pos_in_syllable\"],\n unit=\"elements\",\n key = key,\n )\n for key in tqdm(dataset.data_files.keys())\n )\nsyllable_df = pd.concat(syllable_dfs)\nlen(syllable_df)",
"_____no_output_____"
],
[
"syllable_df[:3]",
"_____no_output_____"
],
[
"syllable_df",
"_____no_output_____"
]
],
[
[
"### get audio for dataset",
"_____no_output_____"
]
],
[
[
"with Parallel(n_jobs=n_jobs, verbose=verbosity) as parallel:\n syllable_dfs = parallel(\n delayed(get_row_audio)(\n syllable_df[syllable_df.key == key], \n dataset.data_files[key].data['wav_loc'], \n dataset.hparams\n )\n for key in tqdm(syllable_df.key.unique())\n )\nsyllable_df = pd.concat(syllable_dfs)\nlen(syllable_df)",
"_____no_output_____"
],
[
"syllable_df[:3]",
"_____no_output_____"
],
[
"syllable_df.indvi.values[:100]",
"_____no_output_____"
],
[
"sylls = syllable_df.audio.values",
"_____no_output_____"
],
[
"nrows = 5\nncols = 10\nzoom = 2\nfig, axs = plt.subplots(ncols=ncols, nrows = nrows,figsize = (ncols*zoom, nrows+zoom/1.5))\nfor i, syll in tqdm(enumerate(sylls), total = nrows*ncols):\n ax = axs.flatten()[i]\n ax.plot(syll)\n if i == nrows*ncols -1:\n break",
"_____no_output_____"
]
],
[
[
"### Create spectrograms",
"_____no_output_____"
]
],
[
[
"from avgn.visualization.spectrogram import draw_spec_set\nfrom avgn.signalprocessing.create_spectrogram_dataset import make_spec, mask_spec, log_resize_spec, pad_spectrogram",
"_____no_output_____"
],
[
"syllables_wav = syllable_df.audio.values\nsyllables_rate = syllable_df.rate.values",
"_____no_output_____"
],
[
"with Parallel(n_jobs=n_jobs, verbose=verbosity) as parallel:\n # create spectrograms\n syllables_spec = parallel(\n delayed(make_spec)(\n syllable,\n rate,\n hparams=dataset.hparams,\n mel_matrix=dataset.mel_matrix,\n use_mel=True,\n use_tensorflow=False,\n )\n for syllable, rate in tqdm(\n zip(syllables_wav, syllables_rate),\n total=len(syllables_rate),\n desc=\"getting syllable spectrograms\",\n leave=False,\n )\n )",
"_____no_output_____"
]
],
[
[
"### Rescale spectrogram\n- using log rescaling",
"_____no_output_____"
]
],
[
[
"log_scaling_factor = 4",
"_____no_output_____"
],
[
"with Parallel(n_jobs=n_jobs, verbose=verbosity) as parallel:\n syllables_spec = parallel(\n delayed(log_resize_spec)(spec, scaling_factor=log_scaling_factor)\n for spec in tqdm(syllables_spec, desc=\"scaling spectrograms\", leave=False)\n )",
"_____no_output_____"
],
[
"draw_spec_set(syllables_spec, zoom=1, maxrows=10, colsize=25)",
"(25.0, 10) (320, 800) 25.0 32 800\n"
]
],
[
[
"### Pad spectrograms",
"_____no_output_____"
]
],
[
[
"syll_lens = [np.shape(i)[1] for i in syllables_spec]\npad_length = np.max(syll_lens)",
"_____no_output_____"
],
[
"plt.hist(syll_lens)",
"_____no_output_____"
],
[
"with Parallel(n_jobs=n_jobs, verbose=verbosity) as parallel:\n\n syllables_spec = parallel(\n delayed(pad_spectrogram)(spec, pad_length)\n for spec in tqdm(\n syllables_spec, desc=\"padding spectrograms\", leave=False\n )\n )",
"_____no_output_____"
],
[
"draw_spec_set(syllables_spec, zoom=1, maxrows=10, colsize=25)",
"(25.0, 10) (320, 800) 25.0 32 800\n"
]
],
[
[
"### save dataset",
"_____no_output_____"
]
],
[
[
"np.shape(syllables_spec)",
"_____no_output_____"
],
[
"syllable_df[:3]",
"_____no_output_____"
],
[
"syllable_df['spectrogram'] = syllables_spec",
"_____no_output_____"
],
[
"save_loc = DATA_DIR / 'syllable_dfs' / DATASET_ID / 'swampsparrow.pickle'\nensure_dir(save_loc)\nsyllable_df.to_pickle(save_loc)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0af892e4bfd0455f7e19d19b72acaf7df5101ab | 3,881 | ipynb | Jupyter Notebook | docs/model_structure.ipynb | srijithbalakrishnan/dreaminsg-integrated-model | e568ce37c0ffb40b8453ac084329e6e478a3db96 | [
"BSD-3-Clause"
] | null | null | null | docs/model_structure.ipynb | srijithbalakrishnan/dreaminsg-integrated-model | e568ce37c0ffb40b8453ac084329e6e478a3db96 | [
"BSD-3-Clause"
] | null | null | null | docs/model_structure.ipynb | srijithbalakrishnan/dreaminsg-integrated-model | e568ce37c0ffb40b8453ac084329e6e478a3db96 | [
"BSD-3-Clause"
] | null | null | null | 28.748148 | 199 | 0.545993 | [
[
[
"# Model structure\n<img src=\"../freaminsg_integrated_model/notebooks/meetings/figures/model_structure.jpeg\" alt=\"Alt text\" width=\"800\" title=\"Title text\" />",
"_____no_output_____"
],
[
"# Project structure\n\n```\nDREAMINSG_INTEGRATED_MODEL/\n|-- dreaminsg_integrated_model/\n|\t|-- data/\n|\t|\t|-- disruptive_scenarios/\n|\t|\t|\t|-- disrupt_generator.py\n|\t|\t|\n|\t|\t|-- networks/\n|\t|\t\t|-- examples/\n|\t|\t\t|-- power/\n|\t|\t\t|-- water/\n|\t|\t\t|-- transportation/\n|\t|\n|\t|-- network_sim_models/\n|\t|\t|-- power/power_system_model.py\n|\t|\t|-- transportation/\n|\t|\t|-- water/water_network_model.py\n|\t|\t|-- interdependencies.py\n|\t|\n|\t|-- results/\n|\t|\t|-- figures/plots.py\n|\t|\n|\t|-- main.py\n|\n|-- notebooks\n|\t|-- demo.ipynb\n|\t|-- model_structure.ipynb\n|\t|-- plots.ipynb\n|\n|-- setup.py\n|-- README.md\n|-- LICENSE\n```\n\nModel available on [Github](https://github.com/srijithbalakrishnan/dreaminsg-integrated-model.git)",
"_____no_output_____"
],
[
"# Model Demonstration\n\n[demo.ipynb](./demo.ipynb)",
"_____no_output_____"
],
[
"# _To do list_\n\n## Ongoing (April - May)\n1. ### Scaling the model to larger networks.\n \n (a) Some issues related to wntr package identified and are being fixed.\n \n (b) Discrete event simulation is being tested. Saves a lot of time, but there are some issues with it while implementing the wntr simulation. Simulates only the network conditions during:\n \n - the start of the simulation;\n - when some component fails;\n - when the crew reaches a failed component for repair; and\n - when a component is completely repaired and the end of the simulation. \n \n2. ### Changes to the way recovery is modeled -- recovery times instead of recovery rates.\n\n3. ### Incorporate pipe leaks and failures in the model. (duration of pipe leaks + repair time)\n4. ### Identification of relevant metrics for resilience quantification.\n\n (a) Currently using ratio of consumption to pre-disaster consumption (water and power)\n \n (b) Additional suggestions provided by Dr. Giovini Sansavinni to be incorporated.\n\n5. ### Add more interdependencies and direct impact scenarios.\n\n## Mid-term (May - July)\n1. ### Integrating module for optimizing the recovery strategy. \n\n2. ### Network generation and automation.",
"_____no_output_____"
],
[
"## Comments and Feedback\n\n### 1. What data to be collected from the model?\n - Network\n - Disruptive scenario\n\n### 2. Weighting\n- Fuzzy Ordered Weighting Methods\n- ",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0af91789a7b6fc8709eea02e4afbc5a1a98d5f7 | 10,674 | ipynb | Jupyter Notebook | notebooks/pYPK0_PDC1_HIS3_RPS19b.ipynb | MetabolicEngineeringGroupCBMA/ypk-xylose-pathways | 36ee66c0d22e5d09d70c292544888c3e7c41e9ca | [
"BSD-3-Clause"
] | 1 | 2021-08-31T07:03:34.000Z | 2021-08-31T07:03:34.000Z | notebooks/pYPK0_PDC1_HIS3_RPS19b.ipynb | MetabolicEngineeringGroupCBMA/ypk-xylose-pathways | 36ee66c0d22e5d09d70c292544888c3e7c41e9ca | [
"BSD-3-Clause"
] | null | null | null | notebooks/pYPK0_PDC1_HIS3_RPS19b.ipynb | MetabolicEngineeringGroupCBMA/ypk-xylose-pathways | 36ee66c0d22e5d09d70c292544888c3e7c41e9ca | [
"BSD-3-Clause"
] | 1 | 2021-08-17T05:30:01.000Z | 2021-08-17T05:30:01.000Z | 25.907767 | 122 | 0.528199 | [
[
[
"# Construction of pYPK0_PDC1_HIS3_RPS19b\n\n[pYPKa_Z_PDC1](pYPKa_Z_PDC1.ipynb)\n\n[pYPKa_E_RPS19b](pYPKa_E_RPS19b.ipynb) ",
"_____no_output_____"
]
],
[
[
"from pydna.all import *",
"_____no_output_____"
],
[
"p567,p577,p468,p467,p568,p578,p775,p778,p167,p166 = parse(\"yeast_pahtway_kit_standard_primers.txt\")",
"_____no_output_____"
]
],
[
[
"[Yeast Pathway Kit Standard Primers](ypk_std_primers.ipynb)",
"_____no_output_____"
]
],
[
[
"from Bio.Restriction import ZraI, AjiI, EcoRV",
"_____no_output_____"
],
[
"pYPK0 =read(\"pYPK0.gb\")",
"_____no_output_____"
],
[
"promoter_clone = pYPKa_Z_PDC1 =read(\"pYPKa_Z_PDC1.gb\")",
"_____no_output_____"
],
[
"gene_clone =read(\"pYPKa_A_ScHIS3.gb\")",
"_____no_output_____"
],
[
"terminator_clone = pYPKa_E_RPS19b =read(\"pYPKa_E_RPS19b.gb\")",
"_____no_output_____"
],
[
"p =pcr( p167, p567, promoter_clone)\ng =pcr( p468, p467, gene_clone)\nt =pcr( p568, p166, terminator_clone)",
"_____no_output_____"
],
[
"pYPK0_E_Z, stuffer = pYPK0.cut((EcoRV, ZraI))",
"_____no_output_____"
],
[
"(pYPK0_E_Z, p, g, t)",
"_____no_output_____"
],
[
"asm =Assembly((pYPK0_E_Z, p, g, t), limit=31)",
"_____no_output_____"
],
[
"asm",
"_____no_output_____"
],
[
"candidate = asm.assemble_circular()[0]\ncandidate.figure()",
"_____no_output_____"
],
[
"result = candidate.synced(pYPK0)",
"_____no_output_____"
]
],
[
[
"The new construct should have cseguid ```m-7lOIJ60aZl4b7rN4vanlD7OWk``` and 9404 bp.",
"_____no_output_____"
]
],
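[
[
"# optional sanity check (a sketch; it assumes pydna Dseqrecord objects expose a\n# cseguid() method - the expected values come from the note above)\nassert len(result) == 9404\nassert result.cseguid() == \"m-7lOIJ60aZl4b7rN4vanlD7OWk\"",
"_____no_output_____"
]
],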
[
[
"result.write(\"pYPK0_PDC1_HIS3_RPS19b.gb\")",
"_____no_output_____"
]
],
[
[
"###[Download](pYPK0_PDC1_HIS3_RPS19b.gb)",
"_____no_output_____"
]
],
[
[
"from pydna.all import *\nreloaded =read(\"pYPK0_PDC1_HIS3_RPS19b.gb\")",
"_____no_output_____"
],
[
"reloaded =read(\"pYPK0_PDC1_HIS3_RPS19b.gb\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
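[
"code"
],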
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0af935335939018f77b3190a4629a9ac8d36a94 | 5,320 | ipynb | Jupyter Notebook | utilsToLearn/caffe/.ipynb_checkpoints/runCaffe-checkpoint.ipynb | kl456123/machine_learning | c9097918f94560c72feeddbd7d4a7b0f388a1e6a | [
"MIT"
] | null | null | null | utilsToLearn/caffe/.ipynb_checkpoints/runCaffe-checkpoint.ipynb | kl456123/machine_learning | c9097918f94560c72feeddbd7d4a7b0f388a1e6a | [
"MIT"
] | null | null | null | utilsToLearn/caffe/.ipynb_checkpoints/runCaffe-checkpoint.ipynb | kl456123/machine_learning | c9097918f94560c72feeddbd7d4a7b0f388a1e6a | [
"MIT"
] | null | null | null | 22.166667 | 116 | 0.450564 | [
[
[
"import caffe\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"net = caffe.Net('deploy.prototxt',caffe.TEST)",
"_____no_output_____"
],
[
"# net.layers[0].blobs>0\n\nfor i ,(name,layer) in enumerate(zip(net._layer_names,net.layers)):\n \n# display all layers\n print(name)\n for bottom_name in net.bottom_names[name]:\n \n print('layer_%s bottom name: '%name,bottom_name)\n if(len(net.bottom_names[name])>1):\n print('True')\n# parameters layers\n# if(len(layer.blobs)>0):\n# print(name)\n ",
"data\nconv1\n('layer_conv1 bottom name: ', 'data')\nrelu1\n('layer_relu1 bottom name: ', 'conv1')\nnorm1\n('layer_norm1 bottom name: ', 'conv1')\npool1\n('layer_pool1 bottom name: ', 'norm1')\nconv2\n('layer_conv2 bottom name: ', 'pool1')\nrelu2\n('layer_relu2 bottom name: ', 'conv2')\nnorm2\n('layer_norm2 bottom name: ', 'conv2')\npool2\n('layer_pool2 bottom name: ', 'norm2')\nconv3\n('layer_conv3 bottom name: ', 'pool2')\nrelu3\n('layer_relu3 bottom name: ', 'conv3')\nconv4\n('layer_conv4 bottom name: ', 'conv3')\nrelu4\n('layer_relu4 bottom name: ', 'conv4')\nconv5\n('layer_conv5 bottom name: ', 'conv4')\nrelu5\n('layer_relu5 bottom name: ', 'conv5')\npool5\n('layer_pool5 bottom name: ', 'conv5')\nfc6\n('layer_fc6 bottom name: ', 'pool5')\nrelu6\n('layer_relu6 bottom name: ', 'fc6')\ndrop6\n('layer_drop6 bottom name: ', 'fc6')\nfc7\n('layer_fc7 bottom name: ', 'fc6')\nrelu7\n('layer_relu7 bottom name: ', 'fc7')\ndrop7\n('layer_drop7 bottom name: ', 'fc7')\nfc8\n('layer_fc8 bottom name: ', 'fc7')\nprob\n('layer_prob bottom name: ', 'fc8')\n"
],
[
"arr = np.array([[1,2,3],[2,3,4]])\narr.swapaxes(0,1)",
"_____no_output_____"
],
[
"arr2 = np.array([[2,6,7],[8,9,5]])\nnp.concatenate([arr,arr2],axis=1)",
"_____no_output_____"
],
[
"# np.concatenate([[[1,2]],[[2,3]],[[3,3]]],axis=1)\nprint(arr2[:])\nprint(arr2[None])\nprint(arr2[:,None])\nprint(arr2[:,:,None])",
"[[2 6 7]\n [8 9 5]]\n[[[2 6 7]\n [8 9 5]]]\n[[[2 6 7]]\n\n [[8 9 5]]]\n[[[2]\n [6]\n [7]]\n\n [[8]\n [9]\n [5]]]\n"
],
[
"def transform(D,W):\n W, D = np.concatenate([w[:,None] for w in W], axis=1), np.concatenate([d[:,:,None] for d in D], axis=2)\n return W.reshape((-1,)+W.shape[2:]), D.reshape((D.shape[0], -1)+D.shape[3:])",
"_____no_output_____"
],
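[
"# hypothetical usage sketch for transform(): dummy arrays whose shapes are\n# chosen only to satisfy the reshapes above (not real Caffe blobs)\nD = [np.ones((2, 4)), np.zeros((2, 4))]\nW = [np.arange(6).reshape(2, 3), np.arange(6).reshape(2, 3)]\nWt, Dt = transform(D, W)\nprint(Wt.shape, Dt.shape)  # expects (4, 3) (2, 8)",
"_____no_output_____"
]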
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0afa2397a6b7694a660989f97778664087c0482 | 1,322 | ipynb | Jupyter Notebook | engr1330jb/lessons/lesson19/lesson19.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | engr1330jb/lessons/lesson19/lesson19.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | engr1330jb/lessons/lesson19/lesson19.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | 20.030303 | 81 | 0.530257 | [
[
[
"<div class=\"alert alert-block alert-info\">\n <b><h1>ENGR 1330 Computational Thinking with Data Science </h1></b> \n</div> \n\nCopyright © 2021 Theodore G. Cleveland and Farhang Forghanparast\n\nLast GitHub Commit Date: 5 Mar 2022\n \n# 19: Simulation\n- Simulating random values\n- Draw with/without replacement\n- Histograms to check behavior\n- Concept of an interval",
"_____no_output_____"
],
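[
"A minimal sketch of the topics above, assuming `numpy` and `matplotlib` are available (the seed and sample sizes are arbitrary choices for illustration):",
"_____no_output_____"
]
],
[
[
"# sketch: simulate random values, draw without replacement, check with a histogram\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrng = np.random.default_rng(seed=1330)               # reproducible random values\nrolls = rng.integers(low=1, high=7, size=1000)       # simulate 1000 dice rolls\nsample = rng.choice(rolls, size=100, replace=False)  # draw without replacement\nplt.hist(sample, bins=6)                             # histogram to check behavior\nplt.show()\nprint(np.percentile(rolls, [5, 95]))                 # an empirical 90% interval",
"_____no_output_____"
]
],
[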
[
"## References",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0afa46a7ec9e39fc8fa334cf231bc4f7b9bbda9 | 29,410 | ipynb | Jupyter Notebook | Python/01. Basics/02 Conda Environments.ipynb | saramazaheri/CS-Tutorial | 0dbd98e37c7445116fbeb3ebfed5931a013c4a17 | [
"MIT"
] | 1 | 2021-11-19T10:51:32.000Z | 2021-11-19T10:51:32.000Z | Python/01. Basics/02 Conda Environments.ipynb | saramazaheri/CS-Tutorial | 0dbd98e37c7445116fbeb3ebfed5931a013c4a17 | [
"MIT"
] | null | null | null | Python/01. Basics/02 Conda Environments.ipynb | saramazaheri/CS-Tutorial | 0dbd98e37c7445116fbeb3ebfed5931a013c4a17 | [
"MIT"
] | null | null | null | 29,410 | 29,410 | 0.590173 | [
[
[
"<img src=\"../../images/banners/python-basics.png\" width=\"600\"/>",
"_____no_output_____"
],
[
"# <img src=\"../../images/logos/python.png\" width=\"23\"/> Conda Environments \n",
"_____no_output_____"
],
[
"## <img src=\"../../images/logos/toc.png\" width=\"20\"/> Table of Contents \n* [Understanding Conda Environments](#understanding_conda_environments)\n* [Understanding Basic Package Management With Conda](#understanding_basic_package_management_with_conda)\n * [Searching and Installing Packages](#searching_and_installing_packages)\n * [Updating and Removing Packages](#updating_and_removing_packages)\n* [Cheat Sheet](#cheat_sheet)\n* [<img src=\"../../images/logos/web.png\" width=\"20\"/> Read More](#<img_src=\"../../images/logos/web.png\"_width=\"20\"/>_read_more)\n\n---",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"understanding_conda_environments\"></a>\n## Understanding Conda Environments",
"_____no_output_____"
],
[
"When you start developing a project from scratch, it’s recommended that you use the latest versions of the libraries you need. However, when working with someone else’s project, such as when running an example from [Kaggle](https://www.kaggle.com/) or [Github](https://github.com/), you may need to install specific versions of packages or even another version of Python due to compatibility issues.",
"_____no_output_____"
],
[
"This problem may also occur when you try to run an application you’ve developed long ago, which uses a particular library version that does not work with your application anymore due to updates.",
"_____no_output_____"
],
[
"Virtual environments are a solution to this kind of problem. By using them, it is possible to create multiple environments, each one with different versions of packages. A typical Python set up includes [Virtualenv](https://virtualenv.pypa.io/en/stable/#), a tool to create isolated Python virtual environments, widely used in the Python community.",
"_____no_output_____"
],
[
"Conda includes its own environment manager and presents some advantages over Virtualenv, especially concerning numerical applications, such as the ability to manage non-Python dependencies and the ability to manage different versions of Python, which is not possible with Virtualenv. Besides that, Conda environments are entirely compatible with default [Python packages](https://realpython.com/python-modules-packages/) that may be installed using pip.",
"_____no_output_____"
],
[
"Miniconda installation provides Conda and a root environment with a version of Python and some basic packages installed. Besides this root environment, it is possible to set up additional environments including different versions of Python and packages.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"conda_environments:\"></a>\nUsing the Anaconda prompt, it is possible to check the available Conda environments by running `conda env list`:\n\n```bash\n\n$ (base) ~ % conda env list\n\n# conda environments:\n#\nbase * /home/ali/anaconda3\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nThis base environment is the root environment, created by the Miniconda installer. It is possible to create another environment, named `otherenv`, by running `conda create --name otherenv`:\n\n\n```bash\n$ (base) ~ % conda create --name otherenv\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\otherenv\n\n\nProceed ([y]/n)? y\n\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate otherenv\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n```",
"_____no_output_____"
],
[
"As notified after the environment creation process is finished, it is possible to activate the otherenv environment by running `conda activate otherenv`. You’ll notice the environment has changed by the indication between parentheses in the beginning of the prompt:\n\n```bash\n$ (base) ~ % conda activate otherenv\n$ (otherenv) ~ %\n```",
"_____no_output_____"
],
[
"You can open the Python interpreter within this environment by running `python`:\n\n```bash\n$ (otherenv) ~ % python\n\nPython 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)] :: Anaconda, Inc. on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\n```",
"_____no_output_____"
],
[
"The environment includes Python 3.7.0, the same version included in the root base environment. To exit the Python interpreter, just run `quit()`:\n\n```bash\n>>> quit()\n\n(otherenv) ~ %\n```",
"_____no_output_____"
],
[
"To deactivate the otherenv environment and go back to the root base environment, you should run `deactivate`:\n\n```bash\n(otherenv) ~ % conda deactivate\n\n(base) ~ %\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nAs mentioned earlier, Conda allows you to easily create environments with different versions of Python, which is not straightforward with Virtualenv. To include a different Python version within an environment, you have to specify it by using `python=<version>` when running conda create. For example, to create an environment named `py2` with `Python 2.7`, you have to run `conda create --name py2 python=2.7`:\n\n\n```bash\n(base) ~ % create --name py2 python=2.7\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\py2\n\n added / updated specs:\n - python=2.7\n\n\nThe following NEW packages will be INSTALLED:\n\n certifi: 2018.8.24-py27_1\n pip: 10.0.1-py27_0\n python: 2.7.15-he216670_0\n setuptools: 40.2.0-py27_0\n vc: 9-h7299396_1\n vs2008_runtime: 9.00.30729.1-hfaea7d5_1\n wheel: 0.31.1-py27_0\n wincertstore: 0.2-py27hf04cefb_0\n\nProceed ([y]/n)? y\n\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate py2\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n\n(base) /mnt/c/Users/username%\n```",
"_____no_output_____"
],
[
"As shown by the output of `conda create`, this time some new packages were installed, since the new environment uses Python 2. You can check the new environment indeed uses Python 2 by activating it and running the Python interpreter:\n\n```\n(base) ~ % conda activate py2\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"conda_environments:\"></a>\nNow, if you run `conda env list`, you should see the two environments that were created, besides the root base environment:\n\n```bash\n(py2) ~ % conda env list\n# conda environments:\n#\nbase C:\\Users\\IEUser\\Miniconda3\notherenv C:\\Users\\IEUser\\Miniconda3\\envs\\otherenv\npy2 * C:\\Users\\IEUser\\Miniconda3\\envs\\py2\n\n\n(py2) ~ %\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nIn the list, the asterisk indicates the activated environment. It is possible to remove an environment by running `conda remove --name <environment name> --all`. Since it is not possible to remove an activated environment, you should first deactivate the `py2` environment, to remove it:\n\n```bash\n(py2) ~ % conda deactivate\n(base) ~ % conda remove --name py2 --all\n\nRemove all packages in environment C:\\Users\\IEUser\\Miniconda3\\envs\\py2:\n\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\py2\n\n\nThe following packages will be REMOVED:\n\n certifi: 2018.8.24-py27_1\n pip: 10.0.1-py27_0\n python: 2.7.15-he216670_0\n setuptools: 40.2.0-py27_0\n vc: 9-h7299396_1\n vs2008_runtime: 9.00.30729.1-hfaea7d5_1\n wheel: 0.31.1-py27_0\n wincertstore: 0.2-py27hf04cefb_0\n\nProceed ([y]/n)? y\n\n\n(base) /mnt/c/Users/username%\n```",
"_____no_output_____"
],
[
"Now that you’ve covered the basics of managing environments with Conda, let’s see how to manage packages within the environments.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"understanding_basic_package_management_with_conda\"></a>\n## Understanding Basic Package Management With Conda\n\nWithin each environment, packages of software can be installed using the Conda package manager. The root base environment created by the Miniconda installer includes some packages by default that are not part of Python standard library.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"packages_in_environment_at_c:\\users\\ieuser\\miniconda3:\"></a>\nThe default installation includes the minimum packages necessary to use Conda. To check the list of installed packages in an environment, you just have to make sure it is activated and run `conda list`. In the root environment, the following packages are installed by default:\n\n```bash\n(base) ~ % conda list\n# packages in environment at C:\\Users\\IEUser\\Miniconda3:\n#\n# Name Version Build Channel\nasn1crypto 0.24.0 py37_0\nca-certificates 2018.03.07 0\ncertifi 2018.8.24 py37_1\ncffi 1.11.5 py37h74b6da3_1\nchardet 3.0.4 py37_1\nconda 4.5.11 py37_0\nconda-env 2.6.0 1\nconsole_shortcut 0.1.1 3\ncryptography 2.3.1 py37h74b6da3_0\nidna 2.7 py37_0\nmenuinst 1.4.14 py37hfa6e2cd_0\nopenssl 1.0.2p hfa6e2cd_0\npip 10.0.1 py37_0\npycosat 0.6.3 py37hfa6e2cd_0\npycparser 2.18 py37_1\npyopenssl 18.0.0 py37_0\npysocks 1.6.8 py37_0\npython 3.7.0 hea74fb7_0\npywin32 223 py37hfa6e2cd_1\nrequests 2.19.1 py37_0\nruamel_yaml 0.15.46 py37hfa6e2cd_0\nsetuptools 40.2.0 py37_0\nsix 1.11.0 py37_1\nurllib3 1.23 py37_0\nvc 14 h0510ff6_3\nvs2015_runtime 14.0.25123 3\nwheel 0.31.1 py37_0\nwin_inet_pton 1.0.1 py37_1\nwincertstore 0.2 py37_0\nyaml 0.1.7 hc54c509_2\n```",
"_____no_output_____"
],
[
"To manage the packages, you should also use Conda. Next, let’s see how to search, install, update, and remove packages using Conda.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"searching_and_installing_packages\"></a>\n### Searching and Installing Packages\n\nPackages are installed from repositories called **channels** by Conda, and some default channels are configured by the installer. To search for a specific package, you can run `conda search <package name>`. For example, this is how you search for the `keras` package (a machine learning library):\n\n```bash\n(base) ~ % conda search keras\nLoading channels: done\n# Name Version Build Channel\nkeras 2.0.8 py35h15001cb_0 pkgs/main\nkeras 2.0.8 py36h65e7a35_0 pkgs/main\nkeras 2.1.2 py35_0 pkgs/main\nkeras 2.1.2 py36_0 pkgs/main\nkeras 2.1.3 py35_0 pkgs/main\nkeras 2.1.3 py36_0 pkgs/main\n\n... (more)\n```",
"_____no_output_____"
],
[
"According to the previous output, there are different versions of the package and different builds for each version, such as for Python 3.5 and 3.6.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"name_version_build_channel\"></a>\nThe previous search shows only exact matches for packages named `keras`. To perform a broader search, including all packages containing `keras` in their names, you should use the wildcard `*`. For example, when you run conda search `*keras*`, you get the following:\n\n```bash\n(base) ~ % conda search \"*keras*\"\nLoading channels: done\n# Name Version Build Channel\nkeras 2.0.8 py35h15001cb_0 pkgs/main\nkeras 2.0.8 py36h65e7a35_0 pkgs/main\nkeras 2.1.2 py35_0 pkgs/main\nkeras 2.1.2 py36_0 pkgs/main\nkeras 2.1.3 py35_0 pkgs/main\nkeras 2.1.3 py36_0 pkgs/main\n\n... (more)\n\nkeras-applications 1.0.2 py35_0 pkgs/main\nkeras-applications 1.0.2 py36_0 pkgs/main\nkeras-applications 1.0.4 py35_0 pkgs/main\n\n... (more)\n\nkeras-base 2.2.0 py35_0 pkgs/main\nkeras-base 2.2.0 py36_0 pkgs/main\n\n... (more)\n```",
"_____no_output_____"
],
[
"As the previous output shows, there are some other keras related packages in the default channels.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nTo install a package, you should run `conda install <package name>`. By default, the newest version of the package will be installed in the active environment. So, let’s install the package `keras` in the environment `otherenv` that you’ve already created:\n\n```bash\n(base) ~ % conda activate otherenv\n\n(otherenv) ~ % conda install keras\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\otherenv\n\n added / updated specs:\n - keras\n\n\nThe following NEW packages will be INSTALLED:\n\n _tflow_1100_select: 0.0.3-mkl\n absl-py: 0.4.1-py36_0\n astor: 0.7.1-py36_0\n blas: 1.0-mkl\n certifi: 2018.8.24-py36_1\n gast: 0.2.0-py36_0\n grpcio: 1.12.1-py36h1a1b453_0\n h5py: 2.8.0-py36h3bdd7fb_2\n hdf5: 1.10.2-hac2f561_1\n icc_rt: 2017.0.4-h97af966_0\n intel-openmp: 2018.0.3-0\n keras: 2.2.2-0\n keras-applications: 1.0.4-py36_1\n keras-base: 2.2.2-py36_0\n keras-preprocessing: 1.0.2-py36_1\n libmklml: 2018.0.3-1\n libprotobuf: 3.6.0-h1a1b453_0\n markdown: 2.6.11-py36_0\n mkl: 2019.0-117\n mkl_fft: 1.0.4-py36h1e22a9b_1\n mkl_random: 1.0.1-py36h77b88f5_1\n numpy: 1.15.1-py36ha559c80_0\n numpy-base: 1.15.1-py36h8128ebf_0\n pip: 10.0.1-py36_0\n protobuf: 3.6.0-py36he025d50_0\n python: 3.6.6-hea74fb7_0\n pyyaml: 3.13-py36hfa6e2cd_0\n scipy: 1.1.0-py36h4f6bf74_1\n setuptools: 40.2.0-py36_0\n six: 1.11.0-py36_1\n tensorboard: 1.10.0-py36he025d50_0\n tensorflow: 1.10.0-mkl_py36hb361250_0\n tensorflow-base: 1.10.0-mkl_py36h81393da_0\n termcolor: 1.1.0-py36_1\n vc: 14-h0510ff6_3\n vs2013_runtime: 12.0.21005-1\n vs2015_runtime: 14.0.25123-3\n werkzeug: 0.14.1-py36_0\n wheel: 0.31.1-py36_0\n wincertstore: 0.2-py36h7fe50ca_0\n yaml: 0.1.7-hc54c509_2\n zlib: 1.2.11-h8395fce_2\n\nProceed ([y]/n)?\n```",
"_____no_output_____"
],
[
"Conda manages the necessary dependencies for a package when it is installed. Since the package keras has a lot of dependencies, when you install it, Conda manages to install this big list of packages.",
"_____no_output_____"
],
[
"> **Note:** The paragraph below may not happen when you run it as newer versions of `keras` may be available that use python 3.7.\n\nIt’s worth noticing that, since the keras package’s newest build uses Python 3.6 and the otherenv environment was created using Python 3.7, the package python version 3.6.6 was included as a dependency. After confirming the installation, you can check that the Python version for the otherenv environment is downgraded to the 3.6.6 version.",
"_____no_output_____"
],
[
"Sometimes, you don’t want packages to be downgraded, and it would be better to just create a new environment with the necessary version of Python. To check the list of new packages, updates, and downgrades necessary for a package without installing it, you should use the parameter `--dry-run`. For example, to check the packages that will be changed by the installation of the package keras, you should run the following:\n\n```\n(base) ~ % conda install keras --dry-run\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nHowever, if necessary, it is possible to change the default Python of a Conda environment by installing a specific version of the package python. To demonstrate that, let’s create a new environment called envpython:\n\n```bash\n(otherenv) ~ % conda create --name envpython\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\envpython\n\n\nProceed ([y]/n)? y\n\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate envpython\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n```",
"_____no_output_____"
],
[
"As you saw before, since the root base environment uses Python 3.7, envpython is created including this same version of Python:\n\n```bash\n(base) ~ % conda activate envpython\n(envpython) ~ % python\nPython 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)] :: Anaconda, Inc. on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> quit()\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nTo install a specific version of a package, you can run `conda install <package name>=<version>`. For example, this is how you install Python 3.6 in the envpython environment:\n\n```bash\n(envpython) ~ % conda install python=3.6\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\\envs\\envpython\n\n added / updated specs:\n - python=3.6\n\n\nThe following NEW packages will be INSTALLED:\n\n certifi: 2018.8.24-py36_1\n pip: 10.0.1-py36_0\n python: 3.6.6-hea74fb7_0\n setuptools: 40.2.0-py36_0\n vc: 14-h0510ff6_3\n vs2015_runtime: 14.0.25123-3\n wheel: 0.31.1-py36_0\n wincertstore: 0.2-py36h7fe50ca_0\n\nProceed ([y]/n)?\n```",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nIn case you need to install more than one package in an environment, it is possible to run conda install only once, passing the names of the packages. To illustrate that, let’s install `numpy`, `scipy`, and `matplotlib`, basic packages for numerical computation:\n\n```bash\n(envpython) ~ % conda install numpy scipy matplotlib\n\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\n\n added / updated specs:\n - matplotlib\n - numpy\n - scipy\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n libpng-1.6.34 | h79bbb47_0 1.3 MB\n mkl_random-1.0.1 | py37h77b88f5_1 267 KB\n intel-openmp-2019.0 | 117 1.7 MB\n qt-5.9.6 | vc14h62aca36_0 92.5 MB\n matplotlib-2.2.3 | py37hd159220_0 6.5 MB\n tornado-5.1 | py37hfa6e2cd_0 668 KB\n pyqt-5.9.2 | py37ha878b3d_0 4.6 MB\n pytz-2018.5 | py37_0 232 KB\n scipy-1.1.0 | py37h4f6bf74_1 13.5 MB\n jpeg-9b | hb83a4c4_2 313 KB\n python-dateutil-2.7.3 | py37_0 260 KB\n numpy-base-1.15.1 | py37h8128ebf_0 3.9 MB\n numpy-1.15.1 | py37ha559c80_0 37 KB\n mkl_fft-1.0.4 | py37h1e22a9b_1 120 KB\n kiwisolver-1.0.1 | py37h6538335_0 61 KB\n pyparsing-2.2.0 | py37_1 96 KB\n cycler-0.10.0 | py37_0 13 KB\n freetype-2.9.1 | ha9979f8_1 470 KB\n icu-58.2 | ha66f8fd_1 21.9 MB\n sqlite-3.24.0 | h7602738_0 899 KB\n sip-4.19.12 | py37h6538335_0 283 KB\n ------------------------------------------------------------\n Total: 149.5 MB\n\nThe following NEW packages will be INSTALLED:\n\n blas: 1.0-mkl\n cycler: 0.10.0-py37_0\n freetype: 2.9.1-ha9979f8_1\n icc_rt: 2017.0.4-h97af966_0\n icu: 58.2-ha66f8fd_1\n intel-openmp: 2019.0-117\n jpeg: 9b-hb83a4c4_2\n kiwisolver: 1.0.1-py37h6538335_0\n libpng: 1.6.34-h79bbb47_0\n matplotlib: 2.2.3-py37hd159220_0\n mkl: 2019.0-117\n mkl_fft: 1.0.4-py37h1e22a9b_1\n mkl_random: 1.0.1-py37h77b88f5_1\n numpy: 1.15.1-py37ha559c80_0\n numpy-base: 1.15.1-py37h8128ebf_0\n pyparsing: 2.2.0-py37_1\n pyqt: 5.9.2-py37ha878b3d_0\n python-dateutil: 2.7.3-py37_0\n pytz: 2018.5-py37_0\n qt: 5.9.6-vc14h62aca36_0\n scipy: 1.1.0-py37h4f6bf74_1\n sip: 4.19.12-py37h6538335_0\n sqlite: 3.24.0-h7602738_0\n tornado: 5.1-py37hfa6e2cd_0\n zlib: 1.2.11-h8395fce_2\n\nProceed ([y]/n)?\n```",
"_____no_output_____"
],
[
"Now that you’ve covered how to search and install packages, let’s see how to update and remove them using Conda.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"updating_and_removing_packages\"></a>\n### Updating and Removing Packages\n\nSometimes, when new packages are released, you need to update them. To do so, you may run `conda update <package name>`. In case you wish to update all the packages within one environment, you should activate the environment and run `conda update --all`.",
"_____no_output_____"
],
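[
"For example, a typical update session looks like this (an illustrative sketch; `numpy` is just a placeholder package name):\n\n```bash\n(base) ~ % conda update numpy\n(base) ~ % conda update --all\n```",
"_____no_output_____"
],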
[
"<a class=\"anchor\" id=\"package_plan_##\"></a>\nTo remove a package, you can run `conda remove <package name>`. For example, this is how you remove numpy from the root base environment:\n\n```bash\n(envpython) ~ % conda remove numpy\nSolving environment: done\n\n## Package Plan ##\n\n environment location: C:\\Users\\IEUser\\Miniconda3\n\n removed specs:\n - numpy\n\n\nThe following packages will be REMOVED:\n\n matplotlib: 2.2.3-py37hd159220_0\n mkl_fft: 1.0.4-py37h1e22a9b_1\n mkl_random: 1.0.1-py37h77b88f5_1\n numpy: 1.15.1-py37ha559c80_0\n scipy: 1.1.0-py37h4f6bf74_1\n\nProceed ([y]/n)?\n```",
"_____no_output_____"
],
[
"> **Note:** It’s worth noting that when you remove a package, all packages that depend on it are also removed.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"cheat_sheet\"></a>\n## Cheat Sheet\n\n[Click here to get access to a Conda cheat sheet](https://static.realpython.com/conda-cheatsheet.pdf) with handy usage examples for managing your Python environment and packages.",
"_____no_output_____"
],
[
"<a class=\"anchor\" id=\"read_more\"></a>\n## <img src=\"../../images/logos/web.png\" width=\"20\"/> Read More \n",
"_____no_output_____"
],
[
"Also, if you’d like a deeper understanding of Anaconda and Conda, check out the following links:\n\n- [Why you need Python environments and how to manage them with Conda](https://medium.freecodecamp.org/why-you-need-python-environments-and-how-to-manage-them-with-conda-85f155f4353c)\n- [Conda: Myths and Misconceptions](http://jakevdp.github.io/blog/2016/08/25/conda-myths-and-misconceptions/)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0afa7f47d7dfedff7a70e9af6f49aca73aaf2d3 | 47,442 | ipynb | Jupyter Notebook | Sesion1_Que_es_Linux/deberes/jupyter.ipynb | RSG-Ecuador/Grupo-De-Estudio-Linux-Bash- | fc21da83ad9d425b557b7490c3cc6492c222ff54 | [
"MIT"
] | 8 | 2021-01-09T21:48:12.000Z | 2021-02-20T17:37:36.000Z | Sesion1_Que_es_Linux/deberes/jupyter.ipynb | RSG-Ecuador/Grupo-De-Estudio-Linux-Bash- | fc21da83ad9d425b557b7490c3cc6492c222ff54 | [
"MIT"
] | null | null | null | Sesion1_Que_es_Linux/deberes/jupyter.ipynb | RSG-Ecuador/Grupo-De-Estudio-Linux-Bash- | fc21da83ad9d425b557b7490c3cc6492c222ff54 | [
"MIT"
] | 10 | 2021-01-02T19:03:37.000Z | 2021-02-23T06:49:01.000Z | 256.443243 | 41,000 | 0.914633 | [
[
[
"# Configuraciones para el Grupo de Estudio\n\n<img src=\"./img/f_mail.png\" style=\"width: 700px;\"/>\n\n## Contenidos\n- ¿Por qué jupyter notebooks?\n- Bash\n- ¿Que es un *kernel*?\n- Instalación\n- Deberes",
"_____no_output_____"
],
[
"## Python y proyecto Jupyter\n\n<img src=\"./img/py.jpg\" style=\"width: 500px;\"/>\n<img src=\"./img/jp.png\" style=\"width: 100px;\"/>\n\n- Necesitamos llevar un registro del avance de cada integrante. \n- Lenguaje de programación interpretado de alto nivel.\n- Jupyter notebooks: son fáciles de usar\n- `Necesitamos que todos tengan una versión de Python con jupyter lab`",
"_____no_output_____"
],
[
"## ¿Cómo funciona Jupyter?\n\n- Es un derivado del proyecto `iPython`, que ofrece una interfaz interactiva para programadores.\n- Tiene formato `.ipynb`\n- Es posible usar otros lenguajes de programación diferentes a Python.\n- Permite al usuario configurar cómo se visualiza su código mediante `Markdown`.\n- Ahora una demostración\n\n<img src=\"./img/jupex.png\" style=\"width: 500px;\"/>",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport math\n\n# constantes\npi = math.pi; h = 6.626e-34; kB = 1.380e-23; c = 3.0e+8;\nTemps = [9940.00, 8500.00, 7500.00, 6627.00, 5810.93, 4231.15, 3000.00, 2973.15, 288.15]\nlabels = ['Sirius', 'White star', 'Yellow-white star', 'Polaris', 'Sol', 'HfC', 'Bombilla', 'TaN', 'Atmósfera ']\ncolors = ['r','g','#FF9633','c','m','#eeefff','y','b','k']\n\n# arreglo de frecuencias\nfreq = np.arange(0.25e14,3e15,0.25e14)\n\n# funcion spectral energy density (SED)\ndef SED(f, T):\n energyDensity = ( 8*pi*h*(np.power(f, 3.0))/(c**3) ) / (np.exp((h/kB)*f/T) - 1)\n return energyDensity\n\n# Calculo de SED para temperaturas\nfor i in range(len(Temps)):\n r = SED(freq,Temps[i])\n plt.plot(freq*1e-12,r,color=colors[i],label=labels[i])\n\nplt.legend(); plt.xlabel('frequency ( THz )'); plt.ylabel('SED_frequency ( J $m^{-3}$ $Hz^{-1}$ )')\nplt.xlim(0.25e2,2.5e3); plt.show()",
"_____no_output_____"
]
],
[
[
"### Permite escribir expresiones matemáticas complejas\n\nEs posible escribir código en $\\LaTeX$ si es necesario",
"_____no_output_____"
],
[
"\\begin{align}\n \\frac{\\partial u(\\lambda, T)}{\\partial \\lambda} &= \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{C_{1}}{\\lambda^{5}}\\left(\\frac{1}{e^{C_{2}/T\\lambda} -1}\\right) \\right) \\\\\n 0 &= \\left(\\frac{-5}{e^{C_{2}/T\\lambda} -1}\\frac{1}{\\lambda^{6}}\\right) + \\left( \\frac{C_{2}e^{C_{2}/T\\lambda}}{T\\lambda^{7}} \\right)\\left(\\frac{1}{e^{C_{2}/T\\lambda} -1}\\right)^{2} \\\\\n 0 &= \\frac{-\\lambda T5}{C_{2}} + \\frac{e^{C_{2}/T\\lambda}}{e^{C_{2}/T\\lambda} -1} \\\\\n 0 &= -5 + \\left(\\frac{C_{2}}{\\lambda T}\\right) \\left(\\frac{e^{C_{2}/T\\lambda}}{e^{C_{2}/T\\lambda} -1}\\right)\n\\end{align}",
"_____no_output_____"
],
[
"## ¿Cómo es que usa un lenguaje diferente a Python?\n\n- Un kernel es una especie de `motor computacional` que ejecuta el código dentro de un archivo `.ipynb`. \n- Los kernels hay para varios lengajes de programación, como R, Bash, C++, julia.\n\n<img src=\"./img/ker.png\" style=\"width: 250px;\"/>\n\n## ¿Por qué Bash?\n\n- Bash es un lenguaje de scripting que se comunica con la shell e históricamente ha ayudado a científicos a llevarse mejor con la bioinformática.",
"_____no_output_____"
],
[
"## ¿Dónde encontramos las instrucciones para instalar Python?\n\n- Es posible hacerlo de varias maneras: `Anaconda` y el `intérprete oficial` desde https://www.python.org/downloads/\n- Usaremos el intérprete de `Anaconda`: es más fácil la instalación si no te acostumbras a usar la línea de comandos.\n- Si ustedes ya están familiarizados con Python y no desean instalar el intérprete de `Anaconda` pueden usar `pip` desde https://pypi.org/project/bash_kernel/\n\n<img src=\"./img/qrgit.png\" style=\"width: 250px;\"/>",
"_____no_output_____"
],
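[
"A minimal sketch of the kernel installation mentioned above (assuming a Python installation is already in place; these are the commands documented on the bash_kernel project page):\n\n```bash\npip install bash_kernel\npython -m bash_kernel.install\n```",
"_____no_output_____"
],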
[
"## Deberes\n- Creamos una carpeta en `google Drive` donde harán subirán los archivos `.ipynb` y una conversión a HTML, u otro tipo de archivo dependiendo de la sesión.\n- Vamos a tener un quiz cada semana, que les enviaremos por el servidor de Discord del grupo de estudio.\n- El deber para la siguiente semana: \n 1. Instalar Ubuntu si aún no lo poseen usando cualquiera de las alternativas presentadas.\n 2. Instalar Anaconda, jupyter lab y el kernel de bash. \n\nSe deben enviar un documento word o pdf con capturas de pantalla que compruebe esto.\nSi tienen algún problema, usen por favor los foros de `Discord` y nos ayudamos entre todos.\n\n<img src=\"./img/deberes.png\" style=\"width: 500px;\"/>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0afb436d9694d50991bc299ff118cc3f34957ff | 256,478 | ipynb | Jupyter Notebook | TEMA-2/Clase9_GenDistribucionProbabilidad.ipynb | duarteandres/SPF-2020-II-G1 | 81dcbfa924951ad9e7561457f31f9ba0e007937a | [
"MIT"
] | null | null | null | TEMA-2/Clase9_GenDistribucionProbabilidad.ipynb | duarteandres/SPF-2020-II-G1 | 81dcbfa924951ad9e7561457f31f9ba0e007937a | [
"MIT"
] | null | null | null | TEMA-2/Clase9_GenDistribucionProbabilidad.ipynb | duarteandres/SPF-2020-II-G1 | 81dcbfa924951ad9e7561457f31f9ba0e007937a | [
"MIT"
] | null | null | null | 301.738824 | 28,136 | 0.925417 | [
[
[
"# Generación de observaciones aleatorias a partir de una distribución de probabilidad",
"_____no_output_____"
],
[
"La primera etapa de la simulación es la **generación de números aleatorios**. Los números aleatorios sirven como el bloque de construcción de la simulación. La segunda etapa de la simulación es la **generación de variables aleatorias basadas en números aleatorios**. Esto incluye generar variables aleatorias <font color ='red'> discretas y continuas de distribuciones conocidas </font>. En esta clase, estudiaremos técnicas para generar variables aleatorias.\n\nIntentaremos dar respuesta a el siguiente interrogante:\n>Dada una secuencia de números aleatorios, ¿cómo se puede generar una secuencia de observaciones aleatorias a partir de una distribución de probabilidad dada? Varios enfoques diferentes están disponibles, dependiendo de la naturaleza de la distribución",
"_____no_output_____"
],
[
"Considerando la generación de números alestorios estudiados previamente, asumiremos que tenemos disponble una secuencia $U_1,U_2,\\cdots$ variables aleatorias independientes, para las cuales se satisface que:\n$$\nP(U_i\\leq u) = \\begin{cases}0,& u<0\\\\ u,&0\\leq u \\leq 1\\\\ 1,& u>1 \\end{cases}\n$$\nes decir, cada variable se distribuye uniformemente entre 0 y 1.\n\n**Recordar:** En clases pasadas, observamos como transformar un número p-seudoaletorio distribuido uniformemte entre 0 y 1, en una distribución normalmente distribuida con media $(\\mu,\\sigma^2)\\longrightarrow$ <font color='red'> [Médoto de Box Muller](http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf) </font> como un caso particular.\n\nEn esta sesión, se presentarán dos de los técnicas más ampliamente utilizados para generar variables aletorias, a partir de una distribución de probabilidad.",
"_____no_output_____"
],
[
"## 1. Método de la transformada inversa",
"_____no_output_____"
],
[
"Este método puede ser usado en ocasiones para generar una observación aleatoria. Tomando $X$ como la variable aletoria involucrada, denotaremos la función de distribución de probabilidad acumulada por\n$$F(x)=P(X\\leq x),\\quad \\forall x$$\n<font color ='blue'> Dibujar graficamente esta situación en el tablero</font>\n\nEl método de la transformada inversa establece\n$$X = F^{-1}(U),\\quad U \\sim \\text{Uniforme[0,1]}$$\ndonde $F^{-1}$ es la transformada inversa de $F$.\n\nRecordar que $F^{-1}$ está bien definida si $F$ es estrictamente creciente, de otro modo necesitamos una regla para solucionar los casos donde esta situación no se satisface. Por ejemplo, podríamos tomar\n$$F^{-1}(u)=\\inf\\{x:F(x)\\geq u\\}$$ \nSi hay muchos valores de $x$ para los cuales $F(x)=u$, esta regla escoje el valor mas pequeño. Observar esta situación en el siguiente ejemplo:\n\n\nObserve que en el intervalo $(a,b]$ si $X$ tiene distribución $F$, entonces\n$$P(a<X\\leq b)=F(b)-F(a)=0\\longrightarrow \\text{secciones planas}$$\n\nPor lo tanto si $F$ tienen una densidad continua, entonces $F$ es estrictamente creciente y su inversa está bien definida. \n",
"_____no_output_____"
],
[
"Ahora observemos cuando se tienen las siguientes funciones:\n\nObservemos que sucede en $x_0$\n$$\\lim_{x \\to x_0^-} F(x)\\equiv F(x^-)<F(x^+)\\equiv \\lim_{x\\to x_0^+}F(x)$$\nBajo esta distribución el resultado $x_0$ tiene probabilidad $F(x^+)-F(x^-)$. Por otro lado todos los valores de $u$ entre $[u_2,u_1]$ serán mapeados a $x_0$.\n\nLos siguientes ejemplos mostrarán una implementación directa de este método.",
"_____no_output_____"
],
[
"### Ejemplo 1: Distribución exponencial\nLa distribución exponencial con media $\\theta$ tiene distribución \n$$F(x)=1-e^{-x/\\theta}, \\quad x\\geq 0$$\n> Distrubución exponencial python: https://en.wikipedia.org/wiki/Exponential_distribution",
"_____no_output_____"
],
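[
"A sketch of the derivation shown on the board (our addition, not part of the original notes): set $u=F(x)=1-e^{-x/\\theta}$ and solve for $x$, giving $x=-\\theta\\ln(1-u)$. Since $1-U$ is also Uniform(0,1) whenever $U\\sim$ Uniform(0,1), one may take $X=-\\theta\\ln U$, which is exactly what the `D_exponential` function below implements.",
"_____no_output_____"
],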
[
">### <font color= blue> Mostrar en el tablero la demostración ",
"_____no_output_____"
]
],
[
[
"# Importamos las librerías principales\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# Creamos la función que crea muestras distribuidas exponencialmente\ndef D_exponential(theta,N):\n return -np.log(np.random.random(N))*theta",
"_____no_output_____"
],
[
"theta = 4 # Media\nN = 10**6 # Número de muestras\n# creamos muestras exponenciales con la función que esta en numpy\nx = np.random.exponential(theta,N) \n# creamos muestras exponenciales con la función creada\nx2 = D_exponential(theta,N)\n# Graficamos el historial para x\nplt.hist(x,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma función de numpy')\nprint(np.mean(x))\nplt.show()",
"4.003380413190708\n"
],
[
"plt.hist(x2,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma función creada')\nprint(np.mean(x2))\nplt.show()",
"4.001007346372537\n"
]
],
[
[
"### Ejemplo 2\nSe sabe que la distribución Erlang resulta de la suma de $k$ variables distribuidas exponencialmente cada una con media $\\theta$, y por lo tanto esta variable resultante tiene distribución Erlang de tamaño $k$ y media $theta$.\n\n> Enlace distribución Erlang: https://en.wikipedia.org/wiki/Erlang_distribution",
"_____no_output_____"
]
],
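[
[
"Note (our addition): since each exponential can be generated as $-\\theta\\ln U_i$, the sum of $k$ of them is $-\\theta\\ln\\prod_{i=1}^{k}U_i$. This product-of-uniforms shortcut is what the `D_erlang` function below implements.",
"_____no_output_____"
]
],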
[
[
"N = 10**4\n# Variables exponenciales\nx1 = np.random.exponential(4,N)\nx2 = np.random.exponential(4,N)\nx3 = np.random.exponential(4,N)\nx4 = np.random.exponential(4,N)\nx5 = np.random.exponential(4,N)\n\n# Variables erlang\ne0 = x1\ne1 = (x1+x2)\ne2 = (x3+x4+x5)\ne3 = (x1+x2+x3+x4)\ne4 = x1+x2+x3+x4+x5\nplt.hist(e0,100,density=True,label='1 exponencial')\nplt.hist(e1,100,density=True,label='suma de 2 exp')\nplt.hist(e2,100,density=True,label='suma de 3 exp')\nplt.hist(e3,100,density=True,label='suma de 4 exp')\nplt.hist(e4,100,density=True,label='suma de 5 exp')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
],
[
"# Función para crear variables aleatorias Erlang\ndef D_erlang(theta:'media distrubucion',k,N):\n f = np.random.rand(N,k) # Matriz de variables aleatorias de dim N*k mejora la velocidad del algoritmo\n y =list(map(lambda i:-(theta)*np.log(np.prod(f[i,:])),range(N)))\n return y",
"_____no_output_____"
],
[
"# Prueba de la función creada\nN = 10**4\nks = [1,2,3,4,5]\ntheta = 4\ny = list(map(lambda k:D_erlang(theta,k,N),ks))\n\n[plt.hist(y[i],bins=100,density=True,label='suma de %i exp'%(i+1)) for i in range(len(y))]\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Función de densidad variables Erlang\n\n$$p(x)=x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k\\Gamma(k)}\\equiv x^{k-1}\\frac{e^{-x/\\theta}}{\\theta^k(k-1)!}$$",
"_____no_output_____"
]
],
[
[
"#Librería que tiene la función gamma y factorial \n# Para mostrar la equivalencia entre el factorial y la función gamma\nimport scipy.special as sps \nfrom math import factorial as fac\nk = 4\ntheta = 4\n\nx = np.arange(0,60,0.01)\nplt.show() \ny= x**(k-1)*(np.exp(-x/theta) /(sps.gamma(k)*theta**k))\ny2 = x**(k-1)*(np.exp(-x/theta) /(fac(k-1)*theta**k))\nplt.plot(x,y,'r')\nplt.plot(x,y2,'b--')\n# plt.show()\n\n# Creo variables aleatorias erlang y obtengo su histograma en la misma gráfica anterior\nN = 10**4\nr1 = D_erlang(theta,k,N)\nplt.hist(r1,bins=50,density=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Para mejorar la eficiencia, creemos una función que grafique la misma gráfica anterior pero este caso que le podamos variar los parámetros `k` y $\\theta$ de la distribución",
"_____no_output_____"
]
],
[
[
"# Función que grafica subplots para cada señal de distribución Erlang\ndef histograma_erlang(signal:'señal que desea graficar',\n k:'Parámetro de la función Erlang'):\n\n plt.figure(figsize=(8,3))\n count, x, _ = plt.hist(signal,100,density=True,label='k=%d'%k)\n y = x**(k-1)*(np.exp(-x/theta) /(sps.gamma(k)*theta**k))\n plt.plot(x, y, linewidth=2,color='k')\n plt.ylabel('Probabilidad')\n plt.xlabel('Muestras')\n plt.legend()\n plt.show()",
"_____no_output_____"
]
],
[
[
"Con la función anterior, graficar la función de distribución de una Erlang con parámetros $\\theta = 4$ y `Ks = [1,8,3,6] `",
"_____no_output_____"
]
],
[
[
"theta = 4 # media \nN = 10**5 # Número de muestras\nKs = [1,8,3,6] # Diferentes valores de k para la distribución Erlang\n# Obtengo\nY = list(map(lambda k:D_erlang(theta,k,N),Ks))\nlist(map(histograma_erlang,Y,Ks));",
"_____no_output_____"
]
],
[
[
"### Ejemplo 4\nDistribución de Rayleigh\n$$F(x)=1-e^{-2x(x-b)},\\quad x\\geq b $$",
"_____no_output_____"
],
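[
"A sketch of the inversion (our addition): setting $u=1-e^{-2x(x-b)}$ gives the quadratic $2x^2-2bx+\\ln(1-u)=0$, whose admissible root is $x=\\frac{b}{2}+\\frac{\\sqrt{b^{2}-2\\ln(1-u)}}{2}$. Replacing $1-u$ by $u$ (both uniform on $[0,1]$) yields the expression used in `D_rayleigh` below.",
"_____no_output_____"
],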
[
"> Fuente: https://en.wikipedia.org/wiki/Rayleigh_distribution",
"_____no_output_____"
]
],
[
[
"# Función del ejemplo 4\ndef D_rayleigh(b,N):\n return (b/2)+np.sqrt(b**2-2*np.log(np.random.rand(N)))/2\n\nnp.random.rayleigh?\n\n# Función de Raylegh que contiene numpy\ndef D_rayleigh2(sigma,N):\n return np.sqrt(-2*sigma**2*np.log(np.random.rand(N)))",
"_____no_output_____"
],
[
"b = 0.5; N =10**6;sigma = 2\nr = D_rayleigh(b,N) # Función del ejemplo \nr2 = np.random.rayleigh(sigma,N) # Función que contiene python\nr3 = D_rayleigh2(sigma,N) # Función creada de acuerdo a la función de python\n\nplt.figure(1,figsize=(10,8))\nplt.subplot(311)\nplt.hist(r3,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma función D_rayleigh2')\n\nplt.subplot(312)\nplt.hist(r2,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma función numpy')\n\nplt.subplot(313)\nplt.hist(r,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma función D_rayleigh')\nplt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95,\n hspace=.5,wspace=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Distribuciones discretas\n\nPara una variable dicreta, evaluar $F^{-1}$ se reduce a buscar en una tabla. Considere por ejemplo una variable aleatoria discreta, cuyos posibles valores son $c_1<c_2<\\cdots<c_n$. Tome $p_i$ la probabilidad alcanzada por $c_i$, $i=1,\\cdots,n$ y tome $q_0=0$, en donde $q_i$ representa las **probabilidades acumuladas asociadas con $c_i$** y está definido como:\n$$q_i=\\sum_{j=1}^{i}p_j,\\quad i=1,\\cdots,n \\longrightarrow q_i=F(c_i)$$\nEntonces, para tomar muestras de esta distribución se deben de realizar los siguientes pasos:\n 1. Generar un número uniforme $U$ entre (0,1).\n 2. Encontrar $k\\in\\{1,\\cdots,n\\}$ tal que $q_{k-1}<U\\leq q_k$\n 3. Tomar $X=c_k$.",
"_____no_output_____"
],
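[
"A minimal sketch of the three steps above (our addition; assumes `numpy` imported as `np`):\n\n```python\nc = np.array([1, 2, 3, 4, 5])        # possible values c_1 < ... < c_n\nq = np.cumsum([.1, .2, .4, .2, .1])  # cumulative probabilities q_i\n\nU = np.random.rand()                 # step 1: U ~ Uniform(0,1)\nk = np.searchsorted(q, U)            # step 2: first k with U <= q_k\nX = c[k]                             # step 3: X = c_k\n```",
"_____no_output_____"
],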
[
"### Ejemplo numérico",
"_____no_output_____"
]
],
[
[
"# Librería para crear tablas\nimport pandas as pd",
"_____no_output_____"
],
[
"val = [1,2,3,4,5]\np_ocur = [.1,.2,.4,.2,.1]\np_acum = np.cumsum(p_ocur)\n\ndf = pd.DataFrame(index=val,columns=['Probabilidades','Probabilidad acumulada'], dtype='float')\ndf.index.name = \"Valores (índices)\"\ndf.loc[val,'Probabilidades'] = p_ocur\ndf.loc[val,'Probabilidad acumulada'] = p_acum\ndf",
"_____no_output_____"
],
[
"u = .5\nprint(sum(1 for i in p_acum if i<u) + 1)",
"3\n"
],
[
"def Gen_distr_discreta(U:'vector de números aleatorios',\n p_acum: 'P.Acumulada de la distribución a generar'):\n '''Tener en cuenta que este arreglo cuenta números empezando del 0'''\n v = np.array(list(map(lambda j:sum(1 for i in p_acum if i<U[j]),range(len(U)))))\n return v",
"_____no_output_____"
]
],
[
[
"# Lo que no se debe de hacer, cuando queremos graficar el histograma de una distribución discreta",
"_____no_output_____"
]
],
[
[
"N = 10**4\nu =np.random.rand(N)\nv = Gen_distr_discreta(u,p_acum)+1\nplt.hist(v,bins = 6)\nplt.show()",
"_____no_output_____"
],
[
"N = 10**4\nu =np.random.rand(N)\nv = Gen_distr_discreta(u,p_acum)+1 #+1 porque los índices comienzan en 1 \n# print(u,v)\n\n# Método 1 (Correcto)\nhist,bins = np.histogram(v,bins=len(val))\n# print(hist,bins)\nplt.bar(val,hist)\nplt.title('METODO CORRECTO')\nplt.xlabel('valores (índices)')\nplt.ylabel('frecuencias')\nplt.show()\n \n# Método 2 (incorrecto)\ny,x,_ = plt.hist(v,bins=len(val))\nplt.title('METODO INCORRECTO')\nplt.xlabel('valores (índices)')\nplt.ylabel('frecuencias')\nplt.legend(['incorrecto'])\nplt.show()\n",
"_____no_output_____"
],
[
"def plot_histogram_discrete(distribucion:'distribución a graficar histograma',\n label:'label del legend'):\n # len(set(distribucion)) cuenta la cantidad de elementos distintos de la variable 'distribucion'\n plt.figure(figsize=[8,4])\n y,x = np.histogram(distribucion,bins = len(set(distribucion))) \n plt.bar(list(set(distribucion)),y,label=label)\n plt.legend()\n plt.show()",
"_____no_output_____"
]
],
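[
[
"# Illustrative sketch (our addition): use the helper defined above on the samples v\nplot_histogram_discrete(v, 'discrete distribution')",
"_____no_output_____"
]
],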
[
[
">### <font color ='red'> **Tarea 4** \n> 1. Generación variable aleatoria continua\n>El tiempo en el cual un movimiento browniano se mantiene sobre su punto máximo en el intervalo [0,1] tiene una distribución\n>$$F(x)=\\frac{2}{\\pi}\\sin^{-1}(\\sqrt x),\\quad 0\\leq x\\leq 1$$ </font>\n\n> 2. Generación variable aleatoria Discreta\n> La distribución binomial modela el número de éxitos de n ensayos independientes donde hay una probabilidad p de éxito en cada ensayo.\n> Generar una variable aletoria binomial con parámetros $n=10$ y $p=0.7$. Recordar que $$X\\sim binomial(n,p) \\longrightarrow p_i=P(X=i)=\\frac{n!}{i!(n-i)!}p^i(1-p)^{n-i},\\quad i=0,1,\\cdots,n$$\n> Por propiedades de la operación factorial la anterior $p_i$ se puede escribir como:\n> $$p_{i+1}=\\frac{n-i}{i+1}\\frac{p}{1-p} p_i $$\n\n> **Nota:** Por notación recuerde que para el caso continuo $f(x)$ es la distribución de probabilidad (PDF), mientras $F(x)$ corresponde a la distribución de probabilidad acumulada (CDF). Para el caso discreto, $P(X=i)$ corresponde a su distribución de probabilidad (PMF) y $ F_{X}(x)=\\operatorname {P} (X\\leq x)=\\sum _{x_{i}\\leq x}\\operatorname {P} (X=x_{i})=\\sum _{x_{i}\\leq x}p(x_{i})$, corresponde a su distribución de probabilidad acumulada (CDF).\n\nGenere muestres aleatorias que distribuyan según la función dada usando el método de la transformada inversa y grafique el histograma de 100 muestras generadas con el método y compárela con el función $f(x)$ dada, esto con el fín de validar que el procedimiento fue realizado de manera correcta",
"_____no_output_____"
],
[
"<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n $('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Oscar David Jaramillo Z.\n</footer>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0afb888223e3b5bd719d921536a778fd575d051 | 932 | ipynb | Jupyter Notebook | HelloGithub.ipynb | ekonstankiewicz/dw_matrix | ef8c3ee7f95cdf4b3af03066eebdd1918e0750b2 | [
"MIT"
] | null | null | null | HelloGithub.ipynb | ekonstankiewicz/dw_matrix | ef8c3ee7f95cdf4b3af03066eebdd1918e0750b2 | [
"MIT"
] | null | null | null | HelloGithub.ipynb | ekonstankiewicz/dw_matrix | ef8c3ee7f95cdf4b3af03066eebdd1918e0750b2 | [
"MIT"
] | null | null | null | 932 | 932 | 0.709227 | [
[
[
"print(\"Hello Github\")",
"Hello Github\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0afc537081b453e88822d4d9530f3f3384debe9 | 805,321 | ipynb | Jupyter Notebook | examples/qc_example.ipynb | fleming79/MHKiT-Python | 006de2ad8162a0ee127cfdf2665c1d36b015263c | [
"BSD-3-Clause"
] | 21 | 2020-04-20T19:10:03.000Z | 2022-03-30T18:46:03.000Z | examples/qc_example.ipynb | fleming79/MHKiT-Python | 006de2ad8162a0ee127cfdf2665c1d36b015263c | [
"BSD-3-Clause"
] | 110 | 2020-03-06T22:11:08.000Z | 2022-03-25T20:28:36.000Z | examples/qc_example.ipynb | fleming79/MHKiT-Python | 006de2ad8162a0ee127cfdf2665c1d36b015263c | [
"BSD-3-Clause"
] | 32 | 2020-03-05T20:33:10.000Z | 2022-03-24T20:19:34.000Z | 1,709.81104 | 159,624 | 0.959832 | [
[
[
"# MHKiT Quality Control Module\nThe following example runs a simple quality control analysis on wave elevation data using the [MHKiT QC module](https://mhkit-software.github.io/MHKiT/mhkit-python/api.qc.html). The data file used in this example is stored in the [\\\\\\\\MHKiT\\\\\\\\examples\\\\\\\\data](https://github.com/MHKiT-Software/MHKiT-Python/tree/master/examples/data) directory.\n\nStart by importing the necessary Python packages and MHKiT modules.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom mhkit import qc, utils",
"_____no_output_____"
]
],
[
[
"## Load Data",
"_____no_output_____"
],
[
"The wave elevation data used in this example includes several issues, including timestamps that are out of order, corrupt data with values of -999, data outside the expected range, and stagnant data. \n\nThe data is loaded into a pandas DataFrame using the pandas method `read_csv`. The first 5 rows of data are shown below, along with a plot.",
"_____no_output_____"
]
],
[
[
"# Load data from the csv file into a DataFrame\ndata = pd.read_csv('data/qc/wave_elevation_data.csv', index_col='Time') \n\n# Plot the data\ndata.plot(figsize=(15,5), ylim=(-60,60)) \n\n# Print the first 5 rows of data\nprint(data.head()) ",
" probe1 probe2 probe3\nTime \n10.000 24.48 28.27 1.3\n10.002 34.48 40.27 -8.7\n10.004 30.48 38.27 -13.7\n10.006 12.48 24.27 -32.7\n10.008 13.48 22.27 -21.7\n"
]
],
[
[
"The data is indexed by time in seconds. To use the quality control functions, the data must be indexed by datetime. The index can be converted to datetime using the following utility function.",
"_____no_output_____"
]
],
[
[
"# Convert the index to datetime\ndata.index = utils.index_to_datetime(data.index, origin='2019-05-20') \n\n# Print the first 5 rows of data\nprint(data.head())",
" probe1 probe2 probe3\nTime \n2019-05-20 00:00:10.000 24.48 28.27 1.3\n2019-05-20 00:00:10.002 34.48 40.27 -8.7\n2019-05-20 00:00:10.004 30.48 38.27 -13.7\n2019-05-20 00:00:10.006 12.48 24.27 -32.7\n2019-05-20 00:00:10.008 13.48 22.27 -21.7\n"
]
],
[
[
"## Quality control tests\nThe following quality control tests are used to identify timestamp issues, corrupt data, data outside the expected range, and stagnant data.\n\nEach quality control tests results in the following information:\n\n* Cleaned data, which is a DataFrame that has *NaN* in place of data that did not pass the quality control test\n* Boolean mask, which is a DataFrame with True/False that indicates if each data point passed the quality control test\n* Summary of the quality control test results, the summary includes the variable name (which is blank for timestamp issues), the start and end time of the test failure, and an error flag for each test failure",
"_____no_output_____"
],
[
"### Check timestamp\nQuality control analysis generally starts by checking the timestamp index of the data. \n\nThe following test checks to see if 1) the data contains duplicate timestamps, 2) timestamps are not monotonically increasing, and 3) timestamps occur at irregular intervals (an interval of 0.002s is expected for this data). \n\nIf duplicate timestamps are found, the resulting DataFrames (cleaned data and mask) keep the first occurrence. If timestamps are not monotonic, the timestamps in the resulting DataFrames are reordered.",
"_____no_output_____"
]
],
[
[
"# Define expected frequency of the data, in seconds\nfrequency = 0.002 \n\n# Run the timestamp quality control test\nresults = qc.check_timestamp(data, frequency) ",
"_____no_output_____"
]
],
[
[
"The cleaned data, boolean mask, and test results summary are shown below. The summary is transposed (using .T) so that it is easier to read.",
"_____no_output_____"
]
],
[
[
"# Plot cleaned data\nresults['cleaned_data'].plot(figsize=(15,5), ylim=(-60,60)) \n\n# Print the first 5 rows of the cleaned data\nprint(results['cleaned_data'].head()) ",
" probe1 probe2 probe3\n2019-05-20 00:00:10.000 24.48 28.27 1.3\n2019-05-20 00:00:10.002 34.48 40.27 -8.7\n2019-05-20 00:00:10.004 30.48 38.27 -13.7\n2019-05-20 00:00:10.006 12.48 24.27 -32.7\n2019-05-20 00:00:10.008 13.48 22.27 -21.7\n"
],
[
"# Print the first 5 rows of the mask\nprint(results['mask'].head()) ",
" probe1 probe2 probe3\n2019-05-20 00:00:10.000 True True True\n2019-05-20 00:00:10.002 True True True\n2019-05-20 00:00:10.004 True True True\n2019-05-20 00:00:10.006 True True True\n2019-05-20 00:00:10.008 True True True\n"
],
[
"# Print the test results summary\n# The summary is transposed (using .T) so that it is easier to read.\nprint(results['test_results'].T) ",
" 0 1 \\\nVariable Name \nStart Time 2019-05-20 00:00:10.230000 2019-05-20 00:00:10.340000 \nEnd Time 2019-05-20 00:00:10.230000 2019-05-20 00:00:10.340000 \nTimesteps 1 1 \nError Flag Nonmonotonic timestamp Duplicate timestamp \n\n 2 \nVariable Name \nStart Time 2019-05-20 00:00:10.042000 \nEnd Time 2019-05-20 00:00:10.044000 \nTimesteps 2 \nError Flag Missing timestamp \n"
]
],
[
[
"### Check for corrupt data\nIn the following quality control tests, the cleaned data from the previous test is used as input to the subsequent test. For each quality control test, a plot of the cleaned data is shown along with the test results summary.\n\nNote, that if you want to run a series of quality control tests before extracting the cumulative cleaned data, boolean mask, and summary, we recommend using Pecos directly with the object-oriented approach, see https://pecos.readthedocs.io/ for more details.\n\nThe quality control test below checks for corrupt data, indicated by a value of -999.",
"_____no_output_____"
]
],
[
[
"# Define corrupt values\ncorrupt_values = [-999] \n\n# Run the corrupt data quality control test\nresults = qc.check_corrupt(results['cleaned_data'], corrupt_values) \n\n# Plot cleaned data\nresults['cleaned_data'].plot(figsize=(15,5), ylim=(-60,60)) \n\n# Print test results summary\nprint(results['test_results'].T)",
" 0 1\nVariable Name probe1 probe3\nStart Time 2019-05-20 00:00:10.110000 2019-05-20 00:00:10.834000\nEnd Time 2019-05-20 00:00:10.134000 2019-05-20 00:00:10.848000\nTimesteps 13 8\nError Flag Corrupt data Corrupt data\n"
]
],
[
[
"### Check for data outside the expected range\nThe next quality control test checks for data that is greater than 50 or less than -50. Note that expected range tests can also be used to compare measured values to a model, or analyze the expected relationships between data columns.",
"_____no_output_____"
]
],
[
[
"# Define expected lower and upper bound ([lower bound, upper bound])\nexpected_bounds = [-50, 50] \n\n# Run expected range quality control test\nresults = qc.check_range(results['cleaned_data'], expected_bounds) \n\n# Plot cleaned data\nresults['cleaned_data'].plot(figsize=(15,5), ylim=(-60,60)) \n\n# Print test results summary\nprint(results['test_results'].T) ",
" 0 1 \\\nVariable Name probe3 probe3 \nStart Time 2019-05-20 00:00:10.240000 2019-05-20 00:00:10.468000 \nEnd Time 2019-05-20 00:00:10.240000 2019-05-20 00:00:10.468000 \nTimesteps 1 1 \nError Flag Data < lower bound, -50 Data < lower bound, -50 \n\n 2 \nVariable Name probe3 \nStart Time 2019-05-20 00:00:10.716000 \nEnd Time 2019-05-20 00:00:10.716000 \nTimesteps 1 \nError Flag Data > upper bound, 50 \n"
]
],
[
[
"### Check for stagnant data\nThe final quality control test checks for stagnant data by looking for data that changes by less than 0.001 within a 0.02 second moving window. ",
"_____no_output_____"
]
],
[
[
"# Define expected lower bound (no upper bound is specified in this example)\nexpected_bound = [0.001, None] \n\n# Define the moving window, in seconds\nwindow = 0.02 \n\n# Run the delta quality control test\nresults = qc.check_delta(results['cleaned_data'], expected_bound, window) \n\n# Plot cleaned data\nresults['cleaned_data'].plot(figsize=(15,5), ylim=(-60,60))\n\n# Print test results summary\nprint(results['test_results'].T) ",
" 0\nVariable Name probe2\nStart Time 2019-05-20 00:00:10.400000\nEnd Time 2019-05-20 00:00:10.544000\nTimesteps 73\nError Flag Delta < lower bound, 0.001\n"
]
],
[
[
"## Cleaned Data\nThe cleaned data can be used directly in MHKiT analysis, or the missing values can be replaced using various methods before analysis is run. \nData replacement strategies are generally defined on a case by case basis. Pandas includes methods to interpolate, replace, and fill missing values.",
"_____no_output_____"
]
],
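[
[
"For instance (an illustrative sketch, not part of the original example), the missing values could be filled with standard pandas methods:\n\n```python\n# linear interpolation across the NaNs left by the QC tests\nfilled = results['cleaned_data'].interpolate(method='linear')\n\n# or carry the last valid observation forward\nfilled = results['cleaned_data'].ffill()\n```",
"_____no_output_____"
]
],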
[
[
"# Extract final cleaned data for MHKiT analysis\ncleaned_data = results['cleaned_data'] ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0afd0b81905ddeac61e5da4309069f8396b0e73 | 45,494 | ipynb | Jupyter Notebook | Final Project.ipynb | jesseddeng/Machine-learning-Final-Project | 87284d51e1a968035feae33a1f9f44754a144a00 | [
"MIT"
] | null | null | null | Final Project.ipynb | jesseddeng/Machine-learning-Final-Project | 87284d51e1a968035feae33a1f9f44754a144a00 | [
"MIT"
] | null | null | null | Final Project.ipynb | jesseddeng/Machine-learning-Final-Project | 87284d51e1a968035feae33a1f9f44754a144a00 | [
"MIT"
] | null | null | null | 38.521592 | 8,992 | 0.53407 | [
[
[
"import numpy as np\nimport pandas as pd\nimport scipy as sp\nfrom scipy.stats import mode\nfrom sklearn import linear_model\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\nfrom sklearn import preprocessing\nimport sklearn as sk\nimport sklearn.discriminant_analysis as da\nimport sklearn.neighbors as knn\nfrom IPython.display import Markdown, display\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn import cross_validation, ensemble, preprocessing, metrics\nfrom sklearn.externals import joblib\n\n%matplotlib inline",
"C:\\Users\\cdrgv\\Anaconda3\\lib\\site-packages\\sklearn\\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"df = pd.read_csv('listings_new_york_2018.csv')",
"C:\\Users\\cdrgv\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:2785: DtypeWarning: Columns (43,87,88) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
]
],
[
[
"## 全部的columns名稱",
"_____no_output_____"
]
],
[
[
"df.columns.values",
"_____no_output_____"
]
],
[
[
"## 把全部的分數相關的平均起來成新的column",
"_____no_output_____"
]
],
[
[
"col = df.loc[: , \"review_scores_accuracy\":\"review_scores_value\"]\ndf['review_scores_mean'] = col.mean(axis=1)\n",
"_____no_output_____"
]
],
[
[
"## 留下來的attributes",
"_____no_output_____"
]
],
[
[
"cols_to_keep = [\n 'latitude',\n 'longitude',\n 'property_type',\n 'room_type',\n 'accommodates',\n 'bathrooms',\n 'bedrooms',\n 'beds',\n 'price','review_scores_mean'\n]\ndf = df[cols_to_keep]",
"_____no_output_____"
]
],
[
[
"## 清理price的符號 只剩下數字和小數點",
"_____no_output_____"
]
],
[
[
"df['price'] = df['price'].replace('[^(0-9).]','', regex=True).replace('[(]','-', regex=True).astype(float)\n\ndisplay(df.head())\n\n",
"_____no_output_____"
]
],
[
[
"## 要預測的欄位 price換成log price",
"_____no_output_____"
]
],
[
[
"df['log_price'] = np.log(df['price'].values)\n",
"_____no_output_____"
]
],
[
[
"## drop price",
"_____no_output_____"
]
],
[
[
"data = df.drop('price', axis=1)\nlist(data.columns.values)\ndata.head()",
"_____no_output_____"
]
],
[
[
"## Missing Values \n找出各欄位有多少NAN",
"_____no_output_____"
]
],
[
[
"#Replace blanks with NaNs\ndata = data.replace('_', np.nan)\ndata = data.replace(' ', np.nan)\ndata = data.replace([np.inf, -np.inf], np.nan)\ncol_analysis = []\nfor column in data.columns:\n numNulls = len(data[column][data[column].isnull()])\n totalLength = len(data[column])\n dict1 = {'Name':column,'DataType':data[column].dtype, 'NumberOfNulls':numNulls, 'PercentageNulls':numNulls*100.0/totalLength}\n col_analysis.append(dict1)\n \ncol_anal_df = pd.DataFrame(col_analysis)[['Name', 'DataType','NumberOfNulls','PercentageNulls']].sort_values(by='PercentageNulls', ascending=False)\n\nuseful_cols = col_anal_df[col_anal_df.PercentageNulls < 50.0]\n\nprint('List of Predictors and their respective percentages of missing values')\ndisplay(useful_cols.head(28))\n\nfor cols in data.columns.values:\n if (np.any(useful_cols.Name.values == cols) == False):\n data.drop(cols, axis=1, inplace=True)\n \ndata.head(5)\n\n",
"List of Predictors and their respective percentages of missing values\n"
]
],
[
[
"## Impute Missing Values\n把空白的填補成整個欄位的平均值",
"_____no_output_____"
]
],
[
[
"#Use Mean for Real values Columns\nreal_value_cols = useful_cols[useful_cols.DataType == 'float64']\n\nimp = Imputer(missing_values='NaN', strategy='mean', axis=0)\n\ndata[real_value_cols.Name.values] = imp.fit_transform(data[real_value_cols.Name.values])\n#Use Highest frequency for categorical columns\ncategorical_value_cols = useful_cols[useful_cols.DataType == 'object'].Name.values\ndata[categorical_value_cols] = data[categorical_value_cols].apply(lambda x:x.fillna(x.value_counts().index[0]))\n\ndata.head()\ndata.dtypes",
"_____no_output_____"
],
[
"\ndata = data.dropna()\n",
"_____no_output_____"
]
],
[
[
"## log_price的直方圖 (可以看出試常態分佈)",
"_____no_output_____"
]
],
[
[
"data.log_price.hist()",
"_____no_output_____"
]
],
[
[
"## Convert Categorical Variables to dummy integer values here\n- We convert the categorical variables to numeric here such that we can run models that work only with numbers\n\n## One-Hot Encoding\n把nomial的欄位換成數字",
"_____no_output_____"
]
],
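[
[
"Illustrative sketch (our addition, not executed in this notebook): true one-hot encoding with pandas would expand each category into its own 0/1 indicator column, e.g.\n\n```python\nimport pandas as pd\n\n# expands 'property_type' and 'room_type' into indicator columns\ndata_dummies = pd.get_dummies(data, columns=['property_type', 'room_type'])\n```\n\nThe cell below instead label-encodes each category as an integer, which keeps the column count unchanged (the `get_dummies` variant is present there but commented out).",
"_____no_output_____"
]
],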
[
[
"\ndata_ohe = data.copy(deep= True)\n\n#Encode categorical variables\ndef encode_categorical(array):\n return preprocessing.LabelEncoder().fit_transform(array) \n\n\ncategorical_value_cols = useful_cols[useful_cols.DataType == 'object'].Name.values\n\n#print(categorical_value_cols)\n#Convert Categories to numbers here\n\n\ndata_ohe[categorical_value_cols] = data_ohe[categorical_value_cols].apply(encode_categorical)\n\n\n#data_ohe['property_type'] = data_ohe['property_type'].apply(encode_categorical)\n\n# Apply one hot endcoing\n# Leads to inferior performance and hence we disable for now\n#data_ohe = pd.get_dummies(data_ohe.ix[:,:-1], columns=categorical_value_cols)\n\nprint ('Final Dataset ready for modelling after filling in missing values, and encoding categorical variables')\ndata_ohe.head()\n",
"Final Dataset ready for modelling after filling in missing values, and encoding categorical variables\n"
]
],
[
[
"## Separate response from predictors",
"_____no_output_____"
]
],
[
[
"\nx = data_ohe.values[:, :-1]\ny = data_ohe.values[:, -1]\n\n#response = df_filtered[['log_price']]\n#predictors = df_filtered.drop(['log_price'], axis=1)",
"_____no_output_____"
]
],
[
[
"## Split into train/test",
"_____no_output_____"
]
],
[
[
"\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.4, random_state=42)",
"_____no_output_____"
]
],
[
[
"## Simple Regression Model",
"_____no_output_____"
]
],
[
[
"#OLS regression\nclf = LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)\nclf.fit(x_train, y_train)\npredicted = clf.predict(x_test)\n\nscore = sk.metrics.r2_score(y_test, predicted)\nprint('sklearn: R2 score for Linear Regression is: {}'.format(score))",
"sklearn: R2 score for Linear Regression is: 0.5703453525474875\n"
],
[
"from sklearn import cross_validation, ensemble, preprocessing, metrics\n# 建立 random forest 模型\nforest = ensemble.RandomForestClassifier(n_estimators = 100)\ny_train = np.array(y_train, dtype=int)\nforest_fit = forest.fit(x_train, y_train)\n\n# 預測\ntest_y_predicted = forest.predict(x_test)\ny_test = np.array(y_test, dtype=int)\n# 績效\nscore = sk.metrics.r2_score(y_test, test_y_predicted)\nprint('sklearn: R2 score for Random Forest is: {}'.format(score))",
"sklearn: R2 score for Linear Regression is: 0.4082299522656714\n"
]
],
[
[
"## 把model打包",
"_____no_output_____"
]
],
[
[
"joblib.dump(clf, 'predicted.pkl')\nestimator = joblib.load('predicted.pkl')\nestimator",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0afd7f4eea142af2c12d348b8c787b2aa2be413 | 163,391 | ipynb | Jupyter Notebook | test/Nuwa.ipynb | raulniconico/Nuwa | b88b95f22473540da54f71122f1b244b37c69402 | [
"MIT"
] | null | null | null | test/Nuwa.ipynb | raulniconico/Nuwa | b88b95f22473540da54f71122f1b244b37c69402 | [
"MIT"
] | null | null | null | test/Nuwa.ipynb | raulniconico/Nuwa | b88b95f22473540da54f71122f1b244b37c69402 | [
"MIT"
] | null | null | null | 157.71332 | 55,103 | 0.838681 | [
[
[
"import random\nimport copy\nimport os\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport warnings\n# warnings.filterwarnings(\"ignore\")\n%matplotlib inline",
"_____no_output_____"
],
[
"class Dataset:\n def __init__(self,X,y,proportion=0.8,shuffle=True, mini_batch=0):\n \"\"\"\n Dataset class provide tools to manage dataset\n\n :param X: ndarray, features, highly recommand ndarray\n :param y: ndarray, labels\n :param proportion: number between 0 and 1, the proportion of train dataset and test dataset\n :param shuffle: boolean,\n :param mini_batch mini batch size, 0 by default, in this case no mini batch size dataset will be generated\n \"\"\"\n self.X = X\n self.y = y\n self.trainset = None\n self.testset = None\n self.validationset = None\n self.proportion = proportion\n self.shuffle = shuffle\n self.mini_batch = mini_batch\n self.allset = np.concatenate((X,y),axis=1)\n self.minisets = []\n\n if self.shuffle:\n # automatic distribution\n self.distribute()\n\n # @classmethod\n # def imageset(cls, path, proportion = 0.8, shuffle = None):\n # pass\n\n def distribute(self):\n \"\"\"\n This function will automatically distribute train and test dataset\n call this function to reshuffle all the dataset and also generate new train and test set\n \"\"\"\n n = np.shape(self.X)[0]\n samples = np.concatenate((self.X,self.y),axis=1)\n random.shuffle(samples)\n # sample train and test dataset\n self.trainset = samples[0:round(n * self.proportion),:]\n self.testset = samples[round(n * self.proportion) + 1:, :]\n\n def getX(self):\n return self.X\n\n def gety(self):\n return self.y\n\n def getminibatch(self):\n return self.mini_batch\n\n def gettrainset(self):\n \"\"\"\n :return: return train dataset with respect of proportion\n \"\"\"\n return Dataset(self.trainset[:, 0:self.X.shape[1]], self.trainset[:, self.X.shape[1]:], mini_batch=self.mini_batch)\n\n def gettestset(self):\n \"\"\"\n :return: test dataset with respect of proportion\n \"\"\"\n return Dataset(self.testset[:, 0:self.X.shape[1]], self.testset[:, self.X.shape[1]:], mini_batch=self.mini_batch)\n\n def getminiset(self):\n \"\"\"\n get mini sets with mini batch size\n :return: Dataset array\n \"\"\"\n spilit_list = np.arange(self.mini_batch, self.allset.shape[0], self.mini_batch)\n minisets = np.split(self.allset, spilit_list)\n for i in range(len(minisets)):\n self.minisets.append(Dataset(minisets[i][:, 0:self.X.shape[1]], minisets[i][:, self.X.shape[1]:],shuffle =False, mini_batch=self.mini_batch))\n return self.minisets\n",
"_____no_output_____"
],
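[
"# Illustrative usage sketch (our addition): build a toy dataset and inspect the split\nimport numpy as np\nX = np.random.rand(100, 3)\ny = np.random.rand(100, 3)\ndataset = Dataset(X, y, mini_batch=16)\nprint(dataset.gettrainset().getX().shape)   # train split features, roughly 80% of the rows\nprint(len(dataset.getminiset()))            # number of mini-batches of size 16",
"_____no_output_____"
],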
[
"class NN:\n import numpy as np\n def __init__(self,dataset):\n \"\"\"\n This class contains Activation function util class, Layer class for construct networks, it contains several extend classes like LinearLayer, Conv2D etc.\n\n Examples:\n\n layer_list = [NN.Layer('Linear',3,10,'sigmoid',BN=True), NN.Layer('Linear',10,100,'sigmoid',BN=True),\n NN.Layer('Linear',100,10,'sigmoid',BN=True),NN.Layer('Linear',10,3,'none') ]\n\n dataset = Dataset(X, y, mini_batch= 64)\n\n nn = NN(dataset)\n\n layer_list is a list has 4 layers all are Layer class. Note that here we don't use LinearLayer,\n to use LinearLayer, replace NN.Layer('Linear',3,10,'sigmoid',BN=True) as NN.LinearLayer(,3,10,'sigmoid',BN=True) or\n NN.LinearLayer(3,10,'sigmoid'), NN.BN()\n\n :param dataset: Dataset class\n \"\"\"\n # self.input = input\n self.dataset = dataset\n self.layer_list = []\n\n def addlayers(self,layers):\n self.layer_list = layers\n\n def getlayers(self):\n return self.layer_list\n\n # activation functions\n class ActivationFunc:\n \"\"\"\n ActivationFunc is an util class with different types of activation function.\n it can\n \"\"\"\n @staticmethod\n def sigmoid(x):\n \"\"\"\n Sigmoid function\n \"\"\"\n return 1.0 / (1.0 + np.exp(-x))\n\n @staticmethod\n def ReLU(x):\n \"\"\"\n :param x: ndarray, \n :return:\n \"\"\"\n return np.maximum(0, x)\n\n @staticmethod\n def LeakyReLU(x):\n return np.where(x > 0, x, x * 0.01)\n\n @staticmethod\n def tanh(x):\n return np.tanh(x)\n\n @staticmethod\n def none(x):\n return x\n\n # Layer class\n class Layer:\n def __init__(self, type, input_dim, output_dim, activation, BN = False):\n \"\"\"\n Define a layer contains activation function or other normalization.\n\n :param type: Layer type, choose 'Linear', 'Conv' etc\n :param input_dim: input dim or previous layer's output\n :param output_dim: output dim of this layer\n :param activation: activation function, it now support \"sigmoid\", \"ReLU\", \"LeakyReLU\", \"tanh\" and \"none\" for no activation function\n :param BN, batch normalization , Default False\n\n Examples:\n\n A linear layer with input dim = 3 and output dim = 10, following batch normalization and a sigmoid activation function\n NN.Layer('Linear',3,10,'sigmoid',BN=True)\n\n \"\"\"\n self.type = type\n self.input_dim = input_dim\n self.output_dim = output_dim\n self.activation = activation\n self.BN = BN\n\n def getinputdim(self):\n return self.input_dim\n\n def getoutputdim(self):\n return self.output_dim\n\n def gettype(self):\n return self.type\n\n def getact(self, x):\n func_name = \"NN.ActivationFunc.\"+self.activation\n func = eval(func_name)\n return func(x)\n\n def getactname(self):\n return self.activation\n\n def getBN(self):\n return self.BN\n\n class LinearLayer(Layer):\n \"\"\"\n Define a linear layer\n\n As same as Layer except no need to clarify type\n \"\"\"\n def __init__(self, input_dim, output_dim):\n self.type = \"Linear\"\n self.input_dim = input_dim\n self.output_dim = output_dim\n\n\n class Conv2DLayer(Layer):\n \"\"\"\n Define a 2D convolutional layer_\n \"\"\"\n def __init__(self, input_size, kernel_size, stride, padding):\n \"\"\"\n initialize 2D conv layer\n\n :param input_size: Union[tuple, ndarray] layer's input size\n :param kernel_size: Union[tuple, ndarray] layer's kernel size\n :param stride: Int\n :param padding: Int\n \"\"\"\n self.type = \"Conv2D\"\n self.input_size = input_size\n self.kernel_size = kernel_size\n self.stride = stride\n self.padding = padding\n\n def getimagesize(self):\n return self.image_size\n\n def 
getkernelsize(self):\n return self.kernel_size\n\n def getstride(self):\n return self.stride\n\n def getpadding(self):\n return self.padding\n\n class BN(Layer):\n def __init__(self):\n \"\"\"\n Define a batch normalization layer\n \"\"\"\n self.type = \"BN\"\n self.activation =\"none\"\n",
"_____no_output_____"
],
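[
"# Illustrative sketch (our addition): assemble the layer list from the NN docstring example\nlayer_list = [NN.Layer('Linear',3,10,'sigmoid',BN=True), NN.Layer('Linear',10,100,'sigmoid',BN=True),\n              NN.Layer('Linear',100,10,'sigmoid',BN=True), NN.Layer('Linear',10,3,'none')]\nnn = NN(dataset)\nnn.addlayers(layer_list)\nprint([layer.gettype() for layer in nn.getlayers()])",
"_____no_output_____"
],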
[
"class Optimizer:\n def __init__(self,nn ,optimizer,loss_function, batch_size=8,epoch=20000,lr=0.0001,decay_rate=0):\n \"\"\"\n :param nn: input an NN class\n :param optimizer: optimizer as \"GD\", \"SGD\" etc\n :param batch_size: batch size for mini batch optimization\n :param epoch: epoch number\n :param lr: learning rate\n :param decay_rate: float, learning rate decay rate by default is 0\n \"\"\"\n\n self.nn = nn\n self.optimizer = optimizer\n self.loss_function = loss_function\n self.batch_size = batch_size\n self.epoch = epoch\n self.lr = lr\n self.weight_list = None\n self.gradient_list = None\n self.loss_list = None\n self.passer_list = None\n self.decay_rate = decay_rate\n\n def getgradientlist(self):\n return self.gradient_list\n\n def getlosslist(self):\n return self.loss_list\n\n def getweightlist(self):\n return self.weight_list\n\n class LossFunc:\n class Logarithmic:\n def __init__(self, y_true=None, y_pred=None, eps=1e-16):\n self.y_true = y_true\n self.y_pred = y_pred\n self.eps = eps\n \"\"\"\n Loss function we would like to optimize (minimize)\n We are using Logarithmic Loss\n http://scikit-learn.org/stable/modules/model_evaluation.html#log-loss\n \"\"\"\n def loss(self):\n self.y_pred = np.maximum(self.y_pred, self.eps)\n self.y_pred = np.minimum(self.y_pred, (1 - self.eps))\n return -(np.sum(self.y_true * np.log(self.y_pred)) + np.sum((1 - self.y_true) * np.log(1 - self.y_pred))) / len(self.y_true)\n\n class Quadratic:\n def __init__(self, y_true=None, y_pred=None, norm = 0):\n self.y_true = y_true\n self.y_pred = y_pred\n self.norm = norm\n\n def loss(self):\n return 1 / self.y_true.shape[0] * 0.5 * np.sum((self.y_pred - self.y_true) ** 2)\n\n def diff(self):\n return 2 * (self.y_pred - self.y_true)\n\n class MSE:\n def __init__(self, y_true=None, y_pred=None, x=None):\n self.y_true = y_true\n self.y_pred = y_pred\n self.x = x\n\n def loss(self):\n return 1 / np.shape(self.y_true)[0] * np.sum((self.y_pred - self.y_true) ** 2)\n\n def diff(self):\n return 2 / np.shape(self.y_true)[0] * np.sum(self.x @ (self.y_pred - self.y_true))\n\n\n class Node:\n def __init__(self, data: np.ndarray, type : str):\n \"\"\"\n Node class, is the node of binary tree which has two child node: left and right.\n It can also be presented as weight. Every passer during the back propagation is saved as\n a node class contains data, type, back and cache for calculation\n\n :param data: ndarray, value given during forward propagation\n :param type: str, the type of node, it can be \"weight\", \"data\" or calculation like \"@\", \"+\" etc\n :param back: ndarray, value updated during back propagation\n :param cache: array_like stock forward propagation's detail and middle value for the convenient of back propagation\n \"\"\"\n self.left = None\n self.right = None\n self.data = data\n self.type = type\n self.back = None\n self.cache = None\n self.momentum = None\n\n def getleft(self):\n return self.left\n\n def getright(self):\n return self.right\n\n def gettype(self):\n return self.type\n\n def getdata(self):\n return self.data\n\n def getback(self):\n return self.back\n\n def getmomentum(self):\n return self.momentum\n\n class WeightIni:\n \"\"\"\n Provide weight initial functions. 
util class\n \"\"\"\n @staticmethod\n def init_linear_weight(input_dim, output_dim):\n return np.random.uniform(-1, 1, (input_dim, output_dim))\n\n @staticmethod\n def init_BN_weight(dim):\n\n return np.ones((1, dim)), np.ones((1, dim), dtype=\"float32\")\n\n @staticmethod\n def init_conv2D_kernel(shape):\n \"\"\"\n :param shape: Union[tuple, int, float] shape of kernel\n :return:\n \"\"\"\n return np.random.random(shape)\n\n @staticmethod\n def initial_weight_list(layer_list):\n \"\"\"\n @Staticmethod. Given layer list and return respected initiall weight list\n\n :param layer_list: list, layer list\n :return: list, list of weight in Node class\n \"\"\"\n weight_list = []\n # initial weights in weight list by their type\n layer_num = len(layer_list)\n for i in range(layer_num):\n # linear weight operation\n if layer_list[i].gettype() == \"Linear\":\n weight_list.append(Optimizer.Node(Optimizer.WeightIni.init_linear_weight(layer_list[i].getinputdim(), layer_list[i].getoutputdim()),\"weight\"))\n elif layer_list[i].gettype() == \"BN\":\n dim = layer_list[i-1].getoutputdim()\n gamma, beta = Optimizer.WeightIni.init_BN_weight(dim)\n weight_list.append(Optimizer.Node(gamma,\"weight\"))\n weight_list.append(Optimizer.Node(beta,\"weight\"))\n layer_list[i].input_dim = dim\n layer_list[i].output_dim = dim\n # kernel parse operation\n elif layer_list[i].gettype() == \"Conv2D\":\n weight_list.append(Optimizer.Node(Optimizer.WeightIni.init_conv2D_kernel(layer_list[i].getkernelsize()),\"weight\"))\n else:\n return NameError\n # check if you need BN init\n if layer_list[i].getBN():\n dim = layer_list[i].getoutputdim()\n gamma, beta = Optimizer.WeightIni.init_BN_weight(dim)\n weight_list.append(Optimizer.Node(gamma,\"weight\"))\n weight_list.append(Optimizer.Node(beta,\"weight\"))\n\n return weight_list\n\n @staticmethod\n def forword(passer, weight_list, layer_list):\n layer_num = len(layer_list)\n passer_list = [Optimizer.Node(passer, \"data\")]\n # Every layer not necessarily has only one weight, like BN has 2 weights in a single layer\n weight_count = 0\n\n for i in range(layer_num):\n if layer_list[i].gettype() =='Linear':\n passer = passer@weight_list[weight_count].getdata()\n # append binary tree after inner product of weight and previous layer\n node = Optimizer.Node(passer,\"@\")\n node.left = passer_list[-1]\n node.right = weight_list[weight_count]\n passer_list.append(node)\n\n weight_count += 1\n\n if layer_list[i].getBN():\n node_cache = [passer, np.var(passer,axis = 0), np.mean(passer, axis=0 )]\n\n passer = (passer - np.mean(passer,axis=0))/np.sqrt(np.var(passer,axis=0))\n node = Optimizer.Node(passer,\"normalization\")\n node.cache = node_cache\n node.left = passer_list[-1]\n passer_list.append(node)\n\n node = Optimizer.Node(passer,\"*scalar\")\n node.left = passer_list[-1]\n node.right = weight_list[weight_count]\n passer_list.append(node)\n\n passer = passer + weight_list[weight_count+1].getdata()\n node = Optimizer.Node(passer,\"+scalar\")\n node.left = passer_list[-1]\n node.right = weight_list[weight_count+1]\n passer_list.append(node)\n\n weight_count += 2\n\n passer = layer_list[i].getact(passer)\n #append binary tree after activation function\n node = Optimizer.Node(passer,layer_list[i].getactname())\n node.left = passer_list[-1]\n passer_list.append(node)\n\n # elif layer_list[j].gettype() == \"Conv2D\":\n else: raise NameError\n\n return passer_list\n\n @staticmethod\n def backpropagation(node):\n epsilon = 1e-8\n if node.getleft() is not None:\n if node.gettype() == \"@\":\n 
node.getleft().back = node.getback()@node.getright().getdata().T\n node.getright().back = node.getleft().getdata()[email protected]()\n elif node.gettype() == \"sigmoid\":\n node.getleft().back = np.multiply(node.getback(),np.multiply(NN.ActivationFunc.sigmoid(node.getback()),\n 1-NN.ActivationFunc.sigmoid(node.getback())))\n elif node.gettype() == \"ReLU\":\n back = copy.deepcopy(node.getback())\n back[back<=0] = 0\n node.getleft().back = back\n elif node.gettype() == \"LeakyReLU\":\n back = copy.deepcopy(node.getback())\n back[back<0] = 0.01*back[back<0]\n node.getleft().back = back\n elif node.gettype() == \"tanh\":\n node.getleft().back = np.multiply((np.ones(node.getback().shape)-NN.ActivationFunc.tanh(node.getback())**2),\n node.getback())\n elif node.gettype() == \"+\":\n node.getleft().back = node.getback()\n node.getright().back = node.getback()\n elif node.gettype() == \"-\":\n node.getleft().back = node.getback()\n node.getright().back = -node.getback()\n elif node.gettype() == \"+scalar\":\n node.getleft().back = node.getback()\n node.getright().back = np.sum(node.getback(),axis=0)\n elif node.gettype() == \"*scalar\":\n node.getleft().back = node.getright().getdata() * node.getback()\n node.getright().back = np.sum(node.getleft().getdata().T,axis=0)@node.getback()\n elif node.gettype() == \"none\":\n node.getleft().back = node.getback()\n elif node.gettype() == \"normalization\":\n # cache = [x, sigma_beta^2, mu_beta]\n\n # dx = 1/N / std * (N * dx_norm -\n # dx_norm.sum(axis=0) -\n # x_norm * (dx_norm * x_norm).sum(axis=0))\n\n x = node.cache[0]\n sigma2 = node.cache[1]\n mu = node.cache[2]\n\n dl_dx_hat = node.getback()\n dl_dsigma2 = np.sum(dl_dx_hat,axis=0) * (x-mu) * -0.5*(sigma2+epsilon)**-3/2\n dl_dmu = np.sum(dl_dx_hat,axis=0) * -1/np.sqrt(sigma2+epsilon) + dl_dsigma2 * np.sum(-2*(x-mu),axis= 0)/x.shape[0]\n dl_dx = dl_dx_hat * 1/np.sqrt(sigma2+epsilon) + dl_dsigma2*2*(x-mu)/x.shape[0] + dl_dmu /x.shape[0]\n node.getleft().back = dl_dx\n\n Optimizer.backpropagation(node.getleft())\n else:\n return\n\n def lrdecay(self, iter):\n \"\"\"\n Learning rate decay function. Given iteration, modify learning rate\n\n :param iter: int, iteration count\n \"\"\"\n self.lr = 1 / (1 + self.decay_rate * iter) * self.lr\n\n def GD(self, root: Node, weight_list):\n \"\"\"\n Gradient descent, do the back propagation and update weight list\n\n :param root: Node, the root of passer binary tree\n :param weight_list: list, weight list\n :return: list, updated weight list\n \"\"\"\n Optimizer.backpropagation(root)\n gradient_list = []\n\n for node in weight_list:\n node.data = node.data - self.lr * node.back\n gradient_list.append(node.back)\n return weight_list, gradient_list\n\n def SGD(self, weight_list, passer_list):\n # we resume mini-batch equals 1 each time\n \"\"\"\n Stochastic gradient descent. 
It takes weight list and passer list as inputs, it will\n :param weight_list:\n :param passer_list:\n :return:\n \"\"\"\n def init_random_node(node, random_num_list, mini_weight_list):\n node.data = node.data[random_num_list,:]\n node.back = None\n if node.getright() is not None:\n mini_weight_list.append(node.getright())\n if node.getleft() is not None:\n init_random_node(node.getleft(), random_num_list, mini_weight_list)\n else: return\n\n # obs = observation number = output layer's dim 0\n num_obs = self.nn.dataset.gettrainset().getX().shape[0]\n mini_passer_list = copy.deepcopy(passer_list)\n root = mini_passer_list[-1]\n gradient_list = []\n\n # randomly pick observations from original obs\n random_num_list = np.random.randint(0, num_obs, num_obs)\n\n # initial random node\n mini_weight_list = []\n init_random_node(root, random_num_list, mini_weight_list)\n\n # back propagation\n root.back = 2 * (- self.nn.dataset.gettrainset().gety()[random_num_list] + root.getdata()[random_num_list])\n Optimizer.backpropagation(root)\n\n i = 0\n # update weight list\n for weight in weight_list:\n weight.data = weight.data - self.lr * mini_weight_list[-i-1].back\n gradient_list.append(mini_weight_list[-i-1].back)\n i = i + 1\n\n return weight_list, gradient_list\n\n def momentumgd(self, root: Node, weight_list, beta = 0.2):\n \"\"\"\n\n :param root: Node, the root of passer binary tree\n :param weight_list: list, weight list\n :param beta: momentum conservation rate\n :return: list, updated weight list\n \"\"\"\n Optimizer.backpropagation(root)\n gradient_list = []\n\n for node in weight_list:\n if node.getmomentum() is None:\n node.momentum = (1 - beta) * node.getback()\n else:\n node.momentum = beta * node.getmomentum() + (1 - beta) * node.getback()\n node.data = node.getdata() - self.lr * (1 - beta) * node.getback()\n gradient_list.append(node.back)\n return weight_list, gradient_list\n\n def RMSprop(self, root: Node, weight_list, beta = 0.2, eps =1e-10):\n\n Optimizer.backpropagation(root)\n gradient_list = []\n\n for node in weight_list:\n if node.getmomentum() is None:\n node.momentum = (1 - beta) * node.getback() ** 2\n else:\n node.momentum = beta * node.getmomentum() + (1 - beta) * node.getback() ** 2\n\n node.data = node.getdata() - self.lr * node.getback() / (np.sqrt(node.getmomentum()) + eps)\n gradient_list.append(node.back)\n return weight_list, gradient_list\n\n\n def Adam(self, root: Node, weight_list, beta_mom = 0.2, beta_rms = 0.2, eps = 1e-10):\n \"\"\"\n Adam optimizer\n :param root:\n :param weight_list:\n :param beta_mom:\n :param beta_rms:\n :param eps:\n :return:\n \"\"\"\n Optimizer.backpropagation(root)\n gradient_list = []\n\n for node in weight_list:\n if node.getmomentum() is None:\n node.momentum = [(1 - beta_mom) * node.getback(), (1 - beta_rms) * node.getback() ** 2]\n else:\n node.momentum[0] = (beta_mom * node.getmomentum()[0] + (1 - beta_mom) * node.getback()) / (1 - beta_mom)\n node.momentum[1] = (beta_rms * node.getmomentum()[1] + (1 - beta_rms) * node.getback() ** 2 ) / (1 - beta_rms)\n\n node.data = node.getdata() - self.lr * node.getmomentum()[0] / (np.sqrt(node.getmomentum()[1])+eps)\n gradient_list.append(node.back)\n return weight_list, gradient_list\n\n def train(self):\n \"\"\"\n train process, it will first initial weight, loss, gradient and passer list, then, optimize weights by given optimizer.\n In the end, calculate loss and step to the next epoch.\n\n It will finally stock all the weight, loss, gradient and passer during the training process\n 
\"\"\"\n layer_list = self.nn.getlayers()\n\n # initial weight, loss and gradient list\n self.weight_list = [[] for i in range(self.epoch+1)]\n self.weight_list[0] = Optimizer.WeightIni.initial_weight_list(layer_list)\n self.loss_list = np.zeros(self.epoch)\n self.gradient_list = [[] for i in range(self.epoch)]\n self.passer_list = [[] for i in range(self.epoch)]\n\n # for GD and SGD, they use full dataset, so need only read X and y once\n if self.optimizer ==\"GD\" or self.optimizer == \"SGD\":\n X = self.nn.dataset.gettrainset().getX()\n X = Optimizer.Node(X, \"data\")\n for i in range(self.epoch):\n # forward propagation\n self.passer_list[i] = Optimizer.forword(X.getdata(), self.weight_list[i],layer_list)\n root = self.passer_list[i][-1]\n\n # calculate loss by using: loss 2 * (-self.nn.dataset.gettrainset().gety() + root.getdata())\n loss_func = self.loss_function(self.nn.dataset.gettrainset().gety(), root.getdata())\n self.loss_list[i] = loss_func.loss()\n\n root.back = loss_func.diff()\n # upgrade gradient by selected optimizer\n if self.optimizer ==\"GD\":\n self.weight_list[i+1], self.gradient_list[i] = Optimizer.GD(self, root, self.weight_list[i])\n\n elif self.optimizer ==\"SGD\":\n self.weight_list[i+1], self.gradient_list[i] = Optimizer.SGD(self, self.weight_list[i], self.passer_list[i])\n\n # mini batch type gradient descent\n else:\n for i in range(self.epoch):\n start_time = time.time()\n # get mini batch\n minisets = self.nn.dataset.gettrainset().getminiset()\n epoch_weight_list = [copy.deepcopy(self.weight_list[i])]\n epoch_loss_list = np.zeros(len(minisets))\n\n # GD for every mini batch\n for j in range(len(minisets)):\n X_bar = minisets[j]\n self.passer_list[i].append(Optimizer.forword(X_bar.getX(), epoch_weight_list[j], layer_list))\n\n root = self.passer_list[i][j][-1]\n loss_func = self.loss_function(X_bar.gety(), root.getdata())\n\n epoch_loss_list[j] = loss_func.loss()\n root.back = loss_func.diff()\n root.momentum = root.getback()\n\n if self.optimizer == \"minibatchgd\":\n weight, gradient = Optimizer.GD(self, root, epoch_weight_list[j])\n elif self.optimizer == \"momentumgd\":\n weight, gradient = Optimizer.momentumgd(self, root, epoch_weight_list[j])\n elif self.optimizer == \"RMSprop\":\n weight, gradient = Optimizer.RMSprop(self, root, epoch_weight_list[j])\n elif self.optimizer == \"Adam\":\n weight, gradient = Optimizer.Adam(self, root, epoch_weight_list[j])\n else: raise NameError\n epoch_weight_list.append(weight)\n\n self.weight_list[i+1]= epoch_weight_list[-1]\n self.gradient_list[i] = gradient\n\n self.loss_list[i] = sum(epoch_loss_list)/len(epoch_loss_list)\n\n # learnign rate decay\n\n self.lrdecay(i)\n # every epoch shuffle the dataset\n self.nn.dataset.distribute()\n\n if (i + 1) % 1 ==0:\n used_time = time.time() - start_time\n print(\"epoch \" + str(i + 1) + ', Training time: %.4f' % used_time + ', Training loss: %.6f' % self.loss_list[i])\n\n def test(self):\n \"\"\"\n Use trained weight on testset for the evaluation of the model\n :return: model prediction and loss on the testset\n \"\"\"\n weight = self.weight_list[-1]\n layer_list = self.nn.getlayers()\n testset = self.nn.dataset.gettestset()\n passer = testset.getX()\n\n passer_list = self.forword(passer,weight,layer_list)\n predicted = passer_list[-1].getdata()\n\n loss = self.loss_function.loss(testset.gety(), predicted)\n return predicted, loss\n\n def predict(self, X):\n \"\"\"\n Use trained weight on X and output prediction\n :param X: ndarray, feature data wish to be predicted\n 
:return: model's prediction by using trained data\n \"\"\"\n passer = X\n weight = self.weight_list[-1]\n passer_list = self.forword(passer, weight, self.nn.getlayers())\n return passer_list",
"_____no_output_____"
],
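A note on the Adam branch in the cell above: it rescales the running averages by a constant 1/(1 - beta) every step instead of the usual 1/(1 - beta^t) schedule, so early steps are not bias-corrected in the textbook way. For comparison, a minimal sketch of the standard Adam update for one NumPy parameter; the names (adam_step, m, v, t) and the hyperparameter defaults are illustrative assumptions, not taken from the notebook:

import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction depends on the 1-indexed step count t, not a constant.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# usage sketch: minimise a toy quadratic
w = np.zeros(3)
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 201):
    grad = 2 * (w - np.array([1.0, 2.0, 3.0]))
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
# w is now close to [1, 2, 3]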
[
"class Visual:\n def __init__(self, optim):\n self.optim = optim\n\n def plotloss(self):\n \"\"\"\n :return: plot loss flow during the training\n \"\"\"\n plt.style.use('seaborn-whitegrid')\n fig = plt.figure()\n ax = plt.axes()\n ax.plot(self.optim.loss_list, label = 'loss')\n ax.legend(loc='upper right')\n ax.set_ylabel('Loss during the training')\n\n def plotgradientnorm(self):\n plt.style.use('seaborn-whitegrid')\n fig, axs = plt.subplots(len(self.optim.getgradientlist()[0]))\n for i in range(len(self.optim.getgradientlist()[0])):\n gradient_norm_list = []\n for j in range(len(self.optim.getgradientlist())):\n gradient_norm_list.append(np.linalg.norm(self.optim.getgradientlist()[j][i]))\n axs[i].plot(gradient_norm_list, label = 'norm 2')\n axs[i].legend(loc='upper right')\n axs[i].set_ylabel('W' + str(i) +\" norm\")\n",
"_____no_output_____"
],
[
"# total observation number\nn = 300\n# x1, x2 are generated by two\nx1 = np.random.uniform(0,1,n)\nx2 = np.random.uniform(0,1,n)\nconst = np.ones(n)\neps = np.random.normal(0,.05,n)\nb = 1.5\ntheta1 = 2\ntheta2 = 5\nTheta = np.array([b, theta1, theta2])\ny = np.array(b * const+ theta1 * x1 + theta2 * x2 + eps)\ny=np.reshape(y,(-1,1))\nX = np.array([const,x1,x2]).T",
"_____no_output_____"
],
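A quick sanity check on the synthetic regression data above: ordinary least squares should recover Theta = [1.5, 2, 5] up to the 0.05 noise. A self-contained sketch that regenerates the same kind of data (the fixed seed is an assumption for reproducibility):

import numpy as np

rng = np.random.default_rng(0)
n = 300
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x1, x2])
y = X @ np.array([1.5, 2.0, 5.0]) + rng.normal(0, 0.05, n)

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)  # close to [1.5, 2.0, 5.0]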
[
"layer_list = [NN.Layer('Linear',3,100,'LeakyReLU'),NN.Layer('Linear',100,3,'LeakyReLU'),\n NN.Layer('Linear',3,1,'none')]\ndataset = Dataset(X, y)\nnn = NN(dataset)\nnn.addlayers(layer_list)\nloss_func = Optimizer.LossFunc.Quadratic\noptim = Optimizer(nn,\"SGD\",loss_func, epoch = 10000, lr=1e-6)\noptim.train()\nvisual = Visual(optim)\nvisual.plotloss()\nvisual.plotgradientnorm()",
"_____no_output_____"
],
[
"# total observation number\nn = 10000\n# x1, x2 are generated by two\nx1 = np.random.uniform(0,1,n)\nx2 = np.random.uniform(0,1,n)\nconst = np.ones(n)\neps = np.random.normal(0,.05,n)\nb = 1.5\ntheta1 = 2\ntheta2 = 5\nTheta = np.array([b, theta1, theta2])\ny = np.array(b * const+ theta1 * x1 + theta2 * x2 + eps)\ny=np.reshape(y,(-1,1))\nX = np.array([const,x1,x2]).T",
"_____no_output_____"
],
[
"layer_list = [NN.Layer('Linear',3,10,'sigmoid',BN=True), NN.Layer('Linear',10,100,'sigmoid',BN=True),\n NN.Layer('Linear',100,10,'sigmoid',BN=True),NN.Layer('Linear',10,3,'none') ]\ndataset = Dataset(X, y, mini_batch= 64)\nnn = NN(dataset)\nnn.addlayers(layer_list)\nloss_func = Optimizer.LossFunc.Quadratic\noptim = Optimizer(nn,\"Adam\", loss_func, epoch = 10, lr=1e-2, decay_rate=0.01)\noptim.train()\nvisual = Visual(optim)\nvisual.plotloss()\nvisual.plotgradientnorm()",
"/tmp/ipykernel_15455/216912902.py:43: RuntimeWarning: overflow encountered in exp\n return 1.0 / (1.0 + np.exp(-x))\n"
]
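The RuntimeWarning in the output above ('overflow encountered in exp') comes from evaluating 1.0 / (1.0 + np.exp(-x)) for large |x|. A numerically stable formulation evaluates exp() only on non-positive arguments; this is a standalone sketch, independent of the notebook's NN.ActivationFunc:

import numpy as np

def stable_sigmoid(x):
    out = np.empty_like(x, dtype=float)
    pos = x >= 0
    # For x >= 0, exp(-x) cannot overflow.
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    # For x < 0, rewrite as exp(x) / (1 + exp(x)); exp(x) cannot overflow there.
    exp_x = np.exp(x[~pos])
    out[~pos] = exp_x / (1.0 + exp_x)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # [0.  0.5 1. ], no warning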
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0afd92f9a44ae846717a66b372b12eb045d36a9 | 34,940 | ipynb | Jupyter Notebook | ExamPrep/Shit Comp/ShitComp/Fourier_Transforms.ipynb | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | ExamPrep/Shit Comp/ShitComp/Fourier_Transforms.ipynb | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | ExamPrep/Shit Comp/ShitComp/Fourier_Transforms.ipynb | FHomewood/ScientificComputing | bc3477b4607b25a700f2d89ca4f01cb3ea0998c4 | [
"IJG"
] | null | null | null | 204.327485 | 13,636 | 0.902061 | [
[
[
"%pylab inline",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"** Problem: Fourier transforms of simple functions **\n\nWrite Python programs to calculate the coefficients in the discrete Fourier transforms of the following periodic functions sampled at $N=1000$ evenly spaced points, and make plots of their amplitudes:\n\n1) A single cycle of a square-wave with amplitude 1\n\n2) The sawtooth wave $y_n=n$\n\n3) The modulated sine wave $y_n = \\sin(\\pi n/N) \\sin(20\\pi n/N)$\n",
"_____no_output_____"
]
],
[
[
"# Import required modules\nfrom numpy import arange\nfrom numpy.fft import rfft\nfrom math import sin,pi\n\n# Square wave\ndef f(x):\n if x<0.5:\n return 1\n else:\n return -1\n \nN=1000\nx=arange(0.0,1.0,1.0/N)\ny=map(f,x)\nc=rfft(y) # Calculate the Fourier coefficients (complex numbers!)\nplot(abs(c)) # plot their magnitudes vs. mode number\nxlim(0,100) # show the first 100 modes\nshow()",
"_____no_output_____"
],
[
"# Sawtooth function\n\nN=1000\ny=arange(N)\nc=rfft(y)\nplot(abs(c))\nxlim(0,100)\nshow()",
"_____no_output_____"
],
[
"# sin wave\n\ndef f(x):\n return sin(pi*x)*sin(20*pi*x)\n\nN=1000\nx=arange(0.0,1.0,1.0/N)\ny=map(f,x)\nc=rfft(y) # Calculate the Fourier coefficients (complex numbers!)\nplot(abs(c)) # plot their magnitudes vs. mode number\nxlim(0,100) # show the first 100 modes\nshow()",
"_____no_output_____"
]
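For the square wave in the first plot, the analytic Fourier series contains only odd harmonics with amplitude 4/(pi k), so the DFT magnitudes should be close to 2N/(pi k) at odd k and near zero at even k. A quick numerical check of that claim, a sketch under the same sampling as above:

import numpy as np

N = 1000
x = np.arange(N) / N
y = np.where(x < 0.5, 1.0, -1.0)
c = np.fft.rfft(y)
for k in (1, 3, 5, 7):
    print(k, abs(c[k]), 2 * N / (np.pi * k))  # the two columns agree closely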
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0b01bb5fd05bf0a38407a5b743fc63dd7f246cd | 36,677 | ipynb | Jupyter Notebook | src/nAlert/test.ipynb | FDUJiaG/financial-analysis | 787c94d742f7c982802930e4c7ea6b1ec63b25f5 | [
"MIT"
] | 1 | 2021-12-27T11:50:36.000Z | 2021-12-27T11:50:36.000Z | src/nAlert/test.ipynb | FDUJiaG/financial-analysis | 787c94d742f7c982802930e4c7ea6b1ec63b25f5 | [
"MIT"
] | null | null | null | src/nAlert/test.ipynb | FDUJiaG/financial-analysis | 787c94d742f7c982802930e4c7ea6b1ec63b25f5 | [
"MIT"
] | null | null | null | 48.902667 | 246 | 0.450937 | [
[
[
"import tushare as ts\nimport sina_data\nimport numpy as np\nimport pandas as pd\nfrom pandas import DataFrame, Series\nfrom datetime import datetime, timedelta\nfrom dateutil.parser import parse\nimport time\nimport common_util\nimport os",
"_____no_output_____"
],
[
"def get_time(date=False, utc=False, msl=3):\n if date:\n time_fmt = \"%Y-%m-%d %H:%M:%S.%f\"\n else:\n time_fmt = \"%H:%M:%S.%f\"\n\n if utc:\n return datetime.utcnow().strftime(time_fmt)[:(msl-6)]\n else:\n return datetime.now().strftime(time_fmt)[:(msl-6)]\n\n\ndef print_info(status=\"I\"):\n return \"\\033[0;33;1m[{} {}]\\033[0m\".format(status, get_time())",
"_____no_output_____"
],
[
"def judgement(df, change_rate=0.01, buy1_rate=0.03, buy1_volume=1e5):\n float_share = df['float_share'].to_numpy().astype(np.int)\n open = df['今日开盘价'].to_numpy().astype(np.float)\n pre_close = df['昨日收盘价'].to_numpy().astype(np.float)\n limit_up = limit_up_price(pre_close)\n price = df['当前价'].to_numpy().astype(np.float)\n high = df['今日最高价'].to_numpy().astype(np.float)\n low = df['今日最低价'].to_numpy().astype(np.float)\n volume = df['成交股票数'].to_numpy().astype(np.int)\n buy_1v = df['买一量'].to_numpy().astype(np.int)\n\n judge_list = [\n low < limit_up,\n price < limit_up,\n volume < float_share * change_rate,\n buy_1v < float_share * buy1_rate,\n buy_1v < buy1_volume\n ]\n\n return judge_list\n\n\n# 基于前一交易日收盘价的涨停价计算\ndef limit_up_price(pre_close):\n return np.around(pre_close * 1.1, decimals=2)\n\n\n# 日K数据判断是否开板\ndef is_sold(code, start_date):\n print(code)\n try:\n time.sleep(1)\n pro = ts.pro_api('ba73b3943bdd57c2ff05991f7556ef417f457ac453355972ff5d01ce')\n start_date = (parse(str(start_date))+timedelta(1)).strftime('%Y%m%d')\n end_date = datetime.now().strftime('%Y%m%d')\n daily_k = pro.daily(ts_code=code, start_date=start_date, end_date=end_date)\n if len(daily_k) > 0:\n daily_k['flag'] = daily_k.apply(\n lambda x: x['high'] == x['low'] and x['open'] == x['close'] and x['high'] == x['low'],\n axis=1\n )\n flag = daily_k['flag'].sum()\n result = True\n for each in daily_k['flag'].tolist():\n result = result and each\n return result\n else:\n return True\n except Exception as e:\n print('再次请求ts数据')\n time.sleep(1)\n a = is_sold(code, start_date)\n return a\n\n\n# 获取流通股本\ndef get_float_share(code):\n print(code)\n try:\n time.sleep(1)\n pro = ts.pro_api('ba73b3943bdd57c2ff05991f7556ef417f457ac453355972ff5d01ce')\n # target_date = datetime.now().strftime('%Y%m%d')\n target_data = []\n delta = 0\n count = 1\n while len(target_data) == 0:\n target_date = datetime.now() + timedelta(delta)\n target_data = pro.daily_basic(\n ts_code=code, trade_date=target_date.strftime('%Y%m%d'), fields='free_share'\n )\n delta = delta - 1\n time.sleep(0.5)\n count = count + 1\n if count > 3:\n return 1000000\n return target_data.values[0][0] * 10000\n\n except Exception as e:\n time.sleep(1)\n get_float_share(code)\n print('再次请求ts数据.....')",
"_____no_output_____"
],
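One caveat about limit_up_price above: np.around rounds halves to even (and operates on binary floats), whereas exchange price limits are typically computed with round-half-up on decimal prices, so the two can differ by one tick on boundary cases. A small demonstration; the 2.675 value is an illustrative price, not a real quote:

import numpy as np
from decimal import Decimal, ROUND_HALF_UP

print(np.around(2.675, 2))  # 2.67 -- 2.675 is stored as 2.67499... in binary
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
print(np.around(0.5), np.around(1.5))  # 0.0 2.0 -- ties go to the even neighbour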
[
"# 新股筛选\n# 获取股票列表\npro = ts.pro_api('ba73b3943bdd57c2ff05991f7556ef417f457ac453355972ff5d01ce')\nbasic_data = pro.stock_basic()\nprint('股票筛选')\n# basic_data.to_excel(r'C:\\Users\\duanp\\Desktop\\test\\stock_basic.xlsx')\n# basic_data = pd.read_excel(r'C:\\Users\\duanp\\Desktop\\test\\stock_basic.xlsx')\n# 筛选上市日期为近一月的股票\nstart_date = datetime.now() + timedelta(-30)\nend_date = datetime.now() + timedelta(1)\nbasic_data['list_date'] = basic_data['list_date'].apply(lambda x: parse(str(x)))\nbasic_data = basic_data[basic_data['list_date'] > start_date]\nbasic_data = basic_data[basic_data['list_date'] < end_date]\n# 剔除科创板股票\nbasic_data = basic_data[basic_data['market'] != '科创板']\n# 筛选未开板的股票\nbasic_data['target_flag'] = basic_data.apply(lambda x: is_sold(x['ts_code'], x['list_date']), axis=1)\n# basic_data = basic_data[basic_data['target_flag']]\nprint('补充流通股本数据')\n# 补充流通股本信息\nbasic_data['float_share'] = basic_data.apply(lambda x: get_float_share(x['ts_code']), axis=1)\nbasic_data['float_share'] = basic_data['float_share'].fillna('100000')\nprint('预警股票如下:')\nprint(basic_data)\n\nchange_rate = 0.01\nbuy1_rate = 0.03\nbuy1_volume = 1e5\n\ntick_list = [\n '股票代码',\n '今日开盘价',\n '昨日收盘价',\n '当前价',\n '今日最高价',\n '今日最低价',\n '成交股票数',\n '买一量'\n]\n\nflag_dict = {\n \"low_flag\": \"当日曾开板!\",\n \"price_flag\": \"已经开板!\",\n \"volume_top_flag\": \"换手率超过 {:.0%}!\".format(change_rate),\n \"buy1_percent_flag\": \"买一量不足总流通市值的 {:.0%}!\".format(buy1_rate),\n \"buy1_volume_flag\": \"买一量不足 {} 股!\".format(buy1_volume),\n}\n\nflag_list = list(flag_dict.keys())\nflag_len = len(flag_list)",
"股票筛选\n003001.SZ\n003012.SZ\n003013.SZ\n003015.SZ\n003016.SZ\n003017.SZ\n300898.SZ\n300899.SZ\n300900.SZ\n300901.SZ\n300902.SZ\n300999.SZ\n601187.SH\n601568.SH\n601995.SH\n605058.SH\n605169.SH\n605336.SH\n605338.SH\n补充流通股本数据\n003001.SZ\n003012.SZ\n003013.SZ\n003015.SZ\n003016.SZ\n003017.SZ\n300898.SZ\n300899.SZ\n300900.SZ\n300901.SZ\n300902.SZ\n300999.SZ\n601187.SH\n601568.SH\n601995.SH\n605058.SH\n605169.SH\n605336.SH\n605338.SH\n预警股票如下:\n ts_code symbol name area industry market list_date target_flag \\\n1417 003001.SZ 003001 中岩大地 北京 建筑工程 中小板 2020-10-13 False \n1427 003012.SZ 003012 东鹏控股 广东 陶瓷 中小板 2020-10-19 False \n1428 003013.SZ 003013 地铁设计 广东 建筑工程 中小板 2020-10-22 False \n1429 003015.SZ 003015 日久光电 江苏 元器件 中小板 2020-10-21 False \n1430 003016.SZ 003016 欣贺股份 福建 服饰 中小板 2020-10-26 False \n1431 003017.SZ 003017 大洋生物 浙江 化工原料 中小板 2020-10-26 False \n2296 300898.SZ 300898 熊猫乳品 浙江 乳制品 创业板 2020-10-16 False \n2297 300899.SZ 300899 上海凯鑫 上海 环境保护 创业板 2020-10-16 False \n2298 300900.SZ 300900 C广联 黑龙江 航空 创业板 2020-10-29 False \n2299 300901.SZ 300901 C中胤 浙江 服饰 创业板 2020-10-29 False \n2300 300902.SZ 300902 C国安达 福建 专用机械 创业板 2020-10-29 False \n2301 300999.SZ 300999 金龙鱼 上海 食品 创业板 2020-10-15 False \n3160 601187.SH 601187 厦门银行 福建 银行 主板 2020-10-27 False \n3210 601568.SH 601568 北元集团 陕西 化工原料 主板 2020-10-20 False \n3304 601995.SH 601995 中金公司 北京 证券 主板 2020-11-02 False \n3828 605058.SH 605058 澳弘电子 江苏 元器件 主板 2020-10-21 False \n3843 605169.SH 605169 N洪通 新疆 供气供热 主板 2020-10-30 False \n3854 605336.SH 605336 帅丰电器 浙江 家用电器 主板 2020-10-19 False \n3855 605338.SH 605338 巴比食品 上海 食品 主板 2020-10-12 False \n\n float_share \n1417 24293828.0 \n1427 143000000.0 \n1428 40010000.0 \n1429 70266667.0 \n1430 106666700.0 \n1431 15000000.0 \n2296 26456438.0 \n2297 15950000.0 \n2298 44464076.0 \n2299 56887984.0 \n2300 30346688.0 \n2301 356738334.0 \n3160 263912789.0 \n3210 361111112.0 \n3304 260278568.0 \n3828 35731000.0 \n3843 40000000.0 \n3854 35200000.0 \n3855 62000000.0 \n"
],
[
"basic_data['target_code'] = basic_data['ts_code'].apply(lambda x: common_util.get_format_code(x, 'num'))",
"_____no_output_____"
],
[
"basic_data['ts_code'].to_list()",
"_____no_output_____"
],
[
"tick_data = sina_data.get_tick_data(basic_data['symbol'].to_list())",
"_____no_output_____"
],
[
"tick_data['股票代码'] = tick_data['股票代码'].apply(lambda x: common_util.get_format_code(x, 'wind'))",
"_____no_output_____"
],
[
"tick_data = tick_data[tick_list]",
"_____no_output_____"
],
[
"temp_data = basic_data.merge(tick_data, left_on='ts_code', right_on='股票代码')",
"_____no_output_____"
],
[
"judge_list = judgement(temp_data, change_rate, buy1_rate, buy1_volume)",
"_____no_output_____"
],
[
"judge_list",
"_____no_output_____"
],
[
"alert_dict = dict()\ncount = 0",
"_____no_output_____"
],
[
"for idx in range(flag_len):\n temp_data[flag_list[idx]] = judge_list[idx]\n alert_dict[flag_list[idx]] = temp_data[temp_data[flag_list[idx]]][\"name\"].tolist()\n if len(alert_dict[flag_list[idx]]) > 0:\n print(print_info(\"W\"), end=\" \")\n print(flag_dict[flag_list[idx]])\n print(\",\".join(alert_dict[flag_list[idx]]))\n else:\n count += 1",
"\u001b[0;33;1m[W 23:24:45.211]\u001b[0m 当日曾开板!\n中岩大地,东鹏控股,地铁设计,日久光电,欣贺股份,大洋生物,熊猫乳品,上海凯鑫,C广联,C中胤,C国安达,金龙鱼,厦门银行,北元集团,中金公司,澳弘电子,N洪通,帅丰电器,巴比食品\n\u001b[0;33;1m[W 23:24:45.211]\u001b[0m 已经开板!\n中岩大地,东鹏控股,地铁设计,欣贺股份,大洋生物,熊猫乳品,上海凯鑫,C广联,C中胤,C国安达,金龙鱼,厦门银行,北元集团,中金公司,澳弘电子,N洪通,帅丰电器,巴比食品\n\u001b[0;33;1m[W 23:24:45.212]\u001b[0m 买一量不足总流通市值的 3%!\n中岩大地,东鹏控股,地铁设计,日久光电,欣贺股份,大洋生物,熊猫乳品,上海凯鑫,C广联,C中胤,C国安达,金龙鱼,厦门银行,北元集团,中金公司,澳弘电子,N洪通,帅丰电器,巴比食品\n\u001b[0;33;1m[W 23:24:45.213]\u001b[0m 买一量不足 100000.0 股!\n中岩大地,东鹏控股,地铁设计,欣贺股份,大洋生物,熊猫乳品,上海凯鑫,C广联,C国安达,北元集团,澳弘电子,N洪通,帅丰电器,巴比食品\n"
],
[
"idx=1\ntemp_data[flag_list[idx]] = judge_list[idx]",
"_____no_output_____"
],
[
"alert_dict[flag_list[idx]] = temp_data[temp_data[flag_list[idx]]]\nalert_dict",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b03b32cb8aba22c25ca2873a1612fd89d9404f | 730,712 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Untitled1-Copy2 Encore-checkpoint.ipynb | harveyslash/Deep-Image-Analogy-TF | 9bda06fbe3a5786217a3db112d2f162573b1dd90 | [
"MIT"
] | 4 | 2018-02-27T21:43:42.000Z | 2021-08-22T14:42:47.000Z | notebooks/.ipynb_checkpoints/Untitled1-Copy2 Encore-checkpoint.ipynb | harveyslash/Deep-Image-Analogy-TF | 9bda06fbe3a5786217a3db112d2f162573b1dd90 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Untitled1-Copy2 Encore-checkpoint.ipynb | harveyslash/Deep-Image-Analogy-TF | 9bda06fbe3a5786217a3db112d2f162573b1dd90 | [
"MIT"
] | 2 | 2018-02-25T21:50:08.000Z | 2018-11-26T23:42:32.000Z | 920.292191 | 151,614 | 0.937622 | [
[
[
"%matplotlib inline\n%pylab inline\npylab.rcParams['figure.figsize'] = (10, 6)\nimport numpy as np\nfrom numpy.lib import stride_tricks\nimport cv2\nfrom matplotlib.colors import hsv_to_rgb\nimport matplotlib.pyplot as plt\nimport numpy as np\nnp.set_printoptions(precision=3)",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"class PatchMatch(object):\n def __init__(self, a, b, patch_size):\n assert a.shape == b.shape, \"Dimensions were unequal for patch-matching input\"\n self.A = a\n self.B = b\n self.patch_size = patch_size\n self.nnf = np.zeros((2, self.A.shape[0], self.A.shape[1])).astype(np.int)\n self.nnd = np.zeros((self.A.shape[0], self.A.shape[1]))\n self.initialise_nnf()\n \n def initialise_nnf(self):\n self.nnf[0] = np.random.randint(self.B.shape[0], size=(self.A.shape[0], self.A.shape[1]))\n self.nnf[1] = np.random.randint(self.B.shape[1], size=(self.A.shape[0], self.A.shape[1]))\n self.nnf = self.nnf.transpose((1, 2 ,0))\n for i in range(self.A.shape[0]):\n for j in range(self.A.shape[1]):\n pos = self.nnf[i,j]\n self.nnd[i,j] = self.cal_dist(i, j, pos[0], pos[1])\n\n def cal_dist(self, ai ,aj, bi, bj):\n dx0 = dy0 = self.patch_size//2\n dx1 = dy1 = self.patch_size//2 + 1\n dx0 = min(ai, bi, dx0)\n dx1 = min(self.A.shape[0]-ai, self.B.shape[0]-bi, dx1)\n dy0 = min(aj, bj, dy0)\n dy1 = min(self.A.shape[1]-aj, self.B.shape[1]-bj, dy1)\n return np.sum((self.A[ai-dx0:ai+dx1, aj-dy0:aj+dy1]-self.B[bi-dx0:bi+dx1, bj-dy0:bj+dy1])**2) / (dx1+dx0) / (dy1+dy0)\n \n def reconstruct(self):\n ans = np.zeros_like(self.A)\n for i in range(self.A.shape[0]):\n for j in range(self.A.shape[1]):\n pos = self.nnf[i,j]\n ans[i,j] = self.B[pos[0], pos[1]]\n return ans\n \n def reconstruct_img_voting(self, patch_size=3,arr_v=None):\n if patch_size is None:\n patch_size = self.patch_size\n b_prime = np.zeros_like(self.A,dtype=np.uint8)\n\n for i in range(self.A.shape[0]): #traverse down a\n for j in range(self.A.shape[1]): #traverse across a\n \n dx0 = dy0 = patch_size//2\n dx1 = dy1 = patch_size//2 + 1\n dx0 = min(i,dx0)\n dx1 = min(self.A.shape[0]-i, dx1)\n dy0 = min(j, dy0)\n dy1 = min(self.A.shape[1]-j, dy1)\n \n votes = self.nnf[i-dx0:i+dx1, j-dy0:j+dy1] \n b_patch = np.zeros(shape=(votes.shape[0],votes.shape[1],self.A.shape[2]))\n \n for p_i in range(votes.shape[0]):\n for p_j in range(votes.shape[1]):\n \n b_patch[p_i, p_j] = self.B[votes[p_i,p_j][0] , votes[p_i,p_j][1]]\n\n averaged_patch = np.average(b_patch,axis=(0,1))\n b_prime[i, j] = averaged_patch[:]\n plt.imshow(b_prime[:,:,::-1])\n plt.show()\n \n def visualize_nnf(self):\n nnf = self.nnf\n nnd = self.nnd\n def angle_between_alt(p1, p2):\n ang1 = np.arctan2(*p1[::-1])\n ang2 = np.arctan2(*p2[::-1])\n return np.rad2deg((ang1 - ang2) % (2 * np.pi))\n\n def norm_dist(arr):\n return (arr)/(arr.max())\n \n img = np.zeros((nnf.shape[0], nnf.shape[1], 3),dtype=np.uint8)\n for i in range(1, nnf.shape[0]):\n for j in range(1, nnf.shape[1]):\n angle = angle_between_alt([j, i], [nnf[i, j][0], nnf[i, j][1]])\n img[i, j, :] = np.array([angle, nnd[i,j], 250])\n img = hsv_to_rgb(norm_dist(img/255))\n plt.imshow(img)\n plt.show()\n \n def propagate(self):\n compare_value = -1 \n for i in range(self.A.shape[0]):\n for j in range(self.A.shape[1]):\n x,y = self.nnf[i,j]\n bestx, besty, bestd = x, y, self.nnd[i,j]\n \n compare_value *=-1\n \n if (i + compare_value >= 0 and compare_value == -1) or (i + compare_value < self.A.shape[0] and compare_value == 1) :\n rx, ry = self.nnf[i+compare_value, j][0] , self.nnf[i+compare_value, j][1]\n if rx < self.B.shape[0]:\n val = self.cal_dist(i, j, rx, ry)\n if val < bestd:\n bestx, besty, bestd = rx, ry, val\n\n if (j+compare_value >= 0 and compare_value == -1)or (j + compare_value < self.A.shape[1] and compare_value == 1) :\n rx, ry = self.nnf[i, j+compare_value][0], self.nnf[i, j+compare_value][1] \n if ry < self.B.shape[1]:\n val = 
self.cal_dist(i, j, rx, ry)\n if val < bestd:\n bestx, besty, bestd = rx, ry, val\n \n rand_d = min(self.B.shape[0]//2, self.B.shape[1]//2)\n while rand_d > 0:\n try:\n xmin = max(bestx - rand_d, 0)\n xmax = min(bestx + rand_d, self.B.shape[0])\n ymin = max(besty - rand_d, 0)\n ymax = min(besty + rand_d, self.B.shape[1])\n #print(xmin, xmax)\n rx = np.random.randint(xmin, xmax)\n ry = np.random.randint(ymin, ymax)\n val = self.cal_dist(i, j, rx, ry)\n if val < bestd:\n bestx, besty, bestd = rx, ry, val\n except:\n print(rand_d)\n print(xmin, xmax)\n print(ymin, ymax)\n print(bestx, besty)\n print(self.B.shape)\n rand_d = rand_d // 2\n\n self.nnf[i, j] = [bestx, besty]\n self.nnd[i, j] = bestd\n \n print(\"Done\")",
"_____no_output_____"
],
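initialise_nnf in the class above fills the NNF with one Python-level cal_dist call per pixel. For the centre-pixel distance (the patch_size == 1 special case) the whole field can be evaluated in one vectorized step with advanced indexing; a sketch with illustrative names, not a drop-in replacement for the full patch distance:

import numpy as np

def nnf_center_distance(A, B, nnf):
    # nnf[..., 0] / nnf[..., 1] hold the matched row/column in B for every pixel of A.
    matched = B[nnf[..., 0], nnf[..., 1]]  # advanced indexing; shape == A.shape
    return np.sum((A.astype(float) - matched) ** 2, axis=-1)

rng = np.random.default_rng(0)
A = rng.integers(0, 256, (32, 32, 3))
B = rng.integers(0, 256, (32, 32, 3))
nnf = np.stack([rng.integers(0, 32, (32, 32)), rng.integers(0, 32, (32, 32))], axis=-1)
print(nnf_center_distance(A, B, nnf).shape)  # (32, 32)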
[
"x = cv2.imread(\"./blue.jpg\")\ny = cv2.imread(\"./yellow.jpg\")\n\nx = cv2.resize(x,(200,200))\ny = cv2.resize(y,(200,200))",
"_____no_output_____"
],
[
"pm = PatchMatch(x,y, 3)\npm.visualize_nnf()\ndef do():\n pm.propagate()\n pm.reconstruct_img_voting(patch_size=3)\n# pm.propagate()\n# pm.reconstruct_img_voting(patch_size=3)\n# pm.propagate()\n# pm.reconstruct_img_voting(patch_size=3)\n# pm.propagate()\n# pm.reconstruct_img_voting(patch_size=3)\n\ndo()\npm.visualize_nnf()\n\n",
"_____no_output_____"
],
[
"do()",
"_____no_output_____"
],
[
"plt.figure(1)\nplt.subplot(131)\nplt.axis('off')\nplt.imshow(x[:,:,::-1])\n\nplt.subplot(132)\nplt.axis('off')\nplt.imshow(y[:,:,::-1])\n\nplt.subplot(133)\nplt.axis('off')\nplt.imshow(pm.reconstruct()[:,:,::-1])\n\nplt.show()",
"_____no_output_____"
],
[
"import os\nimport sys\n\n# add the 'src' directory as one where we can import modules\nsrc_dir = os.path.join(os.getcwd(), os.pardir)\nsys.path.append(src_dir)",
"_____no_output_____"
],
[
"os.path.join(os.getcwd(), os.pardir)",
"_____no_output_____"
],
[
"from src.PatchMatch import PatchMatchSimple",
"_____no_output_____"
],
[
"pm = PatchMatchSimple(x,y,patch_size=3)\nfor i in range(15):\n pm.propagate()\n pm.reconstruct_img_voting(patch_size=3)",
"Done\n"
],
[
"pm.visualize_nnf()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0465aab49634fa5eb01423c6bb5a5a2d5e443 | 2,163 | ipynb | Jupyter Notebook | A1-Appendix-creating-Goldbach-data.ipynb | cdvv7788/Python-Mathematics-Handbook | ee91644ab2028acb2f4c42839cc335aca20229c5 | [
"MIT"
] | 147 | 2017-02-14T20:07:00.000Z | 2022-03-01T10:41:28.000Z | A1-Appendix-creating-Goldbach-data.ipynb | cdvv7788/Python-Mathematics-Handbook | ee91644ab2028acb2f4c42839cc335aca20229c5 | [
"MIT"
] | 8 | 2017-02-14T20:07:53.000Z | 2018-11-16T11:11:40.000Z | A1-Appendix-creating-Goldbach-data.ipynb | cdvv7788/Python-Mathematics-Handbook | ee91644ab2028acb2f4c42839cc335aca20229c5 | [
"MIT"
] | 87 | 2017-02-15T02:16:16.000Z | 2022-03-01T10:41:21.000Z | 23.769231 | 174 | 0.510402 | [
[
[
"This notebook is used to generate data regarding the Goldbach conjecture: rows are of the form $N, a, b$ where $N$ is an even number, $a< b$ are prime and $N=a + b$.\n\nFirst we write a Python function `goldbach` that takes `N` and returns all pairs of numbers $a, b$.",
"_____no_output_____"
]
],
[
[
"import sympy as sym\nimport pandas as pd\n\ndef goldbach(N):\n \"\"\"Returns all pairs of primes that sum to give N\"\"\"\n primes = list(sym.primerange(1, N))\n sums = []\n for i, p1 in enumerate(primes):\n for p2 in primes[i:]:\n if p1 + p2 == N:\n sums.append((p1, p2))\n return sums",
"_____no_output_____"
]
],
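The double loop above is quadratic in the number of primes. Testing membership in a set instead makes the search linear and yields the same pairs (with a <= b); a sketch:

import sympy as sym

def goldbach_fast(N):
    """All prime pairs (a, b) with a <= b and a + b == N."""
    primes = list(sym.primerange(1, N))
    prime_set = set(primes)
    return [(p, N - p) for p in primes if p <= N - p and (N - p) in prime_set]

print(goldbach_fast(20))  # [(3, 17), (7, 13)], matching goldbach(20)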
[
[
"Let us use the above function to create our data:",
"_____no_output_____"
]
],
[
[
"maxN = 500\ndata = [[N, *pair] for N in range(4, maxN + 1) \n for pair in goldbach(N) if N % 2 == 0 ]",
"_____no_output_____"
]
],
[
[
"Let us write our data to an excel file:",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(data, columns=[\"N\",\"a\", \"b\"]) # Create a data frame\ndf.to_excel(\"data/goldbach.xlsx\") # Write it to excel",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b046dcf9f09ece6e396fc0dd0ce3e42395eed7 | 9,366 | ipynb | Jupyter Notebook | GHZGame/GHZGame.ipynb | jainvasu631/QuantumKatas | 4b810b477050a531a68b9790213539fb70efdb2a | [
"MIT"
] | 1 | 2021-01-01T20:39:30.000Z | 2021-01-01T20:39:30.000Z | GHZGame/GHZGame.ipynb | nisheethjoshi/QuantumKatas | ddc5d86742fffb09afc7e08d931d33c1be2c4870 | [
"MIT"
] | null | null | null | GHZGame/GHZGame.ipynb | nisheethjoshi/QuantumKatas | ddc5d86742fffb09afc7e08d931d33c1be2c4870 | [
"MIT"
] | null | null | null | 29.733333 | 174 | 0.564809 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b05abeba5ce5e432c1947b8dc87c1b5aa62e29 | 332,878 | ipynb | Jupyter Notebook | LR_4/Primer.ipynb | tamaranesterenko/TRO_LR_4 | 3b01779b038baed78e4073962953c5818dbecb36 | [
"MIT"
] | null | null | null | LR_4/Primer.ipynb | tamaranesterenko/TRO_LR_4 | 3b01779b038baed78e4073962953c5818dbecb36 | [
"MIT"
] | null | null | null | LR_4/Primer.ipynb | tamaranesterenko/TRO_LR_4 | 3b01779b038baed78e4073962953c5818dbecb36 | [
"MIT"
] | null | null | null | 463.618384 | 41,900 | 0.944619 | [
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])",
"_____no_output_____"
],
[
"plt.plot()",
"_____no_output_____"
],
[
"plt.plot([1, 7, 3, 5, 11, 1])",
"_____no_output_____"
],
[
"plt.plot([1, 5, 10, 15, 20], [1, 7, 3, 5, 11])",
"_____no_output_____"
],
[
"plt.xlabel('Day', fontsize=15, color='blue')",
"_____no_output_____"
],
[
"plt.title('Chart price', fontsize=17)",
"_____no_output_____"
],
[
"plt.text(1, 1, 'type: Steel')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny = [1, 7, 3, 5, 11]\nplt.plot(x, y, label='steel price')\nplt.title('Chart price', fontsize=15)\nplt.xlabel('Day', fontsize=12, color='blue')\nplt.ylabel('Price', fontsize=12, color='blue')\nplt.legend()\nplt.grid(True)\nplt.text(15, 4, 'grow up!')",
"_____no_output_____"
],
[
"plt.plot(x, y, color='red')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny = [1, 7, 3, 5, 11]\nplt.plot(x, y, '--')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny = [1, 7, 3, 5, 11]\nline = plt.plot(x, y)\nplt.setp(line, linestyle='--')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny1 = [1, 7, 3, 5, 11]\ny2 = [i*1.2 + 1 for i in y1]\ny3 = [i*1.2 + 1 for i in y2]\ny4 = [i*1.2 + 1 for i in y3]\nplt.plot(x, y1, '-', x, y2, '--', x, y3, '-.', x, y4, ':')",
"_____no_output_____"
],
[
"plt.plot(x, y1, '-')\nplt.plot(x, y2, '--')\nplt.plot(x, y3, '-.')\nplt.plot(x, y4, ':')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny = [1, 7, 3, 5, 11]\nplt.plot(x, y, '--r')",
"_____no_output_____"
],
[
"plt.plot(x, y, 'ro')",
"_____no_output_____"
],
[
"plt.plot(x, y, 'bx')",
"_____no_output_____"
],
[
"x = [1, 5, 10, 15, 20]\ny1 = [1, 7, 3, 5, 11]\ny2 = [i*1.2 + 1 for i in y1]\ny3 = [i*1.2 + 1 for i in y2]\ny4 = [i*1.2 + 1 for i in y3]\n\nplt.figure(figsize=(12, 7))\n\nplt.subplot(2, 2, 1)\nplt.plot(x, y1, '-')\n\nplt.subplot(2, 2, 2)\nplt.plot(x, y2, '--')\n\nplt.subplot(2, 2, 3)\nplt.plot(x, y3, '-.')\n\nplt.subplot(2, 2, 4)\nplt.plot(x, y4, ':')",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 7))\n\nplt.subplot(221)\nplt.plot(x, y1, '-')\n\nplt.subplot(222)\nplt.plot(x, y2, '--')\n\nplt.subplot(223)\nplt.plot(x, y3, '-.')\n\nplt.subplot(224)\nplt.plot(x, y4, ':')",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(2, 2, figsize=(12, 7))\n\naxs[0, 0].plot(x, y1, '-')\naxs[0, 1].plot(x, y2, '--')\naxs[1, 0].plot(x, y3, '-.')\naxs[1, 1].plot(x, y4, ':')",
"_____no_output_____"
]
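The same 2x2 grid can be drawn by iterating over axs.flat, which generalises to any grid shape and avoids repeating the indexing; a sketch that regenerates the x, y1..y4 data so it runs on its own:

import matplotlib.pyplot as plt

x = [1, 5, 10, 15, 20]
ys = [[1, 7, 3, 5, 11]]
for _ in range(3):
    ys.append([i * 1.2 + 1 for i in ys[-1]])

fig, axs = plt.subplots(2, 2, figsize=(12, 7))
for ax, y, style in zip(axs.flat, ys, ['-', '--', '-.', ':']):
    ax.plot(x, y, style)
fig.tight_layout()
plt.show()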
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b067a5aaafa26ecf8ca8d1d29c2495e38101aa | 18,905 | ipynb | Jupyter Notebook | model/1.3/train_thainer.ipynb | Sutthipong/thai-ner | a5084661edf70ecd6a649d620255fbe5bed1b417 | [
"CC-BY-3.0",
"Apache-2.0"
] | 20 | 2018-09-06T06:08:56.000Z | 2019-12-10T08:32:12.000Z | model/1.3/train_thainer.ipynb | Sutthipong/thai-ner | a5084661edf70ecd6a649d620255fbe5bed1b417 | [
"CC-BY-3.0",
"Apache-2.0"
] | 3 | 2019-01-17T03:12:56.000Z | 2019-04-07T07:07:58.000Z | model/1.3/train_thainer.ipynb | Sutthipong/thai-ner | a5084661edf70ecd6a649d620255fbe5bed1b417 | [
"CC-BY-3.0",
"Apache-2.0"
] | 11 | 2020-01-13T06:18:26.000Z | 2021-11-03T16:15:57.000Z | 33.050699 | 225 | 0.468448 | [
[
[
"# -*- coding: utf-8 -*-\n# เรียกใช้งานโมดูล\nfile_name=\"data\"\nimport codecs\nfrom tqdm import tqdm\nfrom pythainlp.tokenize import word_tokenize\n#import deepcut\nfrom pythainlp.tag import pos_tag\nfrom nltk.tokenize import RegexpTokenizer\nimport glob\nimport nltk\nimport re\n# thai cut\nthaicut=\"newmm\"\nfrom sklearn_crfsuite import scorers,metrics\nfrom sklearn.metrics import make_scorer\nfrom sklearn.model_selection import cross_validate,train_test_split\nimport sklearn_crfsuite\nfrom pythainlp.corpus.common import thai_stopwords\nstopwords = list(thai_stopwords())\n#จัดการประโยคซ้ำ\ndata_not=[]\ndef Unique(p):\n text=re.sub(\"<[^>]*>\",\"\",p)\n text=re.sub(\"\\[(.*?)\\]\",\"\",text)\n text=re.sub(\"\\[\\/(.*?)\\]\",\"\",text)\n if text not in data_not:\n data_not.append(text)\n return True\n else:\n return False\n# เตรียมตัวตัด tag ด้วย re\npattern = r'\\[(.*?)\\](.*?)\\[\\/(.*?)\\]'\ntokenizer = RegexpTokenizer(pattern) # ใช้ nltk.tokenize.RegexpTokenizer เพื่อตัด [TIME]8.00[/TIME] ให้เป็น ('TIME','ไง','TIME')\n# จัดการกับ tag ที่ไม่ได้ tag\ndef toolner_to_tag(text):\n text=text.strip().replace(\"FACILITY\",\"LOCATION\").replace(\"[AGO]\",\"\").replace(\"[/AGO]\",\"\").replace(\"[T]\",\"\").replace(\"[/T]\",\"\")\n text=re.sub(\"<[^>]*>\",\"\",text)\n text=re.sub(\"(\\[\\/(.*?)\\])\",\"\\\\1***\",text)#.replace('(\\[(.*?)\\])','***\\\\1')# text.replace('>','>***') # ตัดการกับพวกไม่มี tag word\n text=re.sub(\"(\\[\\w+\\])\",\"***\\\\1\",text)\n text2=[]\n for i in text.split('***'):\n if \"[\" in i:\n text2.append(i)\n else:\n text2.append(\"[word]\"+i+\"[/word]\")\n text=\"\".join(text2)#re.sub(\"[word][/word]\",\"\",\"\".join(text2))\n return text.replace(\"[word][/word]\",\"\")\n# แปลง text ให้เป็น conll2002\ndef text2conll2002(text,pos=True):\n \"\"\"\n ใช้แปลงข้อความให้กลายเป็น conll2002\n \"\"\"\n text=toolner_to_tag(text)\n text=text.replace(\"''\",'\"')\n text=text.replace(\"’\",'\"').replace(\"‘\",'\"')#.replace('\"',\"\")\n tag=tokenizer.tokenize(text)\n j=0\n conll2002=\"\"\n for tagopen,text,tagclose in tag:\n word_cut=word_tokenize(text,engine=thaicut) # ใช้ตัวตัดคำ newmm\n i=0\n txt5=\"\"\n while i<len(word_cut):\n if word_cut[i]==\"''\" or word_cut[i]=='\"':pass\n elif i==0 and tagopen!='word':\n txt5+=word_cut[i]\n txt5+='\\t'+'B-'+tagopen\n elif tagopen!='word':\n txt5+=word_cut[i]\n txt5+='\\t'+'I-'+tagopen\n else:\n txt5+=word_cut[i]\n txt5+='\\t'+'O'\n txt5+='\\n'\n #j+=1\n i+=1\n conll2002+=txt5\n if pos==False:\n return conll2002\n return postag(conll2002)\n# ใช้สำหรับกำกับ pos tag เพื่อใช้กับ NER\n# print(text2conll2002(t,pos=False))\ndef postag(text):\n listtxt=[i for i in text.split('\\n') if i!='']\n list_word=[]\n for data in listtxt:\n list_word.append(data.split('\\t')[0])\n #print(text)\n list_word=pos_tag(list_word,engine=\"perceptron\", corpus=\"orchid_ud\")\n text=\"\"\n i=0\n for data in listtxt:\n text+=data.split('\\t')[0]+'\\t'+list_word[i][1]+'\\t'+data.split('\\t')[1]+'\\n'\n i+=1\n return text\n# เขียนไฟล์ข้อมูล conll2002\ndef write_conll2002(file_name,data):\n \"\"\"\n ใช้สำหรับเขียนไฟล์\n \"\"\"\n with codecs.open(file_name, \"w\", \"utf-8-sig\") as temp:\n temp.write(data)\n return True\n# อ่านข้อมูลจากไฟล์\ndef get_data(fileopen):\n\t\"\"\"\n สำหรับใช้อ่านทั้งหมดทั้งในไฟล์ทีละรรทัดออกมาเป็น list\n \"\"\"\n\twith codecs.open(fileopen, 'r',encoding='utf-8-sig') as f:\n\t\tlines = f.read().splitlines()\n\treturn [a for a in tqdm(lines) if Unique(a)] # เอาไม่ซ้ำกัน\n\ndef alldata(lists):\n text=\"\"\n for data in lists:\n 
text+=text2conll2002(data)\n text+='\\n'\n return text\n\ndef alldata_list(lists):\n data_all=[]\n for data in lists:\n data_num=[]\n try:\n txt=text2conll2002(data,pos=True).split('\\n')\n for d in txt:\n tt=d.split('\\t')\n if d!=\"\":\n if len(tt)==3:\n data_num.append((tt[0],tt[1],tt[2]))\n else:\n data_num.append((tt[0],tt[1]))\n #print(data_num)\n data_all.append(data_num)\n except:\n print(data)\n #print(data_all)\n return data_all\n\ndef alldata_list_str(lists):\n\tstring=\"\"\n\tfor data in lists:\n\t\tstring1=\"\"\n\t\tfor j in data:\n\t\t\tstring1+=j[0]+\"\t\"+j[1]+\"\t\"+j[2]+\"\\n\"\n\t\tstring1+=\"\\n\"\n\t\tstring+=string1\n\treturn string\n\ndef get_data_tag(listd):\n\tlist_all=[]\n\tc=[]\n\tfor i in listd:\n\t\tif i !='':\n\t\t\tc.append((i.split(\"\\t\")[0],i.split(\"\\t\")[1],i.split(\"\\t\")[2]))\n\t\telse:\n\t\t\tlist_all.append(c)\n\t\t\tc=[]\n\treturn list_all\ndef getall(lista):\n ll=[]\n for i in tqdm(lista):\n o=True\n for j in ll:\n if re.sub(\"\\[(.*?)\\]\",\"\",i)==re.sub(\"\\[(.*?)\\]\",\"\",j):\n o=False\n break\n if o==True:\n ll.append(i)\n return ll",
"_____no_output_____"
],
[
"data1=getall(get_data(file_name+\".txt\"))\nprint(len(data1))\n'''\n'''\n#del datatofile[0]\ndatatofile=alldata_list(data1)\ntt=[]\n#datatofile.reverse()\nimport random\n#random.shuffle(datatofile)\nprint(len(datatofile))\n#training_samples = datatofile[:int(len(datatofile) * 0.8)]\n#test_samples = datatofile[int(len(datatofile) * 0.8):]\n'''training_samples = datatofile[:2822]\ntest_samples = datatofile[2822:]'''\n#print(test_samples[0])\n#tag=TrainChunker(training_samples,test_samples) # Train\n\n#run(training_samples,test_samples)",
"100%|██████████| 6457/6457 [00:00<00:00, 12023.70it/s]\n100%|██████████| 6349/6349 [03:18<00:00, 20.21it/s] \n"
],
[
"#import dill\n#with open('train.data', 'rb') as file:\n# datatofile = dill.load(file)",
"_____no_output_____"
],
[
"with open(file_name+\"-pos.conll\",\"w\") as f:\n i=0\n while i<len(datatofile):\n for j in datatofile[i]:\n f.write(j[0]+\"\\t\"+j[1]+\"\\t\"+j[2]+\"\\n\")\n if i+1<len(datatofile):\n f.write(\"\\n\")\n i+=1\n\nwith open(file_name+\".conll\",\"w\") as f:\n i=0\n while i<len(datatofile):\n for j in datatofile[i]:\n f.write(j[0]+\"\\t\"+j[2]+\"\\n\")\n if i+1<len(datatofile):\n f.write(\"\\n\")\n i+=1",
"_____no_output_____"
],
[
"def isThai(chr):\n cVal = ord(chr)\n if(cVal >= 3584 and cVal <= 3711):\n return True\n return False\ndef isThaiWord(word):\n t=True\n for i in word:\n l=isThai(i)\n if l!=True and i!='.':\n t=False\n break\n return t\n\ndef is_stopword(word):\n return word in stopwords\ndef is_s(word):\n if word == \" \" or word ==\"\\t\" or word==\"\":\n return True\n else:\n return False\n\ndef lennum(word,num):\n if len(word)==num:\n return True\n return False\ndef doc2features(doc, i):\n word = doc[i][0]\n postag = doc[i][1]\n # Features from current word\n features={\n 'word.word': word,\n 'word.stopword': is_stopword(word),\n 'word.isthai':isThaiWord(word),\n 'word.isspace':word.isspace(),\n 'postag':postag,\n 'word.isdigit()': word.isdigit()\n }\n if word.isdigit() and len(word)==5:\n features['word.islen5']=True\n if i > 0:\n prevword = doc[i-1][0]\n postag1 = doc[i-1][1]\n features['word.prevword'] = prevword\n features['word.previsspace']=prevword.isspace()\n features['word.previsthai']=isThaiWord(prevword)\n features['word.prevstopword']=is_stopword(prevword)\n features['word.prepostag'] = postag1\n features['word.prevwordisdigit'] = prevword.isdigit()\n else:\n features['BOS'] = True # Special \"Beginning of Sequence\" tag\n # Features from next word\n if i < len(doc)-1:\n nextword = doc[i+1][0]\n postag1 = doc[i+1][1]\n features['word.nextword'] = nextword\n features['word.nextisspace']=nextword.isspace()\n features['word.nextpostag'] = postag1\n features['word.nextisthai']=isThaiWord(nextword)\n features['word.nextstopword']=is_stopword(nextword)\n features['word.nextwordisdigit'] = nextword.isdigit()\n else:\n features['EOS'] = True # Special \"End of Sequence\" tag\n return features\n\ndef extract_features(doc):\n return [doc2features(doc, i) for i in range(len(doc))]\n\ndef get_labels(doc):\n return [tag for (token,postag,tag) in doc]",
"_____no_output_____"
],
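doc2features above emits one feature dict per token, and sklearn_crfsuite consumes lists of such dicts per sentence. A minimal end-to-end sketch of that interface on toy data; the feature names and the two toy sentences are illustrative assumptions, not drawn from the corpus:

import sklearn_crfsuite

train = [
    [("Bangkok", "PROPN", "B-LOCATION"), ("is", "VERB", "O"), ("hot", "ADJ", "O")],
    [("He", "PRON", "O"), ("visited", "VERB", "O"), ("Bangkok", "PROPN", "B-LOCATION")],
]

def feats(sent, i):
    word, postag, _ = sent[i]
    return {"word": word, "postag": postag, "BOS": i == 0, "EOS": i == len(sent) - 1}

X_toy = [[feats(s, i) for i in range(len(s))] for s in train]
y_toy = [[tag for _, _, tag in s] for s in train]

crf_toy = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf_toy.fit(X_toy, y_toy)
print(crf_toy.predict(X_toy[:1]))  # expected: [['B-LOCATION', 'O', 'O']] on this training sentence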
[
"X_data = [extract_features(doc) for doc in tqdm(datatofile)]\ny_data = [get_labels(doc) for doc in tqdm(datatofile)]",
"100%|██████████| 6349/6349 [00:11<00:00, 549.11it/s]\n100%|██████████| 6349/6349 [00:00<00:00, 214278.19it/s]\n"
],
[
"X, X_test, y, y_test = train_test_split(X_data, y_data, test_size=0.2)",
"/home/wannaphong/anaconda3/lib/python3.7/site-packages/sklearn/metrics/classification.py:1437: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n/home/wannaphong/anaconda3/lib/python3.7/site-packages/sklearn/metrics/classification.py:1439: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.\n 'recall', 'true', average, warn_for)\n"
],
[
"crf = sklearn_crfsuite.CRF(\n algorithm='lbfgs',\n c1=0.1,\n c2=0.1,\n max_iterations=500,\n all_possible_transitions=True,\n model_filename=file_name+\"-pos.model0\"\n)\ncrf.fit(X, y);\n\nlabels = list(crf.classes_)\nlabels.remove('O')\ny_pred = crf.predict(X_test)\ne=metrics.flat_f1_score(y_test, y_pred,\n average='weighted', labels=labels)\nprint(e)\nsorted_labels = sorted(\n labels,\n key=lambda name: (name[1:], name[0])\n)\nprint(metrics.flat_classification_report(\n y_test, y_pred, labels=sorted_labels, digits=3\n))",
"_____no_output_____"
],
[
"#del X_data[0]\n#del y_data[0]",
"_____no_output_____"
],
[
"!export PYTHONIOENCODING=utf-8",
"_____no_output_____"
],
[
"import sklearn_crfsuite\ncrf2 = sklearn_crfsuite.CRF(\n algorithm='lbfgs',\n c1=0.1,\n c2=0.1,\n max_iterations=500,\n all_possible_transitions=True,\n model_filename=file_name+\".model\"\n)\ncrf2.fit(X_data, y_data);",
"_____no_output_____"
],
[
"import dill\nwith open(\"train.data\", \"wb\") as dill_file:\n dill.dump(datatofile, dill_file)",
"_____no_output_____"
],
[
"# cross_validate\n\"\"\"\nimport dill\nwith open(\"datatrain.data\", \"wb\") as dill_file:\n dill.dump(datatofile, dill_file)\nf1_scorer = make_scorer(metrics.flat_f1_score, average='macro') \n\nscores = cross_validate(crf, X, y, scoring=f1_scorer, cv=5)\n# save data\nprint(scores)\n\"\"\"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0783e9c68bf6af077223b3823087ca8609db6 | 5,082 | ipynb | Jupyter Notebook | notebooks/pre-analysis-plan.ipynb | annaj98/ls88-final-project | 1aefa620acf03973f1ac2fdc66e17b918f5ae9f0 | [
"CC0-1.0"
] | null | null | null | notebooks/pre-analysis-plan.ipynb | annaj98/ls88-final-project | 1aefa620acf03973f1ac2fdc66e17b918f5ae9f0 | [
"CC0-1.0"
] | null | null | null | notebooks/pre-analysis-plan.ipynb | annaj98/ls88-final-project | 1aefa620acf03973f1ac2fdc66e17b918f5ae9f0 | [
"CC0-1.0"
] | null | null | null | 61.97561 | 918 | 0.696773 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b07e9fac6c0b92d945499ed972e37bf470d3ed | 164,246 | ipynb | Jupyter Notebook | programming/Untitled.ipynb | mariaulf4h/ds-welcome-package | 258285458dd4bc3a9824ddd3703dbe2ddc687a4a | [
"MIT"
] | null | null | null | programming/Untitled.ipynb | mariaulf4h/ds-welcome-package | 258285458dd4bc3a9824ddd3703dbe2ddc687a4a | [
"MIT"
] | null | null | null | programming/Untitled.ipynb | mariaulf4h/ds-welcome-package | 258285458dd4bc3a9824ddd3703dbe2ddc687a4a | [
"MIT"
] | null | null | null | 15.876849 | 36 | 0.311131 | [
[
[
"[1, 2]*3",
"_____no_output_____"
],
[
"import random\na = random.randint(1, 7)\nb = random.random(1, 7)",
"_____no_output_____"
],
[
"print(a)",
"1\n"
],
[
"print(b)",
"0.39243641919448247\n"
],
[
"v = 1 + 1\nt = \"apa\" * 3\nz = False + True",
"_____no_output_____"
],
[
"v",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
],
[
"z",
"_____no_output_____"
],
[
"x = 10\nif (x/2 >5):\n print(\"yes\")",
"_____no_output_____"
],
[
"sum = 0\nnumber = 1\nwhile sum < 10000:\n sum = sum + number\n print(sum)",
"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\n101\n102\n103\n104\n105\n106\n107\n108\n109\n110\n111\n112\n113\n114\n115\n116\n117\n118\n119\n120\n121\n122\n123\n124\n125\n126\n127\n128\n129\n130\n131\n132\n133\n134\n135\n136\n137\n138\n139\n140\n141\n142\n143\n144\n145\n146\n147\n148\n149\n150\n151\n152\n153\n154\n155\n156\n157\n158\n159\n160\n161\n162\n163\n164\n165\n166\n167\n168\n169\n170\n171\n172\n173\n174\n175\n176\n177\n178\n179\n180\n181\n182\n183\n184\n185\n186\n187\n188\n189\n190\n191\n192\n193\n194\n195\n196\n197\n198\n199\n200\n201\n202\n203\n204\n205\n206\n207\n208\n209\n210\n211\n212\n213\n214\n215\n216\n217\n218\n219\n220\n221\n222\n223\n224\n225\n226\n227\n228\n229\n230\n231\n232\n233\n234\n235\n236\n237\n238\n239\n240\n241\n242\n243\n244\n245\n246\n247\n248\n249\n250\n251\n252\n253\n254\n255\n256\n257\n258\n259\n260\n261\n262\n263\n264\n265\n266\n267\n268\n269\n270\n271\n272\n273\n274\n275\n276\n277\n278\n279\n280\n281\n282\n283\n284\n285\n286\n287\n288\n289\n290\n291\n292\n293\n294\n295\n296\n297\n298\n299\n300\n301\n302\n303\n304\n305\n306\n307\n308\n309\n310\n311\n312\n313\n314\n315\n316\n317\n318\n319\n320\n321\n322\n323\n324\n325\n326\n327\n328\n329\n330\n331\n332\n333\n334\n335\n336\n337\n338\n339\n340\n341\n342\n343\n344\n345\n346\n347\n348\n349\n350\n351\n352\n353\n354\n355\n356\n357\n358\n359\n360\n361\n362\n363\n364\n365\n366\n367\n368\n369\n370\n371\n372\n373\n374\n375\n376\n377\n378\n379\n380\n381\n382\n383\n384\n385\n386\n387\n388\n389\n390\n391\n392\n393\n394\n395\n396\n397\n398\n399\n400\n401\n402\n403\n404\n405\n406\n407\n408\n409\n410\n411\n412\n413\n414\n415\n416\n417\n418\n419\n420\n421\n422\n423\n424\n425\n426\n427\n428\n429\n430\n431\n432\n433\n434\n435\n436\n437\n438\n439\n440\n441\n442\n443\n444\n445\n446\n447\n448\n449\n450\n451\n452\n453\n454\n455\n456\n457\n458\n459\n460\n461\n462\n463\n464\n465\n466\n467\n468\n469\n470\n471\n472\n473\n474\n475\n476\n477\n478\n479\n480\n481\n482\n483\n484\n485\n486\n487\n488\n489\n490\n491\n492\n493\n494\n495\n496\n497\n498\n499\n500\n501\n502\n503\n504\n505\n506\n507\n508\n509\n510\n511\n512\n513\n514\n515\n516\n517\n518\n519\n520\n521\n522\n523\n524\n525\n526\n527\n528\n529\n530\n531\n532\n533\n534\n535\n536\n537\n538\n539\n540\n541\n542\n543\n544\n545\n546\n547\n548\n549\n550\n551\n552\n553\n554\n555\n556\n557\n558\n559\n560\n561\n562\n563\n564\n565\n566\n567\n568\n569\n570\n571\n572\n573\n574\n575\n576\n577\n578\n579\n580\n581\n582\n583\n584\n585\n586\n587\n588\n589\n590\n591\n592\n593\n594\n595\n596\n597\n598\n599\n600\n601\n602\n603\n604\n605\n606\n607\n608\n609\n610\n611\n612\n613\n614\n615\n616\n617\n618\n619\n620\n621\n622\n623\n624\n625\n626\n627\n628\n629\n630\n631\n632\n633\n634\n635\n636\n637\n638\n639\n640\n641\n642\n643\n644\n645\n646\n647\n648\n649\n650\n651\n652\n653\n654\n655\n656\n657\n658\n659\n660\n661\n662\n663\n664\n665\n666\n667\n668\n669\n670\n671\n672\n673\n674\n675\n676\n677\n678\n679\n680\n681\n682\n683\n684\n685\n686\n687\n688\n689\n690\n691\n692\n693\n694\n695\n696\n697\n698\n699\n700\n701\n702\n703\n704\n705\n706\n707\n708\n709\n710\n711\n712\n713\n714\n715\n716\n717\n718\n719\n720\n721\n722\n723\n724\n725\n726\n727\n728\n729\n730\n731\n732\n73
3\n734\n735\n736\n737\n738\n739\n740\n741\n742\n743\n744\n745\n746\n747\n748\n749\n750\n751\n752\n753\n754\n755\n756\n757\n758\n759\n760\n761\n762\n763\n764\n765\n766\n767\n768\n769\n770\n771\n772\n773\n774\n775\n776\n777\n778\n779\n780\n781\n782\n783\n784\n785\n786\n787\n788\n789\n790\n791\n792\n793\n794\n795\n796\n797\n798\n799\n800\n801\n802\n803\n804\n805\n806\n807\n808\n809\n810\n811\n812\n813\n814\n815\n816\n817\n818\n819\n820\n821\n822\n823\n824\n825\n826\n827\n828\n829\n830\n831\n832\n833\n834\n835\n836\n837\n838\n839\n840\n841\n842\n843\n844\n845\n846\n847\n848\n849\n850\n851\n852\n853\n854\n855\n856\n857\n858\n859\n860\n861\n862\n863\n864\n865\n866\n867\n868\n869\n870\n871\n872\n873\n874\n875\n876\n877\n878\n879\n880\n881\n882\n883\n884\n885\n886\n887\n888\n889\n890\n891\n892\n893\n894\n895\n896\n897\n898\n899\n900\n901\n902\n903\n904\n905\n906\n907\n908\n909\n910\n911\n912\n913\n914\n915\n916\n917\n918\n919\n920\n921\n922\n923\n924\n925\n926\n927\n928\n929\n930\n931\n932\n933\n934\n935\n936\n937\n938\n939\n940\n941\n942\n943\n944\n945\n946\n947\n948\n949\n950\n951\n952\n953\n954\n955\n956\n957\n958\n959\n960\n961\n962\n963\n964\n965\n966\n967\n968\n969\n970\n971\n972\n973\n974\n975\n976\n977\n978\n979\n980\n981\n982\n983\n984\n985\n986\n987\n988\n989\n990\n991\n992\n993\n994\n995\n996\n997\n998\n999\n1000\n1001\n1002\n1003\n1004\n1005\n1006\n1007\n1008\n1009\n1010\n1011\n1012\n1013\n1014\n1015\n1016\n1017\n1018\n1019\n1020\n1021\n1022\n1023\n1024\n1025\n1026\n1027\n1028\n1029\n1030\n1031\n1032\n1033\n1034\n1035\n1036\n1037\n1038\n1039\n1040\n1041\n1042\n1043\n1044\n1045\n1046\n1047\n1048\n1049\n1050\n1051\n1052\n1053\n1054\n1055\n1056\n1057\n1058\n1059\n1060\n1061\n1062\n1063\n1064\n1065\n1066\n1067\n1068\n1069\n1070\n1071\n1072\n1073\n1074\n1075\n1076\n1077\n1078\n1079\n1080\n1081\n1082\n1083\n1084\n1085\n1086\n1087\n1088\n1089\n1090\n1091\n1092\n1093\n1094\n1095\n1096\n1097\n1098\n1099\n1100\n1101\n1102\n1103\n1104\n1105\n1106\n1107\n1108\n1109\n1110\n1111\n1112\n1113\n1114\n1115\n1116\n1117\n1118\n1119\n1120\n1121\n1122\n1123\n1124\n1125\n1126\n1127\n1128\n1129\n1130\n1131\n1132\n1133\n1134\n1135\n1136\n1137\n1138\n1139\n1140\n1141\n1142\n1143\n1144\n1145\n1146\n1147\n1148\n1149\n1150\n1151\n1152\n1153\n1154\n1155\n1156\n1157\n1158\n1159\n1160\n1161\n1162\n1163\n1164\n1165\n1166\n1167\n1168\n1169\n1170\n1171\n1172\n1173\n1174\n1175\n1176\n1177\n1178\n1179\n1180\n1181\n1182\n1183\n1184\n1185\n1186\n1187\n1188\n1189\n1190\n1191\n1192\n1193\n1194\n1195\n1196\n1197\n1198\n1199\n1200\n1201\n1202\n1203\n1204\n1205\n1206\n1207\n1208\n1209\n1210\n1211\n1212\n1213\n1214\n1215\n1216\n1217\n1218\n1219\n1220\n1221\n1222\n1223\n1224\n1225\n1226\n1227\n1228\n1229\n1230\n1231\n1232\n1233\n1234\n1235\n1236\n1237\n1238\n1239\n1240\n1241\n1242\n1243\n1244\n1245\n1246\n1247\n1248\n1249\n1250\n1251\n1252\n1253\n1254\n1255\n1256\n1257\n1258\n1259\n1260\n1261\n1262\n1263\n1264\n1265\n1266\n1267\n1268\n1269\n1270\n1271\n1272\n1273\n1274\n1275\n1276\n1277\n1278\n1279\n1280\n1281\n1282\n1283\n1284\n1285\n1286\n1287\n1288\n1289\n1290\n1291\n1292\n1293\n1294\n1295\n1296\n1297\n1298\n1299\n1300\n1301\n1302\n1303\n1304\n1305\n1306\n1307\n1308\n1309\n1310\n1311\n1312\n1313\n1314\n1315\n1316\n1317\n1318\n1319\n1320\n1321\n1322\n1323\n1324\n1325\n1326\n1327\n1328\n1329\n1330\n1331\n1332\n1333\n1334\n1335\n1336\n1337\n1338\n1339\n1340\n1341\n1342\n1343\n1344\n1345\n1346\n1347\n1348\n1349\n1350\n1351\n1352\n1353\n1354\n1355\n1356\n1357\n1358\n1359\n1360\n1361\n1362\n1363\n1364\n1365\n1366\n1367\n1368\n1369\n13
70\n1371\n1372\n1373\n1374\n1375\n1376\n1377\n1378\n1379\n1380\n1381\n1382\n1383\n1384\n1385\n1386\n1387\n1388\n1389\n1390\n1391\n1392\n1393\n1394\n1395\n1396\n1397\n1398\n1399\n1400\n1401\n1402\n1403\n1404\n1405\n1406\n1407\n1408\n1409\n1410\n1411\n1412\n1413\n1414\n1415\n1416\n1417\n1418\n1419\n1420\n1421\n1422\n1423\n1424\n1425\n1426\n1427\n1428\n1429\n1430\n1431\n1432\n1433\n1434\n1435\n1436\n1437\n1438\n1439\n1440\n1441\n1442\n1443\n1444\n1445\n1446\n1447\n1448\n1449\n1450\n1451\n1452\n1453\n1454\n1455\n1456\n1457\n1458\n1459\n1460\n1461\n1462\n1463\n1464\n1465\n1466\n1467\n1468\n1469\n1470\n1471\n1472\n1473\n1474\n1475\n1476\n1477\n1478\n1479\n1480\n1481\n1482\n1483\n1484\n1485\n1486\n1487\n1488\n1489\n1490\n1491\n1492\n1493\n1494\n1495\n1496\n1497\n1498\n1499\n1500\n1501\n1502\n1503\n1504\n1505\n1506\n1507\n1508\n1509\n1510\n1511\n1512\n1513\n1514\n1515\n1516\n1517\n1518\n1519\n1520\n1521\n1522\n1523\n1524\n1525\n1526\n1527\n1528\n1529\n1530\n1531\n1532\n1533\n1534\n1535\n1536\n1537\n1538\n1539\n1540\n1541\n1542\n1543\n1544\n1545\n1546\n1547\n1548\n1549\n1550\n1551\n1552\n1553\n1554\n1555\n1556\n1557\n1558\n1559\n1560\n1561\n1562\n1563\n1564\n1565\n1566\n1567\n1568\n1569\n1570\n1571\n1572\n1573\n1574\n1575\n1576\n1577\n1578\n1579\n1580\n1581\n1582\n1583\n1584\n1585\n1586\n1587\n1588\n1589\n1590\n1591\n1592\n1593\n1594\n1595\n1596\n1597\n1598\n1599\n1600\n1601\n1602\n1603\n1604\n1605\n1606\n1607\n1608\n1609\n1610\n1611\n1612\n1613\n1614\n1615\n1616\n1617\n1618\n1619\n1620\n1621\n1622\n1623\n1624\n1625\n1626\n1627\n1628\n1629\n1630\n1631\n1632\n1633\n1634\n1635\n1636\n1637\n1638\n1639\n1640\n1641\n1642\n1643\n1644\n1645\n1646\n1647\n1648\n1649\n1650\n1651\n1652\n1653\n1654\n1655\n1656\n1657\n1658\n1659\n1660\n1661\n1662\n1663\n1664\n1665\n1666\n1667\n1668\n1669\n1670\n1671\n1672\n1673\n1674\n1675\n1676\n1677\n1678\n1679\n1680\n1681\n1682\n1683\n1684\n1685\n1686\n1687\n1688\n1689\n1690\n1691\n1692\n1693\n1694\n1695\n1696\n1697\n1698\n1699\n1700\n1701\n1702\n1703\n1704\n1705\n1706\n1707\n1708\n1709\n1710\n1711\n1712\n1713\n1714\n1715\n1716\n1717\n1718\n1719\n1720\n1721\n1722\n1723\n1724\n1725\n1726\n1727\n1728\n1729\n1730\n1731\n1732\n1733\n1734\n1735\n1736\n1737\n1738\n1739\n1740\n1741\n1742\n1743\n1744\n1745\n1746\n1747\n1748\n1749\n1750\n1751\n1752\n1753\n1754\n1755\n1756\n1757\n1758\n1759\n1760\n1761\n1762\n1763\n1764\n1765\n1766\n1767\n1768\n1769\n1770\n1771\n1772\n1773\n1774\n1775\n1776\n1777\n1778\n1779\n1780\n1781\n1782\n1783\n1784\n1785\n1786\n1787\n1788\n1789\n1790\n1791\n1792\n1793\n1794\n1795\n1796\n1797\n1798\n1799\n1800\n1801\n1802\n1803\n1804\n1805\n1806\n1807\n1808\n1809\n1810\n1811\n1812\n1813\n1814\n1815\n1816\n1817\n1818\n1819\n1820\n1821\n1822\n1823\n1824\n1825\n1826\n1827\n1828\n1829\n1830\n1831\n1832\n1833\n1834\n1835\n1836\n1837\n1838\n1839\n1840\n1841\n1842\n1843\n1844\n1845\n1846\n1847\n1848\n1849\n1850\n1851\n1852\n1853\n1854\n1855\n1856\n1857\n1858\n1859\n1860\n1861\n1862\n1863\n1864\n1865\n1866\n1867\n1868\n1869\n1870\n1871\n1872\n1873\n1874\n1875\n1876\n1877\n1878\n1879\n1880\n1881\n1882\n1883\n1884\n1885\n1886\n1887\n1888\n1889\n1890\n1891\n"
],
[
"for i in range(10):\n if i%3:\n print(i)\n ",
"1\n2\n4\n5\n7\n8\n"
],
[
"def AddNumbers(a, b):\n a = a+b\n return a",
"_____no_output_____"
],
[
"a = 3\nb = 7\n\nprint(AddNumbers(a,b), a)",
"10 3\n"
],
[
"1/20 * 3/5",
"_____no_output_____"
],
[
"(7/10 - 1/20)*3/5",
"_____no_output_____"
],
[
"8/400",
"_____no_output_____"
]
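The three float results above can be computed exactly with fractions.Fraction, which sidesteps binary rounding in chained arithmetic; a sketch:

from fractions import Fraction

print(Fraction(1, 20) * Fraction(3, 5))                       # 3/100
print((Fraction(7, 10) - Fraction(1, 20)) * Fraction(3, 5))   # 39/100
print(Fraction(8, 400))                                       # 1/50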
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b091ea5658d7b0bb88ec8c09906d4d456a33ad | 708,398 | ipynb | Jupyter Notebook | main.ipynb | eskilhamre/PPLM | e7a8fa14a2291a53cb0c6b4d1fdff0d26c6b9ac7 | [
"Apache-2.0"
] | null | null | null | main.ipynb | eskilhamre/PPLM | e7a8fa14a2291a53cb0c6b4d1fdff0d26c6b9ac7 | [
"Apache-2.0"
] | null | null | null | main.ipynb | eskilhamre/PPLM | e7a8fa14a2291a53cb0c6b4d1fdff0d26c6b9ac7 | [
"Apache-2.0"
] | null | null | null | 65.440924 | 7,046 | 0.67814 | [
[
[
"# Introduction\nThis notebook can be used to both train a discriminator on the AG news dataset and steering text generation in the direction of each of the four classes of this dataset, namely world, sports, business and sci/tech. \n\nMy code uses and builds on a text generation plug-and-play model developed by the Uber AI research team, which can be found here: https://github.com/uber-research/PPLM. \nI had a lot of problems setting up the code provided by the original paper, since the package versions is old and has a lot of incompatability issues. I spent loads of times trying to set up pip and/or anaconda environments to run their code, but there was always an issue. \nTherefore, I developed this in Google Colab, which seems to be the only place where I can run their code without problems. **I strongly recommend you running this in Google Colab as well**. Thus, my code is kind of hard to use exactly because the original PPLM code is hard to use. I forked the PPLM repo and removed lots of unecessary stuff, only keeping the parts I'm using in this notebook. Also, I added my newly trained discriminator model. \n \nBy running this entire notebook cell for cell, you both train the discriminator and performs the generation experiment. However, since I've already trained this very discriminator, you can skip those cells. You can also skip the cells corresponding to saving models and results to disk. I've marked the \"mandatory\" cells with the comment \"# MUST BE RUN\" for this purpose.\n\n## Main functionality\nThis notebook essentially just runs my experiment setup using the newly trained discriminator to steer text generation in the direction of the discriminator classes text. \n \nThe main function is named *text_generation*, which can be used to generate a user-chosen amount of samples using either an unperturbed model or perturbed model. In the latter case, the user might choose which class he/she wished to steer text generation towards. I should also say that it's not quite new functionality, it's based on some of the PPLM code modified to suit my experiment.\n\n## Termology used throughout:\n- Model setting: using a general language model (GPT-2) together with the discriminator fixed on optimizing for one specific class.\n- Perturbed and unperturbed: This is essentially whether a discriminator has been used in the text generation. For instance, unperturbated text is \"clean\", meaning unsteered, while perturbated text is steered in a class direction.",
"_____no_output_____"
],
[
"# Setup: import the code base from github, and install requirements",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\n!git clone https://github.com/eskilhamre/PPLM.git",
"Cloning into 'PPLM'...\nremote: Enumerating objects: 297, done.\u001b[K\nremote: Counting objects: 100% (47/47), done.\u001b[K\nremote: Compressing objects: 100% (47/47), done.\u001b[K\nremote: Total 297 (delta 20), reused 14 (delta 0), pack-reused 250\u001b[K\nReceiving objects: 100% (297/297), 2.46 MiB | 19.66 MiB/s, done.\nResolving deltas: 100% (125/125), done.\n"
],
[
"# MUST BE RUN\nimport os\nos.chdir('PPLM')",
"_____no_output_____"
],
[
"# MUST BE RUN\n!pip install -r requirements.txt",
"Collecting torch==1.7.0\n Downloading torch-1.7.0-cp37-cp37m-manylinux1_x86_64.whl (776.7 MB)\n\u001b[K |████████████████████████████████| 776.7 MB 4.3 kB/s \n\u001b[?25hCollecting nltk==3.4.5\n Downloading nltk-3.4.5.zip (1.5 MB)\n\u001b[K |████████████████████████████████| 1.5 MB 32.4 MB/s \n\u001b[?25hCollecting colorama==0.4.4\n Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)\nCollecting transformers==3.4.0\n Downloading transformers-3.4.0-py3-none-any.whl (1.3 MB)\n\u001b[K |████████████████████████████████| 1.3 MB 64.5 MB/s \n\u001b[?25hCollecting torchtext==0.3.1\n Downloading torchtext-0.3.1-py3-none-any.whl (62 kB)\n\u001b[K |████████████████████████████████| 62 kB 1.3 MB/s \n\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6)) (1.1.5)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 1)) (1.19.5)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 1)) (3.10.0.2)\nCollecting dataclasses\n Downloading dataclasses-0.6-py3-none-any.whl (14 kB)\nRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0->-r requirements.txt (line 1)) (0.16.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk==3.4.5->-r requirements.txt (line 2)) (1.15.0)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 77.6 MB/s \n\u001b[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (21.2)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (3.3.2)\nCollecting tokenizers==0.9.2\n Downloading tokenizers-0.9.2-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)\n\u001b[K |████████████████████████████████| 2.9 MB 51.0 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (2.23.0)\nCollecting sentencepiece!=0.1.92\n Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)\n\u001b[K |████████████████████████████████| 1.2 MB 61.0 MB/s \n\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (4.62.3)\nRequirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (3.17.3)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers==3.4.0->-r requirements.txt (line 4)) (2019.12.20)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->-r requirements.txt (line 6)) (2.8.2)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->-r requirements.txt (line 6)) (2018.9)\nRequirement already satisfied: pyparsing<3,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==3.4.0->-r requirements.txt (line 4)) (2.4.7)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.4.0->-r requirements.txt (line 4)) 
(2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.4.0->-r requirements.txt (line 4)) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.4.0->-r requirements.txt (line 4)) (2021.10.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.4.0->-r requirements.txt (line 4)) (3.0.4)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==3.4.0->-r requirements.txt (line 4)) (1.1.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==3.4.0->-r requirements.txt (line 4)) (7.1.2)\nBuilding wheels for collected packages: nltk\n Building wheel for nltk (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for nltk: filename=nltk-3.4.5-py3-none-any.whl size=1449922 sha256=06f23b40b68c3b0c3f1a527c02b60b16d5edf0d9f578661b8d10e7464f40289e\n Stored in directory: /root/.cache/pip/wheels/48/8b/7f/473521e0c731c6566d631b281f323842bbda9bd819eb9a3ead\nSuccessfully built nltk\nInstalling collected packages: dataclasses, torch, tokenizers, sentencepiece, sacremoses, transformers, torchtext, nltk, colorama\n Attempting uninstall: torch\n Found existing installation: torch 1.10.0+cu111\n Uninstalling torch-1.10.0+cu111:\n Successfully uninstalled torch-1.10.0+cu111\n Attempting uninstall: torchtext\n Found existing installation: torchtext 0.11.0\n Uninstalling torchtext-0.11.0:\n Successfully uninstalled torchtext-0.11.0\n Attempting uninstall: nltk\n Found existing installation: nltk 3.2.5\n Uninstalling nltk-3.2.5:\n Successfully uninstalled nltk-3.2.5\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntorchvision 0.11.1+cu111 requires torch==1.10.0, but you have torch 1.7.0 which is incompatible.\u001b[0m\nSuccessfully installed colorama-0.4.4 dataclasses-0.6 nltk-3.4.5 sacremoses-0.0.46 sentencepiece-0.1.96 tokenizers-0.9.2 torch-1.7.0 torchtext-0.3.1 transformers-3.4.0\n"
]
],
[
[
"# Train a discriminator on AG news dataset\n## First, download the general lanaguage model used to train the discriminator",
"_____no_output_____"
]
],
[
[
"from transformers.modeling_gpt2 import GPT2LMHeadModel\n# This downloads GPT-2 Medium, it takes a little while\n_ = GPT2LMHeadModel.from_pretrained(\"gpt2-medium\")",
"_____no_output_____"
]
],
[
[
"## Import the dataset\nThe data can be found at: https://www.kaggle.com/amananandrai/ag-news-classification-dataset/version/2?select=train.csv\n\nThe PPLM interface requires the data to be a tsv file containing the entire dataset, where the first column is the labels and seconds column the text. Thus, we have to prepare the dataset for this. \n\nFirst, download the dataset following the link above, and upload both files in the set below.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom google.colab import files\nimport torch\ntorch.cuda.is_available()",
"_____no_output_____"
],
[
"uploaded = files.upload()",
"_____no_output_____"
],
[
"data_fp = \"./ag-news-data.tsv\" # where we want to store our prepared dataset",
"_____no_output_____"
],
[
"def prepare_dataset(text_index,\n label_index, \n label_names=None\n ):\n train = pd.read_csv(\"train.csv\")\n test = pd.read_csv(\"test.csv\")\n all_data = pd.concat([train, test])\n all_data = all_data.iloc[:, [label_index, text_index]]\n\n if label_names:\n labels_map = {i+1: label_name for i, label_name in enumerate(label_names)} # here assuming labels are numerated 1,...,n, which is the case for AG news\n all_data.iloc[:, 0] = all_data.iloc[:, 0].map(labels_map) # exchange label numbers by their name\n\n return all_data",
"_____no_output_____"
],
[
"idx2class = [\"world\", \"sports\", \"business\", \"sci/tech\"]\ndata = prepare_dataset(2, 0, idx2class)\ndata.to_csv(data_fp, sep='\\t', index=False, header=False)",
"_____no_output_____"
],
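[
"# Sanity check (my addition, not part of the original pipeline): peek at the prepared TSV\n# to confirm the layout the PPLM training script expects -- tab-separated, no header,\n# label in the first column and text in the second.\npd.read_csv(data_fp, sep='\\t', header=None, names=['label', 'text']).head()",
"_____no_output_____"
],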
[
"from run_pplm_discrim_train import train_discriminator\n\n# ensure reproducible discriminator\ntorch.manual_seed(444)\nnp.random.seed(444)\n\ndiscriminator, disc_info = train_discriminator(\n dataset=\"generic\",\n dataset_fp=data_fp,\n pretrained_model=\"gpt2-medium\",\n epochs=8,\n learning_rate=0.0001,\n batch_size=128,\n log_interval=10,\n save_model=True,\n cached=False,\n no_cuda=False,\n output_fp='models/',\n idx2class=idx2class\n)",
"Preprocessing generic dataset...\n"
]
],
[
[
"We achieve about 90% accuracy on unseen data, which is pretty good in my opinion. I haven't studied the training accuracy (so I can't say the following for sure), but I don't think we're neither underfitting or overfitting here. This is good stuff! \nAlso, the validation/test accuracy seems to stagnate on ~90%, so more epochs would probably be in no use.",
"_____no_output_____"
],
[
"## Training the discriminator is done, let's download it",
"_____no_output_____"
]
],
[
[
"classifier_name = \"models/news_classifierhead.pt\"\ntorch.save(discriminator.get_classifier().state_dict(), \"models/news_classifierhead.pt\")\nfiles.download(classifier_name)",
"_____no_output_____"
]
],
[
[
"At this point, I put the newly generated model in the discrim_models/ folder, and updated my Github code to include this model. I went back to the beginning of the notebook and recloned the repo.",
"_____no_output_____"
],
[
"# Scoring the generated samples\nWhen doing manual comparison between text generated from different model settings, I'm only interested in comparing only the best sample for each model setting. The idea is to generate lots of samples using the same setting, and picking the best one based on some type of scoring. \nWhat do I mean by best samples? I'm automating this evaluation as means of scoring and ranking the sentences in a similar way as described in the PPLM paper; \n- fluency is measured by the general language model likelihood p(sentence). In scoring, I utilize the fact that the lower the language model loss(sentence), the higher the p(sentence). I use GPT-1 for this, as in the PPLM paper (in the paper however, they use GPT-1 to calculate perplexity, and as I understand it this should correspond to loss.)\n- diversity of words is measured by the mean of the (length normalized) Dist-1, Dist-2 and Dist-3 score, (the PPLM paper was inspired by the way they use this metric in this paper: https://arxiv.org/pdf/1510.03055.pdf)\n\n",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\nfrom transformers.modeling_gpt2 import GPT2LMHeadModel\nfrom transformers import GPT2Tokenizer, OpenAIGPTLMHeadModel, OpenAIGPTTokenizer\nfrom nltk import ngrams\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Instantiate models used for scoring samples",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# tokenizer and language model used for calculating fluency / \"perplexity\"\ngpt1_tokenizer = OpenAIGPTTokenizer.from_pretrained(\"openai-gpt\")\ngpt1_model = OpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")\ngpt1_model.eval()\ngpt1_model.to(device)\ndevice",
"_____no_output_____"
],
[
"# MUST BE RUN\n\n##############\n# This is to be used on all generated sentences (to be aggregated wrt. model setting), not used for selection\n##############\n\ndef lm_score(sentence):\n \"\"\"\n Calculates the language model total loss of the sentence.\n Code heavily inspired from: https://github.com/huggingface/transformers/issues/1009\n The total loss is equivalent to\n - [ log P(x1 | <|endoftext|>) + log P(x2 | x1, <|endoftext|>) + ... ]\n which means it corresponds to perplexity, and can be used as such in comparisons.\n \"\"\"\n tokens = gpt1_tokenizer.encode(sentence)\n input_ids = torch.tensor(tokens).unsqueeze(0)\n input_ids = input_ids.to(device)\n\n with torch.no_grad():\n outputs = gpt1_model(input_ids, labels=input_ids)\n loss, logits = outputs[:2]\n return loss.item() # * len(tokens) don't multiply with length, this would prefer shorter sentences\n\n###############\n# This is used for selecting the best sample when model setting and prefix is fixed\n###############\n\ndef dist_n_score(sentence, n):\n \"\"\"Calculates the number of distinct n-grams in the sentence, normalized by the sentence length\"\"\"\n if len(sentence.split()) < n:\n raise ValueError(\"Cannot find ngram of sentence with less than n words\")\n \n sentence = sentence.lower().strip()\n dist_n_grams = set()\n for n_gram in ngrams(sentence.split(), n):\n dist_n_grams.add(n_gram)\n \n return len(dist_n_grams) / len(sentence.split())\n\n\ndef dist_score(sentence):\n \"\"\"\n Calculcates the dist-1, dist-2 and dist-3 score of the sentence, as well as their mean\n \"\"\"\n sentence = sentence.lower().strip()\n sentence = sentence.replace(\".\", \"\").replace(\",\", \"\").replace(\"\\n\", \"\")\n \n dist_scores = [dist_n_score(sentence, n) for n in range(1, 4)]\n dist_1, dist_2, dist_3 = dist_scores\n return np.mean(dist_scores), dist_1, dist_2, dist_3\n\n\nsentences =['there is a book on the desk', 'there is a plane on the desk', 'there is a book in the desk', \"desk desk desk desk cat cat\"]\nprint([lm_score(s) for s in sentences])\nprint([dist_score(s)[0] for s in sentences])",
"[3.0594890117645264, 4.118381977081299, 3.2676448822021484, 9.095804214477539]\n[0.8571428571428572, 0.8571428571428572, 0.8571428571428572, 0.4444444444444444]\n"
]
],
[
[
"We can see that the most sensible of the four sentences receives lowest language model score (thus higher probability). Also, we can see that the non-sensible sentence receives both bad language model and dist-score.",
"_____no_output_____"
],
[
"# Text generation",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\nfrom run_pplm import run_pplm_example, generate_text_pplm, get_classifier, PPLM_DISCRIM",
"_____no_output_____"
],
[
"# MUST BE RUN\n\ndef text_generation(\n model,\n tokenizer,\n discrim=\"news\",\n class_label=\"sports\",\n prefix_text=\"Last summer\",\n perturb=False,\n num_samples=3,\n device=\"cuda\",\n length=150,\n stepsize=0.04,\n num_iterations=10,\n window_length=0, # 0 corresponds to entire sequence\n gamma=1.0,\n gm_scale=0.95,\n kl_scale=0.01,\n verbosity_level=1 # REGULAR\n):\n \"\"\"\n Used to generate a user-specified number of samples, with optional use of the discriminator\n to perturbate the generated samples in the direction of it's gradient.\n\n This is a modified version of the PPML text generation function to suit my experiment.\n\n Only supports generating text using discriminator models (BoW models not supported)\n The default hyper parameters chosen here are the same as in the PPLM Colab demo, since\n this seems to work great for discriminators.\n\n Returns a list of generated text samples and their corresponding attribute model losses\n \"\"\"\n\n # we pass the discriminator even if we want unpertubated text, since it's used for attribute scoring\n discrim_model, class_id = get_classifier(\n discrim,\n class_label,\n device\n )\n\n # encode prefix text\n tokenized_cond_text = tokenizer.encode(\n tokenizer.bos_token + prefix_text,\n add_special_tokens=False\n )\n\n gen_text_samples = []\n discrim_losses = []\n\n if device == 'cuda':\n torch.cuda.empty_cache()\n\n for i in range(num_samples):\n gen_tok_text, discrim_loss, _ = generate_text_pplm(\n model=model,\n tokenizer=tokenizer,\n context=tokenized_cond_text,\n device=device,\n perturb=perturb,\n classifier=discrim_model,\n class_label=class_id,\n loss_type=PPLM_DISCRIM, # BoW not supported as of now\n length=length,\n stepsize=stepsize,\n sample=True,\n num_iterations=num_iterations,\n horizon_length=1,\n window_length=window_length,\n gamma=gamma,\n gm_scale=gm_scale,\n kl_scale=kl_scale,\n verbosity_level=verbosity_level\n )\n\n \n gen_text = tokenizer.decode(gen_tok_text[0][1:]) # decode generated text\n gen_text_samples.append(gen_text)\n discrim_losses.append(discrim_loss.item()) #.data.cpu().numpy())\n \n if device == \"cuda\":\n torch.cuda.empty_cache()\n\n return gen_text_samples, discrim_losses\n\n\ndef select_best(gen_text_samples, discrim_losses):\n \"\"\"\n Given the outout from the text_generation function, filters away 3/4 of the \n generated samples based on mean dist-score, and rank the remaining 1/4 based\n on discriminator losses.\n \n Returns the best sample based smallest discriminator loss (the one maximizing \n the attribute, according to the discriminator)\n \"\"\"\n if len(gen_text_samples) < 4:\n raise ValueError(\"Cannot filter away 3/4 of less than 4 samples\")\n \n n_keep = 1 * len(gen_text_samples) // 4 # number of samples to keep\n\n # filter out the 3/4 samples with lowest mean dist-score\n mean_dists = [dist_score(sample)[0] for sample in gen_text_samples]\n idx_to_keep = np.argpartition(mean_dists, -n_keep)[-n_keep:] # indices of samples with highest mean dist score\n samples = np.array([gen_text_samples, discrim_losses, mean_dists]).T\n filtered_samples = samples[idx_to_keep]\n \n # fetch best sample among the remaining ones\n best_idx = np.argmin(filtered_samples[:, 1]) # index of sample with minimal discrim loss\n best_sample, smallest_loss, mean_dist = filtered_samples[best_idx]\n return best_sample, smallest_loss, mean_dist",
"_____no_output_____"
]
],
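[
[
"# Quick sanity check of select_best (my addition): four toy samples with made-up\n# discriminator losses. With 4 samples, the 3/4 filtering keeps only the sample with\n# the highest mean dist-n score -- the \"breaking news\" one -- and the function returns\n# that sample together with its loss and mean dist-score.\nfake_samples = [\n    \"the cat sat on the mat and looked at the mat\",\n    \"breaking news from the world of sports and finance today\",\n    \"word word word word word word\",\n    \"a b c a b c a b c\",\n]\nfake_losses = [0.9, 0.2, 0.5, 0.7]\nselect_best(fake_samples, fake_losses)",
"_____no_output_____"
]
],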
[
[
"## Import the base model used to sample from",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\npretrained_model = \"gpt2-medium\"\n\nmodel = GPT2LMHeadModel.from_pretrained(\n pretrained_model,\n output_hidden_states=True\n)\nmodel.to(device)\nmodel.eval()\n\n# Freeze GPT-2 weigths\nfor param in model.parameters():\n param.requires_grad = False\n\ntokenizer = GPT2Tokenizer.from_pretrained(pretrained_model)",
"_____no_output_____"
]
],
[
[
"## Sample from all combinations of model setting and prefix sentences\nFirst, let's create some data structures to gather relevant information from the sampling process.",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\n\n# most relevant hyper params wrt. speed\ngenerated_len = 120\nnum_samples = 12\n\nprefixes = [\n \"Last week\",\n \"The potato\",\n \"Breaking news:\",\n \"In the last year\",\n \"The president of the country\",\n]\n\nmodel_settings = [\"world\", \"sports\", \"business\", \"sci/tech\"] # the classes of the discriminator",
"_____no_output_____"
],
[
"# MUST BE RUN\n\n# data structures of generated results\ngen_samples = {model_setting: [] for model_setting in [\"unpert\"] + model_settings.copy()} # contains all generated samples\ncomparisons = {model_setting: {prefix: dict() for prefix in prefixes} for model_setting in model_settings} # contains the best samples for each model setting and prefix combo",
"_____no_output_____"
]
],
[
[
"The cell below runs the *entire* sampling process, it took ~5 hours to run on Googles Compute Engine backend using their GPUs. \n \nHere I decided that generate unperturbed text for each model setting. This might seem silly and redundant, since the unperturbed text is not affected by this choice. \nAnd while that is partly true, I did this to be able to calculate the discriminator losses of the generated text, so that I can select the \"best\" sample wrt. to the classes (even though it's best by chance). I thought this is only fair: the perturbed model gets many chances to generate a \"good\" sample (in the eyes of the discriminator), so the unperturbed model should also have this. \nAlso, I didn't find a easy way of using the discriminator to just score the text sample wrt. a class right of the bat. This is partly due to the fact that discriminator is actually trained on the transformer output.",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\n\n# since we're sampling, set seed for reproducibility\ntorch.manual_seed(444)\nnp.random.seed(444)\n\nn_combinations = len(prefixes) * len(model_settings)\ni = 1\nfor prefix_sentence in prefixes:\n for j, model_setting in enumerate(model_settings):\n print(f\"\\n\\nRun {i:3d}/{n_combinations:3d} : optimizing for class: {model_setting}, with prefix: {prefix_sentence}\\n\")\n \n unpert_text_samples, unpert_discrim_losses = text_generation(\n model,\n tokenizer,\n device=device,\n length=generated_len,\n num_samples=num_samples,\n prefix_text=prefix_sentence,\n discrim=\"news\",\n class_label=model_setting,\n perturb=False\n )\n\n pert_text_samples, pert_discrim_losses = text_generation(\n model,\n tokenizer,\n device=device,\n length=generated_len,\n num_samples=num_samples,\n prefix_text=prefix_sentence,\n discrim=\"news\",\n class_label=model_setting,\n perturb=True\n )\n # store generated samples\n if j == 0:\n gen_samples[\"unpert\"].extend(unpert_text_samples) # only store unpertubated generation once per prefix\n gen_samples[model_setting].extend(pert_text_samples)\n\n # save the best sample, it's discriminator loss and mean dist-score for both the perturbated and unperturbated samples\n comparisons[model_setting][prefix_sentence][\"unpert\"] = list(select_best(unpert_text_samples, unpert_discrim_losses))\n comparisons[model_setting][prefix_sentence][\"pert\"] = list(select_best(pert_text_samples, pert_discrim_losses))\n\n i += 1\n",
"\n\nRun 1/ 15 : optimizing for class: sports, with prefix: Last week\n\n<|endoftext|>Last week's\n<|endoftext|>Last week's announcement\n<|endoftext|>Last week's announcement that\n"
]
],
[
[
"## Generation analysis\nFirst, let's download the generated samples.",
"_____no_output_____"
]
],
[
[
"import json\n\nwith open(\"all-samples.json\", \"w\") as fp:\n json.dump(gen_samples, fp)\n files.download(\"all-samples.json\")\n\nwith open(\"comparisons.json\", \"w\") as fp:\n json.dump(comparisons, fp)\n files.download(\"comparisons.json\"\")",
"_____no_output_____"
]
],
[
[
"## Let's extract the metrics from the generated samples\nIn the code below, for each model setting, I calculate the perplexity score and dist-1, dist-2, and dist-3 scores for all samples. I then accumulate the mean and standard deviations of the scores wrt. to each model setting, to study how well each model setting actually performed in the experiment above.",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\n\nmetrics_means_dict = {}\nmetrics_stds_dict = {}\n\nfor model_setting, samples in gen_samples.items():\n perplexities = [lm_score(sample) for sample in samples]\n dist_scores = [dist_score(sample)[1:] for sample in samples] # stored as (mean_dist_score, dist-1, dist-2, dist-3), ignore mean\n all_metrics = np.c_[np.array(perplexities), np.array(dist_scores)]\n metrics_means_dict[model_setting] = np.mean(all_metrics, axis=0)\n metrics_stds_dict[model_setting] = np.std(all_metrics, axis=0)\n\n# structure the statistics neatly dataframes\nmetrics_means_df = pd.DataFrame(data=metrics_means_dict, index=[\"perplexity\", \"dist-1\", \"dist-2\", \"dist-3\"])\nmetrics_means_df = pd.DataFrame(data=metrics_means_dict, index=[\"perplexity\", \"dist-1\", \"dist-2\", \"dist-3\"])",
"_____no_output_____"
],
[
"# save the extracted statistics as csv files\nmetrics_means_df.to_csv(\"metrics-means.csv\")\nmetrics_std_df.to_csv(\"metrics-std.csv\")\nfiles.download(\"metrics-means.csv\")\nfiles.download(\"metrics-std.csv)",
"_____no_output_____"
]
],
[
[
"## Let's see the best examples for each model setting and prefix",
"_____no_output_____"
]
],
[
[
"# MUST BE RUN\n\nfor model_setting, prefix_dict in comparisons.items():\n print(f\"Model setting: {model_setting}\\n\")\n for prefix_sentence in prefix_dict.keys():\n unpert_sample, unpert_loss, unpert_mean_dist = prefix_dict[prefix_sentence][\"unpert\"]\n pert_sample, pert_loss, pert_mean_dist = prefix_dict[prefix_sentence][\"pert\"]\n\n print(f\"Prefix is: {prefix_sentence}\\n\")\n print(f\"Unperturbated:\\nSample: {unpert_sample}\\nDiscrim loss: {unpert_loss:2.2f} | Mean dist-n score: {unpert_mean_dist:2.1f}\\n\")\n print(f\" Perturbated:\\nSample: {pert_sample}\\nDiscrim loss: {pert_loss:2.2f} | Mean dist-n score: {pert_mean_dist:2.1f}\")\n print(\"\\n\\n\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
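[
"code"
],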
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b09438a69cc163a6092427dd079300dc71da51 | 821 | ipynb | Jupyter Notebook | book/_build/jupyter_execute/docs/1002_Saure_und_alkalische_Loesungen_in_unserer_Umwelt.ipynb | tom-tubeless/Chemie | bcd10e84b341121c260526c306f86b1556a6c034 | [
"MIT"
] | null | null | null | book/_build/jupyter_execute/docs/1002_Saure_und_alkalische_Loesungen_in_unserer_Umwelt.ipynb | tom-tubeless/Chemie | bcd10e84b341121c260526c306f86b1556a6c034 | [
"MIT"
] | null | null | null | book/_build/jupyter_execute/docs/1002_Saure_und_alkalische_Loesungen_in_unserer_Umwelt.ipynb | tom-tubeless/Chemie | bcd10e84b341121c260526c306f86b1556a6c034 | [
"MIT"
] | null | null | null | 17.847826 | 63 | 0.527406 | [
[
[
"# 1002_Saure und alkalische Lösungen in unserer Umwelt\n\n**8h**",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0b09aa5110fb02ee7d0dd5d50bfe63ad48748d3 | 6,517 | ipynb | Jupyter Notebook | 05_1_cross_validation_uni_class_cdc16/create_testing_sets_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 05_1_cross_validation_uni_class_cdc16/create_testing_sets_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 05_1_cross_validation_uni_class_cdc16/create_testing_sets_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 25.457031 | 109 | 0.515882 | [
[
[
"%run ../Python_files/util_data_storage_and_load.py",
"_____no_output_____"
],
[
"%run ../Python_files/load_dicts.py",
"_____no_output_____"
],
[
"%run ../Python_files/util.py",
"_____no_output_____"
],
[
"import numpy as np\nfrom numpy.linalg import inv",
"_____no_output_____"
],
[
"# load link flow data\n\nimport json\n\nwith open('../temp_files/link_day_minute_Apr_dict_JSON_adjusted.json', 'r') as json_file:\n link_day_minute_Apr_dict_JSON = json.load(json_file)",
"_____no_output_____"
],
[
"# week_day_Apr_list = [2, 3, 4, 5, 6, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 23, 24, 25, 26, 27, 30]\n\n# testing set 1\nweek_day_Apr_list_1 = [20, 23, 24, 25, 26, 27, 30]\n\n# testing set 2\nweek_day_Apr_list_2 = [11, 12, 13, 16, 17, 18, 19]\n\n# testing set 3\nweek_day_Apr_list_3 = [2, 3, 4, 5, 6, 9, 10]",
"_____no_output_____"
],
[
"link_flow_testing_set_Apr_MD_1 = []\nfor link_idx in range(24):\n for day in week_day_Apr_list_1: \n key = 'link_' + str(link_idx) + '_' + str(day)\n link_flow_testing_set_Apr_MD_1.append(link_day_minute_Apr_dict_JSON[key] ['MD_flow'])\n \nlink_flow_testing_set_Apr_MD_2 = []\nfor link_idx in range(24):\n for day in week_day_Apr_list_2: \n key = 'link_' + str(link_idx) + '_' + str(day)\n link_flow_testing_set_Apr_MD_2.append(link_day_minute_Apr_dict_JSON[key] ['MD_flow'])\n \nlink_flow_testing_set_Apr_MD_3 = []\nfor link_idx in range(24):\n for day in week_day_Apr_list_3: \n key = 'link_' + str(link_idx) + '_' + str(day)\n link_flow_testing_set_Apr_MD_3.append(link_day_minute_Apr_dict_JSON[key] ['MD_flow'])",
"_____no_output_____"
],
[
"testing_set_1 = np.matrix(link_flow_testing_set_Apr_MD_1)\ntesting_set_1 = np.matrix.reshape(testing_set_1, 24, 7)\ntesting_set_1 = np.nan_to_num(testing_set_1)\ny = np.array(np.transpose(testing_set_1))\ny = y[np.all(y != 0, axis=1)]\ntesting_set_1 = np.transpose(y)\ntesting_set_1 = np.matrix(testing_set_1)\n\ntesting_set_2 = np.matrix(link_flow_testing_set_Apr_MD_2)\ntesting_set_2 = np.matrix.reshape(testing_set_2, 24, 7)\ntesting_set_2 = np.nan_to_num(testing_set_2)\ny = np.array(np.transpose(testing_set_2))\ny = y[np.all(y != 0, axis=1)]\ntesting_set_2 = np.transpose(y)\ntesting_set_2 = np.matrix(testing_set_2)\n\ntesting_set_3 = np.matrix(link_flow_testing_set_Apr_MD_3)\ntesting_set_3 = np.matrix.reshape(testing_set_3, 24, 7)\ntesting_set_3 = np.nan_to_num(testing_set_3)\ny = np.array(np.transpose(testing_set_3))\ny = y[np.all(y != 0, axis=1)]\ntesting_set_3 = np.transpose(y)\ntesting_set_3 = np.matrix(testing_set_3)",
"_____no_output_____"
],
[
"np.size(testing_set_2, 0), np.size(testing_set_3, 1)",
"_____no_output_____"
],
[
"testing_set_3[:,:1]",
"_____no_output_____"
],
[
"# write testing sets to file\n\nzdump([testing_set_1, testing_set_2, testing_set_3], '../temp_files/testing_sets_Apr_MD.pkz')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0a56485b731405f27c12f3a898d394bc1b5bc | 3,485 | ipynb | Jupyter Notebook | FL by county.ipynb | kirbs-/covid-19-dataset | 3427880186a03339abf82688581b7aab9fe5cb72 | [
"MIT"
] | null | null | null | FL by county.ipynb | kirbs-/covid-19-dataset | 3427880186a03339abf82688581b7aab9fe5cb72 | [
"MIT"
] | null | null | null | FL by county.ipynb | kirbs-/covid-19-dataset | 3427880186a03339abf82688581b7aab9fe5cb72 | [
"MIT"
] | null | null | null | 24.892857 | 134 | 0.546915 | [
[
[
"Florida updates their data daily at 11am and 6pm EDT",
"_____no_output_____"
]
],
[
[
"from selenium import webdriver\nimport time\nimport pandas as pd\nimport pendulum\nimport re\nimport yaml\nfrom selenium.webdriver.chrome.options import Options\nchrome_options = Options()\n#chrome_options.add_argument(\"--disable-extensions\")\n#chrome_options.add_argument(\"--disable-gpu\")\n#chrome_options.add_argument(\"--no-sandbox) # linux only\nchrome_options.add_argument(\"--start-maximized\")\n# chrome_options.add_argument(\"--headless\")\nchrome_options.add_argument(\"user-agent=[Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:73.0) Gecko/20100101 Firefox/73.0]\")",
"_____no_output_____"
],
[
"with open('config.yaml', 'r') as f:\n config = yaml.safe_load(f.read())",
"_____no_output_____"
],
[
"state = 'FL'",
"_____no_output_____"
],
[
"scrape_timestamp = pendulum.now().strftime('%Y%m%d%H%M%S')",
"_____no_output_____"
],
[
"# FL positive cases by county\nurl = 'https://arcg.is/0nHO11'",
"_____no_output_____"
],
[
"def fetch():\n driver = webdriver.Chrome('../20190611 - Parts recommendation/chromedriver', options=chrome_options)\n\n driver.get(url)\n time.sleep(5)\n\n # class topBoxH1Text is used in the summary box.\n datatbl = driver.find_element_by_id('ember219').find_elements_by_class_name('external-html')\n\n data = [re.search('^(.*) \\((\\d*)\\)*', row.text).groups() for row in datatbl]\n\n page_source = driver.page_source\n driver.close()\n\n return pd.DataFrame(data, columns=['county','positive_cases']), page_source",
"_____no_output_____"
],
[
"def save(df, source):\n df.to_csv(f\"{config['data_folder']}/{state}_county_{scrape_timestamp}.txt\", sep='|', index=False)\n\n with open(f\"{config['data_source_backup_folder']}/{state}_county_{scrape_timestamp}.html\", 'w') as f:\n f.write(source)",
"_____no_output_____"
],
[
"def run():\n df, source = fetch()\n save(df, source)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0aa695375b81f5e75ee74d4ac9c60a0354c88 | 607,842 | ipynb | Jupyter Notebook | DS7_Sprint_Challenge_5_Regression_Classification.ipynb | sapinspys/DS-Unit-2-Regression-Classification | 9a4e74f73f8045575d23fd7f47ed40bbe49e62be | [
"MIT"
] | null | null | null | DS7_Sprint_Challenge_5_Regression_Classification.ipynb | sapinspys/DS-Unit-2-Regression-Classification | 9a4e74f73f8045575d23fd7f47ed40bbe49e62be | [
"MIT"
] | null | null | null | DS7_Sprint_Challenge_5_Regression_Classification.ipynb | sapinspys/DS-Unit-2-Regression-Classification | 9a4e74f73f8045575d23fd7f47ed40bbe49e62be | [
"MIT"
] | null | null | null | 192.415954 | 181,084 | 0.857917 | [
[
[
"<a href=\"https://colab.research.google.com/github/sapinspys/DS-Unit-2-Regression-Classification/blob/master/DS7_Sprint_Challenge_5_Regression_Classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"_Lambda School Data Science, Unit 2_\n \n# Regression & Classification Sprint Challenge\n\nTo demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.\n\nTo earn a score of \"3\", also do all the stretch goals.\n\nYou are permitted and encouraged to do as much data exploration as you want.",
"_____no_output_____"
],
[
"### Part 1, Classification\n- 1.1. Begin with baselines for classification\n- 1.2. Do train/test split. Arrange data into X features matrix and y target vector\n- 1.3. Use scikit-learn to fit a logistic regression model\n- 1.4. Report classification metric: accuracy\n\n### Part 2, Regression\n- 2.1. Begin with baselines for regression\n- 2.2. Do train/validate/test split. \n- 2.3. Arrange data into X features matrix and y target vector\n- 2.4. Do one-hot encoding\n- 2.5. Use scikit-learn to fit a linear regression (or ridge regression) model\n- 2.6. Report validation MAE and $R^2$\n\n### Stretch Goals, Regression\n- Make visualizations to explore relationships between features and target\n- Try at least 3 feature combinations. You may select features manually, or automatically\n- Report validation MAE and $R^2$ for each feature combination you try\n- Report test MAE and $R^2$ for your final model\n- Print or plot the coefficients for the features in your model",
"_____no_output_____"
]
],
[
[
"# If you're in Colab...\nimport sys\nin_colab = 'google.colab' in sys.modules\n\nif in_colab:\n !pip install category_encoders==2.0.0\n !pip install pandas-profiling==2.3.0\n !pip install plotly==4.1.1",
"Collecting category_encoders==2.0.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/6e/a1/f7a22f144f33be78afeb06bfa78478e8284a64263a3c09b1ef54e673841e/category_encoders-2.0.0-py2.py3-none-any.whl (87kB)\n\u001b[K |████████████████████████████████| 92kB 5.7MB/s \n\u001b[?25hRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (0.10.1)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (1.3.1)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (0.5.1)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (1.16.5)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (0.21.3)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders==2.0.0) (0.24.2)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders==2.0.0) (1.12.0)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders==2.0.0) (0.13.2)\nRequirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.0.0) (2.5.3)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders==2.0.0) (2018.9)\nInstalling collected packages: category-encoders\nSuccessfully installed category-encoders-2.0.0\nCollecting pandas-profiling==2.3.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/2c/2f/aae19e2173c10a9bb7fee5f5cad35dbe53a393960fc91abc477dcc4661e8/pandas-profiling-2.3.0.tar.gz (127kB)\n\u001b[K |████████████████████████████████| 133kB 4.8MB/s \n\u001b[?25hRequirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0) (0.24.2)\nRequirement already satisfied: matplotlib>=1.4 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0) (3.0.3)\nRequirement already satisfied: jinja2>=2.8 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0) (2.10.1)\nRequirement already satisfied: missingno>=0.4.2 in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0) (0.4.2)\nCollecting htmlmin>=0.1.12 (from pandas-profiling==2.3.0)\n Downloading https://files.pythonhosted.org/packages/b3/e7/fcd59e12169de19f0131ff2812077f964c6b960e7c09804d30a7bf2ab461/htmlmin-0.1.12.tar.gz\nCollecting phik>=0.9.8 (from pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/45/ad/24a16fa4ba612fb96a3c4bb115a5b9741483f53b66d3d3afd987f20fa227/phik-0.9.8-py3-none-any.whl (606kB)\n\u001b[K |████████████████████████████████| 614kB 36.9MB/s \n\u001b[?25hCollecting confuse>=1.0.0 (from pandas-profiling==2.3.0)\n Downloading https://files.pythonhosted.org/packages/4c/6f/90e860cba937c174d8b3775729ccc6377eb91f52ad4eeb008e7252a3646d/confuse-1.0.0.tar.gz\nRequirement already satisfied: astropy in /usr/local/lib/python3.6/dist-packages (from pandas-profiling==2.3.0) (3.0.5)\nRequirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pandas-profiling==2.3.0) (2018.9)\nRequirement already satisfied: numpy>=1.12.0 in 
/usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pandas-profiling==2.3.0) (1.16.5)\nRequirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pandas-profiling==2.3.0) (2.5.3)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.3.0) (2.4.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.3.0) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4->pandas-profiling==2.3.0) (1.1.0)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.8->pandas-profiling==2.3.0) (1.1.1)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from missingno>=0.4.2->pandas-profiling==2.3.0) (1.3.1)\nRequirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from missingno>=0.4.2->pandas-profiling==2.3.0) (0.9.0)\nRequirement already satisfied: numba>=0.38.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0) (0.40.1)\nRequirement already satisfied: jupyter-client>=5.2.3 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0) (5.3.1)\nRequirement already satisfied: nbconvert>=5.3.1 in /usr/local/lib/python3.6/dist-packages (from phik>=0.9.8->pandas-profiling==2.3.0) (5.6.0)\nCollecting pytest>=4.0.2 (from phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/2f/19/d5f71752f71451ccc5ed5f6739e9da4a235f38783fdaf3629cae41b2ca7b/pytest-5.1.2-py3-none-any.whl (224kB)\n\u001b[K |████████████████████████████████| 225kB 43.5MB/s \n\u001b[?25hCollecting pytest-pylint>=0.13.0 (from phik>=0.9.8->pandas-profiling==2.3.0)\n Downloading https://files.pythonhosted.org/packages/64/dc/6f35f114844fb12e38d60c4f3d2441a55baff7043ad4e013777dff55746c/pytest_pylint-0.14.1-py3-none-any.whl\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from confuse>=1.0.0->pandas-profiling==2.3.0) (3.13)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.5.0->pandas>=0.19->pandas-profiling==2.3.0) (1.12.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=1.4->pandas-profiling==2.3.0) (41.2.0)\nRequirement already satisfied: llvmlite>=0.25.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba>=0.38.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.29.0)\nRequirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (4.5.0)\nRequirement already satisfied: traitlets in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (4.3.2)\nRequirement already satisfied: tornado>=4.1 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (4.5.3)\nRequirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (17.0.0)\nRequirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (3.1.0)\nRequirement already satisfied: 
pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (1.4.2)\nRequirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.8.4)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.6.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (2.1.3)\nRequirement already satisfied: nbformat>=4.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (4.4.0)\nRequirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.3)\nRequirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.4.2)\nRequirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (1.8.0)\nRequirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (7.2.0)\nCollecting pluggy<1.0,>=0.12 (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0)\n Downloading https://files.pythonhosted.org/packages/92/c7/48439f7d5fd6bddb4c04b850bb862b42e3e2b98570040dfaf68aedd8114b/pluggy-0.13.0-py2.py3-none-any.whl\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (0.1.7)\nRequirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (19.1.0)\nRequirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (1.3.0)\nRequirement already satisfied: importlib-metadata>=0.12; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (0.20)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (19.1)\nCollecting pylint>=1.4.5 (from pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/60/c2/b3f73f4ac008bef6e75bca4992f3963b3f85942e0277237721ef1c151f0d/pylint-2.3.1-py3-none-any.whl (765kB)\n\u001b[K |████████████████████████████████| 768kB 45.4MB/s \n\u001b[?25hRequirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from traitlets->jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (4.4.0)\nRequirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets->jupyter-client>=5.2.3->phik>=0.9.8->pandas-profiling==2.3.0) (0.2.0)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (0.5.1)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4->nbconvert>=5.3.1->phik>=0.9.8->pandas-profiling==2.3.0) (2.6.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages 
(from importlib-metadata>=0.12; python_version < \"3.8\"->pytest>=4.0.2->phik>=0.9.8->pandas-profiling==2.3.0) (0.6.0)\nCollecting mccabe<0.7,>=0.6 (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl\nCollecting isort<5,>=4.2.5 (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)\n\u001b[K |████████████████████████████████| 51kB 15.4MB/s \n\u001b[?25hCollecting astroid<3,>=2.2.0 (from pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d5/ad/7221a62a2dbce5c3b8c57fd18e1052c7331adc19b3f27f1561aa6e620db2/astroid-2.2.5-py3-none-any.whl (193kB)\n\u001b[K |████████████████████████████████| 194kB 51.9MB/s \n\u001b[?25hCollecting typed-ast>=1.3.0; implementation_name == \"cpython\" (from astroid<3,>=2.2.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/31/d3/9d1802c161626d0278bafb1ffb32f76b9d01e123881bbf9d91e8ccf28e18/typed_ast-1.4.0-cp36-cp36m-manylinux1_x86_64.whl (736kB)\n\u001b[K |████████████████████████████████| 737kB 36.1MB/s \n\u001b[?25hRequirement already satisfied: wrapt in /usr/local/lib/python3.6/dist-packages (from astroid<3,>=2.2.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0) (1.11.2)\nCollecting lazy-object-proxy (from astroid<3,>=2.2.0->pylint>=1.4.5->pytest-pylint>=0.13.0->phik>=0.9.8->pandas-profiling==2.3.0)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/0e/26/534a6d32572a9dbca11619321535c0a7ab34688545d9d67c2c204b9e3a3d/lazy_object_proxy-1.4.2-cp36-cp36m-manylinux1_x86_64.whl (49kB)\n\u001b[K |████████████████████████████████| 51kB 19.5MB/s \n\u001b[?25hBuilding wheels for collected packages: pandas-profiling, htmlmin, confuse\n Building wheel for pandas-profiling (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pandas-profiling: filename=pandas_profiling-2.3.0-py2.py3-none-any.whl size=145035 sha256=f4c4cc93c1f36b718263d47260574d6248e35272a2d88e5aac952bbb727ba835\n Stored in directory: /root/.cache/pip/wheels/ce/c7/f1/dbfef4848ebb048cb1d4a22d1ed0c62d8ff2523747235e19fe\n Building wheel for htmlmin (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for htmlmin: filename=htmlmin-0.1.12-cp36-none-any.whl size=27084 sha256=dbd22fdb9a39619cc8bcabb02d1707c95b42799116385e67cb2da7dcb7030c8b\n Stored in directory: /root/.cache/pip/wheels/43/07/ac/7c5a9d708d65247ac1f94066cf1db075540b85716c30255459\n Building wheel for confuse (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for confuse: filename=confuse-1.0.0-cp36-none-any.whl size=17486 sha256=c99643983359d92dc05809b720add2f90d43f5a2c8089c63fac41b37cf71b962\n Stored in directory: /root/.cache/pip/wheels/b0/b2/96/2074eee7dbf7b7df69d004c9b6ac4e32dad04fb7666cf943bd\nSuccessfully built pandas-profiling htmlmin confuse\n\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\nInstalling collected packages: htmlmin, pluggy, pytest, mccabe, isort, typed-ast, lazy-object-proxy, astroid, pylint, pytest-pylint, phik, confuse, pandas-profiling\n Found existing installation: pluggy 0.7.1\n Uninstalling pluggy-0.7.1:\n Successfully uninstalled pluggy-0.7.1\n Found existing installation: pytest 3.6.4\n Uninstalling pytest-3.6.4:\n Successfully uninstalled pytest-3.6.4\n Found existing installation: pandas-profiling 1.4.1\n Uninstalling pandas-profiling-1.4.1:\n Successfully uninstalled pandas-profiling-1.4.1\nSuccessfully installed astroid-2.2.5 confuse-1.0.0 htmlmin-0.1.12 isort-4.3.21 lazy-object-proxy-1.4.2 mccabe-0.6.1 pandas-profiling-2.3.0 phik-0.9.8 pluggy-0.13.0 pylint-2.3.1 pytest-5.1.2 pytest-pylint-0.14.1 typed-ast-1.4.0\nRequirement already satisfied: plotly==4.1.1 in /usr/local/lib/python3.6/dist-packages (4.1.1)\nRequirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from plotly==4.1.1) (1.3.3)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from plotly==4.1.1) (1.12.0)\n"
]
],
[
[
"# Part 1, Classification: Predict Blood Donations 🚑\nOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.\n\nThe goal is to predict whether the donor made a donation in March 2007, using information about each donor's history.\n\nGood data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndonors = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')\nassert donors.shape == (748,5)\n\ndonors = donors.rename(columns={\n 'Recency (months)': 'months_since_last_donation', \n 'Frequency (times)': 'number_of_donations', \n 'Monetary (c.c. blood)': 'total_volume_donated', \n 'Time (months)': 'months_since_first_donation', \n 'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'\n})\n\ndonors.head()",
"_____no_output_____"
]
],
[
[
"## 1.1. Begin with baselines\n\nWhat accuracy score would you get here with a \"majority class baseline\"?\n \n(You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)",
"_____no_output_____"
]
],
[
[
"y_train = donors.made_donation_in_march_2007\ny_train.value_counts(normalize=True)",
"_____no_output_____"
],
[
"# We can cross-check this using scikit-learn\nmajority_class = y_train.mode()[0]\nmajority_class",
"_____no_output_____"
],
[
"y_pred = [majority_class] * len(y_train)\nlen(y_pred)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(y_train, y_pred)",
"_____no_output_____"
]
],
[
[
"## 1.2. Do train/test split. Arrange data into X features matrix and y target vector\n\nDo these steps in either order.\n\nSplit randomly. Use scikit-learn's train/test split function. Include 75% of the data in the train set, and hold out 25% for the test set.",
"_____no_output_____"
]
],
[
[
"train_features = donors[donors.columns.difference(['made_donation_in_march_2007'])]\nprint(train_features.shape)\ntrain_features.head()",
"(748, 4)\n"
],
[
"train_labels = donors['made_donation_in_march_2007']\nprint(train_labels.shape)\ntrain_labels.head()",
"(748,)\n"
],
[
"from sklearn.model_selection import train_test_split\n\nX_train = train_features\ny_train = train_labels\n\nX_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, train_size=0.75, test_size=0.25, \n stratify=y_train\n) \n\nX_train.shape, X_val.shape, y_train.shape, y_val.shape",
"_____no_output_____"
],
[
"X_train.isnull().sum()",
"_____no_output_____"
],
[
"train_labels.isnull().sum()",
"_____no_output_____"
]
],
[
[
"## 1.3. Use scikit-learn to fit a logistic regression model\n\nYou may use any number of features",
"_____no_output_____"
]
],
[
[
"# No categorical features, only numerical, no need for OHE\nfrom sklearn.linear_model import LogisticRegression\n\n# Right now we include all numerical features\n\n# If we want to split into less features:\n# X_train_subset = X_train[features]\n# X_val_subset = X_val[features]\n\nmodel = LogisticRegression(\n solver='lbfgs', multi_class='auto', max_iter=1000, n_jobs=-1) # Optimized from class\n\nmodel.fit(X_train, y_train)",
"_____no_output_____"
]
],
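[
[
"# A peek at the fitted coefficients (my addition; the sprint only asks for accuracy).\n# Positive weights push the prediction toward class 1 (donated in March 2007). Note that\n# the features are unscaled here, so coefficient magnitudes are not directly comparable.\npd.Series(model.coef_[0], index=X_train.columns).sort_values()",
"_____no_output_____"
]
],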
[
[
"## 1.4. Report classification metric: accuracy\n\nWhat is your model's accuracy on the test set?\n\nDon't worry if your model doesn't beat the mean baseline. That's okay!\n\n_\"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.\"_ —[John Tukey](https://en.wikiquote.org/wiki/John_Tukey)\n\n",
"_____no_output_____"
]
],
[
[
"print('Validation Accuracy', model.score(X_val, y_val))",
"Validation Accuracy 0.7807486631016043\n"
]
],
[
[
"# Part 2, Regression: Predict home prices in Ames, Iowa 🏠\n\nYou'll use historical housing data. There's a data dictionary at the bottom of the notebook. \n\nRun this code cell to load the dataset:\n\n\n\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nURL = 'https://drive.google.com/uc?export=download&id=1522WlEW6HFss36roD_Cd9nybqSuiVcCK'\nhomes = pd.read_csv(URL)\nassert homes.shape == (2904, 47)\n\nhomes.head()",
"_____no_output_____"
]
],
[
[
"## 2.1. Begin with baselines\n\nWhat is the Mean Absolute Error and R^2 score for a mean baseline?",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\nimport numpy as np\n\ny_train = homes['SalePrice']\ny_pred_train = [y_train.mean()] * len(y_train)\n\nprint('Mean Baseline:')\nprint('Train Mean Absolute Error:', mean_absolute_error(y_train, y_pred_train))\nprint('Train R^2 Score:', r2_score(y_train, y_pred_train))",
"Mean Baseline:\nTrain Mean Absolute Error: 58149.92774120811\nTrain R^2 Score: 0.0\n"
],
[
"# 0% indicates that the model explains none of the variability of the response data around its mean.",
"_____no_output_____"
]
],
[
[
"## 2.2. Do train/test split\n\nTrain on houses sold in the years 2006 - 2008. (1,920 rows)\n\nValidate on house sold in 2009. (644 rows)\n\nTest on houses sold in 2010. (340 rows)",
"_____no_output_____"
]
],
[
[
"train = homes[(homes.Yr_Sold >= 2006) & (homes.Yr_Sold <= 2008)]\nprint(train.shape)",
"(1920, 47)\n"
],
[
"val = homes[homes.Yr_Sold == 2009]\nprint(val.shape)",
"(644, 47)\n"
],
[
"test = homes[homes.Yr_Sold == 2010]\nprint(test.shape)",
"(340, 47)\n"
]
],
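[
[
"# Quick check (my addition): the split should cover all 2,904 rows, matching the\n# 1,920 / 644 / 340 counts described above.\nassert len(train) + len(val) + len(test) == len(homes)\nlen(train), len(val), len(test)",
"_____no_output_____"
]
],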
[
[
"## 2.3. Arrange data into X features matrix and y target vector\n\nSelect at least one numeric feature and at least one categorical feature.\n\nOtherwise, you many choose whichever features and however many you want.",
"_____no_output_____"
]
],
[
[
"homes.isnull().sum()",
"_____no_output_____"
],
[
"target = 'SalePrice'\n\nnumeric_features = train[train.columns.difference(['SalePrice'])].select_dtypes(include='number').columns.tolist()\n\ncardinality = train.select_dtypes(exclude='number').nunique()\nlow_cardinality_features = cardinality[cardinality <= 10].index.tolist()\n\nfeatures = numeric_features + low_cardinality_features\nfeatures",
"_____no_output_____"
]
],
[
[
"## 2.4. Do one-hot encoding\n\nEncode your categorical feature(s).",
"_____no_output_____"
]
],
[
[
"X_train = train[features]\ny_train = train[target]\n\nX_val = val[features]\ny_val = val[target]\n\nX_test = test[features]\ny_test = test[target]",
"_____no_output_____"
],
[
"import category_encoders as ce\nfrom sklearn.preprocessing import StandardScaler\n\nX_train_subset = X_train[features]\nX_val_subset = X_val[features]\nX_test_subset = X_test[features]\n\nencoder = ce.OneHotEncoder(use_cat_names=True)\nX_train_encoded = encoder.fit_transform(X_train_subset)\nX_val_encoded = encoder.transform(X_val_subset)\nX_test_encoded = encoder.transform(X_test_subset)\n\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train_encoded)\nX_val_scaled = scaler.transform(X_val_encoded)\nX_test_scaled = scaler.fit_transform(X_test_encoded)\n\nprint(X_train_scaled.shape, X_val_scaled.shape, X_test_scaled.shape)",
"(1920, 157) (644, 157) (340, 157)\n"
]
],
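[
[
"# Inspect a few of the one-hot encoded columns (my addition): each low-cardinality\n# categorical feature is expanded into one 0/1 column per category value.\nX_train_encoded.columns[:15].tolist()",
"_____no_output_____"
]
],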
[
[
"## 2.5. Use scikit-learn to fit a linear regression (or ridge regression) model\nFit your model.",
"_____no_output_____"
]
],
[
[
"# STRETCH GOAL: Try at least 3 feature combinations. You may select features manually, or automatically\n# STRETCH GOAL: Report validation MAE and R2 for each feature combination you try\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.feature_selection import f_regression, SelectKBest\n\nfor k in range(1, len(X_train_encoded.columns)+1):\n print(f'{k} features')\n \n selector = SelectKBest(score_func=f_regression, k=k)\n X_train_selected = selector.fit_transform(X_train_scaled, y_train)\n X_test_selected = selector.transform(X_test_scaled)\n\n model = RidgeCV()\n model.fit(X_train_selected, y_train)\n \n y_pred = model.predict(X_test_selected)\n mae = mean_absolute_error(y_test, y_pred)\n print(f'Test MAE: ${mean_absolute_error(y_test, y_pred):,.0f}')\n print(f'Test R2: {r2_score(y_test, y_pred):,.3f} \\n')",
"1 features\nTest MAE: $35,831\nTest R2: 0.570 \n\n2 features\nTest MAE: $30,063\nTest R2: 0.700 \n\n3 features\nTest MAE: $27,450\nTest R2: 0.741 \n\n4 features\nTest MAE: $27,428\nTest R2: 0.744 \n\n5 features\nTest MAE: $27,420\nTest R2: 0.745 \n\n6 features\nTest MAE: $25,676\nTest R2: 0.770 \n\n7 features\nTest MAE: $25,671\nTest R2: 0.771 \n\n8 features\nTest MAE: $25,330\nTest R2: 0.775 \n\n9 features\nTest MAE: $25,349\nTest R2: 0.777 \n\n10 features\nTest MAE: $23,751\nTest R2: 0.810 \n\n11 features\nTest MAE: $23,823\nTest R2: 0.810 \n\n12 features\nTest MAE: $23,898\nTest R2: 0.812 \n\n13 features\nTest MAE: $23,929\nTest R2: 0.813 \n\n14 features\nTest MAE: $23,601\nTest R2: 0.821 \n\n15 features\nTest MAE: $23,521\nTest R2: 0.822 \n\n16 features\nTest MAE: $23,486\nTest R2: 0.823 \n\n17 features\nTest MAE: $23,503\nTest R2: 0.823 \n\n18 features\nTest MAE: $23,457\nTest R2: 0.822 \n\n19 features\nTest MAE: $23,438\nTest R2: 0.823 \n\n20 features\nTest MAE: $23,409\nTest R2: 0.823 \n\n21 features\nTest MAE: $23,388\nTest R2: 0.825 \n\n22 features\nTest MAE: $23,382\nTest R2: 0.825 \n\n23 features\nTest MAE: $23,473\nTest R2: 0.823 \n\n24 features\nTest MAE: $23,560\nTest R2: 0.821 \n\n25 features\nTest MAE: $23,506\nTest R2: 0.821 \n\n26 features\nTest MAE: $23,526\nTest R2: 0.821 \n\n27 features\nTest MAE: $23,562\nTest R2: 0.820 \n\n28 features\nTest MAE: $23,575\nTest R2: 0.820 \n\n29 features\nTest MAE: $23,225\nTest R2: 0.825 \n\n30 features\nTest MAE: $23,055\nTest R2: 0.827 \n\n31 features\nTest MAE: $23,055\nTest R2: 0.827 \n\n32 features\nTest MAE: $22,967\nTest R2: 0.827 \n\n33 features\nTest MAE: $22,217\nTest R2: 0.835 \n\n34 features\nTest MAE: $22,245\nTest R2: 0.834 \n\n35 features\nTest MAE: $22,217\nTest R2: 0.834 \n\n36 features\nTest MAE: $22,207\nTest R2: 0.835 \n\n37 features\nTest MAE: $22,155\nTest R2: 0.835 \n\n38 features\nTest MAE: $22,133\nTest R2: 0.835 \n\n39 features\nTest MAE: $22,133\nTest R2: 0.835 \n\n40 features\nTest MAE: $22,123\nTest R2: 0.835 \n\n41 features\nTest MAE: $22,166\nTest R2: 0.835 \n\n42 features\nTest MAE: $22,108\nTest R2: 0.836 \n\n43 features\nTest MAE: $21,839\nTest R2: 0.840 \n\n44 features\nTest MAE: $21,782\nTest R2: 0.841 \n\n45 features\nTest MAE: $21,841\nTest R2: 0.841 \n\n46 features\nTest MAE: $21,995\nTest R2: 0.840 \n\n47 features\nTest MAE: $22,005\nTest R2: 0.840 \n\n48 features\nTest MAE: $21,981\nTest R2: 0.840 \n\n49 features\nTest MAE: $21,968\nTest R2: 0.840 \n\n50 features\nTest MAE: $21,968\nTest R2: 0.840 \n\n51 features\nTest MAE: $21,924\nTest R2: 0.840 \n\n52 features\nTest MAE: $21,901\nTest R2: 0.842 \n\n53 features\nTest MAE: $21,863\nTest R2: 0.842 \n\n54 features\nTest MAE: $22,138\nTest R2: 0.841 \n\n55 features\nTest MAE: $21,932\nTest R2: 0.846 \n\n56 features\nTest MAE: $21,789\nTest R2: 0.848 \n\n57 features\nTest MAE: $21,681\nTest R2: 0.851 \n\n58 features\nTest MAE: $21,691\nTest R2: 0.851 \n\n59 features\nTest MAE: $21,315\nTest R2: 0.854 \n\n60 features\nTest MAE: $21,381\nTest R2: 0.853 \n\n61 features\nTest MAE: $21,411\nTest R2: 0.853 \n\n62 features\nTest MAE: $21,413\nTest R2: 0.853 \n\n63 features\nTest MAE: $21,329\nTest R2: 0.854 \n\n64 features\nTest MAE: $21,322\nTest R2: 0.854 \n\n65 features\nTest MAE: $21,338\nTest R2: 0.854 \n\n66 features\nTest MAE: $21,249\nTest R2: 0.854 \n\n67 features\nTest MAE: $21,240\nTest R2: 0.855 \n\n68 features\nTest MAE: $21,135\nTest R2: 0.855 \n\n69 features\nTest MAE: $21,122\nTest R2: 0.855 \n\n70 features\nTest MAE: $21,071\nTest R2: 
0.855 \n\n71 features\nTest MAE: $20,721\nTest R2: 0.860 \n\n72 features\nTest MAE: $20,711\nTest R2: 0.860 \n\n73 features\nTest MAE: $20,711\nTest R2: 0.860 \n\n74 features\nTest MAE: $20,801\nTest R2: 0.859 \n\n75 features\nTest MAE: $20,662\nTest R2: 0.861 \n\n76 features\nTest MAE: $20,688\nTest R2: 0.861 \n\n77 features\nTest MAE: $20,696\nTest R2: 0.861 \n\n78 features\nTest MAE: $20,698\nTest R2: 0.861 \n\n79 features\nTest MAE: $20,602\nTest R2: 0.862 \n\n80 features\nTest MAE: $20,479\nTest R2: 0.862 \n\n81 features\nTest MAE: $20,473\nTest R2: 0.862 \n\n82 features\nTest MAE: $20,457\nTest R2: 0.862 \n\n83 features\nTest MAE: $20,461\nTest R2: 0.862 \n\n84 features\nTest MAE: $20,462\nTest R2: 0.862 \n\n85 features\nTest MAE: $20,461\nTest R2: 0.862 \n\n86 features\nTest MAE: $20,467\nTest R2: 0.862 \n\n87 features\nTest MAE: $20,457\nTest R2: 0.862 \n\n88 features\nTest MAE: $20,584\nTest R2: 0.859 \n\n89 features\nTest MAE: $20,590\nTest R2: 0.859 \n\n90 features\nTest MAE: $20,570\nTest R2: 0.859 \n\n91 features\nTest MAE: $20,565\nTest R2: 0.859 \n\n92 features\nTest MAE: $20,603\nTest R2: 0.859 \n\n93 features\nTest MAE: $20,560\nTest R2: 0.860 \n\n94 features\nTest MAE: $20,585\nTest R2: 0.860 \n\n95 features\nTest MAE: $20,574\nTest R2: 0.860 \n\n96 features\nTest MAE: $20,573\nTest R2: 0.860 \n\n97 features\nTest MAE: $20,576\nTest R2: 0.860 \n\n98 features\nTest MAE: $20,654\nTest R2: 0.860 \n\n99 features\nTest MAE: $20,751\nTest R2: 0.859 \n\n100 features\nTest MAE: $20,826\nTest R2: 0.858 \n\n101 features\nTest MAE: $20,838\nTest R2: 0.858 \n\n102 features\nTest MAE: $20,839\nTest R2: 0.858 \n\n103 features\nTest MAE: $20,904\nTest R2: 0.858 \n\n104 features\nTest MAE: $20,929\nTest R2: 0.858 \n\n105 features\nTest MAE: $20,948\nTest R2: 0.858 \n\n106 features\nTest MAE: $20,961\nTest R2: 0.858 \n\n107 features\nTest MAE: $20,962\nTest R2: 0.858 \n\n108 features\nTest MAE: $20,962\nTest R2: 0.858 \n\n109 features\nTest MAE: $20,965\nTest R2: 0.858 \n\n110 features\nTest MAE: $20,987\nTest R2: 0.857 \n\n111 features\nTest MAE: $20,972\nTest R2: 0.857 \n\n112 features\nTest MAE: $20,939\nTest R2: 0.857 \n\n113 features\nTest MAE: $20,932\nTest R2: 0.858 \n\n114 features\nTest MAE: $21,055\nTest R2: 0.854 \n\n115 features\nTest MAE: $21,086\nTest R2: 0.853 \n\n116 features\nTest MAE: $21,086\nTest R2: 0.853 \n\n117 features\nTest MAE: $21,134\nTest R2: 0.853 \n\n118 features\nTest MAE: $21,134\nTest R2: 0.853 \n\n119 features\nTest MAE: $21,143\nTest R2: 0.853 \n\n120 features\nTest MAE: $21,135\nTest R2: 0.853 \n\n121 features\nTest MAE: $21,135\nTest R2: 0.853 \n\n122 features\nTest MAE: $21,132\nTest R2: 0.853 \n\n123 features\nTest MAE: $21,176\nTest R2: 0.853 \n\n124 features\nTest MAE: $21,176\nTest R2: 0.853 \n\n125 features\nTest MAE: $21,154\nTest R2: 0.853 \n\n126 features\nTest MAE: $21,155\nTest R2: 0.853 \n\n127 features\nTest MAE: $21,158\nTest R2: 0.853 \n\n128 features\nTest MAE: $21,163\nTest R2: 0.853 \n\n129 features\nTest MAE: $21,189\nTest R2: 0.856 \n\n130 features\nTest MAE: $21,106\nTest R2: 0.856 \n\n131 features\nTest MAE: $21,121\nTest R2: 0.856 \n\n132 features\nTest MAE: $21,126\nTest R2: 0.854 \n\n133 features\nTest MAE: $21,160\nTest R2: 0.854 \n\n134 features\nTest MAE: $21,157\nTest R2: 0.854 \n\n135 features\nTest MAE: $21,157\nTest R2: 0.854 \n\n136 features\nTest MAE: $21,156\nTest R2: 0.854 \n\n137 features\nTest MAE: $21,178\nTest R2: 0.853 \n\n138 features\nTest MAE: $21,182\nTest R2: 0.853 \n\n139 features\nTest MAE: $21,011\nTest 
R2: 0.853 \n\n140 features\nTest MAE: $20,984\nTest R2: 0.854 \n\n141 features\nTest MAE: $20,985\nTest R2: 0.854 \n\n142 features\nTest MAE: $20,916\nTest R2: 0.855 \n\n143 features\nTest MAE: $20,917\nTest R2: 0.855 \n\n144 features\nTest MAE: $20,912\nTest R2: 0.855 \n\n145 features\nTest MAE: $21,049\nTest R2: 0.852 \n\n146 features\nTest MAE: $21,055\nTest R2: 0.851 \n\n147 features\nTest MAE: $20,995\nTest R2: 0.853 \n\n148 features\nTest MAE: $20,598\nTest R2: 0.859 \n\n149 features\nTest MAE: $20,593\nTest R2: 0.859 \n\n150 features\nTest MAE: $20,609\nTest R2: 0.859 \n\n151 features\nTest MAE: $20,570\nTest R2: 0.860 \n\n152 features\nTest MAE: $20,566\nTest R2: 0.860 \n\n153 features\nTest MAE: $20,669\nTest R2: 0.858 \n\n154 features\nTest MAE: $20,647\nTest R2: 0.858 \n\n155 features\nTest MAE: $20,585\nTest R2: 0.858 \n\n156 features\nTest MAE: $20,621\nTest R2: 0.858 \n\n157 features\nTest MAE: $20,645\nTest R2: 0.858 \n\n"
]
],
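[
[
"# Added sketch: instead of eyeballing the printout above, the (k, MAE) pairs can be collected\n# and the best k picked programmatically. Variable names mirror the loop above.\nresults = {}\nfor k in range(1, len(X_train_encoded.columns) + 1):\n    selector = SelectKBest(score_func=f_regression, k=k)\n    X_tr = selector.fit_transform(X_train_scaled, y_train)\n    X_te = selector.transform(X_test_scaled)\n    results[k] = mean_absolute_error(y_test, RidgeCV().fit(X_tr, y_train).predict(X_te))\nbest_k = min(results, key=results.get)\nprint(best_k, results[best_k])",
"_____no_output_____"
]
],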
[
[
"## 2.6. Report validation MAE and $R^2$\n\nWhat is your model's Mean Absolute Error and $R^2$ score on the validation set?",
"_____no_output_____"
]
],
[
[
"# STRETCH GOAL: Report test MAE and $R^2$ for your final model\nselector = SelectKBest(score_func=f_regression, k=157)\n\nX_train_selected = selector.fit_transform(X_train_scaled, y_train)\nX_val_selected = selector.transform(X_val_scaled)\n\nmodel = RidgeCV()\nmodel.fit(X_train_selected, y_train)\n\ny_pred_val = model.predict(X_val_selected)\nprint('Mean Baseline:')\nprint('Val Mean Absolute Error:', mean_absolute_error(y_val, y_pred_val))\nprint('Val R^2 Score:', r2_score(y_val, y_pred_val))",
"Mean Baseline:\nVal Mean Absolute Error: 19865.0311356575\nVal R^2 Score: 0.8731097413449\n"
],
[
"# STRETCH GOAL: Print or plot the coefficients for the features in your model\ncoefficients = pd.Series(model.coef_, X_train_encoded.columns)\nplt.figure(figsize=(10,30))\ncoefficients.sort_values().plot.barh(color='blue');",
"_____no_output_____"
]
],
[
[
"# Stretch Goals, Regression\n- Make visualizations to explore relationships between features and target\n- Try at least 3 feature combinations. You may select features manually, or automatically\n- Report validation MAE and $R^2$ for each feature combination you try\n- Report test MAE and $R^2$ for your final model\n- Print or plot the coefficients for the features in your model",
"_____no_output_____"
]
],
[
[
"#STRETCH GOAL: Make visualizations to explore relationships between features and target\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfor col in sorted(low_cardinality_features):\n sns.catplot(x=col, y='SalePrice', data=train, kind='bar', color='grey')\n plt.xticks(rotation=45)\n plt.show()",
"_____no_output_____"
]
],
[
[
"# Data Dictionary \n\nHere's a description of the data fields:\n\n```\n1st_Flr_SF: First Floor square feet\n\nBedroom_AbvGr: Bedrooms above grade (does NOT include basement bedrooms)\n\nBldg_Type: Type of dwelling\n\t\t\n 1Fam\tSingle-family Detached\t\n 2FmCon\tTwo-family Conversion; originally built as one-family dwelling\n Duplx\tDuplex\n TwnhsE\tTownhouse End Unit\n TwnhsI\tTownhouse Inside Unit\n \nBsmt_Half_Bath: Basement half bathrooms\n\nBsmt_Full_Bath: Basement full bathrooms\n\nCentral_Air: Central air conditioning\n\n N\tNo\n Y\tYes\n\t\t\nCondition_1: Proximity to various conditions\n\t\n Artery\tAdjacent to arterial street\n Feedr\tAdjacent to feeder street\t\n Norm\tNormal\t\n RRNn\tWithin 200' of North-South Railroad\n RRAn\tAdjacent to North-South Railroad\n PosN\tNear positive off-site feature--park, greenbelt, etc.\n PosA\tAdjacent to postive off-site feature\n RRNe\tWithin 200' of East-West Railroad\n RRAe\tAdjacent to East-West Railroad\n\t\nCondition_2: Proximity to various conditions (if more than one is present)\n\t\t\n Artery\tAdjacent to arterial street\n Feedr\tAdjacent to feeder street\t\n Norm\tNormal\t\n RRNn\tWithin 200' of North-South Railroad\n RRAn\tAdjacent to North-South Railroad\n PosN\tNear positive off-site feature--park, greenbelt, etc.\n PosA\tAdjacent to postive off-site feature\n RRNe\tWithin 200' of East-West Railroad\n RRAe\tAdjacent to East-West Railroad\n \n Electrical: Electrical system\n\n SBrkr\tStandard Circuit Breakers & Romex\n FuseA\tFuse Box over 60 AMP and all Romex wiring (Average)\t\n FuseF\t60 AMP Fuse Box and mostly Romex wiring (Fair)\n FuseP\t60 AMP Fuse Box and mostly knob & tube wiring (poor)\n Mix\tMixed\n \n Exter_Cond: Evaluates the present condition of the material on the exterior\n\t\t\n Ex\tExcellent\n Gd\tGood\n TA\tAverage/Typical\n Fa\tFair\n Po\tPoor\n \n Exter_Qual: Evaluates the quality of the material on the exterior \n\t\t\n Ex\tExcellent\n Gd\tGood\n TA\tAverage/Typical\n Fa\tFair\n Po\tPoor\n\t\t\nExterior_1st: Exterior covering on house\n\n AsbShng\tAsbestos Shingles\n AsphShn\tAsphalt Shingles\n BrkComm\tBrick Common\n BrkFace\tBrick Face\n CBlock\tCinder Block\n CemntBd\tCement Board\n HdBoard\tHard Board\n ImStucc\tImitation Stucco\n MetalSd\tMetal Siding\n Other\tOther\n Plywood\tPlywood\n PreCast\tPreCast\t\n Stone\tStone\n Stucco\tStucco\n VinylSd\tVinyl Siding\n Wd Sdng\tWood Siding\n WdShing\tWood Shingles\n\t\nExterior_2nd: Exterior covering on house (if more than one material)\n\n AsbShng\tAsbestos Shingles\n AsphShn\tAsphalt Shingles\n BrkComm\tBrick Common\n BrkFace\tBrick Face\n CBlock\tCinder Block\n CemntBd\tCement Board\n HdBoard\tHard Board\n ImStucc\tImitation Stucco\n MetalSd\tMetal Siding\n Other\tOther\n Plywood\tPlywood\n PreCast\tPreCast\n Stone\tStone\n Stucco\tStucco\n VinylSd\tVinyl Siding\n Wd Sdng\tWood Siding\n WdShing\tWood Shingles\n \nFoundation: Type of foundation\n\t\t\n BrkTil\tBrick & Tile\n CBlock\tCinder Block\n PConc\tPoured Contrete\t\n Slab\tSlab\n Stone\tStone\n Wood\tWood\n\t\t\nFull_Bath: Full bathrooms above grade\n\nFunctional: Home functionality (Assume typical unless deductions are warranted)\n\n Typ\tTypical Functionality\n Min1\tMinor Deductions 1\n Min2\tMinor Deductions 2\n Mod\tModerate Deductions\n Maj1\tMajor Deductions 1\n Maj2\tMajor Deductions 2\n Sev\tSeverely Damaged\n Sal\tSalvage only\n\t\t\nGr_Liv_Area: Above grade (ground) living area square feet\n \nHalf_Bath: Half baths above grade\n\nHeating: Type of heating\n\t\t\n Floor\tFloor Furnace\n 
GasA\tGas forced warm air furnace\n GasW\tGas hot water or steam heat\n Grav\tGravity furnace\t\n OthW\tHot water or steam heat other than gas\n Wall\tWall furnace\n\t\t\nHeating_QC: Heating quality and condition\n\n Ex\tExcellent\n Gd\tGood\n TA\tAverage/Typical\n Fa\tFair\n Po\tPoor\n\nHouse_Style: Style of dwelling\n\t\n 1Story\tOne story\n 1.5Fin\tOne and one-half story: 2nd level finished\n 1.5Unf\tOne and one-half story: 2nd level unfinished\n 2Story\tTwo story\n 2.5Fin\tTwo and one-half story: 2nd level finished\n 2.5Unf\tTwo and one-half story: 2nd level unfinished\n SFoyer\tSplit Foyer\n SLvl\tSplit Level\n\nKitchen_AbvGr: Kitchens above grade\n\nKitchen_Qual: Kitchen quality\n\n Ex\tExcellent\n Gd\tGood\n TA\tTypical/Average\n Fa\tFair\n Po\tPoor\n\nLandContour: Flatness of the property\n\n Lvl\tNear Flat/Level\t\n Bnk\tBanked - Quick and significant rise from street grade to building\n HLS\tHillside - Significant slope from side to side\n Low\tDepression\n\t\t\nLand_Slope: Slope of property\n\t\t\n Gtl\tGentle slope\n Mod\tModerate Slope\t\n Sev\tSevere Slope\n\nLot_Area: Lot size in square feet\n\nLot_Config: Lot configuration\n\n Inside\tInside lot\n Corner\tCorner lot\n CulDSac\tCul-de-sac\n FR2\tFrontage on 2 sides of property\n FR3\tFrontage on 3 sides of property\n\nLot_Shape: General shape of property\n\n Reg\tRegular\t\n IR1\tSlightly irregular\n IR2\tModerately Irregular\n IR3\tIrregular\n\nMS_SubClass: Identifies the type of dwelling involved in the sale.\t\n\n 20\t1-STORY 1946 & NEWER ALL STYLES\n 30\t1-STORY 1945 & OLDER\n 40\t1-STORY W/FINISHED ATTIC ALL AGES\n 45\t1-1/2 STORY - UNFINISHED ALL AGES\n 50\t1-1/2 STORY FINISHED ALL AGES\n 60\t2-STORY 1946 & NEWER\n 70\t2-STORY 1945 & OLDER\n 75\t2-1/2 STORY ALL AGES\n 80\tSPLIT OR MULTI-LEVEL\n 85\tSPLIT FOYER\n 90\tDUPLEX - ALL STYLES AND AGES\n 120\t1-STORY PUD (Planned Unit Development) - 1946 & NEWER\n 150\t1-1/2 STORY PUD - ALL AGES\n 160\t2-STORY PUD - 1946 & NEWER\n 180\tPUD - MULTILEVEL - INCL SPLIT LEV/FOYER\n 190\t2 FAMILY CONVERSION - ALL STYLES AND AGES\n\nMS_Zoning: Identifies the general zoning classification of the sale.\n\t\t\n A\tAgriculture\n C\tCommercial\n FV\tFloating Village Residential\n I\tIndustrial\n RH\tResidential High Density\n RL\tResidential Low Density\n RP\tResidential Low Density Park \n RM\tResidential Medium Density\n\nMas_Vnr_Type: Masonry veneer type\n\n BrkCmn\tBrick Common\n BrkFace\tBrick Face\n CBlock\tCinder Block\n None\tNone\n Stone\tStone\n\nMo_Sold: Month Sold (MM)\n\nNeighborhood: Physical locations within Ames city limits\n\n Blmngtn\tBloomington Heights\n Blueste\tBluestem\n BrDale\tBriardale\n BrkSide\tBrookside\n ClearCr\tClear Creek\n CollgCr\tCollege Creek\n Crawfor\tCrawford\n Edwards\tEdwards\n Gilbert\tGilbert\n IDOTRR\tIowa DOT and Rail Road\n MeadowV\tMeadow Village\n Mitchel\tMitchell\n Names\tNorth Ames\n NoRidge\tNorthridge\n NPkVill\tNorthpark Villa\n NridgHt\tNorthridge Heights\n NWAmes\tNorthwest Ames\n OldTown\tOld Town\n SWISU\tSouth & West of Iowa State University\n Sawyer\tSawyer\n SawyerW\tSawyer West\n Somerst\tSomerset\n StoneBr\tStone Brook\n Timber\tTimberland\n Veenker\tVeenker\n\t\t\t\nOverall_Cond: Rates the overall condition of the house\n\n 10\tVery Excellent\n 9\tExcellent\n 8\tVery Good\n 7\tGood\n 6\tAbove Average\t\n 5\tAverage\n 4\tBelow Average\t\n 3\tFair\n 2\tPoor\n 1\tVery Poor\n\nOverall_Qual: Rates the overall material and finish of the house\n\n 10\tVery Excellent\n 9\tExcellent\n 8\tVery Good\n 7\tGood\n 6\tAbove Average\n 
5\tAverage\n 4\tBelow Average\n 3\tFair\n 2\tPoor\n 1\tVery Poor\n\nPaved_Drive: Paved driveway\n\n Y\tPaved \n P\tPartial Pavement\n N\tDirt/Gravel\n\nRoof_Matl: Roof material\n\n ClyTile\tClay or Tile\n CompShg\tStandard (Composite) Shingle\n Membran\tMembrane\n Metal\tMetal\n Roll\tRoll\n Tar&Grv\tGravel & Tar\n WdShake\tWood Shakes\n WdShngl\tWood Shingles\n\nRoof_Style: Type of roof\n\n Flat\tFlat\n Gable\tGable\n Gambrel\tGabrel (Barn)\n Hip\tHip\n Mansard\tMansard\n Shed\tShed\n\nSalePrice: the sales price for each house\n\nSale_Condition: Condition of sale\n\n Normal\tNormal Sale\n Abnorml\tAbnormal Sale - trade, foreclosure, short sale\n AdjLand\tAdjoining Land Purchase\n Alloca\tAllocation - two linked properties with separate deeds, typically condo with a garage unit\t\n Family\tSale between family members\n Partial\tHome was not completed when last assessed (associated with New Homes)\n\nSale_Type: Type of sale\n\t\t\n WD \tWarranty Deed - Conventional\n CWD\tWarranty Deed - Cash\n VWD\tWarranty Deed - VA Loan\n New\tHome just constructed and sold\n COD\tCourt Officer Deed/Estate\n Con\tContract 15% Down payment regular terms\n ConLw\tContract Low Down payment and low interest\n ConLI\tContract Low Interest\n ConLD\tContract Low Down\n Oth\tOther\n\t\nStreet: Type of road access to property\n\n Grvl\tGravel\t\n Pave\tPaved\n \t\nTotRms_AbvGrd: Total rooms above grade (does not include bathrooms)\n\nUtilities: Type of utilities available\n\t\t\n AllPub\tAll public Utilities (E,G,W,& S)\t\n NoSewr\tElectricity, Gas, and Water (Septic Tank)\n NoSeWa\tElectricity and Gas Only\n ELO\tElectricity only\t\n\t\nYear_Built: Original construction date\n\nYear_Remod/Add: Remodel date (same as construction date if no remodeling or additions)\n\t\t\t\t\t\t\nYr_Sold: Year Sold (YYYY)\t\n\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
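[
"code"
],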
[
"markdown"
],
[
"code",
"code",
"code"
],
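[
"code"
],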
[
"markdown"
],
[
"code",
"code"
],
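[
"code"
],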
[
"markdown"
],
[
"code",
"code"
],
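[
"code"
],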
[
"markdown"
],
[
"code"
],
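[
"code"
],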
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0b0ca6e24990a2edfecbfe488fa207f31cc5993 | 358,701 | ipynb | Jupyter Notebook | notebooks/mask_generator-Copy1.ipynb | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | 1 | 2022-01-11T16:43:50.000Z | 2022-01-11T16:43:50.000Z | notebooks/mask_generator-Copy1.ipynb | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | null | null | null | notebooks/mask_generator-Copy1.ipynb | anudeepsekhar/Lane-Detection-Pytorch | cfddda8a0768cf83afd87e29d605fd58aa89df59 | [
"MIT"
] | null | null | null | 53.705794 | 7,836 | 0.623868 | [
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport glob\nimport pandas as pd\nimport os",
"_____no_output_____"
],
[
"def imshow(img):\n img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)\n plt.imshow(img)",
"_____no_output_____"
],
[
"def get_lane_mask(sample,lane_idx):\n points_lane = []\n h_max = np.max(data['h_samples'][sample])\n h_min = np.min(data['h_samples'][sample])\n x_idx = data['lanes'][sample][lane_idx]\n y_idx = data['h_samples'][sample]\n for x,y in zip(x_idx,y_idx):\n offset = (y-h_min)/20\n # print(offset)\n if x>-100:\n points_lane.append([x-offset/2,y])\n x_idx_=x_idx.copy()\n y_idx_=y_idx.copy()\n x_idx_.reverse()\n y_idx_.reverse()\n for x,y in zip(x_idx_,y_idx_):\n offset = (y-h_min)/20\n # print(offset)\n if x>-100:\n points_lane.append([x+offset/2,y])\n return points_lane",
"_____no_output_____"
],
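[
"# Added self-contained sketch of what get_lane_mask builds: each x is shifted left by\n# offset/2 on the way down and right by offset/2 on the way back up, where\n# offset = (y - h_min)/20, so the band around the polyline widens toward the camera.\nx_idx = [640, 630, 620]\ny_idx = [300, 340, 380]\nh_min = min(y_idx)\ndown = [[x - (y - h_min) / 20 / 2, y] for x, y in zip(x_idx, y_idx)]\nup = [[x + (y - h_min) / 20 / 2, y] for x, y in zip(reversed(x_idx), reversed(y_idx))]\nprint(down + up)   # six vertices tracing a thin quadrilateral around the polyline",
"_____no_output_____"
],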
[
"def create_lane_mask(img_raw,sample):\n colors = [[255,0,0],[0,255,0],[0,0,255],[0,255,255]]\n laneMask = np.zeros(img_raw.shape, dtype=np.uint8)\n for lane_idx in range(len(data.lanes[sample])):\n points_lane = get_lane_mask(sample,lane_idx)\n if len(points_lane)>0: \n pts = np.array(points_lane, np.int32)\n pts = pts.reshape((-1,1,2))\n laneMask = cv2.fillPoly(laneMask,[pts],colors[lane_idx])\n colors = [[255,0,0],[0,255,0],[0,0,255],[0,255,255]]\n # create grey-scale label image\n label = np.zeros((720,1280),dtype = np.uint8)\n for i in range(len(colors)):\n label[np.where((laneMask == colors[i]).all(axis = 2))] = i+1\n else: continue\n return(img_raw, label)",
"_____no_output_____"
],
[
"data = pd.read_json(os.path.join(data_dir, 'label_data.json'), lines=True)\ndata.info()\nprint(len(data.raw_file))\ndata",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2858 entries, 0 to 2857\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 lanes 2858 non-null object\n 1 h_samples 2858 non-null object\n 2 raw_file 2858 non-null object\ndtypes: object(3)\nmemory usage: 67.1+ KB\n2858\n"
],
[
"print(len(data.raw_file))",
"2858\n"
],
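[
"# Added sketch: spot-check one generated label before writing masks for every clip.\n# Pixel values encode lane identity (0 = background, 1-4 = lane), assuming the first\n# sample has at least one annotated lane.\nimg0 = cv2.imread(os.path.join(data_dir, data.raw_file[0]))\n_, label0 = create_lane_mask(img0, 0)\nprint(np.unique(label0))   # e.g. [0 1 2 3 4]",
"_____no_output_____"
],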
[
"for i in range(len(data.raw_file)):\n img_path = data.raw_file[i]\n img_path = os.path.join(data_dir,img_path)\n print('Reading from: ', img_path)\n path_list = img_path.split('/')[:-1]\n mask_path_dir = os.path.join(*path_list)\n\n img_raw = cv2.imread(img_path)\n img_, mask = create_lane_mask(img_raw,i)\n \"\"\"\n fig = plt.figure(figsize=(15,20))\n plt.subplot(211)\n imshow(img_raw)\n plt.subplot(212)\n print(mask.shape)\n plt.imshow(mask)\n \"\"\"\n mask_path_dir = mask_path_dir.replace('clips', 'masks')\n print('Saving to: ', mask_path_dir)\n try:\n os.makedirs(mask_path_dir)\n except:\n pass\n\n for i in range(1, 21):\n cv2.imwrite(os.path.join( mask_path_dir, f'{i}.tiff'), mask)\n # i = i+1\n",
"Reading from: dataset/clips/0313-1/6040/20.jpg\nSaving to: dataset/masks/0313-1/6040\nReading from: dataset/clips/0313-1/5320/20.jpg\nSaving to: dataset/masks/0313-1/5320\nReading from: dataset/clips/0313-1/23700/20.jpg\nSaving to: dataset/masks/0313-1/23700\nReading from: dataset/clips/0313-1/51660/20.jpg\nSaving to: dataset/masks/0313-1/51660\nReading from: dataset/clips/0313-1/25680/20.jpg\nSaving to: dataset/masks/0313-1/25680\nReading from: dataset/clips/0313-1/36000/20.jpg\nSaving to: dataset/masks/0313-1/36000\nReading from: dataset/clips/0313-1/9460/20.jpg\nSaving to: dataset/masks/0313-1/9460\nReading from: dataset/clips/0313-1/6180/20.jpg\nSaving to: dataset/masks/0313-1/6180\nReading from: dataset/clips/0313-1/10100/20.jpg\nSaving to: dataset/masks/0313-1/10100\nReading from: dataset/clips/0313-1/39720/20.jpg\nSaving to: dataset/masks/0313-1/39720\nReading from: dataset/clips/0313-1/7780/20.jpg\nSaving to: dataset/masks/0313-1/7780\nReading from: dataset/clips/0313-1/31680/20.jpg\nSaving to: dataset/masks/0313-1/31680\nReading from: dataset/clips/0313-1/11980/20.jpg\nSaving to: dataset/masks/0313-1/11980\nReading from: dataset/clips/0313-1/3300/20.jpg\nSaving to: dataset/masks/0313-1/3300\nReading from: dataset/clips/0313-1/32520/20.jpg\nSaving to: dataset/masks/0313-1/32520\nReading from: dataset/clips/0313-1/47160/20.jpg\nSaving to: dataset/masks/0313-1/47160\nReading from: dataset/clips/0313-1/15220/20.jpg\nSaving to: dataset/masks/0313-1/15220\nReading from: dataset/clips/0313-1/19320/20.jpg\nSaving to: dataset/masks/0313-1/19320\nReading from: dataset/clips/0313-1/9600/20.jpg\nSaving to: dataset/masks/0313-1/9600\nReading from: dataset/clips/0313-1/3960/20.jpg\nSaving to: dataset/masks/0313-1/3960\nReading from: dataset/clips/0313-1/42420/20.jpg\nSaving to: dataset/masks/0313-1/42420\nReading from: dataset/clips/0313-1/300/20.jpg\nSaving to: dataset/masks/0313-1/300\nReading from: dataset/clips/0313-1/9860/20.jpg\nSaving to: dataset/masks/0313-1/9860\nReading from: dataset/clips/0313-1/25440/20.jpg\nSaving to: dataset/masks/0313-1/25440\nReading from: dataset/clips/0313-1/35400/20.jpg\nSaving to: dataset/masks/0313-1/35400\nReading from: dataset/clips/0313-1/29100/20.jpg\nSaving to: dataset/masks/0313-1/29100\nReading from: dataset/clips/0313-1/33180/20.jpg\nSaving to: dataset/masks/0313-1/33180\nReading from: dataset/clips/0313-1/16100/20.jpg\nSaving to: dataset/masks/0313-1/16100\nReading from: dataset/clips/0313-1/42000/20.jpg\nSaving to: dataset/masks/0313-1/42000\nReading from: dataset/clips/0313-1/600/20.jpg\nSaving to: dataset/masks/0313-1/600\nReading from: dataset/clips/0313-1/46680/20.jpg\nSaving to: dataset/masks/0313-1/46680\nReading from: dataset/clips/0313-1/7640/20.jpg\nSaving to: dataset/masks/0313-1/7640\nReading from: dataset/clips/0313-1/43020/20.jpg\nSaving to: dataset/masks/0313-1/43020\nReading from: dataset/clips/0313-1/29880/20.jpg\nSaving to: dataset/masks/0313-1/29880\nReading from: dataset/clips/0313-1/24840/20.jpg\nSaving to: dataset/masks/0313-1/24840\nReading from: dataset/clips/0313-1/9160/20.jpg\nSaving to: dataset/masks/0313-1/9160\nReading from: dataset/clips/0313-1/6420/20.jpg\nSaving to: dataset/masks/0313-1/6420\nReading from: dataset/clips/0313-1/48300/20.jpg\nSaving to: dataset/masks/0313-1/48300\nReading from: dataset/clips/0313-1/4240/20.jpg\nSaving to: dataset/masks/0313-1/4240\nReading from: dataset/clips/0313-1/12600/20.jpg\nSaving to: dataset/masks/0313-1/12600\nReading from: dataset/clips/0313-1/15680/20.jpg\nSaving to: 
dataset/masks/0313-1/15680\nReading from: dataset/clips/0313-1/9580/20.jpg\nSaving to: dataset/masks/0313-1/9580\nReading from: dataset/clips/0313-1/14620/20.jpg\nSaving to: dataset/masks/0313-1/14620\nReading from: dataset/clips/0313-1/49500/20.jpg\nSaving to: dataset/masks/0313-1/49500\nReading from: dataset/clips/0313-1/10960/20.jpg\nSaving to: dataset/masks/0313-1/10960\nReading from: dataset/clips/0313-1/12200/20.jpg\nSaving to: dataset/masks/0313-1/12200\nReading from: dataset/clips/0313-1/15080/20.jpg\nSaving to: dataset/masks/0313-1/15080\nReading from: dataset/clips/0313-1/46560/20.jpg\nSaving to: dataset/masks/0313-1/46560\nReading from: dataset/clips/0313-1/13240/20.jpg\nSaving to: dataset/masks/0313-1/13240\nReading from: dataset/clips/0313-1/16880/20.jpg\nSaving to: dataset/masks/0313-1/16880\nReading from: dataset/clips/0313-1/40860/20.jpg\nSaving to: dataset/masks/0313-1/40860\nReading from: dataset/clips/0313-1/7000/20.jpg\nSaving to: dataset/masks/0313-1/7000\nReading from: dataset/clips/0313-1/29400/20.jpg\nSaving to: dataset/masks/0313-1/29400\nReading from: dataset/clips/0313-1/7020/20.jpg\nSaving to: dataset/masks/0313-1/7020\nReading from: dataset/clips/0313-1/10680/20.jpg\nSaving to: dataset/masks/0313-1/10680\nReading from: dataset/clips/0313-1/9020/20.jpg\nSaving to: dataset/masks/0313-1/9020\nReading from: dataset/clips/0313-1/27660/20.jpg\nSaving to: dataset/masks/0313-1/27660\nReading from: dataset/clips/0313-1/43200/20.jpg\nSaving to: dataset/masks/0313-1/43200\nReading from: dataset/clips/0313-1/9080/20.jpg\nSaving to: dataset/masks/0313-1/9080\nReading from: dataset/clips/0313-1/42060/20.jpg\nSaving to: dataset/masks/0313-1/42060\nReading from: dataset/clips/0313-1/17060/20.jpg\nSaving to: dataset/masks/0313-1/17060\nReading from: dataset/clips/0313-1/4740/20.jpg\nSaving to: dataset/masks/0313-1/4740\nReading from: dataset/clips/0313-1/25380/20.jpg\nSaving to: dataset/masks/0313-1/25380\nReading from: dataset/clips/0313-1/23460/20.jpg\nSaving to: dataset/masks/0313-1/23460\nReading from: dataset/clips/0313-1/5780/20.jpg\nSaving to: dataset/masks/0313-1/5780\nReading from: dataset/clips/0313-1/16900/20.jpg\nSaving to: dataset/masks/0313-1/16900\nReading from: dataset/clips/0313-1/31800/20.jpg\nSaving to: dataset/masks/0313-1/31800\nReading from: dataset/clips/0313-1/16520/20.jpg\nSaving to: dataset/masks/0313-1/16520\nReading from: dataset/clips/0313-1/38820/20.jpg\nSaving to: dataset/masks/0313-1/38820\nReading from: dataset/clips/0313-1/4300/20.jpg\nSaving to: dataset/masks/0313-1/4300\nReading from: dataset/clips/0313-1/23520/20.jpg\nSaving to: dataset/masks/0313-1/23520\nReading from: dataset/clips/0313-1/52260/20.jpg\nSaving to: dataset/masks/0313-1/52260\nReading from: dataset/clips/0313-1/9900/20.jpg\nSaving to: dataset/masks/0313-1/9900\nReading from: dataset/clips/0313-1/38040/20.jpg\nSaving to: dataset/masks/0313-1/38040\nReading from: dataset/clips/0313-1/28440/20.jpg\nSaving to: dataset/masks/0313-1/28440\nReading from: dataset/clips/0313-1/47700/20.jpg\nSaving to: dataset/masks/0313-1/47700\nReading from: dataset/clips/0313-1/8000/20.jpg\nSaving to: dataset/masks/0313-1/8000\nReading from: dataset/clips/0313-1/16300/20.jpg\nSaving to: dataset/masks/0313-1/16300\nReading from: dataset/clips/0313-1/31560/20.jpg\nSaving to: dataset/masks/0313-1/31560\nReading from: dataset/clips/0313-1/52560/20.jpg\nSaving to: dataset/masks/0313-1/52560\nReading from: dataset/clips/0313-1/5700/20.jpg\nSaving to: dataset/masks/0313-1/5700\nReading from: 
dataset/clips/0313-1/17480/20.jpg\nSaving to: dataset/masks/0313-1/17480\nReading from: dataset/clips/0313-1/17280/20.jpg\nSaving to: dataset/masks/0313-1/17280\nReading from: dataset/clips/0313-1/15300/20.jpg\nSaving to: dataset/masks/0313-1/15300\nReading from: dataset/clips/0313-1/33480/20.jpg\nSaving to: dataset/masks/0313-1/33480\nReading from: dataset/clips/0313-1/17340/20.jpg\nSaving to: dataset/masks/0313-1/17340\nReading from: dataset/clips/0313-1/180/20.jpg\nSaving to: dataset/masks/0313-1/180\nReading from: dataset/clips/0313-1/27900/20.jpg\nSaving to: dataset/masks/0313-1/27900\nReading from: dataset/clips/0313-1/45720/20.jpg\nSaving to: dataset/masks/0313-1/45720\nReading from: dataset/clips/0313-1/2460/20.jpg\nSaving to: dataset/masks/0313-1/2460\nReading from: dataset/clips/0313-1/17460/20.jpg\nSaving to: dataset/masks/0313-1/17460\nReading from: dataset/clips/0313-1/44460/20.jpg\nSaving to: dataset/masks/0313-1/44460\nReading from: dataset/clips/0313-1/22260/20.jpg\nSaving to: dataset/masks/0313-1/22260\nReading from: dataset/clips/0313-1/31380/20.jpg\nSaving to: dataset/masks/0313-1/31380\nReading from: dataset/clips/0313-1/48540/20.jpg\nSaving to: dataset/masks/0313-1/48540\nReading from: dataset/clips/0313-1/23580/20.jpg\nSaving to: dataset/masks/0313-1/23580\nReading from: dataset/clips/0313-1/18840/20.jpg\nSaving to: dataset/masks/0313-1/18840\nReading from: dataset/clips/0313-1/8840/20.jpg\nSaving to: dataset/masks/0313-1/8840\nReading from: dataset/clips/0313-1/42840/20.jpg\nSaving to: dataset/masks/0313-1/42840\nReading from: dataset/clips/0313-1/49860/20.jpg\nSaving to: dataset/masks/0313-1/49860\nReading from: dataset/clips/0313-1/51180/20.jpg\nSaving to: dataset/masks/0313-1/51180\nReading from: dataset/clips/0313-1/38940/20.jpg\nSaving to: dataset/masks/0313-1/38940\nReading from: dataset/clips/0313-1/16360/20.jpg\nSaving to: dataset/masks/0313-1/16360\nReading from: dataset/clips/0313-1/47880/20.jpg\nSaving to: dataset/masks/0313-1/47880\nReading from: dataset/clips/0313-1/28080/20.jpg\nSaving to: dataset/masks/0313-1/28080\nReading from: dataset/clips/0313-1/18900/20.jpg\nSaving to: dataset/masks/0313-1/18900\nReading from: dataset/clips/0313-1/1200/20.jpg\nSaving to: dataset/masks/0313-1/1200\nReading from: dataset/clips/0313-1/10840/20.jpg\nSaving to: dataset/masks/0313-1/10840\nReading from: dataset/clips/0313-1/17120/20.jpg\nSaving to: dataset/masks/0313-1/17120\nReading from: dataset/clips/0313-1/2340/20.jpg\nSaving to: dataset/masks/0313-1/2340\nReading from: dataset/clips/0313-1/50160/20.jpg\nSaving to: dataset/masks/0313-1/50160\nReading from: dataset/clips/0313-1/43980/20.jpg\nSaving to: dataset/masks/0313-1/43980\nReading from: dataset/clips/0313-1/48360/20.jpg\nSaving to: dataset/masks/0313-1/48360\nReading from: dataset/clips/0313-1/11440/20.jpg\nSaving to: dataset/masks/0313-1/11440\nReading from: dataset/clips/0313-1/2760/20.jpg\nSaving to: dataset/masks/0313-1/2760\nReading from: dataset/clips/0313-1/23820/20.jpg\nSaving to: dataset/masks/0313-1/23820\nReading from: dataset/clips/0313-1/14860/20.jpg\nSaving to: dataset/masks/0313-1/14860\nReading from: dataset/clips/0313-1/39780/20.jpg\nSaving to: dataset/masks/0313-1/39780\nReading from: dataset/clips/0313-1/46860/20.jpg\nSaving to: dataset/masks/0313-1/46860\nReading from: dataset/clips/0313-1/23880/20.jpg\nSaving to: dataset/masks/0313-1/23880\nReading from: dataset/clips/0313-1/50040/20.jpg\nSaving to: dataset/masks/0313-1/50040\nReading from: dataset/clips/0313-1/11840/20.jpg\nSaving to: 
dataset/masks/0313-1/11840\nReading from: dataset/clips/0313-1/20280/20.jpg\nSaving to: dataset/masks/0313-1/20280\nReading from: dataset/clips/0313-1/1680/20.jpg\nSaving to: dataset/masks/0313-1/1680\nReading from: dataset/clips/0313-1/42240/20.jpg\nSaving to: dataset/masks/0313-1/42240\nReading from: dataset/clips/0313-1/47340/20.jpg\nSaving to: dataset/masks/0313-1/47340\nReading from: dataset/clips/0313-1/50820/20.jpg\nSaving to: dataset/masks/0313-1/50820\nReading from: dataset/clips/0313-1/17240/20.jpg\nSaving to: dataset/masks/0313-1/17240\nReading from: dataset/clips/0313-1/4920/20.jpg\nSaving to: dataset/masks/0313-1/4920\nReading from: dataset/clips/0313-1/14400/20.jpg\nSaving to: dataset/masks/0313-1/14400\nReading from: dataset/clips/0313-1/14140/20.jpg\nSaving to: dataset/masks/0313-1/14140\nReading from: dataset/clips/0313-1/21300/20.jpg\nSaving to: dataset/masks/0313-1/21300\nReading from: dataset/clips/0313-1/12820/20.jpg\nSaving to: dataset/masks/0313-1/12820\nReading from: dataset/clips/0313-1/6700/20.jpg\nSaving to: dataset/masks/0313-1/6700\nReading from: dataset/clips/0313-1/5440/20.jpg\nSaving to: dataset/masks/0313-1/5440\nReading from: dataset/clips/0313-1/41220/20.jpg\nSaving to: dataset/masks/0313-1/41220\nReading from: dataset/clips/0313-1/13180/20.jpg\nSaving to: dataset/masks/0313-1/13180\nReading from: dataset/clips/0313-1/24420/20.jpg\nSaving to: dataset/masks/0313-1/24420\nReading from: dataset/clips/0313-1/20160/20.jpg\nSaving to: dataset/masks/0313-1/20160\nReading from: dataset/clips/0313-1/13120/20.jpg\nSaving to: dataset/masks/0313-1/13120\nReading from: dataset/clips/0313-1/8220/20.jpg\nSaving to: dataset/masks/0313-1/8220\nReading from: dataset/clips/0313-1/42300/20.jpg\nSaving to: dataset/masks/0313-1/42300\nReading from: dataset/clips/0313-1/52740/20.jpg\nSaving to: dataset/masks/0313-1/52740\nReading from: dataset/clips/0313-1/3480/20.jpg\nSaving to: dataset/masks/0313-1/3480\nReading from: dataset/clips/0313-1/17220/20.jpg\nSaving to: dataset/masks/0313-1/17220\nReading from: dataset/clips/0313-1/15500/20.jpg\nSaving to: dataset/masks/0313-1/15500\nReading from: dataset/clips/0313-1/26640/20.jpg\nSaving to: dataset/masks/0313-1/26640\nReading from: dataset/clips/0313-1/4340/20.jpg\nSaving to: dataset/masks/0313-1/4340\nReading from: dataset/clips/0313-1/8860/20.jpg\nSaving to: dataset/masks/0313-1/8860\nReading from: dataset/clips/0313-1/26280/20.jpg\nSaving to: dataset/masks/0313-1/26280\nReading from: dataset/clips/0313-1/30660/20.jpg\nSaving to: dataset/masks/0313-1/30660\nReading from: dataset/clips/0313-1/9360/20.jpg\nSaving to: dataset/masks/0313-1/9360\nReading from: dataset/clips/0313-1/6600/20.jpg\nSaving to: dataset/masks/0313-1/6600\nReading from: dataset/clips/0313-1/7440/20.jpg\nSaving to: dataset/masks/0313-1/7440\nReading from: dataset/clips/0313-1/29160/20.jpg\nSaving to: dataset/masks/0313-1/29160\nReading from: dataset/clips/0313-1/25800/20.jpg\nSaving to: dataset/masks/0313-1/25800\nReading from: dataset/clips/0313-1/11600/20.jpg\nSaving to: dataset/masks/0313-1/11600\nReading from: dataset/clips/0313-1/11120/20.jpg\nSaving to: dataset/masks/0313-1/11120\nReading from: dataset/clips/0313-1/7060/20.jpg\nSaving to: dataset/masks/0313-1/7060\nReading from: dataset/clips/0313-1/16040/20.jpg\nSaving to: dataset/masks/0313-1/16040\nReading from: dataset/clips/0313-1/12980/20.jpg\nSaving to: dataset/masks/0313-1/12980\nReading from: dataset/clips/0313-1/27420/20.jpg\nSaving to: dataset/masks/0313-1/27420\nReading from: 
dataset/clips/0313-1/15100/20.jpg\nSaving to: dataset/masks/0313-1/15100\nReading from: dataset/clips/0313-1/42600/20.jpg\nSaving to: dataset/masks/0313-1/42600\nReading from: dataset/clips/0313-1/27480/20.jpg\nSaving to: dataset/masks/0313-1/27480\nReading from: dataset/clips/0313-1/33300/20.jpg\nSaving to: dataset/masks/0313-1/33300\nReading from: dataset/clips/0313-1/6800/20.jpg\nSaving to: dataset/masks/0313-1/6800\nReading from: dataset/clips/0313-1/32040/20.jpg\nSaving to: dataset/masks/0313-1/32040\nReading from: dataset/clips/0313-1/8100/20.jpg\nSaving to: dataset/masks/0313-1/8100\nReading from: dataset/clips/0313-1/37320/20.jpg\nSaving to: dataset/masks/0313-1/37320\nReading from: dataset/clips/0313-1/5580/20.jpg\nSaving to: dataset/masks/0313-1/5580\nReading from: dataset/clips/0313-1/36180/20.jpg\nSaving to: dataset/masks/0313-1/36180\nReading from: dataset/clips/0313-1/12880/20.jpg\nSaving to: dataset/masks/0313-1/12880\nReading from: dataset/clips/0313-1/46080/20.jpg\nSaving to: dataset/masks/0313-1/46080\nReading from: dataset/clips/0313-1/19800/20.jpg\nSaving to: dataset/masks/0313-1/19800\nReading from: dataset/clips/0313-1/49620/20.jpg\nSaving to: dataset/masks/0313-1/49620\nReading from: dataset/clips/0313-1/17620/20.jpg\nSaving to: dataset/masks/0313-1/17620\nReading from: dataset/clips/0313-1/7980/20.jpg\nSaving to: dataset/masks/0313-1/7980\nReading from: dataset/clips/0313-1/14660/20.jpg\nSaving to: dataset/masks/0313-1/14660\nReading from: dataset/clips/0313-1/22920/20.jpg\nSaving to: dataset/masks/0313-1/22920\nReading from: dataset/clips/0313-1/36780/20.jpg\nSaving to: dataset/masks/0313-1/36780\nReading from: dataset/clips/0313-1/15860/20.jpg\nSaving to: dataset/masks/0313-1/15860\nReading from: dataset/clips/0313-1/5920/20.jpg\nSaving to: dataset/masks/0313-1/5920\nReading from: dataset/clips/0313-1/1020/20.jpg\nSaving to: dataset/masks/0313-1/1020\nReading from: dataset/clips/0313-1/41700/20.jpg\nSaving to: dataset/masks/0313-1/41700\nReading from: dataset/clips/0313-1/41760/20.jpg\nSaving to: dataset/masks/0313-1/41760\nReading from: dataset/clips/0313-1/18600/20.jpg\nSaving to: dataset/masks/0313-1/18600\nReading from: dataset/clips/0313-1/51540/20.jpg\nSaving to: dataset/masks/0313-1/51540\nReading from: dataset/clips/0313-1/16680/20.jpg\nSaving to: dataset/masks/0313-1/16680\nReading from: dataset/clips/0313-1/4380/20.jpg\nSaving to: dataset/masks/0313-1/4380\nReading from: dataset/clips/0313-1/5720/20.jpg\nSaving to: dataset/masks/0313-1/5720\nReading from: dataset/clips/0313-1/42120/20.jpg\nSaving to: dataset/masks/0313-1/42120\nReading from: dataset/clips/0313-1/7660/20.jpg\nSaving to: dataset/masks/0313-1/7660\nReading from: dataset/clips/0313-1/26100/20.jpg\nSaving to: dataset/masks/0313-1/26100\nReading from: dataset/clips/0313-1/17600/20.jpg\nSaving to: dataset/masks/0313-1/17600\nReading from: dataset/clips/0313-1/39480/20.jpg\nSaving to: dataset/masks/0313-1/39480\nReading from: dataset/clips/0313-1/17260/20.jpg\nSaving to: dataset/masks/0313-1/17260\nReading from: dataset/clips/0313-1/13660/20.jpg\nSaving to: dataset/masks/0313-1/13660\nReading from: dataset/clips/0313-1/9960/20.jpg\nSaving to: dataset/masks/0313-1/9960\nReading from: dataset/clips/0313-1/2160/20.jpg\nSaving to: dataset/masks/0313-1/2160\nReading from: dataset/clips/0313-1/39840/20.jpg\nSaving to: dataset/masks/0313-1/39840\nReading from: dataset/clips/0313-1/21060/20.jpg\nSaving to: dataset/masks/0313-1/21060\nReading from: dataset/clips/0313-1/27120/20.jpg\nSaving to: 
dataset/masks/0313-1/27120\nReading from: dataset/clips/0313-1/48840/20.jpg\nSaving to: dataset/masks/0313-1/48840\nReading from: dataset/clips/0313-1/8540/20.jpg\nSaving to: dataset/masks/0313-1/8540\nReading from: dataset/clips/0313-1/23280/20.jpg\nSaving to: dataset/masks/0313-1/23280\nReading from: dataset/clips/0313-1/1140/20.jpg\nSaving to: dataset/masks/0313-1/1140\nReading from: dataset/clips/0313-1/34980/20.jpg\nSaving to: dataset/masks/0313-1/34980\nReading from: dataset/clips/0313-1/38640/20.jpg\nSaving to: dataset/masks/0313-1/38640\nReading from: dataset/clips/0313-1/23160/20.jpg\nSaving to: dataset/masks/0313-1/23160\nReading from: dataset/clips/0313-1/12140/20.jpg\nSaving to: dataset/masks/0313-1/12140\nReading from: dataset/clips/0313-1/52140/20.jpg\nSaving to: dataset/masks/0313-1/52140\nReading from: dataset/clips/0313-1/15340/20.jpg\nSaving to: dataset/masks/0313-1/15340\nReading from: dataset/clips/0313-1/52860/20.jpg\nSaving to: dataset/masks/0313-1/52860\nReading from: dataset/clips/0313-1/12220/20.jpg\nSaving to: dataset/masks/0313-1/12220\nReading from: dataset/clips/0313-1/17140/20.jpg\nSaving to: dataset/masks/0313-1/17140\nReading from: dataset/clips/0313-1/41640/20.jpg\nSaving to: dataset/masks/0313-1/41640\nReading from: dataset/clips/0313-1/31020/20.jpg\nSaving to: dataset/masks/0313-1/31020\nReading from: dataset/clips/0313-1/8740/20.jpg\nSaving to: dataset/masks/0313-1/8740\nReading from: dataset/clips/0313-1/5880/20.jpg\nSaving to: dataset/masks/0313-1/5880\nReading from: dataset/clips/0313-1/16120/20.jpg\nSaving to: dataset/masks/0313-1/16120\nReading from: dataset/clips/0313-1/13580/20.jpg\nSaving to: dataset/masks/0313-1/13580\nReading from: dataset/clips/0313-1/8420/20.jpg\nSaving to: dataset/masks/0313-1/8420\nReading from: dataset/clips/0313-1/10340/20.jpg\nSaving to: dataset/masks/0313-1/10340\nReading from: dataset/clips/0313-1/3060/20.jpg\nSaving to: dataset/masks/0313-1/3060\nReading from: dataset/clips/0313-1/30900/20.jpg\nSaving to: dataset/masks/0313-1/30900\nReading from: dataset/clips/0313-1/5140/20.jpg\nSaving to: dataset/masks/0313-1/5140\nReading from: dataset/clips/0313-1/14820/20.jpg\nSaving to: dataset/masks/0313-1/14820\nReading from: dataset/clips/0313-1/45000/20.jpg\nSaving to: dataset/masks/0313-1/45000\nReading from: dataset/clips/0313-1/720/20.jpg\nSaving to: dataset/masks/0313-1/720\nReading from: dataset/clips/0313-1/9200/20.jpg\nSaving to: dataset/masks/0313-1/9200\nReading from: dataset/clips/0313-1/13200/20.jpg\nSaving to: dataset/masks/0313-1/13200\nReading from: dataset/clips/0313-1/51420/20.jpg\nSaving to: dataset/masks/0313-1/51420\nReading from: dataset/clips/0313-1/13360/20.jpg\nSaving to: dataset/masks/0313-1/13360\nReading from: dataset/clips/0313-1/24780/20.jpg\nSaving to: dataset/masks/0313-1/24780\nReading from: dataset/clips/0313-1/37260/20.jpg\nSaving to: dataset/masks/0313-1/37260\nReading from: dataset/clips/0313-1/4560/20.jpg\nSaving to: dataset/masks/0313-1/4560\nReading from: dataset/clips/0313-1/24720/20.jpg\nSaving to: dataset/masks/0313-1/24720\nReading from: dataset/clips/0313-1/11620/20.jpg\nSaving to: dataset/masks/0313-1/11620\nReading from: dataset/clips/0313-1/14380/20.jpg\nSaving to: dataset/masks/0313-1/14380\nReading from: dataset/clips/0313-1/12580/20.jpg\nSaving to: dataset/masks/0313-1/12580\nReading from: dataset/clips/0313-1/9700/20.jpg\nSaving to: dataset/masks/0313-1/9700\nReading from: dataset/clips/0313-1/15440/20.jpg\nSaving to: dataset/masks/0313-1/15440\nReading from: 
dataset/clips/0313-1/26700/20.jpg\nSaving to: dataset/masks/0313-1/26700\nReading from: dataset/clips/0313-1/47280/20.jpg\nSaving to: dataset/masks/0313-1/47280\nReading from: dataset/clips/0313-1/33900/20.jpg\nSaving to: dataset/masks/0313-1/33900\nReading from: dataset/clips/0313-1/30300/20.jpg\nSaving to: dataset/masks/0313-1/30300\nReading from: dataset/clips/0313-1/7500/20.jpg\nSaving to: dataset/masks/0313-1/7500\nReading from: dataset/clips/0313-1/40620/20.jpg\nSaving to: dataset/masks/0313-1/40620\nReading from: dataset/clips/0313-1/21240/20.jpg\nSaving to: dataset/masks/0313-1/21240\nReading from: dataset/clips/0313-1/6280/20.jpg\nSaving to: dataset/masks/0313-1/6280\nReading from: dataset/clips/0313-1/4440/20.jpg\nSaving to: dataset/masks/0313-1/4440\nReading from: dataset/clips/0313-1/16700/20.jpg\nSaving to: dataset/masks/0313-1/16700\nReading from: dataset/clips/0313-1/29220/20.jpg\nSaving to: dataset/masks/0313-1/29220\nReading from: dataset/clips/0313-1/18960/20.jpg\nSaving to: dataset/masks/0313-1/18960\nReading from: dataset/clips/0313-1/17180/20.jpg\nSaving to: dataset/masks/0313-1/17180\nReading from: dataset/clips/0313-1/34680/20.jpg\nSaving to: dataset/masks/0313-1/34680\nReading from: dataset/clips/0313-1/49740/20.jpg\nSaving to: dataset/masks/0313-1/49740\nReading from: dataset/clips/0313-1/32880/20.jpg\nSaving to: dataset/masks/0313-1/32880\nReading from: dataset/clips/0313-1/9320/20.jpg\nSaving to: dataset/masks/0313-1/9320\nReading from: dataset/clips/0313-1/14160/20.jpg\nSaving to: dataset/masks/0313-1/14160\nReading from: dataset/clips/0313-1/5240/20.jpg\nSaving to: dataset/masks/0313-1/5240\nReading from: dataset/clips/0313-1/20820/20.jpg\nSaving to: dataset/masks/0313-1/20820\nReading from: dataset/clips/0313-1/19440/20.jpg\nSaving to: dataset/masks/0313-1/19440\nReading from: dataset/clips/0313-1/9300/20.jpg\nSaving to: dataset/masks/0313-1/9300\nReading from: dataset/clips/0313-1/10420/20.jpg\nSaving to: dataset/masks/0313-1/10420\nReading from: dataset/clips/0313-1/41940/20.jpg\nSaving to: dataset/masks/0313-1/41940\nReading from: dataset/clips/0313-1/25980/20.jpg\nSaving to: dataset/masks/0313-1/25980\nReading from: dataset/clips/0313-1/16440/20.jpg\nSaving to: dataset/masks/0313-1/16440\nReading from: dataset/clips/0313-1/6660/20.jpg\nSaving to: dataset/masks/0313-1/6660\nReading from: dataset/clips/0313-1/51060/20.jpg\nSaving to: dataset/masks/0313-1/51060\nReading from: dataset/clips/0313-1/16780/20.jpg\nSaving to: dataset/masks/0313-1/16780\nReading from: dataset/clips/0313-1/21480/20.jpg\nSaving to: dataset/masks/0313-1/21480\nReading from: dataset/clips/0313-1/36600/20.jpg\nSaving to: dataset/masks/0313-1/36600\nReading from: dataset/clips/0313-1/34920/20.jpg\nSaving to: dataset/masks/0313-1/34920\nReading from: dataset/clips/0313-1/15660/20.jpg\nSaving to: dataset/masks/0313-1/15660\nReading from: dataset/clips/0313-1/47580/20.jpg\nSaving to: dataset/masks/0313-1/47580\nReading from: dataset/clips/0313-1/12320/20.jpg\nSaving to: dataset/masks/0313-1/12320\nReading from: dataset/clips/0313-1/3940/20.jpg\nSaving to: dataset/masks/0313-1/3940\nReading from: dataset/clips/0313-1/6100/20.jpg\nSaving to: dataset/masks/0313-1/6100\nReading from: dataset/clips/0313-1/40380/20.jpg\nSaving to: dataset/masks/0313-1/40380\nReading from: dataset/clips/0313-1/15380/20.jpg\nSaving to: dataset/masks/0313-1/15380\nReading from: dataset/clips/0313-1/7940/20.jpg\nSaving to: dataset/masks/0313-1/7940\nReading from: dataset/clips/0313-1/4180/20.jpg\nSaving to: 
dataset/masks/0313-1/4180\nReading from: dataset/clips/0313-1/36900/20.jpg\nSaving to: dataset/masks/0313-1/36900\nReading from: dataset/clips/0313-1/25140/20.jpg\nSaving to: dataset/masks/0313-1/25140\nReading from: dataset/clips/0313-1/28680/20.jpg\nSaving to: dataset/masks/0313-1/28680\nReading from: dataset/clips/0313-1/12680/20.jpg\nSaving to: dataset/masks/0313-1/12680\nReading from: dataset/clips/0313-1/48060/20.jpg\nSaving to: dataset/masks/0313-1/48060\nReading from: dataset/clips/0313-1/14120/20.jpg\nSaving to: dataset/masks/0313-1/14120\nReading from: dataset/clips/0313-1/15140/20.jpg\nSaving to: dataset/masks/0313-1/15140\nReading from: dataset/clips/0313-1/24060/20.jpg\nSaving to: dataset/masks/0313-1/24060\nReading from: dataset/clips/0313-1/47040/20.jpg\nSaving to: dataset/masks/0313-1/47040\nReading from: dataset/clips/0313-1/9720/20.jpg\nSaving to: dataset/masks/0313-1/9720\nReading from: dataset/clips/0313-1/11800/20.jpg\nSaving to: dataset/masks/0313-1/11800\nReading from: dataset/clips/0313-1/960/20.jpg\nSaving to: dataset/masks/0313-1/960\nReading from: dataset/clips/0313-1/51360/20.jpg\nSaving to: dataset/masks/0313-1/51360\nReading from: dataset/clips/0313-1/4860/20.jpg\nSaving to: dataset/masks/0313-1/4860\nReading from: dataset/clips/0313-1/9880/20.jpg\nSaving to: dataset/masks/0313-1/9880\nReading from: dataset/clips/0313-1/6680/20.jpg\nSaving to: dataset/masks/0313-1/6680\nReading from: dataset/clips/0313-1/48240/20.jpg\nSaving to: dataset/masks/0313-1/48240\nReading from: dataset/clips/0313-1/6820/20.jpg\nSaving to: dataset/masks/0313-1/6820\nReading from: dataset/clips/0313-1/16160/20.jpg\nSaving to: dataset/masks/0313-1/16160\nReading from: dataset/clips/0313-1/8260/20.jpg\nSaving to: dataset/masks/0313-1/8260\nReading from: dataset/clips/0313-1/29940/20.jpg\nSaving to: dataset/masks/0313-1/29940\nReading from: dataset/clips/0313-1/33240/20.jpg\nSaving to: dataset/masks/0313-1/33240\nReading from: dataset/clips/0313-1/40080/20.jpg\nSaving to: dataset/masks/0313-1/40080\nReading from: dataset/clips/0313-1/17660/20.jpg\nSaving to: dataset/masks/0313-1/17660\nReading from: dataset/clips/0313-1/3540/20.jpg\nSaving to: dataset/masks/0313-1/3540\nReading from: dataset/clips/0313-1/15840/20.jpg\nSaving to: dataset/masks/0313-1/15840\nReading from: dataset/clips/0313-1/11560/20.jpg\nSaving to: dataset/masks/0313-1/11560\nReading from: dataset/clips/0313-1/12000/20.jpg\nSaving to: dataset/masks/0313-1/12000\nReading from: dataset/clips/0313-1/39120/20.jpg\nSaving to: dataset/masks/0313-1/39120\nReading from: dataset/clips/0313-1/49320/20.jpg\nSaving to: dataset/masks/0313-1/49320\nReading from: dataset/clips/0313-1/49080/20.jpg\nSaving to: dataset/masks/0313-1/49080\nReading from: dataset/clips/0313-1/25020/20.jpg\nSaving to: dataset/masks/0313-1/25020\nReading from: dataset/clips/0313-1/14020/20.jpg\nSaving to: dataset/masks/0313-1/14020\nReading from: dataset/clips/0313-1/45540/20.jpg\nSaving to: dataset/masks/0313-1/45540\nReading from: dataset/clips/0313-1/51840/20.jpg\nSaving to: dataset/masks/0313-1/51840\nReading from: dataset/clips/0313-1/16860/20.jpg\nSaving to: dataset/masks/0313-1/16860\nReading from: dataset/clips/0313-1/4780/20.jpg\nSaving to: dataset/masks/0313-1/4780\nReading from: dataset/clips/0313-1/22320/20.jpg\nSaving to: dataset/masks/0313-1/22320\nReading from: dataset/clips/0313-1/52980/20.jpg\nSaving to: dataset/masks/0313-1/52980\nReading from: dataset/clips/0313-1/11140/20.jpg\nSaving to: dataset/masks/0313-1/11140\nReading from: 
Reading from: dataset/clips/0313-1/51960/20.jpg\nSaving to: dataset/masks/0313-1/51960\nReading from: dataset/clips/0313-1/11820/20.jpg\nSaving to: dataset/masks/0313-1/11820\n[... output truncated: the cell prints one "Reading from: dataset/clips/0313-1/<clip_id>/20.jpg" / "Saving to: dataset/masks/0313-1/<clip_id>" pair for every clip directory under dataset/clips/0313-1 ...]
to: dataset/masks/0313-1/28020\nReading from: dataset/clips/0313-1/33540/20.jpg\nSaving to: dataset/masks/0313-1/33540\nReading from: dataset/clips/0313-1/15620/20.jpg\nSaving to: dataset/masks/0313-1/15620\nReading from: dataset/clips/0313-1/12460/20.jpg\nSaving to: dataset/masks/0313-1/12460\nReading from: dataset/clips/0313-1/44220/20.jpg\nSaving to: dataset/masks/0313-1/44220\nReading from: dataset/clips/0313-1/15580/20.jpg\nSaving to: dataset/masks/0313-1/15580\nReading from: dataset/clips/0313-1/15260/20.jpg\nSaving to: dataset/masks/0313-1/15260\nReading from: dataset/clips/0313-1/41820/20.jpg\nSaving to: dataset/masks/0313-1/41820\nReading from: dataset/clips/0313-1/41580/20.jpg\nSaving to: dataset/masks/0313-1/41580\nReading from: dataset/clips/0313-1/4200/20.jpg\nSaving to: dataset/masks/0313-1/4200\nReading from: dataset/clips/0313-1/13920/20.jpg\nSaving to: dataset/masks/0313-1/13920\nReading from: dataset/clips/0313-1/7800/20.jpg\nSaving to: dataset/masks/0313-1/7800\nReading from: dataset/clips/0313-1/12860/20.jpg\nSaving to: dataset/masks/0313-1/12860\nReading from: dataset/clips/0313-1/12840/20.jpg\nSaving to: dataset/masks/0313-1/12840\nReading from: dataset/clips/0313-1/11380/20.jpg\nSaving to: dataset/masks/0313-1/11380\nReading from: dataset/clips/0313-1/11320/20.jpg\nSaving to: dataset/masks/0313-1/11320\nReading from: dataset/clips/0313-1/8180/20.jpg\nSaving to: dataset/masks/0313-1/8180\nReading from: dataset/clips/0313-1/17200/20.jpg\nSaving to: dataset/masks/0313-1/17200\nReading from: dataset/clips/0313-1/40800/20.jpg\nSaving to: dataset/masks/0313-1/40800\nReading from: dataset/clips/0313-1/6080/20.jpg\nSaving to: dataset/masks/0313-1/6080\nReading from: dataset/clips/0313-1/17720/20.jpg\nSaving to: dataset/masks/0313-1/17720\nReading from: dataset/clips/0313-1/29700/20.jpg\nSaving to: dataset/masks/0313-1/29700\nReading from: dataset/clips/0313-1/5120/20.jpg\nSaving to: dataset/masks/0313-1/5120\nReading from: dataset/clips/0313-1/45180/20.jpg\nSaving to: dataset/masks/0313-1/45180\nReading from: dataset/clips/0313-1/49260/20.jpg\nSaving to: dataset/masks/0313-1/49260\nReading from: dataset/clips/0313-1/30840/20.jpg\nSaving to: dataset/masks/0313-1/30840\nReading from: dataset/clips/0313-1/5860/20.jpg\nSaving to: dataset/masks/0313-1/5860\nReading from: dataset/clips/0313-1/14220/20.jpg\nSaving to: dataset/masks/0313-1/14220\nReading from: dataset/clips/0313-1/46800/20.jpg\nSaving to: dataset/masks/0313-1/46800\nReading from: dataset/clips/0313-1/19140/20.jpg\nSaving to: dataset/masks/0313-1/19140\nReading from: dataset/clips/0313-1/6000/20.jpg\nSaving to: dataset/masks/0313-1/6000\nReading from: dataset/clips/0313-1/16600/20.jpg\nSaving to: dataset/masks/0313-1/16600\nReading from: dataset/clips/0313-1/28620/20.jpg\nSaving to: dataset/masks/0313-1/28620\nReading from: dataset/clips/0313-1/51780/20.jpg\nSaving to: dataset/masks/0313-1/51780\nReading from: dataset/clips/0313-1/16580/20.jpg\nSaving to: dataset/masks/0313-1/16580\nReading from: dataset/clips/0313-1/44520/20.jpg\nSaving to: dataset/masks/0313-1/44520\nReading from: dataset/clips/0313-1/24960/20.jpg\nSaving to: dataset/masks/0313-1/24960\nReading from: dataset/clips/0313-1/5460/20.jpg\nSaving to: dataset/masks/0313-1/5460\nReading from: dataset/clips/0313-1/12500/20.jpg\nSaving to: dataset/masks/0313-1/12500\nReading from: dataset/clips/0313-1/15560/20.jpg\nSaving to: dataset/masks/0313-1/15560\nReading from: dataset/clips/0313-1/5260/20.jpg\nSaving to: dataset/masks/0313-1/5260\nReading from: 
dataset/clips/0313-1/32280/20.jpg\nSaving to: dataset/masks/0313-1/32280\nReading from: dataset/clips/0313-1/45420/20.jpg\nSaving to: dataset/masks/0313-1/45420\nReading from: dataset/clips/0313-1/25920/20.jpg\nSaving to: dataset/masks/0313-1/25920\nReading from: dataset/clips/0313-1/18000/20.jpg\nSaving to: dataset/masks/0313-1/18000\nReading from: dataset/clips/0313-1/900/20.jpg\nSaving to: dataset/masks/0313-1/900\nReading from: dataset/clips/0313-1/31200/20.jpg\nSaving to: dataset/masks/0313-1/31200\nReading from: dataset/clips/0313-1/28740/20.jpg\nSaving to: dataset/masks/0313-1/28740\nReading from: dataset/clips/0313-1/35940/20.jpg\nSaving to: dataset/masks/0313-1/35940\nReading from: dataset/clips/0313-1/26040/20.jpg\nSaving to: dataset/masks/0313-1/26040\nReading from: dataset/clips/0313-1/13080/20.jpg\nSaving to: dataset/masks/0313-1/13080\nReading from: dataset/clips/0313-1/44400/20.jpg\nSaving to: dataset/masks/0313-1/44400\nReading from: dataset/clips/0313-1/22620/20.jpg\nSaving to: dataset/masks/0313-1/22620\nReading from: dataset/clips/0313-1/11760/20.jpg\nSaving to: dataset/masks/0313-1/11760\nReading from: dataset/clips/0313-1/52020/20.jpg\nSaving to: dataset/masks/0313-1/52020\nReading from: dataset/clips/0313-1/38340/20.jpg\nSaving to: dataset/masks/0313-1/38340\nReading from: dataset/clips/0313-1/28260/20.jpg\nSaving to: dataset/masks/0313-1/28260\nReading from: dataset/clips/0313-1/840/20.jpg\nSaving to: dataset/masks/0313-1/840\nReading from: dataset/clips/0313-1/1380/20.jpg\nSaving to: dataset/masks/0313-1/1380\nReading from: dataset/clips/0313-1/23760/20.jpg\nSaving to: dataset/masks/0313-1/23760\nReading from: dataset/clips/0313-1/15700/20.jpg\nSaving to: dataset/masks/0313-1/15700\nReading from: dataset/clips/0313-1/12540/20.jpg\nSaving to: dataset/masks/0313-1/12540\nReading from: dataset/clips/0313-1/5960/20.jpg\nSaving to: dataset/masks/0313-1/5960\nReading from: dataset/clips/0313-1/46440/20.jpg\nSaving to: dataset/masks/0313-1/46440\nReading from: dataset/clips/0313-1/16560/20.jpg\nSaving to: dataset/masks/0313-1/16560\nReading from: dataset/clips/0313-1/12440/20.jpg\nSaving to: dataset/masks/0313-1/12440\nReading from: dataset/clips/0313-1/31320/20.jpg\nSaving to: dataset/masks/0313-1/31320\nReading from: dataset/clips/0313-1/2100/20.jpg\nSaving to: dataset/masks/0313-1/2100\nReading from: dataset/clips/0313-1/45600/20.jpg\nSaving to: dataset/masks/0313-1/45600\nReading from: dataset/clips/0313-1/14680/20.jpg\nSaving to: dataset/masks/0313-1/14680\nReading from: dataset/clips/0313-1/30060/20.jpg\nSaving to: dataset/masks/0313-1/30060\nReading from: dataset/clips/0313-1/38580/20.jpg\nSaving to: dataset/masks/0313-1/38580\nReading from: dataset/clips/0313-1/38700/20.jpg\nSaving to: dataset/masks/0313-1/38700\nReading from: dataset/clips/0313-1/22500/20.jpg\nSaving to: dataset/masks/0313-1/22500\nReading from: dataset/clips/0313-1/50100/20.jpg\nSaving to: dataset/masks/0313-1/50100\nReading from: dataset/clips/0313-1/13160/20.jpg\nSaving to: dataset/masks/0313-1/13160\nReading from: dataset/clips/0313-1/28560/20.jpg\nSaving to: dataset/masks/0313-1/28560\nReading from: dataset/clips/0313-1/52440/20.jpg\nSaving to: dataset/masks/0313-1/52440\nReading from: dataset/clips/0313-1/6300/20.jpg\nSaving to: dataset/masks/0313-1/6300\nReading from: dataset/clips/0313-1/18120/20.jpg\nSaving to: dataset/masks/0313-1/18120\nReading from: dataset/clips/0313-1/10720/20.jpg\nSaving to: dataset/masks/0313-1/10720\nReading from: dataset/clips/0313-1/15180/20.jpg\nSaving to: 
dataset/masks/0313-1/15180\nReading from: dataset/clips/0313-1/12480/20.jpg\nSaving to: dataset/masks/0313-1/12480\nReading from: dataset/clips/0313-1/35460/20.jpg\nSaving to: dataset/masks/0313-1/35460\nReading from: dataset/clips/0313-1/15000/20.jpg\nSaving to: dataset/masks/0313-1/15000\nReading from: dataset/clips/0313-1/9380/20.jpg\nSaving to: dataset/masks/0313-1/9380\nReading from: dataset/clips/0313-1/48420/20.jpg\nSaving to: dataset/masks/0313-1/48420\nReading from: dataset/clips/0313-1/20340/20.jpg\nSaving to: dataset/masks/0313-1/20340\nReading from: dataset/clips/0313-1/33360/20.jpg\nSaving to: dataset/masks/0313-1/33360\nReading from: dataset/clips/0313-1/13740/20.jpg\nSaving to: dataset/masks/0313-1/13740\nReading from: dataset/clips/0313-1/37140/20.jpg\nSaving to: dataset/masks/0313-1/37140\nReading from: dataset/clips/0313-1/21000/20.jpg\nSaving to: dataset/masks/0313-1/21000\nReading from: dataset/clips/0313-1/34380/20.jpg\nSaving to: dataset/masks/0313-1/34380\nReading from: dataset/clips/0313-1/17040/20.jpg\nSaving to: dataset/masks/0313-1/17040\nReading from: dataset/clips/0313-1/17400/20.jpg\nSaving to: dataset/masks/0313-1/17400\nReading from: dataset/clips/0313-1/12640/20.jpg\nSaving to: dataset/masks/0313-1/12640\nReading from: dataset/clips/0313-1/7820/20.jpg\nSaving to: dataset/masks/0313-1/7820\nReading from: dataset/clips/0313-1/9060/20.jpg\nSaving to: dataset/masks/0313-1/9060\nReading from: dataset/clips/0313-1/13220/20.jpg\nSaving to: dataset/masks/0313-1/13220\nReading from: dataset/clips/0313-1/120/20.jpg\nSaving to: dataset/masks/0313-1/120\nReading from: dataset/clips/0313-1/7220/20.jpg\nSaving to: dataset/masks/0313-1/7220\nReading from: dataset/clips/0313-1/15460/20.jpg\nSaving to: dataset/masks/0313-1/15460\nReading from: dataset/clips/0313-1/9780/20.jpg\nSaving to: dataset/masks/0313-1/9780\nReading from: dataset/clips/0313-1/6160/20.jpg\nSaving to: dataset/masks/0313-1/6160\nReading from: dataset/clips/0313-1/1560/20.jpg\nSaving to: dataset/masks/0313-1/1560\nReading from: dataset/clips/0313-1/8680/20.jpg\nSaving to: dataset/masks/0313-1/8680\nReading from: dataset/clips/0313-1/12800/20.jpg\nSaving to: dataset/masks/0313-1/12800\nReading from: dataset/clips/0313-1/13840/20.jpg\nSaving to: dataset/masks/0313-1/13840\nReading from: dataset/clips/0313-1/32160/20.jpg\nSaving to: dataset/masks/0313-1/32160\nReading from: dataset/clips/0313-1/8880/20.jpg\nSaving to: dataset/masks/0313-1/8880\nReading from: dataset/clips/0313-1/45120/20.jpg\nSaving to: dataset/masks/0313-1/45120\nReading from: dataset/clips/0313-1/8040/20.jpg\nSaving to: dataset/masks/0313-1/8040\nReading from: dataset/clips/0313-1/8760/20.jpg\nSaving to: dataset/masks/0313-1/8760\nReading from: dataset/clips/0313-1/6880/20.jpg\nSaving to: dataset/masks/0313-1/6880\nReading from: dataset/clips/0313-1/3420/20.jpg\nSaving to: dataset/masks/0313-1/3420\nReading from: dataset/clips/0313-1/4660/20.jpg\nSaving to: dataset/masks/0313-1/4660\nReading from: dataset/clips/0313-1/11540/20.jpg\nSaving to: dataset/masks/0313-1/11540\nReading from: dataset/clips/0313-1/19920/20.jpg\nSaving to: dataset/masks/0313-1/19920\nReading from: dataset/clips/0313-1/21960/20.jpg\nSaving to: dataset/masks/0313-1/21960\nReading from: dataset/clips/0313-1/41160/20.jpg\nSaving to: dataset/masks/0313-1/41160\nReading from: dataset/clips/0313-1/36120/20.jpg\nSaving to: dataset/masks/0313-1/36120\nReading from: dataset/clips/0313-1/7580/20.jpg\nSaving to: dataset/masks/0313-1/7580\nReading from: 
dataset/clips/0313-1/31860/20.jpg\nSaving to: dataset/masks/0313-1/31860\nReading from: dataset/clips/0313-1/12380/20.jpg\nSaving to: dataset/masks/0313-1/12380\nReading from: dataset/clips/0313-1/33960/20.jpg\nSaving to: dataset/masks/0313-1/33960\nReading from: dataset/clips/0313-1/11860/20.jpg\nSaving to: dataset/masks/0313-1/11860\nReading from: dataset/clips/0313-1/2280/20.jpg\nSaving to: dataset/masks/0313-1/2280\nReading from: dataset/clips/0313-1/52200/20.jpg\nSaving to: dataset/masks/0313-1/52200\nReading from: dataset/clips/0313-1/10060/20.jpg\nSaving to: dataset/masks/0313-1/10060\nReading from: dataset/clips/0313-1/4120/20.jpg\nSaving to: dataset/masks/0313-1/4120\nReading from: dataset/clips/0313-1/10580/20.jpg\nSaving to: dataset/masks/0313-1/10580\nReading from: dataset/clips/0313-1/5180/20.jpg\nSaving to: dataset/masks/0313-1/5180\nReading from: dataset/clips/0313-1/14980/20.jpg\nSaving to: dataset/masks/0313-1/14980\nReading from: dataset/clips/0313-1/5300/20.jpg\nSaving to: dataset/masks/0313-1/5300\nReading from: dataset/clips/0313-1/4900/20.jpg\nSaving to: dataset/masks/0313-1/4900\nReading from: dataset/clips/0313-1/6940/20.jpg\nSaving to: dataset/masks/0313-1/6940\nReading from: dataset/clips/0313-1/22020/20.jpg\nSaving to: dataset/masks/0313-1/22020\nReading from: dataset/clips/0313-1/26760/20.jpg\nSaving to: dataset/masks/0313-1/26760\nReading from: dataset/clips/0313-1/12180/20.jpg\nSaving to: dataset/masks/0313-1/12180\nReading from: dataset/clips/0313-1/11360/20.jpg\nSaving to: dataset/masks/0313-1/11360\nReading from: dataset/clips/0313-1/17000/20.jpg\nSaving to: dataset/masks/0313-1/17000\nReading from: dataset/clips/0313-1/2940/20.jpg\nSaving to: dataset/masks/0313-1/2940\nReading from: dataset/clips/0313-1/11680/20.jpg\nSaving to: dataset/masks/0313-1/11680\nReading from: dataset/clips/0313-1/52800/20.jpg\nSaving to: dataset/masks/0313-1/52800\nReading from: dataset/clips/0313-1/44340/20.jpg\nSaving to: dataset/masks/0313-1/44340\nReading from: dataset/clips/0313-1/1080/20.jpg\nSaving to: dataset/masks/0313-1/1080\nReading from: dataset/clips/0313-1/13400/20.jpg\nSaving to: dataset/masks/0313-1/13400\nReading from: dataset/clips/0313-1/28320/20.jpg\nSaving to: dataset/masks/0313-1/28320\nReading from: dataset/clips/0313-1/32220/20.jpg\nSaving to: dataset/masks/0313-1/32220\nReading from: dataset/clips/0313-1/2040/20.jpg\nSaving to: dataset/masks/0313-1/2040\nReading from: dataset/clips/0313-1/12560/20.jpg\nSaving to: dataset/masks/0313-1/12560\nReading from: dataset/clips/0313-1/13100/20.jpg\nSaving to: dataset/masks/0313-1/13100\nReading from: dataset/clips/0313-1/32580/20.jpg\nSaving to: dataset/masks/0313-1/32580\nReading from: dataset/clips/0313-1/13880/20.jpg\nSaving to: dataset/masks/0313-1/13880\nReading from: dataset/clips/0313-1/14740/20.jpg\nSaving to: dataset/masks/0313-1/14740\nReading from: dataset/clips/0313-1/13900/20.jpg\nSaving to: dataset/masks/0313-1/13900\nReading from: dataset/clips/0313-1/3000/20.jpg\nSaving to: dataset/masks/0313-1/3000\nReading from: dataset/clips/0313-1/33000/20.jpg\nSaving to: dataset/masks/0313-1/33000\nReading from: dataset/clips/0313-1/7460/20.jpg\nSaving to: dataset/masks/0313-1/7460\nReading from: dataset/clips/0313-1/37560/20.jpg\nSaving to: dataset/masks/0313-1/37560\nReading from: dataset/clips/0313-1/52320/20.jpg\nSaving to: dataset/masks/0313-1/52320\nReading from: dataset/clips/0313-1/15520/20.jpg\nSaving to: dataset/masks/0313-1/15520\nReading from: dataset/clips/0313-1/32460/20.jpg\nSaving to: 
dataset/masks/0313-1/32460\nReading from: dataset/clips/0313-1/17160/20.jpg\nSaving to: dataset/masks/0313-1/17160\nReading from: dataset/clips/0313-1/1740/20.jpg\nSaving to: dataset/masks/0313-1/1740\nReading from: dataset/clips/0313-1/50640/20.jpg\nSaving to: dataset/masks/0313-1/50640\nReading from: dataset/clips/0313-1/25740/20.jpg\nSaving to: dataset/masks/0313-1/25740\nReading from: dataset/clips/0313-1/19380/20.jpg\nSaving to: dataset/masks/0313-1/19380\nReading from: dataset/clips/0313-1/16920/20.jpg\nSaving to: dataset/masks/0313-1/16920\nReading from: dataset/clips/0313-1/31260/20.jpg\nSaving to: dataset/masks/0313-1/31260\nReading from: dataset/clips/0313-1/47100/20.jpg\nSaving to: dataset/masks/0313-1/47100\nReading from: dataset/clips/0313-1/2220/20.jpg\nSaving to: dataset/masks/0313-1/2220\nReading from: dataset/clips/0313-1/16340/20.jpg\nSaving to: dataset/masks/0313-1/16340\nReading from: dataset/clips/0313-1/19260/20.jpg\nSaving to: dataset/masks/0313-1/19260\nReading from: dataset/clips/0313-1/10700/20.jpg\nSaving to: dataset/masks/0313-1/10700\nReading from: dataset/clips/0313-1/14360/20.jpg\nSaving to: dataset/masks/0313-1/14360\nReading from: dataset/clips/0313-1/3600/20.jpg\nSaving to: dataset/masks/0313-1/3600\nReading from: dataset/clips/0313-1/7300/20.jpg\nSaving to: dataset/masks/0313-1/7300\nReading from: dataset/clips/0313-1/8920/20.jpg\nSaving to: dataset/masks/0313-1/8920\nReading from: dataset/clips/0313-1/8240/20.jpg\nSaving to: dataset/masks/0313-1/8240\nReading from: dataset/clips/0313-1/14560/20.jpg\nSaving to: dataset/masks/0313-1/14560\nReading from: dataset/clips/0313-2/12180/20.jpg\nSaving to: dataset/masks/0313-2/12180\nReading from: dataset/clips/0313-2/33280/20.jpg\nSaving to: dataset/masks/0313-2/33280\nReading from: dataset/clips/0313-2/850/20.jpg\nSaving to: dataset/masks/0313-2/850\nReading from: dataset/clips/0313-2/275/20.jpg\nSaving to: dataset/masks/0313-2/275\nReading from: dataset/clips/0313-2/38640/20.jpg\nSaving to: dataset/masks/0313-2/38640\nReading from: dataset/clips/0313-2/105/20.jpg\nSaving to: dataset/masks/0313-2/105\nReading from: dataset/clips/0313-2/39780/20.jpg\nSaving to: dataset/masks/0313-2/39780\nReading from: dataset/clips/0313-2/1275/20.jpg\nSaving to: dataset/masks/0313-2/1275\nReading from: dataset/clips/0313-2/39120/20.jpg\nSaving to: dataset/masks/0313-2/39120\nReading from: dataset/clips/0313-2/8460/20.jpg\nSaving to: dataset/masks/0313-2/8460\nReading from: dataset/clips/0313-2/35900/20.jpg\nSaving to: dataset/masks/0313-2/35900\nReading from: dataset/clips/0313-2/60600/20.jpg\nSaving to: dataset/masks/0313-2/60600\nReading from: dataset/clips/0313-2/22140/20.jpg\nSaving to: dataset/masks/0313-2/22140\nReading from: dataset/clips/0313-2/20100/20.jpg\nSaving to: dataset/masks/0313-2/20100\nReading from: dataset/clips/0313-2/1510/20.jpg\nSaving to: dataset/masks/0313-2/1510\nReading from: dataset/clips/0313-2/1530/20.jpg\nSaving to: dataset/masks/0313-2/1530\nReading from: dataset/clips/0313-2/32900/20.jpg\nSaving to: dataset/masks/0313-2/32900\nReading from: dataset/clips/0313-2/1705/20.jpg\nSaving to: dataset/masks/0313-2/1705\nReading from: dataset/clips/0313-2/24300/20.jpg\nSaving to: dataset/masks/0313-2/24300\nReading from: dataset/clips/0313-2/325/20.jpg\nSaving to: dataset/masks/0313-2/325\nReading from: dataset/clips/0313-2/38700/20.jpg\nSaving to: dataset/masks/0313-2/38700\nReading from: dataset/clips/0313-2/385/20.jpg\nSaving to: dataset/masks/0313-2/385\nReading from: 
dataset/clips/0313-2/28680/20.jpg\nSaving to: dataset/masks/0313-2/28680\nReading from: dataset/clips/0313-2/1380/20.jpg\nSaving to: dataset/masks/0313-2/1380\nReading from: dataset/clips/0313-2/5160/20.jpg\nSaving to: dataset/masks/0313-2/5160\nReading from: dataset/clips/0313-2/41780/20.jpg\nSaving to: dataset/masks/0313-2/41780\nReading from: dataset/clips/0313-2/32520/20.jpg\nSaving to: dataset/masks/0313-2/32520\nReading from: dataset/clips/0313-2/7080/20.jpg\nSaving to: dataset/masks/0313-2/7080\nReading from: dataset/clips/0313-2/35760/20.jpg\nSaving to: dataset/masks/0313-2/35760\nReading from: dataset/clips/0313-2/555/20.jpg\nSaving to: dataset/masks/0313-2/555\nReading from: dataset/clips/0313-2/1370/20.jpg\nSaving to: dataset/masks/0313-2/1370\nReading from: dataset/clips/0313-2/39660/20.jpg\nSaving to: dataset/masks/0313-2/39660\nReading from: dataset/clips/0313-2/11220/20.jpg\nSaving to: dataset/masks/0313-2/11220\nReading from: dataset/clips/0313-2/15540/20.jpg\nSaving to: dataset/masks/0313-2/15540\nReading from: dataset/clips/0313-2/2400/20.jpg\nSaving to: dataset/masks/0313-2/2400\nReading from: dataset/clips/0313-2/870/20.jpg\nSaving to: dataset/masks/0313-2/870\nReading from: dataset/clips/0313-2/95/20.jpg\nSaving to: dataset/masks/0313-2/95\nReading from: dataset/clips/0313-2/855/20.jpg\nSaving to: dataset/masks/0313-2/855\nReading from: dataset/clips/0313-2/30600/20.jpg\nSaving to: dataset/masks/0313-2/30600\nReading from: dataset/clips/0313-2/435/20.jpg\nSaving to: dataset/masks/0313-2/435\nReading from: dataset/clips/0313-2/19860/20.jpg\nSaving to: dataset/masks/0313-2/19860\nReading from: dataset/clips/0313-2/42040/20.jpg\nSaving to: dataset/masks/0313-2/42040\nReading from: dataset/clips/0313-2/995/20.jpg\nSaving to: dataset/masks/0313-2/995\nReading from: dataset/clips/0313-2/370/20.jpg\nSaving to: dataset/masks/0313-2/370\nReading from: dataset/clips/0313-2/39400/20.jpg\nSaving to: dataset/masks/0313-2/39400\nReading from: dataset/clips/0313-2/1410/20.jpg\nSaving to: dataset/masks/0313-2/1410\nReading from: dataset/clips/0313-2/1325/20.jpg\nSaving to: dataset/masks/0313-2/1325\nReading from: dataset/clips/0313-2/23520/20.jpg\nSaving to: dataset/masks/0313-2/23520\nReading from: dataset/clips/0313-2/13440/20.jpg\nSaving to: dataset/masks/0313-2/13440\nReading from: dataset/clips/0313-2/16500/20.jpg\nSaving to: dataset/masks/0313-2/16500\nReading from: dataset/clips/0313-2/34020/20.jpg\nSaving to: dataset/masks/0313-2/34020\nReading from: dataset/clips/0313-2/64020/20.jpg\nSaving to: dataset/masks/0313-2/64020\nReading from: dataset/clips/0313-2/1405/20.jpg\nSaving to: dataset/masks/0313-2/1405\nReading from: dataset/clips/0313-2/14340/20.jpg\nSaving to: dataset/masks/0313-2/14340\nReading from: dataset/clips/0313-2/40660/20.jpg\nSaving to: dataset/masks/0313-2/40660\nReading from: dataset/clips/0313-2/39380/20.jpg\nSaving to: dataset/masks/0313-2/39380\nReading from: dataset/clips/0313-2/32640/20.jpg\nSaving to: dataset/masks/0313-2/32640\nReading from: dataset/clips/0313-2/19260/20.jpg\nSaving to: dataset/masks/0313-2/19260\nReading from: dataset/clips/0313-2/1350/20.jpg\nSaving to: dataset/masks/0313-2/1350\nReading from: dataset/clips/0313-2/38060/20.jpg\nSaving to: dataset/masks/0313-2/38060\nReading from: dataset/clips/0313-2/11760/20.jpg\nSaving to: dataset/masks/0313-2/11760\nReading from: dataset/clips/0313-2/34180/20.jpg\nSaving to: dataset/masks/0313-2/34180\nReading from: dataset/clips/0313-2/3360/20.jpg\nSaving to: dataset/masks/0313-2/3360\nReading 
from: dataset/clips/0313-2/1135/20.jpg\nSaving to: dataset/masks/0313-2/1135\nReading from: dataset/clips/0313-2/21720/20.jpg\nSaving to: dataset/masks/0313-2/21720\nReading from: dataset/clips/0313-2/600/20.jpg\nSaving to: dataset/masks/0313-2/600\nReading from: dataset/clips/0313-2/11700/20.jpg\nSaving to: dataset/masks/0313-2/11700\nReading from: dataset/clips/0313-2/4920/20.jpg\nSaving to: dataset/masks/0313-2/4920\nReading from: dataset/clips/0313-2/31680/20.jpg\nSaving to: dataset/masks/0313-2/31680\nReading from: dataset/clips/0313-2/970/20.jpg\nSaving to: dataset/masks/0313-2/970\nReading from: dataset/clips/0313-2/3300/20.jpg\nSaving to: dataset/masks/0313-2/3300\nReading from: dataset/clips/0313-2/27120/20.jpg\nSaving to: dataset/masks/0313-2/27120\nReading from: dataset/clips/0313-2/1355/20.jpg\nSaving to: dataset/masks/0313-2/1355\nReading from: dataset/clips/0313-2/13200/20.jpg\nSaving to: dataset/masks/0313-2/13200\nReading from: dataset/clips/0313-2/37640/20.jpg\nSaving to: dataset/masks/0313-2/37640\nReading from: dataset/clips/0313-2/1465/20.jpg\nSaving to: dataset/masks/0313-2/1465\nReading from: dataset/clips/0313-2/40880/20.jpg\nSaving to: dataset/masks/0313-2/40880\nReading from: dataset/clips/0313-2/38960/20.jpg\nSaving to: dataset/masks/0313-2/38960\nReading from: dataset/clips/0313-2/550/20.jpg\nSaving to: dataset/masks/0313-2/550\nReading from: dataset/clips/0313-2/6480/20.jpg\nSaving to: dataset/masks/0313-2/6480\nReading from: dataset/clips/0313-2/27300/20.jpg\nSaving to: dataset/masks/0313-2/27300\nReading from: dataset/clips/0313-2/9180/20.jpg\nSaving to: dataset/masks/0313-2/9180\nReading from: dataset/clips/0313-2/33680/20.jpg\nSaving to: dataset/masks/0313-2/33680\nReading from: dataset/clips/0313-2/32700/20.jpg\nSaving to: dataset/masks/0313-2/32700\nReading from: dataset/clips/0313-2/20040/20.jpg\nSaving to: dataset/masks/0313-2/20040\nReading from: dataset/clips/0313-2/36080/20.jpg\nSaving to: dataset/masks/0313-2/36080\nReading from: dataset/clips/0313-2/20640/20.jpg\nSaving to: dataset/masks/0313-2/20640\nReading from: dataset/clips/0313-2/1670/20.jpg\nSaving to: dataset/masks/0313-2/1670\nReading from: dataset/clips/0313-2/39520/20.jpg\nSaving to: dataset/masks/0313-2/39520\nReading from: dataset/clips/0313-2/5580/20.jpg\nSaving to: dataset/masks/0313-2/5580\nReading from: dataset/clips/0313-2/55/20.jpg\nSaving to: dataset/masks/0313-2/55\nReading from: dataset/clips/0313-2/695/20.jpg\nSaving to: dataset/masks/0313-2/695\nReading from: dataset/clips/0313-2/41860/20.jpg\nSaving to: dataset/masks/0313-2/41860\nReading from: dataset/clips/0313-2/25140/20.jpg\nSaving to: dataset/masks/0313-2/25140\nReading from: dataset/clips/0313-2/37440/20.jpg\nSaving to: dataset/masks/0313-2/37440\nReading from: dataset/clips/0313-2/490/20.jpg\nSaving to: dataset/masks/0313-2/490\nReading from: dataset/clips/0313-2/70/20.jpg\nSaving to: dataset/masks/0313-2/70\nReading from: dataset/clips/0313-2/25980/20.jpg\nSaving to: dataset/masks/0313-2/25980\nReading from: dataset/clips/0313-2/13980/20.jpg\nSaving to: dataset/masks/0313-2/13980\nReading from: dataset/clips/0313-2/8220/20.jpg\nSaving to: dataset/masks/0313-2/8220\nReading from: dataset/clips/0313-2/40480/20.jpg\nSaving to: dataset/masks/0313-2/40480\nReading from: dataset/clips/0313-2/670/20.jpg\nSaving to: dataset/masks/0313-2/670\nReading from: dataset/clips/0313-2/41880/20.jpg\nSaving to: dataset/masks/0313-2/41880\nReading from: dataset/clips/0313-2/680/20.jpg\nSaving to: dataset/masks/0313-2/680\nReading from: 
dataset/clips/0313-2/42740/20.jpg\nSaving to: dataset/masks/0313-2/42740\nReading from: dataset/clips/0313-2/710/20.jpg\nSaving to: dataset/masks/0313-2/710\nReading from: dataset/clips/0313-2/38560/20.jpg\nSaving to: dataset/masks/0313-2/38560\nReading from: dataset/clips/0313-2/34080/20.jpg\nSaving to: dataset/masks/0313-2/34080\nReading from: dataset/clips/0313-2/27660/20.jpg\nSaving to: dataset/masks/0313-2/27660\nReading from: dataset/clips/0313-2/41100/20.jpg\nSaving to: dataset/masks/0313-2/41100\nReading from: dataset/clips/0313-2/27180/20.jpg\nSaving to: dataset/masks/0313-2/27180\nReading from: dataset/clips/0313-2/59640/20.jpg\nSaving to: dataset/masks/0313-2/59640\nReading from: dataset/clips/0313-2/150/20.jpg\nSaving to: dataset/masks/0313-2/150\nReading from: dataset/clips/0313-2/24960/20.jpg\nSaving to: dataset/masks/0313-2/24960\nReading from: dataset/clips/0313-2/65/20.jpg\nSaving to: dataset/masks/0313-2/65\nReading from: dataset/clips/0313-2/16740/20.jpg\nSaving to: dataset/masks/0313-2/16740\nReading from: dataset/clips/0313-2/1250/20.jpg\nSaving to: dataset/masks/0313-2/1250\nReading from: dataset/clips/0313-2/1295/20.jpg\nSaving to: dataset/masks/0313-2/1295\nReading from: dataset/clips/0313-2/17340/20.jpg\nSaving to: dataset/masks/0313-2/17340\nReading from: dataset/clips/0313-2/10080/20.jpg\nSaving to: dataset/masks/0313-2/10080\nReading from: dataset/clips/0313-2/36860/20.jpg\nSaving to: dataset/masks/0313-2/36860\nReading from: dataset/clips/0313-2/39900/20.jpg\nSaving to: dataset/masks/0313-2/39900\nReading from: dataset/clips/0313-2/6060/20.jpg\nSaving to: dataset/masks/0313-2/6060\nReading from: dataset/clips/0313-2/24600/20.jpg\nSaving to: dataset/masks/0313-2/24600\nReading from: dataset/clips/0313-2/61200/20.jpg\nSaving to: dataset/masks/0313-2/61200\nReading from: dataset/clips/0313-2/50/20.jpg\nSaving to: dataset/masks/0313-2/50\nReading from: dataset/clips/0313-2/12720/20.jpg\nSaving to: dataset/masks/0313-2/12720\nReading from: dataset/clips/0313-2/36840/20.jpg\nSaving to: dataset/masks/0313-2/36840\nReading from: dataset/clips/0313-2/485/20.jpg\nSaving to: dataset/masks/0313-2/485\nReading from: dataset/clips/0313-2/575/20.jpg\nSaving to: dataset/masks/0313-2/575\nReading from: dataset/clips/0313-2/34560/20.jpg\nSaving to: dataset/masks/0313-2/34560\nReading from: dataset/clips/0313-2/13920/20.jpg\nSaving to: dataset/masks/0313-2/13920\nReading from: dataset/clips/0313-2/31560/20.jpg\nSaving to: dataset/masks/0313-2/31560\nReading from: dataset/clips/0313-2/440/20.jpg\nSaving to: dataset/masks/0313-2/440\nReading from: dataset/clips/0313-2/22080/20.jpg\nSaving to: dataset/masks/0313-2/22080\nReading from: dataset/clips/0313-2/63780/20.jpg\nSaving to: dataset/masks/0313-2/63780\nReading from: dataset/clips/0313-2/1750/20.jpg\nSaving to: dataset/masks/0313-2/1750\nReading from: dataset/clips/0313-2/5940/20.jpg\nSaving to: dataset/masks/0313-2/5940\nReading from: dataset/clips/0313-2/60300/20.jpg\nSaving to: dataset/masks/0313-2/60300\nReading from: dataset/clips/0313-2/32800/20.jpg\nSaving to: dataset/masks/0313-2/32800\nReading from: dataset/clips/0313-2/41480/20.jpg\nSaving to: dataset/masks/0313-2/41480\nReading from: dataset/clips/0313-2/6120/20.jpg\nSaving to: dataset/masks/0313-2/6120\nReading from: dataset/clips/0313-2/15060/20.jpg\nSaving to: dataset/masks/0313-2/15060\nReading from: dataset/clips/0313-2/785/20.jpg\nSaving to: dataset/masks/0313-2/785\nReading from: dataset/clips/0313-2/15780/20.jpg\nSaving to: dataset/masks/0313-2/15780\nReading 
from: dataset/clips/0313-2/21180/20.jpg\nSaving to: dataset/masks/0313-2/21180\nReading from: dataset/clips/0313-2/17580/20.jpg\nSaving to: dataset/masks/0313-2/17580\nReading from: dataset/clips/0313-2/33580/20.jpg\nSaving to: dataset/masks/0313-2/33580\nReading from: dataset/clips/0313-2/1255/20.jpg\nSaving to: dataset/masks/0313-2/1255\nReading from: dataset/clips/0313-2/41080/20.jpg\nSaving to: dataset/masks/0313-2/41080\nReading from: dataset/clips/0313-2/33540/20.jpg\nSaving to: dataset/masks/0313-2/33540\nReading from: dataset/clips/0313-2/63120/20.jpg\nSaving to: dataset/masks/0313-2/63120\nReading from: dataset/clips/0313-2/715/20.jpg\nSaving to: dataset/masks/0313-2/715\nReading from: dataset/clips/0313-2/27240/20.jpg\nSaving to: dataset/masks/0313-2/27240\nReading from: dataset/clips/0313-2/2820/20.jpg\nSaving to: dataset/masks/0313-2/2820\nReading from: dataset/clips/0313-2/15420/20.jpg\nSaving to: dataset/masks/0313-2/15420\nReading from: dataset/clips/0313-2/425/20.jpg\nSaving to: dataset/masks/0313-2/425\nReading from: dataset/clips/0313-2/62460/20.jpg\nSaving to: dataset/masks/0313-2/62460\nReading from: dataset/clips/0313-2/59160/20.jpg\nSaving to: dataset/masks/0313-2/59160\nReading from: dataset/clips/0313-2/11040/20.jpg\nSaving to: dataset/masks/0313-2/11040\nReading from: dataset/clips/0313-2/290/20.jpg\nSaving to: dataset/masks/0313-2/290\nReading from: dataset/clips/0313-2/37340/20.jpg\nSaving to: dataset/masks/0313-2/37340\nReading from: dataset/clips/0313-2/39700/20.jpg\nSaving to: dataset/masks/0313-2/39700\nReading from: dataset/clips/0313-2/41160/20.jpg\nSaving to: dataset/masks/0313-2/41160\nReading from: dataset/clips/0313-2/41640/20.jpg\nSaving to: dataset/masks/0313-2/41640\nReading from: dataset/clips/0313-2/41760/20.jpg\nSaving to: dataset/masks/0313-2/41760\nReading from: dataset/clips/0313-2/39720/20.jpg\nSaving to: dataset/masks/0313-2/39720\nReading from: dataset/clips/0313-2/30420/20.jpg\nSaving to: dataset/masks/0313-2/30420\nReading from: dataset/clips/0313-2/32940/20.jpg\nSaving to: dataset/masks/0313-2/32940\nReading from: dataset/clips/0313-2/25680/20.jpg\nSaving to: dataset/masks/0313-2/25680\nReading from: dataset/clips/0313-2/18420/20.jpg\nSaving to: dataset/masks/0313-2/18420\nReading from: dataset/clips/0313-2/510/20.jpg\nSaving to: dataset/masks/0313-2/510\nReading from: dataset/clips/0313-2/20880/20.jpg\nSaving to: dataset/masks/0313-2/20880\nReading from: dataset/clips/0313-2/23160/20.jpg\nSaving to: dataset/masks/0313-2/23160\nReading from: dataset/clips/0313-2/1205/20.jpg\nSaving to: dataset/masks/0313-2/1205\nReading from: dataset/clips/0313-2/185/20.jpg\nSaving to: dataset/masks/0313-2/185\nReading from: dataset/clips/0313-2/16920/20.jpg\nSaving to: dataset/masks/0313-2/16920\nReading from: dataset/clips/0313-2/35580/20.jpg\nSaving to: dataset/masks/0313-2/35580\nReading from: dataset/clips/0313-2/1690/20.jpg\nSaving to: dataset/masks/0313-2/1690\nReading from: dataset/clips/0313-2/9300/20.jpg\nSaving to: dataset/masks/0313-2/9300\nReading from: dataset/clips/0313-2/36120/20.jpg\nSaving to: dataset/masks/0313-2/36120\nReading from: dataset/clips/0313-2/3120/20.jpg\nSaving to: dataset/masks/0313-2/3120\nReading from: dataset/clips/0313-2/1110/20.jpg\nSaving to: dataset/masks/0313-2/1110\nReading from: dataset/clips/0313-2/835/20.jpg\nSaving to: dataset/masks/0313-2/835\nReading from: dataset/clips/0313-2/64680/20.jpg\nSaving to: dataset/masks/0313-2/64680\nReading from: dataset/clips/0313-2/4680/20.jpg\nSaving to: 
dataset/masks/0313-2/4680\nReading from: dataset/clips/0313-2/18120/20.jpg\nSaving to: dataset/masks/0313-2/18120\nReading from: dataset/clips/0313-2/12000/20.jpg\nSaving to: dataset/masks/0313-2/12000\nReading from: dataset/clips/0313-2/40900/20.jpg\nSaving to: dataset/masks/0313-2/40900\nReading from: dataset/clips/0313-2/62280/20.jpg\nSaving to: dataset/masks/0313-2/62280\nReading from: dataset/clips/0313-2/1495/20.jpg\nSaving to: dataset/masks/0313-2/1495\nReading from: dataset/clips/0313-2/36420/20.jpg\nSaving to: dataset/masks/0313-2/36420\nReading from: dataset/clips/0313-2/43060/20.jpg\nSaving to: dataset/masks/0313-2/43060\nReading from: dataset/clips/0313-2/30060/20.jpg\nSaving to: dataset/masks/0313-2/30060\nReading from: dataset/clips/0313-2/35660/20.jpg\nSaving to: dataset/masks/0313-2/35660\nReading from: dataset/clips/0313-2/21840/20.jpg\nSaving to: dataset/masks/0313-2/21840\nReading from: dataset/clips/0313-2/7860/20.jpg\nSaving to: dataset/masks/0313-2/7860\nReading from: dataset/clips/0313-2/28860/20.jpg\nSaving to: dataset/masks/0313-2/28860\nReading from: dataset/clips/0313-2/35180/20.jpg\nSaving to: dataset/masks/0313-2/35180\nReading from: dataset/clips/0313-2/38780/20.jpg\nSaving to: dataset/masks/0313-2/38780\nReading from: dataset/clips/0313-2/39460/20.jpg\nSaving to: dataset/masks/0313-2/39460\nReading from: dataset/clips/0313-2/42760/20.jpg\nSaving to: dataset/masks/0313-2/42760\nReading from: dataset/clips/0313-2/495/20.jpg\nSaving to: dataset/masks/0313-2/495\nReading from: dataset/clips/0313-2/40980/20.jpg\nSaving to: dataset/masks/0313-2/40980\nReading from: dataset/clips/0313-2/810/20.jpg\nSaving to: dataset/masks/0313-2/810\nReading from: dataset/clips/0313-2/3480/20.jpg\nSaving to: dataset/masks/0313-2/3480\nReading from: dataset/clips/0313-2/17040/20.jpg\nSaving to: dataset/masks/0313-2/17040\nReading from: dataset/clips/0313-2/21300/20.jpg\nSaving to: dataset/masks/0313-2/21300\nReading from: dataset/clips/0313-2/34380/20.jpg\nSaving to: dataset/masks/0313-2/34380\nReading from: dataset/clips/0313-2/59340/20.jpg\nSaving to: dataset/masks/0313-2/59340\nReading from: dataset/clips/0313-2/40120/20.jpg\nSaving to: dataset/masks/0313-2/40120\nReading from: dataset/clips/0313-2/36300/20.jpg\nSaving to: dataset/masks/0313-2/36300\nReading from: dataset/clips/0313-2/20400/20.jpg\nSaving to: dataset/masks/0313-2/20400\nReading from: dataset/clips/0313-2/16260/20.jpg\nSaving to: dataset/masks/0313-2/16260\nReading from: dataset/clips/0313-2/42640/20.jpg\nSaving to: dataset/masks/0313-2/42640\nReading from: dataset/clips/0313-2/42560/20.jpg\nSaving to: dataset/masks/0313-2/42560\nReading from: dataset/clips/0313-2/655/20.jpg\nSaving to: dataset/masks/0313-2/655\nReading from: dataset/clips/0313-2/16980/20.jpg\nSaving to: dataset/masks/0313-2/16980\nReading from: dataset/clips/0313-2/61500/20.jpg\nSaving to: dataset/masks/0313-2/61500\nReading from: dataset/clips/0313-2/39360/20.jpg\nSaving to: dataset/masks/0313-2/39360\nReading from: dataset/clips/0313-2/3180/20.jpg\nSaving to: dataset/masks/0313-2/3180\nReading from: dataset/clips/0313-2/1550/20.jpg\nSaving to: dataset/masks/0313-2/1550\nReading from: dataset/clips/0313-2/3600/20.jpg\nSaving to: dataset/masks/0313-2/3600\nReading from: dataset/clips/0313-2/3240/20.jpg\nSaving to: dataset/masks/0313-2/3240\nReading from: dataset/clips/0313-2/175/20.jpg\nSaving to: dataset/masks/0313-2/175\nReading from: dataset/clips/0313-2/38520/20.jpg\nSaving to: dataset/masks/0313-2/38520\nReading from: 
dataset/clips/0313-2/640/20.jpg\nSaving to: dataset/masks/0313-2/640\nReading from: dataset/clips/0313-2/36620/20.jpg\nSaving to: dataset/masks/0313-2/36620\nReading from: dataset/clips/0313-2/24480/20.jpg\nSaving to: dataset/masks/0313-2/24480\nReading from: dataset/clips/0313-2/24360/20.jpg\nSaving to: dataset/masks/0313-2/24360\nReading from: dataset/clips/0313-2/32720/20.jpg\nSaving to: dataset/masks/0313-2/32720\nReading from: dataset/clips/0313-2/31980/20.jpg\nSaving to: dataset/masks/0313-2/31980\nReading from: dataset/clips/0313-2/10260/20.jpg\nSaving to: dataset/masks/0313-2/10260\nReading from: dataset/clips/0313-2/32600/20.jpg\nSaving to: dataset/masks/0313-2/32600\nReading from: dataset/clips/0313-2/36600/20.jpg\nSaving to: dataset/masks/0313-2/36600\nReading from: dataset/clips/0313-2/17160/20.jpg\nSaving to: dataset/masks/0313-2/17160\nReading from: dataset/clips/0313-2/24720/20.jpg\nSaving to: dataset/masks/0313-2/24720\nReading from: dataset/clips/0313-2/32500/20.jpg\nSaving to: dataset/masks/0313-2/32500\nReading from: dataset/clips/0313-2/350/20.jpg\nSaving to: dataset/masks/0313-2/350\nReading from: dataset/clips/0313-2/1430/20.jpg\nSaving to: dataset/masks/0313-2/1430\nReading from: dataset/clips/0313-2/28980/20.jpg\nSaving to: dataset/masks/0313-2/28980\nReading from: dataset/clips/0313-2/39640/20.jpg\nSaving to: dataset/masks/0313-2/39640\nReading from: dataset/clips/0313-2/39040/20.jpg\nSaving to: dataset/masks/0313-2/39040\nReading from: dataset/clips/0313-2/62340/20.jpg\nSaving to: dataset/masks/0313-2/62340\nReading from: dataset/clips/0313-2/12540/20.jpg\nSaving to: dataset/masks/0313-2/12540\nReading from: dataset/clips/0313-2/36520/20.jpg\nSaving to: dataset/masks/0313-2/36520\nReading from: dataset/clips/0313-2/39880/20.jpg\nSaving to: dataset/masks/0313-2/39880\nReading from: dataset/clips/0313-2/19320/20.jpg\nSaving to: dataset/masks/0313-2/19320\nReading from: dataset/clips/0313-2/34340/20.jpg\nSaving to: dataset/masks/0313-2/34340\nReading from: dataset/clips/0313-2/25440/20.jpg\nSaving to: dataset/masks/0313-2/25440\nReading from: dataset/clips/0313-2/23400/20.jpg\nSaving to: dataset/masks/0313-2/23400\nReading from: dataset/clips/0313-2/10140/20.jpg\nSaving to: dataset/masks/0313-2/10140\nReading from: dataset/clips/0313-2/1655/20.jpg\nSaving to: dataset/masks/0313-2/1655\nReading from: dataset/clips/0313-2/25800/20.jpg\nSaving to: dataset/masks/0313-2/25800\nReading from: dataset/clips/0313-2/25860/20.jpg\nSaving to: dataset/masks/0313-2/25860\nReading from: dataset/clips/0313-2/37360/20.jpg\nSaving to: dataset/masks/0313-2/37360\nReading from: dataset/clips/0313-2/33120/20.jpg\nSaving to: dataset/masks/0313-2/33120\nReading from: dataset/clips/0313-2/15300/20.jpg\nSaving to: dataset/masks/0313-2/15300\nReading from: dataset/clips/0313-2/10920/20.jpg\nSaving to: dataset/masks/0313-2/10920\nReading from: dataset/clips/0313-2/2940/20.jpg\nSaving to: dataset/masks/0313-2/2940\nReading from: dataset/clips/0313-2/1305/20.jpg\nSaving to: dataset/masks/0313-2/1305\nReading from: dataset/clips/0313-2/20/20.jpg\nSaving to: dataset/masks/0313-2/20\nReading from: dataset/clips/0313-2/7680/20.jpg\nSaving to: dataset/masks/0313-2/7680\nReading from: dataset/clips/0313-2/18300/20.jpg\nSaving to: dataset/masks/0313-2/18300\nReading from: dataset/clips/0313-2/235/20.jpg\nSaving to: dataset/masks/0313-2/235\nReading from: dataset/clips/0313-2/45/20.jpg\nSaving to: dataset/masks/0313-2/45\nReading from: dataset/clips/0313-2/37240/20.jpg\nSaving to: 
dataset/masks/0313-2/37240\nReading from: dataset/clips/0313-2/37160/20.jpg\nSaving to: dataset/masks/0313-2/37160\nReading from: dataset/clips/0313-2/59400/20.jpg\nSaving to: dataset/masks/0313-2/59400\nReading from: dataset/clips/0313-2/32780/20.jpg\nSaving to: dataset/masks/0313-2/32780\nReading from: dataset/clips/0313-2/40220/20.jpg\nSaving to: dataset/masks/0313-2/40220\nReading from: dataset/clips/0313-2/4500/20.jpg\nSaving to: dataset/masks/0313-2/4500\nReading from: dataset/clips/0313-2/35060/20.jpg\nSaving to: dataset/masks/0313-2/35060\nReading from: dataset/clips/0313-2/1715/20.jpg\nSaving to: dataset/masks/0313-2/1715\nReading from: dataset/clips/0313-2/4080/20.jpg\nSaving to: dataset/masks/0313-2/4080\nReading from: dataset/clips/0313-2/63600/20.jpg\nSaving to: dataset/masks/0313-2/63600\nReading from: dataset/clips/0313-2/32980/20.jpg\nSaving to: dataset/masks/0313-2/32980\nReading from: dataset/clips/0313-2/38880/20.jpg\nSaving to: dataset/masks/0313-2/38880\nReading from: dataset/clips/0313-2/18540/20.jpg\nSaving to: dataset/masks/0313-2/18540\nReading from: dataset/clips/0313-2/34920/20.jpg\nSaving to: dataset/masks/0313-2/34920\nReading from: dataset/clips/0313-2/6780/20.jpg\nSaving to: dataset/masks/0313-2/6780\nReading from: dataset/clips/0313-2/1270/20.jpg\nSaving to: dataset/masks/0313-2/1270\nReading from: dataset/clips/0313-2/12960/20.jpg\nSaving to: dataset/masks/0313-2/12960\nReading from: dataset/clips/0313-2/42880/20.jpg\nSaving to: dataset/masks/0313-2/42880\nReading from: dataset/clips/0313-2/415/20.jpg\nSaving to: dataset/masks/0313-2/415\nReading from: dataset/clips/0313-2/36880/20.jpg\nSaving to: dataset/masks/0313-2/36880\nReading from: dataset/clips/0313-2/840/20.jpg\nSaving to: dataset/masks/0313-2/840\nReading from: dataset/clips/0313-2/28500/20.jpg\nSaving to: dataset/masks/0313-2/28500\nReading from: dataset/clips/0313-2/28620/20.jpg\nSaving to: dataset/masks/0313-2/28620\nReading from: dataset/clips/0313-2/140/20.jpg\nSaving to: dataset/masks/0313-2/140\nReading from: dataset/clips/0313-2/18240/20.jpg\nSaving to: dataset/masks/0313-2/18240\nReading from: dataset/clips/0313-2/22320/20.jpg\nSaving to: dataset/masks/0313-2/22320\nReading from: dataset/clips/0313-2/41820/20.jpg\nSaving to: dataset/masks/0313-2/41820\nReading from: dataset/clips/0313-2/10740/20.jpg\nSaving to: dataset/masks/0313-2/10740\nReading from: dataset/clips/0313-2/955/20.jpg\nSaving to: dataset/masks/0313-2/955\nReading from: dataset/clips/0313-2/40780/20.jpg\nSaving to: dataset/masks/0313-2/40780\nReading from: dataset/clips/0313-2/59820/20.jpg\nSaving to: dataset/masks/0313-2/59820\nReading from: dataset/clips/0313-2/39140/20.jpg\nSaving to: dataset/masks/0313-2/39140\nReading from: dataset/clips/0313-2/1790/20.jpg\nSaving to: dataset/masks/0313-2/1790\nReading from: dataset/clips/0313-2/39920/20.jpg\nSaving to: dataset/masks/0313-2/39920\nReading from: dataset/clips/0313-2/64560/20.jpg\nSaving to: dataset/masks/0313-2/64560\nReading from: dataset/clips/0313-2/1175/20.jpg\nSaving to: dataset/masks/0313-2/1175\nReading from: dataset/clips/0313-2/36340/20.jpg\nSaving to: dataset/masks/0313-2/36340\nReading from: dataset/clips/0313-2/1120/20.jpg\nSaving to: dataset/masks/0313-2/1120\nReading from: dataset/clips/0313-2/35260/20.jpg\nSaving to: dataset/masks/0313-2/35260\nReading from: dataset/clips/0313-2/720/20.jpg\nSaving to: dataset/masks/0313-2/720\nReading from: dataset/clips/0313-2/38320/20.jpg\nSaving to: dataset/masks/0313-2/38320\nReading from: 
dataset/clips/0313-2/10500/20.jpg\nSaving to: dataset/masks/0313-2/10500\nReading from: dataset/clips/0313-2/505/20.jpg\nSaving to: dataset/masks/0313-2/505\nReading from: dataset/clips/0313-2/29640/20.jpg\nSaving to: dataset/masks/0313-2/29640\nReading from: dataset/clips/0313-2/40280/20.jpg\nSaving to: dataset/masks/0313-2/40280\nReading from: dataset/clips/0313-2/5280/20.jpg\nSaving to: dataset/masks/0313-2/5280\nReading from: dataset/clips/0313-2/39000/20.jpg\nSaving to: dataset/masks/0313-2/39000\nReading from: dataset/clips/0313-2/1145/20.jpg\nSaving to: dataset/masks/0313-2/1145\nReading from: dataset/clips/0313-2/11940/20.jpg\nSaving to: dataset/masks/0313-2/11940\nReading from: dataset/clips/0313-2/34480/20.jpg\nSaving to: dataset/masks/0313-2/34480\nReading from: dataset/clips/0313-2/39960/20.jpg\nSaving to: dataset/masks/0313-2/39960\nReading from: dataset/clips/0313-2/33600/20.jpg\nSaving to: dataset/masks/0313-2/33600\nReading from: dataset/clips/0313-2/1170/20.jpg\nSaving to: dataset/masks/0313-2/1170\nReading from: dataset/clips/0313-2/1545/20.jpg\nSaving to: dataset/masks/0313-2/1545\nReading from: dataset/clips/0313-2/15900/20.jpg\nSaving to: dataset/masks/0313-2/15900\nReading from: dataset/clips/0313-2/23340/20.jpg\nSaving to: dataset/masks/0313-2/23340\nReading from: dataset/clips/0313-2/4020/20.jpg\nSaving to: dataset/masks/0313-2/4020\nReading from: dataset/clips/0313-2/40820/20.jpg\nSaving to: dataset/masks/0313-2/40820\nReading from: dataset/clips/0313-2/36820/20.jpg\nSaving to: dataset/masks/0313-2/36820\nReading from: dataset/clips/0313-2/39500/20.jpg\nSaving to: dataset/masks/0313-2/39500\nReading from: dataset/clips/0313-2/5820/20.jpg\nSaving to: dataset/masks/0313-2/5820\nReading from: dataset/clips/0313-2/25320/20.jpg\nSaving to: dataset/masks/0313-2/25320\nReading from: dataset/clips/0313-2/41600/20.jpg\nSaving to: dataset/masks/0313-2/41600\nReading from: dataset/clips/0313-2/42540/20.jpg\nSaving to: dataset/masks/0313-2/42540\nReading from: dataset/clips/0313-2/1615/20.jpg\nSaving to: dataset/masks/0313-2/1615\nReading from: dataset/clips/0313-2/36400/20.jpg\nSaving to: dataset/masks/0313-2/36400\nReading from: dataset/clips/0313-2/35820/20.jpg\nSaving to: dataset/masks/0313-2/35820\nReading from: dataset/clips/0313-2/35740/20.jpg\nSaving to: dataset/masks/0313-2/35740\nReading from: dataset/clips/0313-2/1770/20.jpg\nSaving to: dataset/masks/0313-2/1770\nReading from: dataset/clips/0313-2/20760/20.jpg\nSaving to: dataset/masks/0313-2/20760\nReading from: dataset/clips/0313-2/36960/20.jpg\nSaving to: dataset/masks/0313-2/36960\nReading from: dataset/clips/0313-2/35380/20.jpg\nSaving to: dataset/masks/0313-2/35380\nReading from: dataset/clips/0313-2/1860/20.jpg\nSaving to: dataset/masks/0313-2/1860\nReading from: dataset/clips/0313-2/30720/20.jpg\nSaving to: dataset/masks/0313-2/30720\nReading from: dataset/clips/0313-2/37800/20.jpg\nSaving to: dataset/masks/0313-2/37800\nReading from: dataset/clips/0313-2/34220/20.jpg\nSaving to: dataset/masks/0313-2/34220\nReading from: dataset/clips/0313-2/39860/20.jpg\nSaving to: dataset/masks/0313-2/39860\nReading from: dataset/clips/0313-2/34740/20.jpg\nSaving to: dataset/masks/0313-2/34740\nReading from: dataset/clips/0313-2/38600/20.jpg\nSaving to: dataset/masks/0313-2/38600\nReading from: dataset/clips/0313-2/38660/20.jpg\nSaving to: dataset/masks/0313-2/38660\nReading from: dataset/clips/0313-2/13140/20.jpg\nSaving to: dataset/masks/0313-2/13140\nReading from: dataset/clips/0313-2/1535/20.jpg\nSaving to: 
dataset/masks/0313-2/1535\nReading from: dataset/clips/0313-2/1685/20.jpg\nSaving to: dataset/masks/0313-2/1685\n[... repetitive output truncated: the same "Reading from: dataset/clips/0313-2/<clip_id>/20.jpg" / "Saving to: dataset/masks/0313-2/<clip_id>" pair repeats once per clip directory under dataset/clips/0313-2 ...]\nReading from: dataset/clips/0313-2/35240/20.jpg\nSaving to: 
dataset/masks/0313-2/35240\nReading from: dataset/clips/0313-2/39940/20.jpg\nSaving to: dataset/masks/0313-2/39940\nReading from: dataset/clips/0313-2/24900/20.jpg\nSaving to: dataset/masks/0313-2/24900\nReading from: dataset/clips/0313-2/14100/20.jpg\nSaving to: dataset/masks/0313-2/14100\nReading from: dataset/clips/0313-2/1240/20.jpg\nSaving to: dataset/masks/0313-2/1240\nReading from: dataset/clips/0313-2/39480/20.jpg\nSaving to: dataset/masks/0313-2/39480\nReading from: dataset/clips/0313-2/25920/20.jpg\nSaving to: dataset/masks/0313-2/25920\nReading from: dataset/clips/0313-2/23940/20.jpg\nSaving to: dataset/masks/0313-2/23940\nReading from: dataset/clips/0313-2/15000/20.jpg\nSaving to: dataset/masks/0313-2/15000\nReading from: dataset/clips/0313-2/42020/20.jpg\nSaving to: dataset/masks/0313-2/42020\nReading from: dataset/clips/0313-2/32400/20.jpg\nSaving to: dataset/masks/0313-2/32400\nReading from: dataset/clips/0313-2/32860/20.jpg\nSaving to: dataset/masks/0313-2/32860\nReading from: dataset/clips/0313-2/460/20.jpg\nSaving to: dataset/masks/0313-2/460\nReading from: dataset/clips/0313-2/36240/20.jpg\nSaving to: dataset/masks/0313-2/36240\nReading from: dataset/clips/0313-2/38580/20.jpg\nSaving to: dataset/masks/0313-2/38580\nReading from: dataset/clips/0313-2/36000/20.jpg\nSaving to: dataset/masks/0313-2/36000\nReading from: dataset/clips/0313-2/38540/20.jpg\nSaving to: dataset/masks/0313-2/38540\nReading from: dataset/clips/0313-2/36320/20.jpg\nSaving to: dataset/masks/0313-2/36320\nReading from: dataset/clips/0313-2/1365/20.jpg\nSaving to: dataset/masks/0313-2/1365\nReading from: dataset/clips/0313-2/21480/20.jpg\nSaving to: dataset/masks/0313-2/21480\nReading from: dataset/clips/0313-2/37400/20.jpg\nSaving to: dataset/masks/0313-2/37400\nReading from: dataset/clips/0313-2/985/20.jpg\nSaving to: dataset/masks/0313-2/985\nReading from: dataset/clips/0313-2/32440/20.jpg\nSaving to: dataset/masks/0313-2/32440\nReading from: dataset/clips/0313-2/41060/20.jpg\nSaving to: dataset/masks/0313-2/41060\nReading from: dataset/clips/0313-2/1515/20.jpg\nSaving to: dataset/masks/0313-2/1515\nReading from: dataset/clips/0313-2/37620/20.jpg\nSaving to: dataset/masks/0313-2/37620\nReading from: dataset/clips/0313-2/8400/20.jpg\nSaving to: dataset/masks/0313-2/8400\nReading from: dataset/clips/0313-2/22200/20.jpg\nSaving to: dataset/masks/0313-2/22200\nReading from: dataset/clips/0313-2/34360/20.jpg\nSaving to: dataset/masks/0313-2/34360\nReading from: dataset/clips/0313-2/32100/20.jpg\nSaving to: dataset/masks/0313-2/32100\nReading from: dataset/clips/0313-2/41340/20.jpg\nSaving to: dataset/masks/0313-2/41340\nReading from: dataset/clips/0313-2/39680/20.jpg\nSaving to: dataset/masks/0313-2/39680\nReading from: dataset/clips/0313-2/3780/20.jpg\nSaving to: dataset/masks/0313-2/3780\nReading from: dataset/clips/0313-2/38860/20.jpg\nSaving to: dataset/masks/0313-2/38860\nReading from: dataset/clips/0313-2/29040/20.jpg\nSaving to: dataset/masks/0313-2/29040\nReading from: dataset/clips/0313-2/38440/20.jpg\nSaving to: dataset/masks/0313-2/38440\nReading from: dataset/clips/0313-2/260/20.jpg\nSaving to: dataset/masks/0313-2/260\nReading from: dataset/clips/0313-2/26940/20.jpg\nSaving to: dataset/masks/0313-2/26940\nReading from: dataset/clips/0313-2/63960/20.jpg\nSaving to: dataset/masks/0313-2/63960\nReading from: dataset/clips/0313-2/11460/20.jpg\nSaving to: dataset/masks/0313-2/11460\nReading from: dataset/clips/0313-2/40460/20.jpg\nSaving to: dataset/masks/0313-2/40460\nReading from: 
dataset/clips/0313-2/34040/20.jpg\nSaving to: dataset/masks/0313-2/34040\nReading from: dataset/clips/0313-2/8640/20.jpg\nSaving to: dataset/masks/0313-2/8640\nReading from: dataset/clips/0313-2/1060/20.jpg\nSaving to: dataset/masks/0313-2/1060\nReading from: dataset/clips/0313-2/1015/20.jpg\nSaving to: dataset/masks/0313-2/1015\nReading from: dataset/clips/0313-2/37180/20.jpg\nSaving to: dataset/masks/0313-2/37180\nReading from: dataset/clips/0313-2/64140/20.jpg\nSaving to: dataset/masks/0313-2/64140\nReading from: dataset/clips/0313-2/18480/20.jpg\nSaving to: dataset/masks/0313-2/18480\nReading from: dataset/clips/0313-2/1265/20.jpg\nSaving to: dataset/masks/0313-2/1265\nReading from: dataset/clips/0313-2/37020/20.jpg\nSaving to: dataset/masks/0313-2/37020\nReading from: dataset/clips/0313-2/41660/20.jpg\nSaving to: dataset/masks/0313-2/41660\nReading from: dataset/clips/0313-2/64200/20.jpg\nSaving to: dataset/masks/0313-2/64200\nReading from: dataset/clips/0313-2/10380/20.jpg\nSaving to: dataset/masks/0313-2/10380\nReading from: dataset/clips/0313-2/40300/20.jpg\nSaving to: dataset/masks/0313-2/40300\nReading from: dataset/clips/0313-2/37280/20.jpg\nSaving to: dataset/masks/0313-2/37280\nReading from: dataset/clips/0313-2/34760/20.jpg\nSaving to: dataset/masks/0313-2/34760\nReading from: dataset/clips/0313-2/1800/20.jpg\nSaving to: dataset/masks/0313-2/1800\nReading from: dataset/clips/0313-2/230/20.jpg\nSaving to: dataset/masks/0313-2/230\nReading from: dataset/clips/0313-2/39580/20.jpg\nSaving to: dataset/masks/0313-2/39580\nReading from: dataset/clips/0313-2/1555/20.jpg\nSaving to: dataset/masks/0313-2/1555\nReading from: dataset/clips/0313-2/20340/20.jpg\nSaving to: dataset/masks/0313-2/20340\nReading from: dataset/clips/0313-2/450/20.jpg\nSaving to: dataset/masks/0313-2/450\nReading from: dataset/clips/0313-2/33560/20.jpg\nSaving to: dataset/masks/0313-2/33560\nReading from: dataset/clips/0313-2/10800/20.jpg\nSaving to: dataset/masks/0313-2/10800\nReading from: dataset/clips/0313-2/35600/20.jpg\nSaving to: dataset/masks/0313-2/35600\nReading from: dataset/clips/0313-2/37420/20.jpg\nSaving to: dataset/masks/0313-2/37420\nReading from: dataset/clips/0313-2/2460/20.jpg\nSaving to: dataset/masks/0313-2/2460\nReading from: dataset/clips/0313-2/13500/20.jpg\nSaving to: dataset/masks/0313-2/13500\nReading from: dataset/clips/0313-2/145/20.jpg\nSaving to: dataset/masks/0313-2/145\nReading from: dataset/clips/0313-2/18060/20.jpg\nSaving to: dataset/masks/0313-2/18060\nReading from: dataset/clips/0313-2/34940/20.jpg\nSaving to: dataset/masks/0313-2/34940\nReading from: dataset/clips/0313-2/35020/20.jpg\nSaving to: dataset/masks/0313-2/35020\nReading from: dataset/clips/0313-2/61920/20.jpg\nSaving to: dataset/masks/0313-2/61920\nReading from: dataset/clips/0313-2/38160/20.jpg\nSaving to: dataset/masks/0313-2/38160\nReading from: dataset/clips/0313-2/32220/20.jpg\nSaving to: dataset/masks/0313-2/32220\nReading from: dataset/clips/0313-2/33960/20.jpg\nSaving to: dataset/masks/0313-2/33960\nReading from: dataset/clips/0313-2/915/20.jpg\nSaving to: dataset/masks/0313-2/915\nReading from: dataset/clips/0313-2/60840/20.jpg\nSaving to: dataset/masks/0313-2/60840\nReading from: dataset/clips/0313-2/58860/20.jpg\nSaving to: dataset/masks/0313-2/58860\nReading from: dataset/clips/0313-2/5700/20.jpg\nSaving to: dataset/masks/0313-2/5700\nReading from: dataset/clips/0313-2/64440/20.jpg\nSaving to: dataset/masks/0313-2/64440\nReading from: dataset/clips/0313-2/13860/20.jpg\nSaving to: 
dataset/masks/0313-2/13860\nReading from: dataset/clips/0313-2/35160/20.jpg\nSaving to: dataset/masks/0313-2/35160\nReading from: dataset/clips/0313-2/30000/20.jpg\nSaving to: dataset/masks/0313-2/30000\nReading from: dataset/clips/0313-2/32460/20.jpg\nSaving to: dataset/masks/0313-2/32460\nReading from: dataset/clips/0313-2/395/20.jpg\nSaving to: dataset/masks/0313-2/395\nReading from: dataset/clips/0313-2/40040/20.jpg\nSaving to: dataset/masks/0313-2/40040\nReading from: dataset/clips/0313-2/58920/20.jpg\nSaving to: dataset/masks/0313-2/58920\nReading from: dataset/clips/0313-2/40/20.jpg\nSaving to: dataset/masks/0313-2/40\nReading from: dataset/clips/0313-2/1455/20.jpg\nSaving to: dataset/masks/0313-2/1455\nReading from: dataset/clips/0313-2/34840/20.jpg\nSaving to: dataset/masks/0313-2/34840\nReading from: dataset/clips/0313-2/41520/20.jpg\nSaving to: dataset/masks/0313-2/41520\nReading from: dataset/clips/0313-2/1675/20.jpg\nSaving to: dataset/masks/0313-2/1675\nReading from: dataset/clips/0313-2/38500/20.jpg\nSaving to: dataset/masks/0313-2/38500\nReading from: dataset/clips/0313-2/10620/20.jpg\nSaving to: dataset/masks/0313-2/10620\nReading from: dataset/clips/0313-2/42180/20.jpg\nSaving to: dataset/masks/0313-2/42180\nReading from: dataset/clips/0313-2/32740/20.jpg\nSaving to: dataset/masks/0313-2/32740\nReading from: dataset/clips/0313-2/110/20.jpg\nSaving to: dataset/masks/0313-2/110\nReading from: dataset/clips/0313-2/6240/20.jpg\nSaving to: dataset/masks/0313-2/6240\nReading from: dataset/clips/0313-2/1215/20.jpg\nSaving to: dataset/masks/0313-2/1215\nReading from: dataset/clips/0313-2/42240/20.jpg\nSaving to: dataset/masks/0313-2/42240\nReading from: dataset/clips/0313-2/38720/20.jpg\nSaving to: dataset/masks/0313-2/38720\nReading from: dataset/clips/0313-2/38040/20.jpg\nSaving to: dataset/masks/0313-2/38040\nReading from: dataset/clips/0313-2/5640/20.jpg\nSaving to: dataset/masks/0313-2/5640\nReading from: dataset/clips/0313-2/62580/20.jpg\nSaving to: dataset/masks/0313-2/62580\nReading from: dataset/clips/0313-2/31380/20.jpg\nSaving to: dataset/masks/0313-2/31380\nReading from: dataset/clips/0313-2/4560/20.jpg\nSaving to: dataset/masks/0313-2/4560\nReading from: dataset/clips/0313-2/38980/20.jpg\nSaving to: dataset/masks/0313-2/38980\nReading from: dataset/clips/0313-2/35700/20.jpg\nSaving to: dataset/masks/0313-2/35700\nReading from: dataset/clips/0313-2/80/20.jpg\nSaving to: dataset/masks/0313-2/80\nReading from: dataset/clips/0313-2/365/20.jpg\nSaving to: dataset/masks/0313-2/365\nReading from: dataset/clips/0313-2/595/20.jpg\nSaving to: dataset/masks/0313-2/595\nReading from: dataset/clips/0313-2/39300/20.jpg\nSaving to: dataset/masks/0313-2/39300\nReading from: dataset/clips/0313-2/730/20.jpg\nSaving to: dataset/masks/0313-2/730\nReading from: dataset/clips/0313-2/39060/20.jpg\nSaving to: dataset/masks/0313-2/39060\nReading from: dataset/clips/0313-2/41380/20.jpg\nSaving to: dataset/masks/0313-2/41380\nReading from: dataset/clips/0313-2/38200/20.jpg\nSaving to: dataset/masks/0313-2/38200\nReading from: dataset/clips/0313-2/34320/20.jpg\nSaving to: dataset/masks/0313-2/34320\nReading from: dataset/clips/0313-2/40200/20.jpg\nSaving to: dataset/masks/0313-2/40200\nReading from: dataset/clips/0313-2/12420/20.jpg\nSaving to: dataset/masks/0313-2/12420\nReading from: dataset/clips/0313-2/33200/20.jpg\nSaving to: dataset/masks/0313-2/33200\nReading from: dataset/clips/0313-2/470/20.jpg\nSaving to: dataset/masks/0313-2/470\nReading from: 
dataset/clips/0313-2/28920/20.jpg\nSaving to: dataset/masks/0313-2/28920\nReading from: dataset/clips/0313-2/26100/20.jpg\nSaving to: dataset/masks/0313-2/26100\nReading from: dataset/clips/0313-2/6420/20.jpg\nSaving to: dataset/masks/0313-2/6420\nReading from: dataset/clips/0313-2/25260/20.jpg\nSaving to: dataset/masks/0313-2/25260\nReading from: dataset/clips/0313-2/17460/20.jpg\nSaving to: dataset/masks/0313-2/17460\nReading from: dataset/clips/0313-2/40440/20.jpg\nSaving to: dataset/masks/0313-2/40440\nReading from: dataset/clips/0313-2/61080/20.jpg\nSaving to: dataset/masks/0313-2/61080\nReading from: dataset/clips/0313-2/4140/20.jpg\nSaving to: dataset/masks/0313-2/4140\nReading from: dataset/clips/0313-2/35440/20.jpg\nSaving to: dataset/masks/0313-2/35440\nReading from: dataset/clips/0313-2/61560/20.jpg\nSaving to: dataset/masks/0313-2/61560\nReading from: dataset/clips/0313-2/1090/20.jpg\nSaving to: dataset/masks/0313-2/1090\nReading from: dataset/clips/0313-2/61680/20.jpg\nSaving to: dataset/masks/0313-2/61680\nReading from: dataset/clips/0313-2/5340/20.jpg\nSaving to: dataset/masks/0313-2/5340\nReading from: dataset/clips/0313-2/9000/20.jpg\nSaving to: dataset/masks/0313-2/9000\nReading from: dataset/clips/0313-2/1490/20.jpg\nSaving to: dataset/masks/0313-2/1490\nReading from: dataset/clips/0313-2/665/20.jpg\nSaving to: dataset/masks/0313-2/665\nReading from: dataset/clips/0313-2/1775/20.jpg\nSaving to: dataset/masks/0313-2/1775\nReading from: dataset/clips/0313-2/42200/20.jpg\nSaving to: dataset/masks/0313-2/42200\nReading from: dataset/clips/0313-2/27840/20.jpg\nSaving to: dataset/masks/0313-2/27840\nReading from: dataset/clips/0313-2/5880/20.jpg\nSaving to: dataset/masks/0313-2/5880\nReading from: dataset/clips/0313-2/58740/20.jpg\nSaving to: dataset/masks/0313-2/58740\nReading from: dataset/clips/0313-2/34900/20.jpg\nSaving to: dataset/masks/0313-2/34900\nReading from: dataset/clips/0313-2/37580/20.jpg\nSaving to: dataset/masks/0313-2/37580\nReading from: dataset/clips/0313-2/34780/20.jpg\nSaving to: dataset/masks/0313-2/34780\nReading from: dataset/clips/0313-2/7560/20.jpg\nSaving to: dataset/masks/0313-2/7560\nReading from: dataset/clips/0313-2/18660/20.jpg\nSaving to: dataset/masks/0313-2/18660\nReading from: dataset/clips/0313-2/17280/20.jpg\nSaving to: dataset/masks/0313-2/17280\nReading from: dataset/clips/0313-2/37960/20.jpg\nSaving to: dataset/masks/0313-2/37960\nReading from: dataset/clips/0313-2/22920/20.jpg\nSaving to: dataset/masks/0313-2/22920\nReading from: dataset/clips/0313-2/22680/20.jpg\nSaving to: dataset/masks/0313-2/22680\nReading from: dataset/clips/0313-2/420/20.jpg\nSaving to: dataset/masks/0313-2/420\nReading from: dataset/clips/0313-2/8760/20.jpg\nSaving to: dataset/masks/0313-2/8760\nReading from: dataset/clips/0313-2/37140/20.jpg\nSaving to: dataset/masks/0313-2/37140\nReading from: dataset/clips/0313-2/7740/20.jpg\nSaving to: dataset/masks/0313-2/7740\nReading from: dataset/clips/0313-2/34580/20.jpg\nSaving to: dataset/masks/0313-2/34580\nReading from: dataset/clips/0313-2/34260/20.jpg\nSaving to: dataset/masks/0313-2/34260\nReading from: dataset/clips/0313-2/21600/20.jpg\nSaving to: dataset/masks/0313-2/21600\nReading from: dataset/clips/0313-2/1220/20.jpg\nSaving to: dataset/masks/0313-2/1220\nReading from: dataset/clips/0313-2/33320/20.jpg\nSaving to: dataset/masks/0313-2/33320\nReading from: dataset/clips/0313-2/9420/20.jpg\nSaving to: dataset/masks/0313-2/9420\nReading from: dataset/clips/0313-2/1695/20.jpg\nSaving to: 
dataset/masks/0313-2/1695\nReading from: dataset/clips/0313-2/1030/20.jpg\nSaving to: dataset/masks/0313-2/1030\nReading from: dataset/clips/0313-2/39260/20.jpg\nSaving to: dataset/masks/0313-2/39260\nReading from: dataset/clips/0313-2/33300/20.jpg\nSaving to: dataset/masks/0313-2/33300\nReading from: dataset/clips/0313-2/60480/20.jpg\nSaving to: dataset/masks/0313-2/60480\nReading from: dataset/clips/0313-2/34120/20.jpg\nSaving to: dataset/masks/0313-2/34120\nReading from: dataset/clips/0313-2/6600/20.jpg\nSaving to: dataset/masks/0313-2/6600\nReading from: dataset/clips/0313-2/40720/20.jpg\nSaving to: dataset/masks/0313-2/40720\nReading from: dataset/clips/0313-2/24780/20.jpg\nSaving to: dataset/masks/0313-2/24780\nReading from: dataset/clips/0313-2/38680/20.jpg\nSaving to: dataset/masks/0313-2/38680\nReading from: dataset/clips/0313-2/1730/20.jpg\nSaving to: dataset/masks/0313-2/1730\nReading from: dataset/clips/0313-2/380/20.jpg\nSaving to: dataset/masks/0313-2/380\nReading from: dataset/clips/0313-2/220/20.jpg\nSaving to: dataset/masks/0313-2/220\nReading from: dataset/clips/0313-2/22440/20.jpg\nSaving to: dataset/masks/0313-2/22440\nReading from: dataset/clips/0313-2/61740/20.jpg\nSaving to: dataset/masks/0313-2/61740\nReading from: dataset/clips/0313-2/34880/20.jpg\nSaving to: dataset/masks/0313-2/34880\nReading from: dataset/clips/0313-2/845/20.jpg\nSaving to: dataset/masks/0313-2/845\nReading from: dataset/clips/0313-2/33340/20.jpg\nSaving to: dataset/masks/0313-2/33340\nReading from: dataset/clips/0313-2/1395/20.jpg\nSaving to: dataset/masks/0313-2/1395\nReading from: dataset/clips/0313-2/30240/20.jpg\nSaving to: dataset/masks/0313-2/30240\nReading from: dataset/clips/0313-2/37600/20.jpg\nSaving to: dataset/masks/0313-2/37600\nReading from: dataset/clips/0313-2/2280/20.jpg\nSaving to: dataset/masks/0313-2/2280\nReading from: dataset/clips/0313-2/60660/20.jpg\nSaving to: dataset/masks/0313-2/60660\nReading from: dataset/clips/0313-2/42120/20.jpg\nSaving to: dataset/masks/0313-2/42120\nReading from: dataset/clips/0313-2/620/20.jpg\nSaving to: dataset/masks/0313-2/620\nReading from: dataset/clips/0313-2/10320/20.jpg\nSaving to: dataset/masks/0313-2/10320\nReading from: dataset/clips/0313-2/1645/20.jpg\nSaving to: dataset/masks/0313-2/1645\nReading from: dataset/clips/0313-2/565/20.jpg\nSaving to: dataset/masks/0313-2/565\nReading from: dataset/clips/0313-2/12240/20.jpg\nSaving to: dataset/masks/0313-2/12240\nReading from: dataset/clips/0313-2/1035/20.jpg\nSaving to: dataset/masks/0313-2/1035\nReading from: dataset/clips/0313-2/33720/20.jpg\nSaving to: dataset/masks/0313-2/33720\nReading from: dataset/clips/0313-2/3960/20.jpg\nSaving to: dataset/masks/0313-2/3960\nReading from: dataset/clips/0313-2/33880/20.jpg\nSaving to: dataset/masks/0313-2/33880\nReading from: dataset/clips/0313-2/2700/20.jpg\nSaving to: dataset/masks/0313-2/2700\nReading from: dataset/clips/0313-2/245/20.jpg\nSaving to: dataset/masks/0313-2/245\nReading from: dataset/clips/0313-2/1195/20.jpg\nSaving to: dataset/masks/0313-2/1195\nReading from: dataset/clips/0313-2/1640/20.jpg\nSaving to: dataset/masks/0313-2/1640\nReading from: dataset/clips/0313-2/39560/20.jpg\nSaving to: dataset/masks/0313-2/39560\nReading from: dataset/clips/0313-2/36760/20.jpg\nSaving to: dataset/masks/0313-2/36760\nReading from: dataset/clips/0313-2/16620/20.jpg\nSaving to: dataset/masks/0313-2/16620\nReading from: dataset/clips/0313-2/1635/20.jpg\nSaving to: dataset/masks/0313-2/1635\nReading from: dataset/clips/0313-2/24840/20.jpg\nSaving 
to: dataset/masks/0313-2/24840\nReading from: dataset/clips/0313-2/8580/20.jpg\nSaving to: dataset/masks/0313-2/8580\nReading from: dataset/clips/0313-2/41680/20.jpg\nSaving to: dataset/masks/0313-2/41680\nReading from: dataset/clips/0313-2/345/20.jpg\nSaving to: dataset/masks/0313-2/345\nReading from: dataset/clips/0313-2/1650/20.jpg\nSaving to: dataset/masks/0313-2/1650\nReading from: dataset/clips/0313-2/1575/20.jpg\nSaving to: dataset/masks/0313-2/1575\nReading from: dataset/clips/0313-2/39540/20.jpg\nSaving to: dataset/masks/0313-2/39540\nReading from: dataset/clips/0313-2/63540/20.jpg\nSaving to: dataset/masks/0313-2/63540\nReading from: dataset/clips/0313-2/29400/20.jpg\nSaving to: dataset/masks/0313-2/29400\nReading from: dataset/clips/0313-2/42000/20.jpg\nSaving to: dataset/masks/0313-2/42000\nReading from: dataset/clips/0313-2/1065/20.jpg\nSaving to: dataset/masks/0313-2/1065\nReading from: dataset/clips/0313-2/27720/20.jpg\nSaving to: dataset/masks/0313-2/27720\nReading from: dataset/clips/0313-2/35320/20.jpg\nSaving to: dataset/masks/0313-2/35320\nReading from: dataset/clips/0313-2/25020/20.jpg\nSaving to: dataset/masks/0313-2/25020\nReading from: dataset/clips/0313-2/1560/20.jpg\nSaving to: dataset/masks/0313-2/1560\nReading from: dataset/clips/0313-2/895/20.jpg\nSaving to: dataset/masks/0313-2/895\nReading from: dataset/clips/0313-2/35940/20.jpg\nSaving to: dataset/masks/0313-2/35940\nReading from: dataset/clips/0313-2/41560/20.jpg\nSaving to: dataset/masks/0313-2/41560\nReading from: dataset/clips/0313-2/20580/20.jpg\nSaving to: dataset/masks/0313-2/20580\nReading from: dataset/clips/0313-2/405/20.jpg\nSaving to: dataset/masks/0313-2/405\nReading from: dataset/clips/0313-2/40020/20.jpg\nSaving to: dataset/masks/0313-2/40020\nReading from: dataset/clips/0313-2/41220/20.jpg\nSaving to: dataset/masks/0313-2/41220\nReading from: dataset/clips/0313-2/7980/20.jpg\nSaving to: dataset/masks/0313-2/7980\nReading from: dataset/clips/0313-2/29280/20.jpg\nSaving to: dataset/masks/0313-2/29280\nReading from: dataset/clips/0313-2/60180/20.jpg\nSaving to: dataset/masks/0313-2/60180\nReading from: dataset/clips/0313-2/11820/20.jpg\nSaving to: dataset/masks/0313-2/11820\nReading from: dataset/clips/0313-2/29340/20.jpg\nSaving to: dataset/masks/0313-2/29340\nReading from: dataset/clips/0313-2/31200/20.jpg\nSaving to: dataset/masks/0313-2/31200\nReading from: dataset/clips/0313-2/25380/20.jpg\nSaving to: dataset/masks/0313-2/25380\nReading from: dataset/clips/0313-2/37520/20.jpg\nSaving to: dataset/masks/0313-2/37520\nReading from: dataset/clips/0313-2/15600/20.jpg\nSaving to: dataset/masks/0313-2/15600\nReading from: dataset/clips/0313-2/38920/20.jpg\nSaving to: dataset/masks/0313-2/38920\nReading from: dataset/clips/0313-2/17940/20.jpg\nSaving to: dataset/masks/0313-2/17940\nReading from: dataset/clips/0313-2/36440/20.jpg\nSaving to: dataset/masks/0313-2/36440\nReading from: dataset/clips/0313-2/40680/20.jpg\nSaving to: dataset/masks/0313-2/40680\nReading from: dataset/clips/0313-2/7380/20.jpg\nSaving to: dataset/masks/0313-2/7380\nReading from: dataset/clips/0313-2/580/20.jpg\nSaving to: dataset/masks/0313-2/580\nReading from: dataset/clips/0313-2/59760/20.jpg\nSaving to: dataset/masks/0313-2/59760\nReading from: dataset/clips/0313-2/36980/20.jpg\nSaving to: dataset/masks/0313-2/36980\nReading from: dataset/clips/0313-2/34620/20.jpg\nSaving to: dataset/masks/0313-2/34620\nReading from: dataset/clips/0313-2/1070/20.jpg\nSaving to: dataset/masks/0313-2/1070\nReading from: 
dataset/clips/0313-2/42080/20.jpg\nSaving to: dataset/masks/0313-2/42080\nReading from: dataset/clips/0313-2/165/20.jpg\nSaving to: dataset/masks/0313-2/165\nReading from: dataset/clips/0313-2/1285/20.jpg\nSaving to: dataset/masks/0313-2/1285\nReading from: dataset/clips/0313-2/42700/20.jpg\nSaving to: dataset/masks/0313-2/42700\nReading from: dataset/clips/0313-2/590/20.jpg\nSaving to: dataset/masks/0313-2/590\nReading from: dataset/clips/0313-2/18840/20.jpg\nSaving to: dataset/masks/0313-2/18840\nReading from: dataset/clips/0313-2/32820/20.jpg\nSaving to: dataset/masks/0313-2/32820\nReading from: dataset/clips/0313-2/950/20.jpg\nSaving to: dataset/masks/0313-2/950\nReading from: dataset/clips/0313-2/26760/20.jpg\nSaving to: dataset/masks/0313-2/26760\nReading from: dataset/clips/0313-2/1330/20.jpg\nSaving to: dataset/masks/0313-2/1330\nReading from: dataset/clips/0313-2/40700/20.jpg\nSaving to: dataset/masks/0313-2/40700\nReading from: dataset/clips/0313-2/27900/20.jpg\nSaving to: dataset/masks/0313-2/27900\nReading from: dataset/clips/0313-2/60960/20.jpg\nSaving to: dataset/masks/0313-2/60960\nReading from: dataset/clips/0313-2/30360/20.jpg\nSaving to: dataset/masks/0313-2/30360\nReading from: dataset/clips/0313-2/19980/20.jpg\nSaving to: dataset/masks/0313-2/19980\nReading from: dataset/clips/0313-2/17880/20.jpg\nSaving to: dataset/masks/0313-2/17880\nReading from: dataset/clips/0313-2/34200/20.jpg\nSaving to: dataset/masks/0313-2/34200\nReading from: dataset/clips/0313-2/780/20.jpg\nSaving to: dataset/masks/0313-2/780\nReading from: dataset/clips/0313-2/1415/20.jpg\nSaving to: dataset/masks/0313-2/1415\nReading from: dataset/clips/0313-2/32580/20.jpg\nSaving to: dataset/masks/0313-2/32580\nReading from: dataset/clips/0313-2/19140/20.jpg\nSaving to: dataset/masks/0313-2/19140\nReading from: dataset/clips/0313-2/40340/20.jpg\nSaving to: dataset/masks/0313-2/40340\nReading from: dataset/clips/0313-2/32660/20.jpg\nSaving to: dataset/masks/0313-2/32660\nReading from: dataset/clips/0313-2/34060/20.jpg\nSaving to: dataset/masks/0313-2/34060\nReading from: dataset/clips/0313-2/6840/20.jpg\nSaving to: dataset/masks/0313-2/6840\nReading from: dataset/clips/0313-2/33180/20.jpg\nSaving to: dataset/masks/0313-2/33180\nReading from: dataset/clips/0313-2/62700/20.jpg\nSaving to: dataset/masks/0313-2/62700\nReading from: dataset/clips/0313-2/4320/20.jpg\nSaving to: dataset/masks/0313-2/4320\nReading from: dataset/clips/0313-2/340/20.jpg\nSaving to: dataset/masks/0313-2/340\nReading from: dataset/clips/0313-2/1080/20.jpg\nSaving to: dataset/masks/0313-2/1080\nReading from: dataset/clips/0313-2/90/20.jpg\nSaving to: dataset/masks/0313-2/90\nReading from: dataset/clips/0313-2/130/20.jpg\nSaving to: dataset/masks/0313-2/130\nReading from: dataset/clips/0313-2/15660/20.jpg\nSaving to: dataset/masks/0313-2/15660\nReading from: dataset/clips/0313-2/38300/20.jpg\nSaving to: dataset/masks/0313-2/38300\nReading from: dataset/clips/0313-2/14880/20.jpg\nSaving to: dataset/masks/0313-2/14880\nReading from: dataset/clips/0313-2/39620/20.jpg\nSaving to: dataset/masks/0313-2/39620\nReading from: dataset/clips/0313-2/64320/20.jpg\nSaving to: dataset/masks/0313-2/64320\nReading from: dataset/clips/0313-2/41940/20.jpg\nSaving to: dataset/masks/0313-2/41940\nReading from: dataset/clips/0313-2/34520/20.jpg\nSaving to: dataset/masks/0313-2/34520\nReading from: dataset/clips/0313-2/265/20.jpg\nSaving to: dataset/masks/0313-2/265\nReading from: dataset/clips/0313-2/36260/20.jpg\nSaving to: 
dataset/masks/0313-2/36260\nReading from: dataset/clips/0313-2/1625/20.jpg\nSaving to: dataset/masks/0313-2/1625\nReading from: dataset/clips/0313-2/35280/20.jpg\nSaving to: dataset/masks/0313-2/35280\nReading from: dataset/clips/0313-2/30/20.jpg\nSaving to: dataset/masks/0313-2/30\nReading from: dataset/clips/0313-2/1505/20.jpg\nSaving to: dataset/masks/0313-2/1505\nReading from: dataset/clips/0313-2/23640/20.jpg\nSaving to: dataset/masks/0313-2/23640\nReading from: dataset/clips/0313-2/59940/20.jpg\nSaving to: dataset/masks/0313-2/59940\nReading from: dataset/clips/0313-2/755/20.jpg\nSaving to: dataset/masks/0313-2/755\nReading from: dataset/clips/0313-2/37480/20.jpg\nSaving to: dataset/masks/0313-2/37480\nReading from: dataset/clips/0313-2/305/20.jpg\nSaving to: dataset/masks/0313-2/305\nReading from: dataset/clips/0313-2/29220/20.jpg\nSaving to: dataset/masks/0313-2/29220\nReading from: dataset/clips/0313-2/58980/20.jpg\nSaving to: dataset/masks/0313-2/58980\nReading from: dataset/clips/0313-2/40360/20.jpg\nSaving to: dataset/masks/0313-2/40360\nReading from: dataset/clips/0313-2/37300/20.jpg\nSaving to: dataset/masks/0313-2/37300\nReading from: dataset/clips/0313-2/39240/20.jpg\nSaving to: dataset/masks/0313-2/39240\nReading from: dataset/clips/0313-2/13080/20.jpg\nSaving to: dataset/masks/0313-2/13080\nReading from: dataset/clips/0313-2/26520/20.jpg\nSaving to: dataset/masks/0313-2/26520\nReading from: dataset/clips/0313-2/30960/20.jpg\nSaving to: dataset/masks/0313-2/30960\nReading from: dataset/clips/0313-2/23220/20.jpg\nSaving to: dataset/masks/0313-2/23220\nReading from: dataset/clips/0313-2/35300/20.jpg\nSaving to: dataset/masks/0313-2/35300\nReading from: dataset/clips/0313-2/400/20.jpg\nSaving to: dataset/masks/0313-2/400\n"
],
[
"cv2.imwrite('/Users/srinivas/Projects/Lane_Detection/datasets/LaneDetection/train/masks/0313-1/300/1.tiff', mask)",
"_____no_output_____"
],
[
"mask_img = cv2.imread('20.tiff', cv2.IMREAD_GRAYSCALE)\nmask_img.shape",
"_____no_output_____"
],
[
"plt.imshow(mask_img)",
"_____no_output_____"
],
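[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch: redraw the mask with a discrete colormap and no\n# interpolation so the small class ids {0..4} are easy to tell apart.\n# Assumes `mask_img` from the cell above; the 5-class count matches the\n# np.unique() check below.\nplt.imshow(mask_img, cmap='tab10', vmin=0, vmax=9, interpolation='nearest')\nplt.colorbar(ticks=range(5))\nplt.title('Lane mask (class ids)')\nplt.show()",
"_____no_output_____"
],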
[
"print(np.unique(mask_img))\nprint(np.unique(mask))",
"[0 1 2 3 4]\n[0 1 2 3 4]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0e2b9382b561be1dee9e29df5824b907789c9 | 569,304 | ipynb | Jupyter Notebook | example.ipynb | cochoa0x1/pykoreaqi | 0ee3fa8f5a876ab6ff6a33ca40f82affc495670d | [
"MIT"
] | 1 | 2018-03-05T11:34:27.000Z | 2018-03-05T11:34:27.000Z | example.ipynb | cochoa0x1/pykoreaqi | 0ee3fa8f5a876ab6ff6a33ca40f82affc495670d | [
"MIT"
] | null | null | null | example.ipynb | cochoa0x1/pykoreaqi | 0ee3fa8f5a876ab6ff6a33ca40f82affc495670d | [
"MIT"
] | null | null | null | 35.995448 | 135 | 0.352854 | [
[
[
"from pykoreaqi import AirKorea",
"_____no_output_____"
],
[
"aqi = AirKorea()\n\ndata = aqi.get_all_realtime()",
"_____no_output_____"
],
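[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch of persisting the realtime snapshot for later analysis.\n# Assumes `data` is a pandas DataFrame (the bare `data` display below\n# suggests a tabular object).\ndata.to_csv('airkorea_realtime.csv', index=False)",
"_____no_output_____"
],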
[
"data",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0b0e601e459bb40a5ee61edefccc03c33a926d8 | 80,032 | ipynb | Jupyter Notebook | archive/Navigation-31Dec1515.ipynb | kapalko/DRL_Banana_Collector | a3e5f2707c582fd6f1a15524ef9f5f89b1f7efc5 | [
"MIT"
] | null | null | null | archive/Navigation-31Dec1515.ipynb | kapalko/DRL_Banana_Collector | a3e5f2707c582fd6f1a15524ef9f5f89b1f7efc5 | [
"MIT"
] | null | null | null | archive/Navigation-31Dec1515.ipynb | kapalko/DRL_Banana_Collector | a3e5f2707c582fd6f1a15524ef9f5f89b1f7efc5 | [
"MIT"
] | null | null | null | 203.643766 | 33,356 | 0.899053 | [
[
[
"# Navigation\n\n---\n\nIn this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).\n\n### 1. Start the Environment\n\nWe begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).",
"_____no_output_____"
]
],
[
[
"from unityagents import UnityEnvironment\nimport numpy as np\nfrom collections import deque\nimport torch\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.\n\n- **Mac**: `\"path/to/Banana.app\"`\n- **Windows** (x86): `\"path/to/Banana_Windows_x86/Banana.exe\"`\n- **Windows** (x86_64): `\"path/to/Banana_Windows_x86_64/Banana.exe\"`\n- **Linux** (x86): `\"path/to/Banana_Linux/Banana.x86\"`\n- **Linux** (x86_64): `\"path/to/Banana_Linux/Banana.x86_64\"`\n- **Linux** (x86, headless): `\"path/to/Banana_Linux_NoVis/Banana.x86\"`\n- **Linux** (x86_64, headless): `\"path/to/Banana_Linux_NoVis/Banana.x86_64\"`\n\nFor instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:\n```\nenv = UnityEnvironment(file_name=\"Banana.app\")\n```",
"_____no_output_____"
]
],
[
[
"env = UnityEnvironment(file_name=\"Banana.app\")",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: BananaBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 37\n Number of stacked Vector Observation: 1\n Vector Action space type: discrete\n Vector Action space size (per agent): 4\n Vector Action descriptions: , , , \n"
]
],
[
[
"Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.",
"_____no_output_____"
]
],
[
[
"# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]",
"_____no_output_____"
]
],
[
[
"### 2. Examine the State and Action Spaces\n\nThe simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:\n- `0` - walk forward \n- `1` - walk backward\n- `2` - turn left\n- `3` - turn right\n\nThe state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana. \n\nRun the code cell below to print some information about the environment.",
"_____no_output_____"
]
],
[
[
"# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents in the environment\nprint('Number of agents:', len(env_info.agents))\n\n# number of actions\naction_size = brain.vector_action_space_size\nprint('Number of actions:', action_size)\n\n# examine the state space \nstate = env_info.vector_observations[0]\nprint('States look like:', state)\nstate_size = len(state)\nprint('States have length:', state_size)",
"Number of agents: 1\nNumber of actions: 4\nStates look like: [1. 0. 0. 0. 0.84408134 0.\n 0. 1. 0. 0.0748472 0. 1.\n 0. 0. 0.25755 1. 0. 0.\n 0. 0.74177343 0. 1. 0. 0.\n 0.25854847 0. 0. 1. 0. 0.09355672\n 0. 1. 0. 0. 0.31969345 0.\n 0. ]\nStates have length: 37\n"
]
],
[
[
"### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.\n\nOnce this cell is executed, you will watch the agent's performance, if it selects an action (uniformly) at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. \n\nOf course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!",
"_____no_output_____"
]
],
[
[
"env_info = env.reset(train_mode=False)[brain_name] # reset the environment\nstate = env_info.vector_observations[0] # get the current state\nscore = 0 # initialize the score\nwhile True:\n action = np.random.randint(action_size) # select an action\n env_info = env.step(action)[brain_name] # send the action to the environment\n next_state = env_info.vector_observations[0] # get the next state\n reward = env_info.rewards[0] # get the reward\n done = env_info.local_done[0] # see if episode has finished\n score += reward # update the score\n state = next_state # roll over the state to next time step\n if done: # exit loop if episode finished\n break\n \nprint(\"Score: {}\".format(score))",
"Score: 0.0\n"
]
],
[
[
"When finished, you can close the environment.",
"_____no_output_____"
]
],
[
[
"#env.close()",
"_____no_output_____"
]
],
[
[
"### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```",
"_____no_output_____"
],
[
"### Training using DQN based off the previous assignments",
"_____no_output_____"
]
],
[
[
"from dqn_agent import Agent\nagent = Agent(state_size=37, action_size=4, seed=42)\n\ndef dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.99):\n \"\"\"Deep Q-Learning.\n \n Params\n ======\n n_episodes (int): maximum number of training episodes\n max_t (int): maximum number of timesteps per episode\n eps_start (float): starting value of epsilon, for epsilon-greedy action selection\n eps_end (float): minimum value of epsilon\n eps_decay (float): multiplicative factor (per episode) for decreasing epsilon\n \"\"\"\n scores = [] # list containing scores from each episode\n scores_window = deque(maxlen=100) # last 100 scores\n eps = eps_start # initialize epsilon\n for i_episode in range(1, n_episodes+1):\n env_info = env.reset(train_mode=True)[brain_name]\n state = env_info.vector_observations[0]\n score = 0\n for t in range(max_t):\n action = agent.act(state, eps)\n env_info = env.step(action)[brain_name]\n next_state = env_info.vector_observations[0]\n reward = env_info.rewards[0]\n done = env_info.local_done[0]\n agent.step(state, action, reward, next_state, done)\n state = next_state\n score += reward\n if done:\n break \n scores_window.append(score) # save most recent score\n scores.append(score) # save most recent score\n eps = max(eps_end, eps_decay*eps) # decrease epsilon\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end=\"\")\n if i_episode % 100 == 0:\n print('\\rEpisode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))\n if np.mean(scores_window)>=13.0:\n print('\\nEnvironment solved in {:d} episodes!\\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))\n torch.save(agent.qnetwork_local.state_dict(), 'checkpoint_311220201515.pth')\n break\n return scores\n\nscores = dqn()\n\n# plot the scores\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(len(scores)), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"State Size: 37\nAction_size: 4\nEpisode 100\tAverage Score: 1.40\nEpisode 200\tAverage Score: 7.13\nEpisode 300\tAverage Score: 11.09\nEpisode 383\tAverage Score: 13.03\nEnvironment solved in 283 episodes!\tAverage Score: 13.03\n"
],
[
"# plot the scores\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(len(scores)), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()",
"_____no_output_____"
],
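[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch: reload the saved weights and run one greedy (eps=0)\n# episode to watch the trained agent. Assumes the checkpoint name used by\n# dqn() above and the same Agent interface imported from dqn_agent.\nagent.qnetwork_local.load_state_dict(torch.load('checkpoint_311220201515.pth'))\nenv_info = env.reset(train_mode=False)[brain_name]\nstate = env_info.vector_observations[0]\nscore = 0\nwhile True:\n    action = agent.act(state, 0.0)      # greedy action selection\n    env_info = env.step(action)[brain_name]\n    state = env_info.vector_observations[0]\n    score += env_info.rewards[0]\n    if env_info.local_done[0]:\n        break\nprint('Score: {}'.format(score))",
"_____no_output_____"
],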
[
"# closes the environment\nenv.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0b0e800164cffc55c59cfd5c02eaecaa0375617 | 6,341 | ipynb | Jupyter Notebook | data_download.ipynb | erikfred/ptz_detector | db9a26a0f2027bed2a5db40431e836d4e6bb0923 | [
"MIT"
] | null | null | null | data_download.ipynb | erikfred/ptz_detector | db9a26a0f2027bed2a5db40431e836d4e6bb0923 | [
"MIT"
] | null | null | null | data_download.ipynb | erikfred/ptz_detector | db9a26a0f2027bed2a5db40431e836d4e6bb0923 | [
"MIT"
] | null | null | null | 31.391089 | 354 | 0.597067 | [
[
[
"# Data Download - *read description before running*",
"_____no_output_____"
],
[
"Term project for ESS 490/590\n\nGrad: Erik Fredrickson\n\nUndergrad: Ashika Capirala",
"_____no_output_____"
],
[
"*This notebook demonstrates how the open access datasets can be downloaded, but these data are provided at significantly higher temporal resolution than needed for the purposes of our study, so for the sake of this project we recommend that the user use the provided reduced data, which will save significant time, computing power, and disc space.*",
"_____no_output_____"
]
],
[
[
"# Imports\nimport obspy\nimport obspy.clients.fdsn.client as fdsn\nfrom obspy import UTCDateTime",
"_____no_output_____"
]
],
[
[
"## APG pressures and temperatures\n\nGet bottom temperature and bottom pressure from IRIS (https://www.iris.edu/hq/; https://doi.org/10.7914/SN/XO_2018)\n\n<img src=\"AACSE.png\" width=\"600\">\n<center>Alaska Amphibious Community Seismic Experiment</center>",
"_____no_output_____"
]
],
[
[
"# Pull pressure and temperature data from IRIS\nnetwork = 'XO'\nstaNames = ['LA21', 'LA34', 'LA33', 'LA23', 'LA25', 'LA22', 'LA28', 'LA39', 'LA32', 'LA30', 'LT07', 'LT06', \\\n 'LT13', 'LT03', 'LT11', 'LT04', 'LT01', 'LT20', 'LT14', 'LT16', 'LT10', 'LT12']\nstaCodes = 'LA21,LA34,LA33,LA23,LA25,LA22,LA28,LA39,LA32,LA30,LT07,LT06,LT13,LT03,LT11,LT04,LT01,LT20,LT14,LT16,LT10,LT12'\nchaNames = ['HDH', 'HKO']\nchaCodes='HDH,HKO'\nTstart = UTCDateTime(2018, 06, 01)\nTend = UTCDateTime(2019, 06, 20)\n\nfdsn_client = fdsn.Client('IRIS')\n\n# DO NOT RUN AS WRITTEN -- way too much data, so we'll need to make a loop to parse it by station and by day\nDtmp = fdsn_client.get_waveforms(network=network, station=staCodes, location='--', channel=chaCodes, starttime=Tstart, \\\n endtime=Tend, attach_response=False)",
"_____no_output_____"
],
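[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch of the station-by-station, day-by-day loop suggested\n# above. The output directory and file naming are hypothetical, and days\n# with no data are simply skipped.\nimport os\n\nos.makedirs('apg_data', exist_ok=True)\nday = 86400  # seconds\nfor sta in staNames:\n    t = Tstart\n    while t < Tend:\n        try:\n            st = fdsn_client.get_waveforms(network=network, station=sta, location='--',\n                                           channel=chaCodes, starttime=t, endtime=t + day,\n                                           attach_response=False)\n            st.write(os.path.join('apg_data', '%s_%s.mseed' % (sta, t.date)), format='MSEED')\n        except Exception as e:\n            print('skipping %s %s: %s' % (sta, t.date, e))\n        t += day",
"_____no_output_____"
]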
],
[
[
"## Satellite altimetry\n\nGet altimetry data from Copernicus Marine (https://marine.copernicus.eu/; https://resources.marine.copernicus.eu/?option=com_csw&view=details&product_id=SEALEVEL_GLO_PHY_CLIMATE_L4_REP_OBSERVATIONS_008_057)\n\n<img src=\"jason-2-altimeter.jpg\" width=\"600\">\n<center>Jason-2 Satellite</center>",
"_____no_output_____"
]
],
[
[
"# installer to handle data download\n!pip install motuclient --upgrade",
"Collecting motuclient\n Downloading motuclient-1.8.8.tar.gz (30 kB)\nBuilding wheels for collected packages: motuclient\n Building wheel for motuclient (setup.py) ... \u001b[?25ldone\n\u001b[?25h Created wheel for motuclient: filename=motuclient-1.8.8-py3-none-any.whl size=34205 sha256=83d62167dbfe284e1de30763a39c7b71a92ab49edd6ff972d4a9b09e6cc6a208\n Stored in directory: /Users/erikfred/Library/Caches/pip/wheels/e6/16/94/3e40d579a5a03c17f2123e2dd13c5389c2e7515f49487fd88e\nSuccessfully built motuclient\nInstalling collected packages: motuclient\nSuccessfully installed motuclient-1.8.8\n"
],
[
"# Get desired data (would need to change directory, user, and password fields)\n!python -m motuclient --motu https://my.cmems-du.eu/motu-web/Motu --service-id \\\n SEALEVEL_GLO_PHY_CLIMATE_L4_REP_OBSERVATIONS_008_057-TDS --product-id \\\n dataset-duacs-rep-global-merged-twosat-phy-l4 --longitude-min 198 \\\n --longitude-max 210 --latitude-min 53 --latitude-max 60 \\\n --date-min \"2018-06-01 00:00:00\" --date-max \"2019-06-20 23:59:59\" \\\n --variable adt --variable err --variable sla --variable ugos --variable ugosa \\\n --variable vgos --variable vgosa --out-dir <OUTPUT_DIRECTORY> --out-name \\\n <OUTPUT_FILENAME> --user <USERNAME> --pwd <PASSWORD>",
"_____no_output_____"
],
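[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch of inspecting the downloaded altimetry file. Assumes\n# xarray is installed; the file name is hypothetical (whatever was passed\n# as --out-name above).\nimport xarray as xr\n\nds = xr.open_dataset('altimetry_gulf_of_alaska.nc')\nprint(ds)\nds['sla'].isel(time=0).plot()",
"_____no_output_____"
]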
],
[
[
"## Oceanographic model\n\nModel data not currently publicly available :(\nLoad from netcdf",
"_____no_output_____"
],
[
"## Eddy catalog",
"_____no_output_____"
],
[
"Labeled dataset hosted by AVISO. Requires registration, but free for academics (https://www.aviso.altimetry.fr/en/home.html; https://doi.org/10.24400/527896/a01-2021.001)\n\nFull code is available on GitHub! (https://github.com/AntSimi/py-eddy-tracker)\n\n<img src=\"eddy_field.jpg\" width=\"600\">\n<center>https://doi.org/10.1175/JTECH-D-14-00019.1</center>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
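"code",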
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0b0f2c739e47143a0cb953071cf3fe471244936 | 31,715 | ipynb | Jupyter Notebook | notebooks/keras_mnist.ipynb | farrajota/kaggle_digit_recognizer | c8a93b7024ec4320c4b53052b1ab79d35cf54092 | [
"MIT"
] | null | null | null | notebooks/keras_mnist.ipynb | farrajota/kaggle_digit_recognizer | c8a93b7024ec4320c4b53052b1ab79d35cf54092 | [
"MIT"
] | null | null | null | notebooks/keras_mnist.ipynb | farrajota/kaggle_digit_recognizer | c8a93b7024ec4320c4b53052b1ab79d35cf54092 | [
"MIT"
] | null | null | null | 37.577014 | 4,628 | 0.486741 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import train_test_split\n\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, ReLU, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.losses import categorical_crossentropy\nfrom keras.optimizers import Adam\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras import backend as K\n\n%matplotlib inline",
"_____no_output_____"
],
[
"# load data\ntrain_df = pd.read_csv('data/train.csv')\ntest_df = pd.read_csv('data/test.csv')",
"_____no_output_____"
],
[
"train_df.head(3)",
"_____no_output_____"
],
[
"test_df.head(3)",
"_____no_output_____"
],
[
"# Convert data to numpy arrays\nimg_rows, img_cols = 28, 28\nnum_classes = train_df['label'].nunique()\n\n# Split train + val data\nX = train_df.drop(columns=['label']).values\ny = train_df['label'].values\n\n# one-hot encode the labels\ny = keras.utils.to_categorical(y, num_classes)\n\nif K.image_data_format() == 'channels_first':\n X = X.reshape(-1, 1, img_rows, img_cols)\n X_test = test_df.values.reshape(-1, 1, img_rows, img_cols)\n input_shape = (1, img_rows, img_cols)\nelse:\n X = X.reshape(-1, img_rows, img_cols, 1)\n X_test = test_df.values.reshape(-1, img_rows, img_cols, 1)\n input_shape = (img_rows, img_cols, 1)",
"_____no_output_____"
],
[
"# normalize data\nX = X.astype('float32')\nX_test = X_test.astype('float32')\nX /= 255\nX_test /= 255",
"_____no_output_____"
],
[
"# Split train + val data\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"# train array shape\nX_train.shape, y_train.shape",
"_____no_output_____"
],
[
"# val array shape\nX_val.shape, y_val.shape",
"_____no_output_____"
],
[
"# plot first image to check if it is in the correct format\nplt.imshow(X_train[0,:,:, 0])",
"_____no_output_____"
],
[
"# create model\ndef get_model():\n model = Sequential()\n model.add(Conv2D(filters=32, \n kernel_size=(3,3),\n activation=\"relu\",\n input_shape=input_shape))\n model.add(Conv2D(filters=64, \n kernel_size=(3,3),\n activation=\"relu\"))\n model.add(MaxPooling2D(pool_size=(2,2)))\n model.add(Conv2D(filters=128, \n kernel_size=(3,3),\n activation=\"relu\"))\n model.add(MaxPooling2D(pool_size=(2,2)))\n model.add(Dropout(0.25))\n model.add(Flatten())\n model.add(Dense(128, activation='relu'))\n model.add(Dropout(0.5))\n model.add(Dense(num_classes, activation='softmax'))\n\n model.compile(loss=categorical_crossentropy,\n optimizer=Adam(),\n metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"# Model summary\nmodel = get_model()\nmodel.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_7 (Conv2D) (None, 26, 26, 32) 320 \n_________________________________________________________________\nconv2d_8 (Conv2D) (None, 24, 24, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_5 (MaxPooling2 (None, 12, 12, 64) 0 \n_________________________________________________________________\nconv2d_9 (Conv2D) (None, 10, 10, 128) 73856 \n_________________________________________________________________\nmax_pooling2d_6 (MaxPooling2 (None, 5, 5, 128) 0 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 5, 5, 128) 0 \n_________________________________________________________________\nflatten_3 (Flatten) (None, 3200) 0 \n_________________________________________________________________\ndense_5 (Dense) (None, 128) 409728 \n_________________________________________________________________\ndropout_6 (Dropout) (None, 128) 0 \n_________________________________________________________________\ndense_6 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 503,690\nTrainable params: 503,690\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# Setup train data transformer\ntrain_datagen = ImageDataGenerator(featurewise_center=True,\n featurewise_std_normalization=True,\n width_shift_range=0.2,\n height_shift_range=0.2,\n horizontal_flip=True)\n\ntrain_datagen.fit(X_train)",
"_____no_output_____"
],
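[
"# --- illustrative cell, not part of the original notebook ---\n# A minimal sketch: preview one augmented batch to sanity-check the\n# ImageDataGenerator settings (shifts, flips, standardization) before\n# training. Assumes channels-last data, as set up above for the\n# TensorFlow backend.\naug_X, aug_y = next(train_datagen.flow(X_train, y_train, batch_size=9))\nfig, axes = plt.subplots(3, 3, figsize=(6, 6))\nfor img, ax in zip(aug_X, axes.ravel()):\n    ax.imshow(img[:, :, 0], cmap='gray')\n    ax.axis('off')\nplt.show()",
"_____no_output_____"
],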
[
"# Setup validation data transformer\nval_datagen = ImageDataGenerator(featurewise_center=True,\n featurewise_std_normalization=True,\n horizontal_flip=False)\n\nval_datagen.fit(X_train)",
"_____no_output_____"
],
[
"batch_size = 128\nepochs = 10\nmodel_gen = get_model()\nmodel_gen.fit_generator(train_datagen.flow(X_train, y_train, batch_size=batch_size),\n steps_per_epoch=len(X_train) / batch_size,\n epochs=epochs,\n shuffle=True,\n validation_data=val_datagen.flow(X_val, y_val, batch_size=batch_size),\n validation_steps=len(X_val) / batch_size,\n verbose=1)",
"Epoch 1/10\n220/219 [==============================] - 89s 405ms/step - loss: 1.2205 - acc: 0.5704 - val_loss: 0.3468 - val_acc: 0.8898\nEpoch 2/10\n220/219 [==============================] - 97s 442ms/step - loss: 0.5769 - acc: 0.8096 - val_loss: 0.1883 - val_acc: 0.9426\nEpoch 3/10\n220/219 [==============================] - 88s 400ms/step - loss: 0.4190 - acc: 0.8655 - val_loss: 0.1466 - val_acc: 0.9542\nEpoch 4/10\n220/219 [==============================] - 98s 443ms/step - loss: 0.3380 - acc: 0.8945 - val_loss: 0.1110 - val_acc: 0.9655\nEpoch 5/10\n220/219 [==============================] - 97s 440ms/step - loss: 0.2889 - acc: 0.9113 - val_loss: 0.1315 - val_acc: 0.9595\nEpoch 6/10\n220/219 [==============================] - 99s 452ms/step - loss: 0.2588 - acc: 0.9198 - val_loss: 0.1108 - val_acc: 0.9652\nEpoch 7/10\n220/219 [==============================] - 95s 434ms/step - loss: 0.2430 - acc: 0.9252 - val_loss: 0.0798 - val_acc: 0.9752\nEpoch 8/10\n220/219 [==============================] - 97s 442ms/step - loss: 0.2179 - acc: 0.9340 - val_loss: 0.0775 - val_acc: 0.9764\nEpoch 9/10\n220/219 [==============================] - 97s 441ms/step - loss: 0.2033 - acc: 0.9359 - val_loss: 0.0773 - val_acc: 0.9760\nEpoch 10/10\n220/219 [==============================] - 97s 440ms/step - loss: 0.1960 - acc: 0.9414 - val_loss: 0.0694 - val_acc: 0.9781\n"
],
[
"score = model_gen.evaluate_generator(val_datagen.flow(X_val, y_val, batch_size=batch_size), \n steps=len(X_val) / batch_size,\n verbose=1)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"109/108 [==============================] - 8s 74ms/step\nTest loss: 0.06943384749548775\nTest accuracy: 0.9781385280697205\n"
],
[
"batch_size = 128\nepochs = 10\nmodel.fit(X_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(X_val, y_val))",
"Train on 28140 samples, validate on 13860 samples\nEpoch 1/10\n28140/28140 [==============================] - 96s 3ms/step - loss: 0.3474 - acc: 0.8908 - val_loss: 0.0738 - val_acc: 0.9776\nEpoch 2/10\n28140/28140 [==============================] - 92s 3ms/step - loss: 0.1053 - acc: 0.9687 - val_loss: 0.0515 - val_acc: 0.9838\nEpoch 3/10\n28140/28140 [==============================] - 94s 3ms/step - loss: 0.0763 - acc: 0.9769 - val_loss: 0.0438 - val_acc: 0.9854\nEpoch 4/10\n28140/28140 [==============================] - 93s 3ms/step - loss: 0.0623 - acc: 0.9813 - val_loss: 0.0413 - val_acc: 0.9874\nEpoch 5/10\n28140/28140 [==============================] - 93s 3ms/step - loss: 0.0513 - acc: 0.9857 - val_loss: 0.0342 - val_acc: 0.9891\nEpoch 6/10\n28140/28140 [==============================] - 91s 3ms/step - loss: 0.0467 - acc: 0.9856 - val_loss: 0.0303 - val_acc: 0.9903\nEpoch 7/10\n28140/28140 [==============================] - 93s 3ms/step - loss: 0.0397 - acc: 0.9882 - val_loss: 0.0320 - val_acc: 0.9911\nEpoch 8/10\n28140/28140 [==============================] - 98s 3ms/step - loss: 0.0348 - acc: 0.9885 - val_loss: 0.0288 - val_acc: 0.9918\nEpoch 9/10\n28140/28140 [==============================] - 91s 3ms/step - loss: 0.0334 - acc: 0.9897 - val_loss: 0.0295 - val_acc: 0.9916\nEpoch 10/10\n28140/28140 [==============================] - 94s 3ms/step - loss: 0.0297 - acc: 0.9906 - val_loss: 0.0311 - val_acc: 0.9908\n"
],
[
"score = model.evaluate(X_val, y_val, batch_size=256, verbose=1)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])",
"13860/13860 [==============================] - 11s 810us/step\nTest loss: 0.031108998779266598\nTest accuracy: 0.9907647907303869\n"
],
[
"# train model on the full train data\nbatch_size = 128\nepochs = 10\nmodel_final = get_model()\nmodel_final.fit(X, y,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1)",
"Epoch 1/10\n42000/42000 [==============================] - 111s 3ms/step - loss: 0.2927 - acc: 0.9086\nEpoch 2/10\n42000/42000 [==============================] - 122s 3ms/step - loss: 0.0909 - acc: 0.9726\nEpoch 3/10\n42000/42000 [==============================] - 119s 3ms/step - loss: 0.0647 - acc: 0.9797\nEpoch 4/10\n42000/42000 [==============================] - 121s 3ms/step - loss: 0.0507 - acc: 0.9853\nEpoch 5/10\n42000/42000 [==============================] - 121s 3ms/step - loss: 0.0455 - acc: 0.9857\nEpoch 6/10\n42000/42000 [==============================] - 122s 3ms/step - loss: 0.0370 - acc: 0.9887\nEpoch 7/10\n42000/42000 [==============================] - 122s 3ms/step - loss: 0.0356 - acc: 0.9888\nEpoch 8/10\n42000/42000 [==============================] - 122s 3ms/step - loss: 0.0305 - acc: 0.9906\nEpoch 9/10\n42000/42000 [==============================] - 121s 3ms/step - loss: 0.0268 - acc: 0.9917\nEpoch 10/10\n42000/42000 [==============================] - 125s 3ms/step - loss: 0.0248 - acc: 0.9918\n"
],
[
"# create submission predictions\npredictions = model_final.predict(X_test, batch_size=256, verbose=1)",
"28000/28000 [==============================] - 25s 879us/step\n"
],
[
"# save predictions\nout_df = pd.DataFrame({\"ImageId\": list(range(1, len(predictions) + 1)),\n \"Label\": np.argmax(predictions, axis=1)})\nout_df.to_csv('keras_submission.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b0f319c0314c54ad60223c983bcfd54a2390e0 | 41,426 | ipynb | Jupyter Notebook | site/ja/tutorials/load_data/images.ipynb | gabrielrufino/docs-l10n | 9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a | [
"Apache-2.0"
] | 1 | 2020-02-07T02:51:36.000Z | 2020-02-07T02:51:36.000Z | site/ja/tutorials/load_data/images.ipynb | gabrielrufino/docs-l10n | 9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a | [
"Apache-2.0"
] | null | null | null | site/ja/tutorials/load_data/images.ipynb | gabrielrufino/docs-l10n | 9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a | [
"Apache-2.0"
] | null | null | null | 24.746714 | 429 | 0.470646 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n",
"_____no_output_____"
]
],
[
[
"# tf.dataを使って画像をロードする",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/load_data/images\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/images.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/images.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja)にご連絡ください。",
"_____no_output_____"
],
[
"このチュートリアルでは、'tf.data' を使って画像データセットをロードする簡単な例を示します。\n\nこのチュートリアルで使用するデータセットは、クラスごとに別々のディレクトリに別れた形で配布されています。",
"_____no_output_____"
],
[
"## 設定",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\ntry:\n # Colab only\n %tensorflow_version 2.x\nexcept Exception:\n pass\nimport tensorflow as tf",
"_____no_output_____"
],
[
"AUTOTUNE = tf.data.experimental.AUTOTUNE",
"_____no_output_____"
]
],
[
[
"## データセットのダウンロードと検査",
"_____no_output_____"
],
[
"### 画像の取得\n\n訓練を始める前に、ネットワークに認識すべき新しいクラスを教えるために画像のセットが必要です。最初に使うためのクリエイティブ・コモンズでライセンスされた花の画像のアーカイブを作成してあります。",
"_____no_output_____"
]
],
[
[
"import pathlib\ndata_root_orig = tf.keras.utils.get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n fname='flower_photos', untar=True)\ndata_root = pathlib.Path(data_root_orig)\nprint(data_root)",
"_____no_output_____"
]
],
[
[
"218MB をダウンロードすると、花の画像のコピーが使えるようになっているはずです。",
"_____no_output_____"
]
],
[
[
"for item in data_root.iterdir():\n print(item)",
"_____no_output_____"
],
[
"import random\nall_image_paths = list(data_root.glob('*/*'))\nall_image_paths = [str(path) for path in all_image_paths]\nrandom.shuffle(all_image_paths)\n\nimage_count = len(all_image_paths)\nimage_count",
"_____no_output_____"
],
[
"all_image_paths[:10]",
"_____no_output_____"
]
],
[
[
"### 画像の検査\n\n扱っている画像について知るために、画像のいくつかを見てみましょう。",
"_____no_output_____"
]
],
[
[
"import os\nattributions = (data_root/\"LICENSE.txt\").open(encoding='utf-8').readlines()[4:]\nattributions = [line.split(' CC-BY') for line in attributions]\nattributions = dict(attributions)",
"_____no_output_____"
],
[
"import IPython.display as display\n\ndef caption_image(image_path):\n image_rel = pathlib.Path(image_path).relative_to(data_root)\n return \"Image (CC BY 2.0) \" + ' - '.join(attributions[str(image_rel)].split(' - ')[:-1])\n ",
"_____no_output_____"
],
[
"for n in range(3):\n image_path = random.choice(all_image_paths)\n display.display(display.Image(image_path))\n print(caption_image(image_path))\n print()",
"_____no_output_____"
]
],
[
[
"### 各画像のラベルの決定",
"_____no_output_____"
],
[
"ラベルを一覧してみます。",
"_____no_output_____"
]
],
[
[
"label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())\nlabel_names",
"_____no_output_____"
]
],
[
[
"ラベルにインデックスを割り当てます。",
"_____no_output_____"
]
],
[
[
"label_to_index = dict((name, index) for index,name in enumerate(label_names))\nlabel_to_index",
"_____no_output_____"
]
],
[
[
"ファイルとラベルのインデックスの一覧を作成します。",
"_____no_output_____"
]
],
[
[
"all_image_labels = [label_to_index[pathlib.Path(path).parent.name]\n for path in all_image_paths]\n\nprint(\"First 10 labels indices: \", all_image_labels[:10])",
"_____no_output_____"
]
],
[
[
"### 画像の読み込みと整形",
"_____no_output_____"
],
[
"TensorFlow には画像を読み込んで処理するために必要なツールが備わっています。",
"_____no_output_____"
]
],
[
[
"img_path = all_image_paths[0]\nimg_path",
"_____no_output_____"
]
],
[
[
"以下は生のデータです。",
"_____no_output_____"
]
],
[
[
"img_raw = tf.io.read_file(img_path)\nprint(repr(img_raw)[:100]+\"...\")",
"_____no_output_____"
]
],
[
[
"画像のテンソルにデコードします。",
"_____no_output_____"
]
],
[
[
"img_tensor = tf.image.decode_image(img_raw)\n\nprint(img_tensor.shape)\nprint(img_tensor.dtype)",
"_____no_output_____"
]
],
[
[
"モデルに合わせてリサイズします。",
"_____no_output_____"
]
],
[
[
"img_final = tf.image.resize(img_tensor, [192, 192])\nimg_final = img_final/255.0\nprint(img_final.shape)\nprint(img_final.numpy().min())\nprint(img_final.numpy().max())\n",
"_____no_output_____"
]
],
[
[
"このあと使用するために、簡単な関数にまとめます。",
"_____no_output_____"
]
],
[
[
"def preprocess_image(image):\n image = tf.image.decode_jpeg(image, channels=3)\n image = tf.image.resize(image, [192, 192])\n image /= 255.0 # normalize to [0,1] range\n\n return image",
"_____no_output_____"
],
[
"def load_and_preprocess_image(path):\n image = tf.io.read_file(path)\n return preprocess_image(image)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nimage_path = all_image_paths[0]\nlabel = all_image_labels[0]\n\nplt.imshow(load_and_preprocess_image(img_path))\nplt.grid(False)\nplt.xlabel(caption_image(img_path))\nplt.title(label_names[label].title())\nprint()",
"_____no_output_____"
]
],
[
[
"## `tf.data.Dataset`の構築",
"_____no_output_____"
],
[
"### 画像のデータセット",
"_____no_output_____"
],
[
"`tf.data.Dataset` を構築するもっとも簡単な方法は、`from_tensor_slices` メソッドを使うことです。\n\n文字列の配列をスライスすると、文字列のデータセットが出来上がります。",
"_____no_output_____"
]
],
[
[
"path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)",
"_____no_output_____"
]
],
[
[
"`shapes` と `types` は、データセット中のそれぞれのアイテムの内容を示しています。この場合には、バイナリ文字列のスカラーのセットです。 ",
"_____no_output_____"
]
],
[
[
"print(path_ds)",
"_____no_output_____"
]
],
[
[
"`preprocess_image` をファイルパスのデータセットにマップすることで、画像を実行時にロードし整形する新しいデータセットを作成します。",
"_____no_output_____"
]
],
[
[
"image_ds = path_ds.map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(8,8))\nfor n,image in enumerate(image_ds.take(4)):\n plt.subplot(2,2,n+1)\n plt.imshow(image)\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n plt.xlabel(caption_image(all_image_paths[n]))\n plt.show()",
"_____no_output_____"
]
],
[
[
"### `(image, label)`のペアのデータセット",
"_____no_output_____"
],
[
"おなじ `from_tensor_slices` メソッドを使ってラベルのデータセットを作ることができます。",
"_____no_output_____"
]
],
[
[
"label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(all_image_labels, tf.int64))",
"_____no_output_____"
],
[
"for label in label_ds.take(10):\n print(label_names[label.numpy()])",
"_____no_output_____"
]
],
[
[
"これらのデータセットはおなじ順番なので、zip することで `(image, label)` というペアのデータセットができます。",
"_____no_output_____"
]
],
[
[
"image_label_ds = tf.data.Dataset.zip((image_ds, label_ds))",
"_____no_output_____"
]
],
[
[
"新しいデータセットの `shapes` と `types` は、それぞれのフィールドを示すシェイプと型のタプルです。",
"_____no_output_____"
]
],
[
[
"print(image_label_ds)",
"_____no_output_____"
]
],
[
[
"注: `all_image_labels` や `all_image_paths` のような配列がある場合、 `tf.data.dataset.Dataset.zip` メソッドの代わりとなるのは、配列のペアをスライスすることです。",
"_____no_output_____"
]
],
[
[
"ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))\n\n# The tuples are unpacked into the positional arguments of the mapped function\n# タプルは展開され、マップ関数の位置引数に割り当てられます\ndef load_and_preprocess_from_path_label(path, label):\n return load_and_preprocess_image(path), label\n\nimage_label_ds = ds.map(load_and_preprocess_from_path_label)\nimage_label_ds",
"_____no_output_____"
]
],
[
[
"### 基本的な訓練手法",
"_____no_output_____"
],
[
"このデータセットを使ってモデルの訓練を行うには、データが\n\n* よくシャッフルされ\n* バッチ化され\n* 限りなく繰り返され\n* バッチが出来るだけ早く利用できる\n\nことが必要です。\n\nこれらの特性は `tf.data` APIを使えば簡単に付け加えることができます。",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 32\n\n# シャッフルバッファのサイズをデータセットとおなじに設定することで、データが完全にシャッフルされる\n# ようにできます。\nds = image_label_ds.shuffle(buffer_size=image_count)\nds = ds.repeat()\nds = ds.batch(BATCH_SIZE)\n# `prefetch`を使うことで、モデルの訓練中にバックグラウンドでデータセットがバッチを取得できます。\nds = ds.prefetch(buffer_size=AUTOTUNE)\nds",
"_____no_output_____"
]
],
[
[
"注意すべきことがいくつかあります。\n\n1. 順番が重要です。\n\n * `.repeat` の前に `.shuffle` すると、エポックの境界を越えて要素がシャッフルされます。(ほかの要素がすべて出現する前に2回出現する要素があるかもしれません)\n * `.batch` の後に `.shuffle` すると、バッチの順番がシャッフルされますが、要素がバッチを越えてシャッフルされることはありません。\n\n1. 完全なシャッフルのため、 `buffer_size` をデータセットとおなじサイズに設定しています。データセットのサイズ未満の場合、値が大きいほど良くランダム化されますが、より多くのメモリーを使用します。\n\n1. シャッフルバッファがいっぱいになってから要素が取り出されます。そのため、大きな `buffer_size` が `Dataset` を使い始める際の遅延の原因になります。\n\n1. シャッフルされたデータセットは、シャッフルバッファが完全に空になるまでデータセットが終わりであることを伝えません。 `.repeat` によって `Dataset` が再起動されると、シャッフルバッファが一杯になるまでもう一つの待ち時間が発生します。\n\n最後の問題は、 `tf.data.Dataset.apply` メソッドを、融合された `tf.data.experimental.shuffle_and_repeat` 関数と組み合わせることで対処できます。",
"_____no_output_____"
]
],
[
[
"ds = image_label_ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds = ds.batch(BATCH_SIZE)\nds = ds.prefetch(buffer_size=AUTOTUNE)\nds",
"_____no_output_____"
]
],
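[
[
"As a minimal sketch of the ordering rules above, the following cell applies them to a tiny dataset of integers, where the effect is easy to print and inspect. It uses only standard `tf.data` methods (`range`, `shuffle`, `repeat`) and is illustrative rather than part of the original pipeline.",
"_____no_output_____"
]
],
[
[
"# A minimal sketch of the ordering rules above, on a toy dataset of 0..4.\ntoy = tf.data.Dataset.range(5)\n\n# `.shuffle` applied after `.repeat` draws from the repeated stream, so\n# elements from different epochs can mix:\nprint('repeat -> shuffle:', [int(n) for n in toy.repeat(2).shuffle(buffer_size=10)])\n\n# `.shuffle` applied before `.repeat` keeps each epoch a clean permutation:\nprint('shuffle -> repeat:', [int(n) for n in toy.shuffle(buffer_size=5).repeat(2)])",
"_____no_output_____"
]
],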
[
[
"### データセットをモデルにつなぐ\n\n`tf.keras.applications`からMobileNet v2のコピーを取得します。\n\nこれを簡単な転移学習のサンプルに使用します。\n\nMobileNetの重みを訓練不可に設定します。",
"_____no_output_____"
]
],
[
[
"mobile_net = tf.keras.applications.MobileNetV2(input_shape=(192, 192, 3), include_top=False)\nmobile_net.trainable=False",
"_____no_output_____"
]
],
[
[
"このモデルは、入力が `[-1,1]` の範囲に正規化されていることを想定しています。\n\n```\nhelp(keras_applications.mobilenet_v2.preprocess_input)\n```\n\n<pre>\n...\nThis function applies the \"Inception\" preprocessing which converts\nthe RGB values from [0, 255] to [-1, 1] \n...\n</pre>",
"_____no_output_____"
],
[
"このため、データをMobileNetモデルに渡す前に、入力を`[0,1]`の範囲から`[-1,1]`の範囲に変換する必要があります。",
"_____no_output_____"
]
],
[
[
"def change_range(image,label):\n return 2*image-1, label\n\nkeras_ds = ds.map(change_range)",
"_____no_output_____"
]
],
[
[
"MobileNetは画像ごとに `6x6` の特徴量の空間を返します。\n\nバッチを1つ渡してみましょう。",
"_____no_output_____"
]
],
[
[
"# シャッフルバッファがいっぱいになるまで、データセットは何秒かかかります。\nimage_batch, label_batch = next(iter(keras_ds))",
"_____no_output_____"
],
[
"feature_map_batch = mobile_net(image_batch)\nprint(feature_map_batch.shape)",
"_____no_output_____"
]
],
[
[
"MobileNet をラップしたモデルを作り、出力層である `tf.keras.layers.Dense` の前に、`tf.keras.layers.GlobalAveragePooling2D` で空間の軸にそって平均値を求めます。",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n mobile_net,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(len(label_names))])",
"_____no_output_____"
]
],
[
[
"期待したとおりの形状の出力が得られます。",
"_____no_output_____"
]
],
[
[
"logit_batch = model(image_batch).numpy()\n\nprint(\"min logit:\", logit_batch.min())\nprint(\"max logit:\", logit_batch.max())\nprint()\n\nprint(\"Shape:\", logit_batch.shape)",
"_____no_output_____"
]
],
[
[
"訓練手法を記述するためにモデルをコンパイルします。",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer=tf.keras.optimizers.Adam(), \n loss='sparse_categorical_crossentropy',\n metrics=[\"accuracy\"])",
"_____no_output_____"
]
],
[
[
"訓練可能な変数は2つ、全結合層の `weights` と `bias` です。",
"_____no_output_____"
]
],
[
[
"len(model.trainable_variables) ",
"_____no_output_____"
],
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"モデルを訓練します。\n\n普通は、エポックごとの本当のステップ数を指定しますが、ここではデモの目的なので3ステップだけとします。",
"_____no_output_____"
]
],
[
[
"steps_per_epoch=tf.math.ceil(len(all_image_paths)/BATCH_SIZE).numpy()\nsteps_per_epoch",
"_____no_output_____"
],
[
"model.fit(ds, epochs=1, steps_per_epoch=3)",
"_____no_output_____"
]
],
[
[
"## 性能\n\n注:このセクションでは性能の向上に役立ちそうな簡単なトリックをいくつか紹介します。詳しくは、[Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) を参照してください。\n\n上記の単純なパイプラインは、エポックごとにそれぞれのファイルを一つずつ読み込みます。これは、CPU を使ったローカルでの訓練では問題になりませんが、GPU を使った訓練では十分ではなく、いかなる分散訓練でも使うべきではありません。",
"_____no_output_____"
],
[
"調査のため、まず、データセットの性能をチェックする簡単な関数を定義します。",
"_____no_output_____"
]
],
[
[
"import time\ndefault_timeit_steps = 2*steps_per_epoch+1\n\ndef timeit(ds, steps=default_timeit_steps):\n overall_start = time.time()\n # Fetch a single batch to prime the pipeline (fill the shuffle buffer),\n # before starting the timer\n it = iter(ds.take(steps+1))\n next(it)\n\n start = time.time()\n for i,(images,labels) in enumerate(it):\n if i%10 == 0:\n print('.',end='')\n print()\n end = time.time()\n\n duration = end-start\n print(\"{} batches: {} s\".format(steps, duration))\n print(\"{:0.5f} Images/s\".format(BATCH_SIZE*steps/duration))\n print(\"Total time: {}s\".format(end-overall_start))",
"_____no_output_____"
]
],
[
[
"現在のデータセットの性能は次のとおりです。",
"_____no_output_____"
]
],
[
[
"ds = image_label_ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)\nds",
"_____no_output_____"
],
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"### キャッシュ",
"_____no_output_____"
],
[
"`tf.data.Dataset.cache` を使うと、エポックを越えて計算結果を簡単にキャッシュできます。特に、データがメモリに収まるときには効果的です。\n\nここでは、画像が前処理(デコードとリサイズ)された後でキャッシュされます。",
"_____no_output_____"
]
],
[
[
"ds = image_label_ds.cache()\nds = ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)\nds",
"_____no_output_____"
],
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"メモリキャッシュを使う際の欠点のひとつは、実行の都度キャッシュを再構築しなければならないことです。このため、データセットがスタートするたびにおなじだけ起動のための遅延が発生します。",
"_____no_output_____"
]
],
[
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"データがメモリに収まらない場合には、キャッシュファイルを使用します。",
"_____no_output_____"
]
],
[
[
"ds = image_label_ds.cache(filename='./cache.tf-data')\nds = ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds = ds.batch(BATCH_SIZE).prefetch(1)\nds",
"_____no_output_____"
],
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"キャッシュファイルには、キャッシュを再構築することなくデータセットを再起動できるという利点もあります。2回めがどれほど早いか見てみましょう。",
"_____no_output_____"
]
],
[
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"### TFRecord ファイル",
"_____no_output_____"
],
[
"#### 生の画像データ\n\nTFRecord ファイルは、バイナリの大きなオブジェクトのシーケンスを保存するための単純なフォーマットです。複数のサンプルをおなじファイルに詰め込むことで、TensorFlow は複数のサンプルを一度に読み込むことができます。これは、特に GCS のようなリモートストレージサービスを使用する際の性能にとって重要です。\n\n最初に、生の画像データから TFRecord ファイルを構築します。",
"_____no_output_____"
]
],
[
[
"image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths).map(tf.io.read_file)\ntfrec = tf.data.experimental.TFRecordWriter('images.tfrec')\ntfrec.write(image_ds)",
"_____no_output_____"
]
],
[
[
"次に、TFRecord ファイルを読み込み、以前定義した `preprocess_image` 関数を使って画像のデコード/リフォーマットを行うデータセットを構築します。",
"_____no_output_____"
]
],
[
[
"image_ds = tf.data.TFRecordDataset('images.tfrec').map(preprocess_image)",
"_____no_output_____"
]
],
[
[
"これを、前に定義済みのラベルデータセットと zip し、期待どおりの `(image,label)` のペアを得ます。",
"_____no_output_____"
]
],
[
[
"ds = tf.data.Dataset.zip((image_ds, label_ds))\nds = ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)\nds",
"_____no_output_____"
],
[
"timeit(ds)",
"_____no_output_____"
]
],
[
[
"これは、`cache` バージョンよりも低速です。前処理をキャッシュしていないからです。",
"_____no_output_____"
],
[
"#### シリアライズしたテンソル",
"_____no_output_____"
],
[
"前処理を TFRecord ファイルに保存するには、前やったように前処理した画像のデータセットを作ります。",
"_____no_output_____"
]
],
[
[
"paths_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)\nimage_ds = paths_ds.map(load_and_preprocess_image)\nimage_ds",
"_____no_output_____"
]
],
[
[
"`.jpeg` 文字列のデータセットではなく、これはテンソルのデータセットです。\n\nこれを TFRecord ファイルにシリアライズするには、まず、テンソルのデータセットを文字列のデータセットに変換します。",
"_____no_output_____"
]
],
[
[
"ds = image_ds.map(tf.io.serialize_tensor)\nds",
"_____no_output_____"
],
[
"tfrec = tf.data.experimental.TFRecordWriter('images.tfrec')\ntfrec.write(ds)",
"_____no_output_____"
]
],
[
[
"前処理をキャッシュしたことにより、データは TFRecord ファイルから非常に効率的にロードできます。テンソルを使用する前にデシリアライズすることを忘れないでください。",
"_____no_output_____"
]
],
[
[
"ds = tf.data.TFRecordDataset('images.tfrec')\n\ndef parse(x):\n result = tf.io.parse_tensor(x, out_type=tf.float32)\n result = tf.reshape(result, [192, 192, 3])\n return result\n\nds = ds.map(parse, num_parallel_calls=AUTOTUNE)\nds",
"_____no_output_____"
]
],
[
[
"次にラベルを追加し、以前とおなじような標準的な処理を適用します。",
"_____no_output_____"
]
],
[
[
"ds = tf.data.Dataset.zip((ds, label_ds))\nds = ds.apply(\n tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))\nds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)\nds",
"_____no_output_____"
],
[
"timeit(ds)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
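[
"markdown"
],
[
"code"
],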
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0b0f56073f83749a055b1b2deb60f1d44d1a931 | 62,618 | ipynb | Jupyter Notebook | Sequence Models/week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb | lankuohsing/Coursera-Deep-Learning-Specialization | 64f34c862c8ef2cdf97379d82d31d47b8c6d6dcd | [
"MIT"
] | null | null | null | Sequence Models/week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb | lankuohsing/Coursera-Deep-Learning-Specialization | 64f34c862c8ef2cdf97379d82d31d47b8c6d6dcd | [
"MIT"
] | null | null | null | Sequence Models/week1/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb | lankuohsing/Coursera-Deep-Learning-Specialization | 64f34c862c8ef2cdf97379d82d31d47b8c6d6dcd | [
"MIT"
] | null | null | null | 38.392397 | 565 | 0.558833 | [
[
[
"# Character level language model - Dinosaurus Island\n\nWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! \n\n<table>\n<td>\n<img src=\"images/dino.jpg\" style=\"width:250;height:300px;\">\n\n</td>\n\n</table>\n\nLuckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! \n\nBy completing this assignment you will learn:\n\n- How to store text data for processing using an RNN \n- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit\n- How to build a character-level text generation recurrent neural network\n- Why clipping the gradients is important\n\nWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. ",
"_____no_output_____"
],
[
"## <font color='darkblue'>Updates</font>\n\n#### If you were working on the notebook before this update...\n* The current notebook is version \"3a\".\n* You can find your original work saved in the notebook with the previous version name (\"v3\") \n* To view the file directory, go to the menu \"File->Open\", and this will open a new tab that shows the file directory.\n\n#### List of updates\n* Sort and print `chars` list of characters.\n* Import and use pretty print\n* `clip`: \n - Additional details on why we need to use the \"out\" parameter.\n - Modified for loop to have students fill in the correct items to loop through.\n - Added a test case to check for hard-coding error.\n* `sample`\n - additional hints added to steps 1,2,3,4.\n - \"Using 2D arrays instead of 1D arrays\".\n - explanation of numpy.ravel().\n - fixed expected output.\n - clarified comments in the code.\n* \"training the model\"\n - Replaced the sample code with explanations for how to set the index, X and Y (for a better learning experience).\n* Spelling, grammar and wording corrections.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom utils import *\nimport random\nimport pprint",
"_____no_output_____"
]
],
[
[
"## 1 - Problem Statement\n\n### 1.1 - Dataset and Preprocessing\n\nRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size. ",
"_____no_output_____"
]
],
[
[
"data = open('dinos.txt', 'r').read()\ndata= data.lower()\nchars = list(set(data))\ndata_size, vocab_size = len(data), len(chars)\nprint('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))",
"There are 19909 total characters and 27 unique characters in your data.\n"
]
],
[
[
"\n* The characters are a-z (26 characters) plus the \"\\n\" (or newline character).\n* In this assignment, the newline character \"\\n\" plays a role similar to the `<EOS>` (or \"End of sentence\") token we had discussed in lecture. \n - Here, \"\\n\" indicates the end of the dinosaur name rather than the end of a sentence. \n* `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.\n* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character. \n - This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. ",
"_____no_output_____"
]
],
[
[
"chars = sorted(chars)\nprint(chars)",
"['\\n', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']\n"
],
[
"char_to_ix = { ch:i for i,ch in enumerate(chars) }\nix_to_char = { i:ch for i,ch in enumerate(chars) }\npp = pprint.PrettyPrinter(indent=4)\npp.pprint(ix_to_char)",
"{ 0: '\\n',\n 1: 'a',\n 2: 'b',\n 3: 'c',\n 4: 'd',\n 5: 'e',\n 6: 'f',\n 7: 'g',\n 8: 'h',\n 9: 'i',\n 10: 'j',\n 11: 'k',\n 12: 'l',\n 13: 'm',\n 14: 'n',\n 15: 'o',\n 16: 'p',\n 17: 'q',\n 18: 'r',\n 19: 's',\n 20: 't',\n 21: 'u',\n 22: 'v',\n 23: 'w',\n 24: 'x',\n 25: 'y',\n 26: 'z'}\n"
]
],
[
[
"### 1.2 - Overview of the model\n\nYour model will have the following structure: \n\n- Initialize parameters \n- Run the optimization loop\n - Forward propagation to compute the loss function\n - Backward propagation to compute the gradients with respect to the loss function\n - Clip the gradients to avoid exploding gradients\n - Using the gradients, update your parameters with the gradient descent update rule.\n- Return the learned parameters \n \n<img src=\"images/rnn.png\" style=\"width:450;height:300px;\">\n<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook \"Building a Recurrent Neural Network - Step by Step\". </center></caption>\n\n* At each time-step, the RNN tries to predict what is the next character given the previous characters. \n* The dataset $\\mathbf{X} = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is a list of characters in the training set.\n* $\\mathbf{Y} = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$ is the same list of characters but shifted one character forward. \n* At every time-step $t$, $y^{\\langle t \\rangle} = x^{\\langle t+1 \\rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$.",
"_____no_output_____"
],
[
"## 2 - Building blocks of the model\n\nIn this part, you will build two important blocks of the overall model:\n- Gradient clipping: to avoid exploding gradients\n- Sampling: a technique used to generate characters\n\nYou will then apply these two functions to build the model.",
"_____no_output_____"
],
[
"### 2.1 - Clipping the gradients in the optimization loop\n\nIn this section you will implement the `clip` function that you will call inside of your optimization loop. \n\n#### Exploding gradients\n* When gradients are very large, they're called \"exploding gradients.\" \n* Exploding gradients make the training process more difficult, because the updates may be so large that they \"overshoot\" the optimal values during back propagation.\n\nRecall that your overall loop structure usually consists of:\n* forward pass, \n* cost computation, \n* backward pass, \n* parameter update. \n\nBefore updating the parameters, you will perform gradient clipping to make sure that your gradients are not \"exploding.\"\n\n#### gradient clipping\nIn the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. \n* There are different ways to clip gradients.\n* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. \n* For example, if the N=10\n - The range is [-10, 10]\n - If any component of the gradient vector is greater than 10, it is set to 10.\n - If any component of the gradient vector is less than -10, it is set to -10. \n - If any components are between -10 and 10, they keep their original values.\n\n<img src=\"images/clip.png\" style=\"width:400;height:150px;\">\n<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into \"exploding gradient\" problems. </center></caption>\n\n**Exercise**: \nImplement the function below to return the clipped gradients of your dictionary `gradients`. \n* Your function takes in a maximum threshold and returns the clipped versions of the gradients. \n* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html). \n - You will need to use the argument \"`out = ...`\".\n - Using the \"`out`\" parameter allows you to update a variable \"in-place\".\n - If you don't use \"`out`\" argument, the clipped variable is stored in the variable \"gradient\" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.",
"_____no_output_____"
]
],
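[
[
"Before implementing `clip`, it may help to see the in-place behavior of the `out` parameter on a small made-up array. The following cell is a minimal sketch and is not part of the graded exercise.",
"_____no_output_____"
]
],
[
[
"# A small illustration of np.clip's `out` parameter (not part of the graded exercise).\na = np.array([-12., 3., 15.])\n\nnp.clip(a, -10, 10)          # returns a clipped copy; `a` itself is unchanged\nprint('without out= :', a)\n\nnp.clip(a, -10, 10, out=a)   # writes the clipped values back into `a`\nprint('with out=    :', a)",
"_____no_output_____"
]
],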
[
[
"### GRADED FUNCTION: clip\n\ndef clip(gradients, maxValue):\n '''\n Clips the gradients' values between minimum and maximum.\n \n Arguments:\n gradients -- a dictionary containing the gradients \"dWaa\", \"dWax\", \"dWya\", \"db\", \"dby\"\n maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue\n \n Returns: \n gradients -- a dictionary with the clipped gradients.\n '''\n \n dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']\n \n ### START CODE HERE ###\n # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)\n for gradient in [dWaa, dWax, dWya, db, dby]:\n np.clip(gradient,a_min=-maxValue,a_max=maxValue,out=gradient)\n ### END CODE HERE ###\n \n gradients = {\"dWaa\": dWaa, \"dWax\": dWax, \"dWya\": dWya, \"db\": db, \"dby\": dby}\n \n return gradients",
"_____no_output_____"
],
[
"# Test with a maxvalue of 10\nmaxValue = 10\nnp.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, maxValue)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])",
"gradients[\"dWaa\"][1][2] = 10.0\ngradients[\"dWax\"][3][1] = -10.0\ngradients[\"dWya\"][1][2] = 0.29713815361\ngradients[\"db\"][4] = [ 10.]\ngradients[\"dby\"][1] = [ 8.45833407]\n"
]
],
[
[
"** Expected output:**\n\n```Python\ngradients[\"dWaa\"][1][2] = 10.0\ngradients[\"dWax\"][3][1] = -10.0\ngradients[\"dWya\"][1][2] = 0.29713815361\ngradients[\"db\"][4] = [ 10.]\ngradients[\"dby\"][1] = [ 8.45833407]\n```",
"_____no_output_____"
]
],
[
[
"# Test with a maxValue of 5\nmaxValue = 5\nnp.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, maxValue)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])",
"gradients[\"dWaa\"][1][2] = 5.0\ngradients[\"dWax\"][3][1] = -5.0\ngradients[\"dWya\"][1][2] = 0.29713815361\ngradients[\"db\"][4] = [ 5.]\ngradients[\"dby\"][1] = [ 5.]\n"
]
],
[
[
"** Expected Output: **\n```Python\ngradients[\"dWaa\"][1][2] = 5.0\ngradients[\"dWax\"][3][1] = -5.0\ngradients[\"dWya\"][1][2] = 0.29713815361\ngradients[\"db\"][4] = [ 5.]\ngradients[\"dby\"][1] = [ 5.]\n```",
"_____no_output_____"
],
[
"### 2.2 - Sampling\n\nNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:\n\n<img src=\"images/dinos3.png\" style=\"width:500;height:300px;\">\n<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\\langle 1\\rangle} = \\vec{0}$ at the first time step, and have the network sample one character at a time. </center></caption>",
"_____no_output_____"
],
[
"**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:\n\n- **Step 1**: Input the \"dummy\" vector of zeros $x^{\\langle 1 \\rangle} = \\vec{0}$. \n - This is the default input before we've generated any characters. \n We also set $a^{\\langle 0 \\rangle} = \\vec{0}$",
"_____no_output_____"
],
[
"- **Step 2**: Run one step of forward propagation to get $a^{\\langle 1 \\rangle}$ and $\\hat{y}^{\\langle 1 \\rangle}$. Here are the equations:\n\nhidden state: \n$$ a^{\\langle t+1 \\rangle} = \\tanh(W_{ax} x^{\\langle t+1 \\rangle } + W_{aa} a^{\\langle t \\rangle } + b)\\tag{1}$$\n\nactivation:\n$$ z^{\\langle t + 1 \\rangle } = W_{ya} a^{\\langle t + 1 \\rangle } + b_y \\tag{2}$$\n\nprediction:\n$$ \\hat{y}^{\\langle t+1 \\rangle } = softmax(z^{\\langle t + 1 \\rangle })\\tag{3}$$\n\n- Details about $\\hat{y}^{\\langle t+1 \\rangle }$:\n - Note that $\\hat{y}^{\\langle t+1 \\rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). \n - $\\hat{y}^{\\langle t+1 \\rangle}_i$ represents the probability that the character indexed by \"i\" is the next character. \n - We have provided a `softmax()` function that you can use.",
"_____no_output_____"
],
[
"#### Additional Hints\n\n- $x^{\\langle 1 \\rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.\n- $a^{\\langle 0 \\rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\\langle t \\rangle}$ to work.\n- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)\n- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)",
"_____no_output_____"
],
[
"#### Using 2D arrays instead of 1D arrays\n* You may be wondering why we emphasize that $x^{\\langle 1 \\rangle}$ and $a^{\\langle 0 \\rangle}$ are 2D arrays and not 1D vectors.\n* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with with a 1D array.\n* This becomes a problem when we add two arrays where we expected them to have the same shape.\n* When two arrays with a different number of dimensions are added together, Python \"broadcasts\" one across the other.\n* Here is some sample code that shows the difference between using a 1D and 2D array.",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)\nmatrix2 = np.array([[0],[0],[0]]) # (3,1) \nvector1D = np.array([1,1]) # (2,) \nvector2D = np.array([[1],[1]]) # (2,1)\nprint(\"matrix1 \\n\", matrix1,\"\\n\")\nprint(\"matrix2 \\n\", matrix2,\"\\n\")\nprint(\"vector1D \\n\", vector1D,\"\\n\")\nprint(\"vector2D \\n\", vector2D)",
"matrix1 \n [[1 1]\n [2 2]\n [3 3]] \n\nmatrix2 \n [[0]\n [0]\n [0]] \n\nvector1D \n [1 1] \n\nvector2D \n [[1]\n [1]]\n"
],
[
"print(\"Multiply 2D and 1D arrays: result is a 1D array\\n\", \n np.dot(matrix1,vector1D))\nprint(\"Multiply 2D and 2D arrays: result is a 2D array\\n\", \n np.dot(matrix1,vector2D))",
"Multiply 2D and 1D arrays: result is a 1D array\n [2 4 6]\nMultiply 2D and 2D arrays: result is a 2D array\n [[2]\n [4]\n [6]]\n"
],
[
"print(\"Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\\n\",\n \"This is what we want here!\\n\", \n np.dot(matrix1,vector2D) + matrix2)",
"Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n This is what we want here!\n [[2]\n [4]\n [6]]\n"
],
[
"print(\"Adding a (3,) vector to a (3 x 1) vector\\n\",\n \"broadcasts the 1D array across the second dimension\\n\",\n \"Not what we want here!\\n\",\n np.dot(matrix1,vector1D) + matrix2\n )",
"Adding a (3,) vector to a (3 x 1) vector\n broadcasts the 1D array across the second dimension\n Not what we want here!\n [[2 4 6]\n [2 4 6]\n [2 4 6]]\n"
]
],
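[
[
"As a minimal sketch of equations (1), (2) and (3) above, the cell below runs a single forward step with made-up shapes ($n_{a} = 4$, vocabulary size 5). The inline softmax stands in for the one provided in `utils`, and the `_demo` names are used so nothing in the assignment is overwritten.",
"_____no_output_____"
]
],
[
[
"# One forward step of equations (1)-(3) with made-up shapes (illustrative only).\nnp.random.seed(0)\nn_a_demo, vocab_demo = 4, 5\nWax_demo = np.random.randn(n_a_demo, vocab_demo)\nWaa_demo = np.random.randn(n_a_demo, n_a_demo)\nWya_demo = np.random.randn(vocab_demo, n_a_demo)\nb_demo, by_demo = np.zeros((n_a_demo, 1)), np.zeros((vocab_demo, 1))\nx_demo, a_prev_demo = np.zeros((vocab_demo, 1)), np.zeros((n_a_demo, 1))\n\na_demo = np.tanh(np.dot(Wax_demo, x_demo) + np.dot(Waa_demo, a_prev_demo) + b_demo)  # equation (1)\nz_demo = np.dot(Wya_demo, a_demo) + by_demo                                          # equation (2)\ny_hat_demo = np.exp(z_demo) / np.sum(np.exp(z_demo))                                 # equation (3)\nprint('y_hat shape:', y_hat_demo.shape, ' sum:', y_hat_demo.sum())",
"_____no_output_____"
]
],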
[
[
"- **Step 3**: Sampling: \n - Now that we have $y^{\\langle t+1 \\rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. \n - To make the results more interesting, we will use np.random.choice to select a next letter that is likely, but not always the same.\n - Sampling is the selection of a value from a group of values, where each value has a probability of being picked. \n - Sampling allows us to generate random sequences of values.\n - Pick the next character's index according to the probability distribution specified by $\\hat{y}^{\\langle t+1 \\rangle }$. \n - This means that if $\\hat{y}^{\\langle t+1 \\rangle }_i = 0.16$, you will pick the index \"i\" with 16% probability. \n - You can use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).\n\n Example of how to use `np.random.choice()`:\n ```python\n np.random.seed(0)\n probs = np.array([0.1, 0.0, 0.7, 0.2])\n idx = np.random.choice([0, 1, 2, 3] p = probs)\n ```\n - This means that you will pick the index (`idx`) according to the distribution: \n\n $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.\n\n - Note that the value that's set to `p` should be set to a 1D vector.\n - Also notice that $\\hat{y}^{\\langle t+1 \\rangle}$, which is `y` in the code, is a 2D array.",
"_____no_output_____"
],
[
"##### Additional Hints\n- [range](https://docs.python.org/3/library/functions.html#func-range)\n- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.\n```Python\narr = np.array([[1,2],[3,4]])\nprint(\"arr\")\nprint(arr)\nprint(\"arr.ravel()\")\nprint(arr.ravel())\n```\nOutput:\n```Python\narr\n[[1 2]\n [3 4]]\narr.ravel()\n[1 2 3 4]\n```\n\n- Note that `append` is an \"in-place\" operation. In other words, don't do this:\n```Python\nfun_hobbies = fun_hobbies.append('learning') ## Doesn't give you what you want\n```",
"_____no_output_____"
],
[
"- **Step 4**: Update to $x^{\\langle t \\rangle }$ \n - The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\\langle t \\rangle }$, with the value of $x^{\\langle t + 1 \\rangle }$. \n - You will represent $x^{\\langle t + 1 \\rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction. \n - You will then forward propagate $x^{\\langle t + 1 \\rangle }$ in Step 1 and keep repeating the process until you get a \"\\n\" character, indicating that you have reached the end of the dinosaur name. ",
"_____no_output_____"
],
[
"##### Additional Hints\n- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.\n - You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)\n - Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)",
"_____no_output_____"
]
],
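[
[
"The next cell ties Steps 3 and 4 together in a small sketch on a made-up 4-element probability vector: it samples an index with `np.random.choice` (note the `.ravel()` on the 2D `y`), then builds the one-hot `x` for the next time step. The `_demo` names are only for illustration.",
"_____no_output_____"
]
],
[
[
"# A small sketch of Steps 3 and 4 on a made-up probability vector.\nnp.random.seed(0)\ny_demo = np.array([[0.1], [0.0], [0.7], [0.2]])              # 2D, like the softmax output\n\nidx_demo = np.random.choice(np.arange(4), p=y_demo.ravel())  # Step 3: sample an index\nprint('sampled index:', idx_demo)\n\nx_demo = np.zeros((4, 1))                                    # Step 4: one-hot next input\nx_demo[idx_demo] = 1\nprint('one-hot x:')\nprint(x_demo)",
"_____no_output_____"
]
],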
[
[
"# GRADED FUNCTION: sample\n\ndef sample(parameters, char_to_ix, seed):\n \"\"\"\n Sample a sequence of characters according to a sequence of probability distributions output of the RNN\n\n Arguments:\n parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. \n char_to_ix -- python dictionary mapping each character to an index.\n seed -- used for grading purposes. Do not worry about it.\n\n Returns:\n indices -- a list of length n containing the indices of the sampled characters.\n \"\"\"\n \n # Retrieve parameters and relevant shapes from \"parameters\" dictionary\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n vocab_size = by.shape[0]\n n_a = Waa.shape[1]\n \n ### START CODE HERE ###\n # Step 1: Create the a zero vector x that can be used as the one-hot vector \n # representing the first character (initializing the sequence generation). (≈1 line)\n x = np.zeros((vocab_size,1))\n # Step 1': Initialize a_prev as zeros (≈1 line)\n a_prev = np.zeros((n_a,1))\n \n # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)\n indices = []\n \n # idx is the index of the one-hot vector x that is set to 1\n # All other positions in x are zero.\n # We will initialize idx to -1\n idx = -1 \n \n # Loop over time-steps t. At each time-step:\n # sample a character from a probability distribution \n # and append its index (`idx`) to the list \"indices\". \n # We'll stop if we reach 50 characters \n # (which should be very unlikely with a well trained model).\n # Setting the maximum number of characters helps with debugging and prevents infinite loops. \n counter = 0\n newline_character = char_to_ix['\\n']\n \n while (idx != newline_character and counter != 50):\n \n # Step 2: Forward propagate x using the equations (1), (2) and (3)\n a = np.tanh(np.dot(Wax,x)+np.dot(Waa,a_prev)+b)\n z = np.dot(Wya,a)+by\n y = softmax(z)\n \n # for grading purposes\n np.random.seed(counter+seed) \n \n # Step 3: Sample the index of a character within the vocabulary from the probability distribution y\n # (see additional hints above)\n idx = np.random.choice(list(range(0,vocab_size)),p=y.ravel())\n\n # Append the index to \"indices\"\n indices.append(idx)\n \n # Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.\n # (see additional hints above)\n x = np.zeros((vocab_size,1))\n x[idx] = 1\n \n # Update \"a_prev\" to be \"a\"\n a_prev = a\n \n # for grading purposes\n seed += 1\n counter +=1\n \n ### END CODE HERE ###\n\n if (counter == 50):\n indices.append(char_to_ix['\\n'])\n \n return indices",
"_____no_output_____"
],
[
"np.random.seed(2)\n_, n_a = 20, 100\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\n\n\nindices = sample(parameters, char_to_ix, 0)\nprint(\"Sampling:\")\nprint(\"list of sampled indices:\\n\", indices)\nprint(\"size indices:\\n\", len(indices))\nprint(\"list of sampled characters:\\n\", [ix_to_char[i] for i in indices])",
"Sampling:\nlist of sampled indices:\n [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]\nsize indices:\n 51\nlist of sampled characters:\n ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\\n']\n"
]
],
[
[
"** Expected output:**\n\n```Python\nSampling:\nlist of sampled indices:\n [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]\nlist of sampled characters:\n ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\\n']\n```\n\n* Please note that over time, if there are updates to the back-end of the Coursera platform (that may update the version of numpy), the actual list of sampled indices and sampled characters may change. \n* If you follow the instructions given above and get an output without errors, it's possible the routine is correct even if your output doesn't match the expected output. Submit your assignment to the grader to verify its correctness.",
"_____no_output_____"
],
[
"## 3 - Building the language model \n\nIt is time to build the character-level language model for text generation. \n\n\n### 3.1 - Gradient descent \n\n* In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). \n* You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. \n\nAs a reminder, here are the steps of a common optimization loop for an RNN:\n\n- Forward propagate through the RNN to compute the loss\n- Backward propagate through time to compute the gradients of the loss with respect to the parameters\n- Clip the gradients\n- Update the parameters using gradient descent \n\n**Exercise**: Implement the optimization process (one step of stochastic gradient descent). \n\nThe following functions are provided:\n\n```python\ndef rnn_forward(X, Y, a_prev, parameters):\n \"\"\" Performs the forward propagation through the RNN and computes the cross-entropy loss.\n It returns the loss' value as well as a \"cache\" storing values to be used in backpropagation.\"\"\"\n ....\n return loss, cache\n \ndef rnn_backward(X, Y, parameters, cache):\n \"\"\" Performs the backward propagation through time to compute the gradients of the loss with respect\n to the parameters. It returns also all the hidden states.\"\"\"\n ...\n return gradients, a\n\ndef update_parameters(parameters, gradients, learning_rate):\n \"\"\" Updates parameters using the Gradient Descent Update Rule.\"\"\"\n ...\n return parameters\n```\n\nRecall that you previously implemented the `clip` function:\n\n```Python\ndef clip(gradients, maxValue)\n \"\"\"Clips the gradients' values between minimum and maximum.\"\"\"\n ...\n return gradients\n```",
"_____no_output_____"
],
[
"#### parameters\n\n* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.\n* Python dictionaries and lists are \"pass by reference\", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).",
"_____no_output_____"
]
],
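[
[
"Here is a quick sketch of the pass-by-reference behavior described above, using a throwaway dictionary (the `toy_params` and `scale_weights` names are made up for the demo):",
"_____no_output_____"
]
],
[
[
"# Python dictionaries are passed by reference: mutating one inside a function\n# changes the caller's dictionary, so nothing needs to be returned.\ndef scale_weights(params, factor):\n    params['W'] = params['W'] * factor\n\ntoy_params = {'W': np.array([1.0, 2.0])}\nscale_weights(toy_params, 10)\nprint(toy_params['W'])   # [ 10.  20.] -- the original dictionary was updated",
"_____no_output_____"
]
],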
[
[
"# GRADED FUNCTION: optimize\n\ndef optimize(X, Y, a_prev, parameters, learning_rate = 0.01):\n \"\"\"\n Execute one step of the optimization to train the model.\n \n Arguments:\n X -- list of integers, where each integer is a number that maps to a character in the vocabulary.\n Y -- list of integers, exactly the same as X but shifted one index to the left.\n a_prev -- previous hidden state.\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n b -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n learning_rate -- learning rate for the model.\n \n Returns:\n loss -- value of the loss function (cross-entropy)\n gradients -- python dictionary containing:\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)\n db -- Gradients of bias vector, of shape (n_a, 1)\n dby -- Gradients of output bias vector, of shape (n_y, 1)\n a[len(X)-1] -- the last hidden state, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Forward propagate through time (≈1 line)\n loss, cache = rnn_forward(X, Y, a_prev, parameters)\n \n # Backpropagate through time (≈1 line)\n gradients, a = rnn_backward(X, Y, parameters, cache)\n \n # Clip your gradients between -5 (min) and 5 (max) (≈1 line)\n gradients = clip(gradients, 5)\n \n # Update parameters (≈1 line)\n parameters = update_parameters(parameters, gradients, learning_rate)\n \n ### END CODE HERE ###\n \n return loss, gradients, a[len(X)-1]",
"_____no_output_____"
],
[
"np.random.seed(1)\nvocab_size, n_a = 27, 100\na_prev = np.random.randn(n_a, 1)\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\nX = [12,3,5,11,22,3]\nY = [4,14,11,22,25, 26]\n\nloss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\nprint(\"Loss =\", loss)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"np.argmax(gradients[\\\"dWax\\\"]) =\", np.argmax(gradients[\"dWax\"]))\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])\nprint(\"a_last[4] =\", a_last[4])",
"Loss = 126.503975722\ngradients[\"dWaa\"][1][2] = 0.194709315347\nnp.argmax(gradients[\"dWax\"]) = 93\ngradients[\"dWya\"][1][2] = -0.007773876032\ngradients[\"db\"][4] = [-0.06809825]\ngradients[\"dby\"][1] = [ 0.01538192]\na_last[4] = [-1.]\n"
]
],
[
[
"** Expected output:**\n\n```Python\nLoss = 126.503975722\ngradients[\"dWaa\"][1][2] = 0.194709315347\nnp.argmax(gradients[\"dWax\"]) = 93\ngradients[\"dWya\"][1][2] = -0.007773876032\ngradients[\"db\"][4] = [-0.06809825]\ngradients[\"dby\"][1] = [ 0.01538192]\na_last[4] = [-1.]\n```",
"_____no_output_____"
],
[
"### 3.2 - Training the model ",
"_____no_output_____"
],
[
"* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. \n* Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. \n* Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. \n\n**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:\n\n##### Set the index `idx` into the list of examples\n* Using the for-loop, walk through the shuffled list of dinosaur names in the list \"examples\".\n* If there are 100 examples, and the for-loop increments the index to 100 onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is 100, 101, etc.\n* Hint: 101 divided by 100 is zero with a remainder of 1.\n* `%` is the modulus operator in python.\n\n##### Extract a single example from the list of examples\n* `single_example`: use the `idx` index that you set previously to get one word from the list of examples.",
"_____no_output_____"
],
[
"##### Convert a string into a list of characters: `single_example_chars`\n* `single_example_chars`: A string is a list of characters.\n* You can use a list comprehension (recommended over for-loops) to generate a list of characters.\n```Python\nstr = 'I love learning'\nlist_of_chars = [c for c in str]\nprint(list_of_chars)\n```\n\n```\n['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']\n```",
"_____no_output_____"
],
[
"##### Convert list of characters to a list of integers: `single_example_ix`\n* Create a list that contains the index numbers associated with each character.\n* Use the dictionary `char_to_ix`\n* You can combine this with the list comprehension that is used to get a list of characters from a string.\n* This is a separate line of code below, to help learners clarify each step in the function.",
"_____no_output_____"
],
[
"##### Create the list of input characters: `X`\n* `rnn_forward` uses the `None` value as a flag to set the input vector as a zero-vector.\n* Prepend the `None` value in front of the list of input characters.\n* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']`",
"_____no_output_____"
],
[
"##### Get the integer representation of the newline character `ix_newline`\n* `ix_newline`: The newline character signals the end of the dinosaur name.\n - get the integer representation of the newline character `'\\n'`.\n - Use `char_to_ix`",
"_____no_output_____"
],
[
"##### Set the list of labels (integer representation of the characters): `Y`\n* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`.\n - For example, `Y[0]` contains the same value as `X[1]` \n* The RNN should predict a newline at the last letter so add ix_newline to the end of the labels. \n - Append the integer representation of the newline character to the end of `Y`.\n - Note that `append` is an in-place operation.\n - It might be easier for you to add two lists together.",
"_____no_output_____"
]
],
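[
[
"And the matching labels, continuing the toy example above (a sketch; `ix_newline` is 0 in the toy mapping):\n\n```Python\nix_newline = char_to_ix['\\n']   # 0\nY = X[1:] + [ix_newline]        # [3, 1, 2, 0] -- each label is the next input character\n```",
"_____no_output_____"
]
],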
[
[
"# GRADED FUNCTION: model\n\ndef model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):\n \"\"\"\n Trains the model and generates dinosaur names. \n \n Arguments:\n data -- text corpus\n ix_to_char -- dictionary that maps the index to a character\n char_to_ix -- dictionary that maps a character to an index\n num_iterations -- number of iterations to train the model for\n n_a -- number of units of the RNN cell\n dino_names -- number of dinosaur names you want to sample at each iteration. \n vocab_size -- number of unique characters found in the text (size of the vocabulary)\n \n Returns:\n parameters -- learned parameters\n \"\"\"\n \n # Retrieve n_x and n_y from vocab_size\n n_x, n_y = vocab_size, vocab_size\n \n # Initialize parameters\n parameters = initialize_parameters(n_a, n_x, n_y)\n \n # Initialize loss (this is required because we want to smooth our loss)\n loss = get_initial_loss(vocab_size, dino_names)\n \n # Build list of all dinosaur names (training examples).\n with open(\"dinos.txt\") as f:\n examples = f.readlines()\n examples = [x.lower().strip() for x in examples]\n \n # Shuffle list of all dinosaur names\n np.random.seed(0)\n np.random.shuffle(examples)\n \n # Initialize the hidden state of your LSTM\n a_prev = np.zeros((n_a, 1))\n \n # Optimization loop\n for j in range(num_iterations):\n \n ### START CODE HERE ###\n \n # Set the index `idx` (see instructions above)\n idx = j%len(examples)\n \n # Set the input X (see instructions above)\n single_example = examples[idx]\n single_example_chars = [c for c in single_example]\n single_example_ix = [char_to_ix[c] for c in single_example_chars]\n X = [None]+single_example_ix\n \n # Set the labels Y (see instructions above)\n ix_newline = char_to_ix[\"\\n\"]\n Y = X[1:]+[ix_newline]\n \n # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters\n # Choose a learning rate of 0.01\n curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\n \n ### END CODE HERE ###\n \n # Use a latency trick to keep the loss smooth. It happens here to accelerate the training.\n loss = smooth(loss, curr_loss)\n\n # Every 2000 Iteration, generate \"n\" characters thanks to sample() to check if the model is learning properly\n if j % 2000 == 0:\n \n print('Iteration: %d, Loss: %f' % (j, loss) + '\\n')\n \n # The number of dinosaur names to print\n seed = 0\n for name in range(dino_names):\n \n # Sample indices and print them\n sampled_indices = sample(parameters, char_to_ix, seed)\n print_sample(sampled_indices, ix_to_char)\n \n seed += 1 # To get the same result (for grading purposes), increment the seed by one. \n \n print('\\n')\n \n return parameters",
"_____no_output_____"
]
],
[
[
"Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names. ",
"_____no_output_____"
]
],
[
[
"parameters = model(data, ix_to_char, char_to_ix)",
"Iteration: 0, Loss: 23.087336\n\nNkzxwtdmfqoeyhsqwasjkjvu\nKneb\nKzxwtdmfqoeyhsqwasjkjvu\nNeb\nZxwtdmfqoeyhsqwasjkjvu\nEb\nXwtdmfqoeyhsqwasjkjvu\n\n\nIteration: 2000, Loss: 27.884160\n\nLiusskeomnolxeros\nHmdaairus\nHytroligoraurus\nLecalosapaus\nXusicikoraurus\nAbalpsamantisaurus\nTpraneronxeros\n\n\nIteration: 4000, Loss: 25.901815\n\nMivrosaurus\nInee\nIvtroplisaurus\nMbaaisaurus\nWusichisaurus\nCabaselachus\nToraperlethosdarenitochusthiamamumamaon\n\n\nIteration: 6000, Loss: 24.608779\n\nOnwusceomosaurus\nLieeaerosaurus\nLxussaurus\nOma\nXusteonosaurus\nEeahosaurus\nToreonosaurus\n\n\nIteration: 8000, Loss: 24.070350\n\nOnxusichepriuon\nKilabersaurus\nLutrodon\nOmaaerosaurus\nXutrcheps\nEdaksoje\nTrodiktonus\n\n\nIteration: 10000, Loss: 23.844446\n\nOnyusaurus\nKlecalosaurus\nLustodon\nOla\nXusodonia\nEeaeosaurus\nTroceosaurus\n\n\nIteration: 12000, Loss: 23.291971\n\nOnyxosaurus\nKica\nLustrepiosaurus\nOlaagrraiansaurus\nYuspangosaurus\nEealosaurus\nTrognesaurus\n\n\nIteration: 14000, Loss: 23.382338\n\nMeutromodromurus\nInda\nIutroinatorsaurus\nMaca\nYusteratoptititan\nCa\nTroclosaurus\n\n\nIteration: 16000, Loss: 23.255630\n\nMeustolkanolus\nIndabestacarospceryradwalosaurus\nJustolopinaveraterasauracoptelalenyden\nMaca\nYusocles\nDaahosaurus\nTrodon\n\n\nIteration: 18000, Loss: 22.905483\n\nPhytronn\nMeicanstolanthus\nMustrisaurus\nPegalosaurus\nYuskercis\nEgalosaurus\nTromelosaurus\n\n\nIteration: 20000, Loss: 22.873854\n\nNlyushanerohyisaurus\nLoga\nLustrhigosaurus\nNedalosaurus\nYuslangosaurus\nElagosaurus\nTrrangosaurus\n\n\nIteration: 22000, Loss: 22.710545\n\nOnyxromicoraurospareiosatrus\nLiga\nMustoffankeugoptardoros\nOla\nYusodogongterosaurus\nEhaerona\nTrododongxernochenhus\n\n\nIteration: 24000, Loss: 22.604827\n\nMeustognathiterhucoplithaloptha\nJigaadosaurus\nKurrodon\nMecaistheansaurus\nYuromelosaurus\nEiaeropeeton\nTroenathiteritaus\n\n\nIteration: 26000, Loss: 22.714486\n\nNhyxosaurus\nKola\nLvrosaurus\nNecalosaurus\nYurolonlus\nEjakosaurus\nTroindronykus\n\n\nIteration: 28000, Loss: 22.647640\n\nOnyxosaurus\nLoceahosaurus\nLustleonlonx\nOlabasicachudrakhurgawamosaurus\nYtrojianiisaurus\nEladon\nTromacimathoshargicitan\n\n\nIteration: 30000, Loss: 22.598485\n\nOryuton\nLocaaesaurus\nLustoendosaurus\nOlaahus\nYusaurus\nEhadopldarshuellus\nTroia\n\n\nIteration: 32000, Loss: 22.211861\n\nMeutronlapsaurus\nKracallthcaps\nLustrathus\nMacairugeanosaurus\nYusidoneraverataus\nEialosaurus\nTroimaniathonsaurus\n\n\nIteration: 34000, Loss: 22.447230\n\nOnyxipaledisons\nKiabaeropa\nLussiamang\nPacaeptabalsaurus\nXosalong\nEiacoteg\nTroia\n\n\n"
]
],
[
[
"** Expected Output**\n\nThe output of your model may look different, but it will look something like this:\n\n```Python\nIteration: 34000, Loss: 22.447230\n\nOnyxipaledisons\nKiabaeropa\nLussiamang\nPacaeptabalsaurus\nXosalong\nEiacoteg\nTroia\n```",
"_____no_output_____"
],
[
"## Conclusion\n\nYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.\n\nIf your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! \n\nThis assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!\n\n<img src=\"images/mangosaurus.jpeg\" style=\"width:250;height:300px;\">",
"_____no_output_____"
],
[
"## 4 - Writing like Shakespeare\n\nThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. \n\nA similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere a sequence can influence what should be a different character much much later in the sequence. These long term dependencies were less important with dinosaur names, since the names were quite short. \n\n\n<img src=\"images/shakespeare.jpg\" style=\"width:500;height:400px;\">\n<caption><center> Let's become poets! </center></caption>\n\nWe have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes. ",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Model, load_model, Sequential\nfrom keras.layers import Dense, Activation, Dropout, Input, Masking\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\nfrom shakespeare_utils import *\nimport sys\nimport io",
"Using TensorFlow backend.\n"
]
],
[
[
"To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*\"The Sonnets\"*](shakespeare.txt). ",
"_____no_output_____"
],
[
"Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt asking you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try \"Forsooth this maketh no sense \" (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well. \n",
"_____no_output_____"
]
],
[
[
"print_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])",
"Epoch 1/1\n31412/31412 [==============================] - 263s - loss: 2.5635 \n"
],
[
"# Run this cell to try with different inputs without having to re-train the model \ngenerate_output()",
"Write the beginning of your poem, the Shakespeare machine will complete it. Your input is: Forsooth this maketh no sense \n\n\nHere is your poem: \n\nForsooth this maketh no sense renping.\nbut a did sind make hil ons dede is men,\nwithou's inus will o decanot,\nlek o whle thou debert should every forted,\nwhice muse whe bow way i hath, wom's (leccanny,\nthat minge in adited and forso caned doass,\nwith this of ares mote di have no lade peres,\nin live mater for worre la cinse with will when thy comserd srobld,\nnow, boroor, my ho thate thought dewhice hevinging,\nnow not his wrecio"
]
],
[
[
"The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:\n- LSTMs instead of the basic RNN to capture longer-range dependencies\n- The model is a deeper, stacked LSTM model (2 layer)\n- Using Keras instead of python to simplify the code \n\nIf you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.\n\nCongratulations on finishing this notebook! ",
"_____no_output_____"
],
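[
"For reference, a 2-layer stacked character-level LSTM in Keras might look like the sketch below. This is our own illustration, not the exact model loaded from `shakespeare_utils` (re-running it would replace the loaded `model`); `Tx` and `vocab_size` are hypothetical values standing in for the sequence length and alphabet size:\n\n```Python\nfrom keras.models import Sequential\nfrom keras.layers import LSTM, Dense, Activation\n\nTx, vocab_size = 40, 38  # hypothetical sequence length and alphabet size\n\nsketch_model = Sequential()\nsketch_model.add(LSTM(128, return_sequences=True, input_shape=(Tx, vocab_size)))  # layer 1 passes the full sequence on\nsketch_model.add(LSTM(128))                                                       # layer 2 keeps only the last hidden state\nsketch_model.add(Dense(vocab_size))\nsketch_model.add(Activation('softmax'))\nsketch_model.compile(loss='categorical_crossentropy', optimizer='adam')\n```",
"_____no_output_____"
],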
[
"**References**:\n- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).\n- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0b1047b745faad550424ae533f7b343fbb1ef77 | 105,956 | ipynb | Jupyter Notebook | Pose_Detection.ipynb | jaymyname/Computervision | 8e52b6bb7e18615dcc5a8bf95373e05dbddb17d3 | [
"Apache-2.0"
] | null | null | null | Pose_Detection.ipynb | jaymyname/Computervision | 8e52b6bb7e18615dcc5a8bf95373e05dbddb17d3 | [
"Apache-2.0"
] | null | null | null | Pose_Detection.ipynb | jaymyname/Computervision | 8e52b6bb7e18615dcc5a8bf95373e05dbddb17d3 | [
"Apache-2.0"
] | null | null | null | 333.194969 | 32,480 | 0.925394 | [
[
[
"import cv2 as cv\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport argparse",
"_____no_output_____"
],
[
"net = cv.dnn.readNetFromTensorflow(\"graph_opt.pb\")",
"_____no_output_____"
],
[
"inWidth = 368\ninHeight = 368\nthr = 0.2",
"_____no_output_____"
],
[
"BODY_PARTS = { \"Nose\": 0, \"Neck\": 1, \"RShoulder\": 2, \"RElbow\": 3, \"RWrist\": 4,\n \"LShoulder\": 5, \"LElbow\": 6, \"LWrist\": 7, \"RHip\": 8, \"RKnee\": 9,\n \"RAnkle\": 10, \"LHip\": 11, \"LKnee\": 12, \"LAnkle\": 13, \"REye\": 14,\n \"LEye\": 15, \"REar\": 16, \"LEar\": 17, \"Background\": 18 }\n\nPOSE_PAIRS = [ [\"Neck\", \"RShoulder\"], [\"Neck\", \"LShoulder\"], [\"RShoulder\", \"RElbow\"],\n [\"RElbow\", \"RWrist\"], [\"LShoulder\", \"LElbow\"], [\"LElbow\", \"LWrist\"],\n [\"Neck\", \"RHip\"], [\"RHip\", \"RKnee\"], [\"RKnee\", \"RAnkle\"], [\"Neck\", \"LHip\"],\n [\"LHip\", \"LKnee\"], [\"LKnee\", \"LAnkle\"], [\"Neck\", \"Nose\"], [\"Nose\", \"REye\"],\n [\"REye\", \"REar\"], [\"Nose\", \"LEye\"], [\"LEye\", \"LEar\"] ]\n",
"_____no_output_____"
],
[
"img = cv.imread(\"image.jpg\")",
"_____no_output_____"
],
[
"plt.imshow(img)",
"_____no_output_____"
],
[
"plt.imshow(cv.cvtColor(img,cv.COLOR_BGR2RGB))",
"_____no_output_____"
],
[
"def pose_estiamtion(frame):\n frameWidth = frame.shape[1]\n frameHeight = frame.shape[0]\n net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))\n out = net.forward()\n out = out[:, :19, :, :] # MobileNet output [1, 57, -1, -1], we only need the first 19 elements\n assert(len(BODY_PARTS) == out.shape[1])\n points = []\n for i in range(len(BODY_PARTS)):\n # Slice heatmap of corresponging body's part.\n heatMap = out[0, i, :, :]\n\n # Originally, we try to find all the local maximums. To simplify a sample\n # we just find a global one. However only a single pose at the same time\n # could be detected this way.\n _, conf, _, point = cv.minMaxLoc(heatMap)\n x = (frameWidth * point[0]) / out.shape[3]\n y = (frameHeight * point[1]) / out.shape[2]\n # Add a point if it's confidence is higher than threshold.\n points.append((int(x), int(y)) if conf > thr else None)\n \n for pair in POSE_PAIRS:\n partFrom = pair[0]\n partTo = pair[1]\n assert(partFrom in BODY_PARTS)\n assert(partTo in BODY_PARTS)\n\n idFrom = BODY_PARTS[partFrom]\n idTo = BODY_PARTS[partTo]\n\n if points[idFrom] and points[idTo]:\n cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)\n cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)\n cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)\n \n t, _ = net.getPerfProfile()\n freq = cv.getTickFrequency() / 1000\n cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))\n return frame",
"_____no_output_____"
],
[
"estimated_image = pose_estiamtion(img)",
"_____no_output_____"
],
[
"plt.imshow(cv.cvtColor(estimated_image,cv.COLOR_BGR2RGB))",
"_____no_output_____"
],
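[
"# Optional: persist the annotated image to disk for later inspection.\n# The output filename is our own choice, not part of the original tutorial.\ncv.imwrite(\"pose_output.jpg\", estimated_image)",
"_____no_output_____"
],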
[
"cap = cv.VideoCapture(1)\ncap.set(cv.CAP_PROP_FPS,10)\ncap.set(3,800)\ncap.set(4,800)\n\nif not cap.isOpened():\n cap = cv.VideoCapture(0)\nif not cap.isOpened():\n raise IOError(\"Cannot open the Webcame\")\n\nwhile cv.waitKey(1) < 0 :\n hasFrame, frame = cap.read()\n if not hasFrame:\n cv.waitKey()\n break\n frameWidth = frame.shape[1]\n frameHeight = frame.shape[0]\n net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))\n out = net.forward()\n out = out[:, :19, :, :] # MobileNet output [1, 57, -1, -1], we only need the first 19 elements\n assert(len(BODY_PARTS) == out.shape[1])\n points = []\n for i in range(len(BODY_PARTS)):\n # Slice heatmap of corresponging body's part.\n heatMap = out[0, i, :, :]\n\n # Originally, we try to find all the local maximums. To simplify a sample\n # we just find a global one. However only a single pose at the same time\n # could be detected this way.\n _, conf, _, point = cv.minMaxLoc(heatMap)\n x = (frameWidth * point[0]) / out.shape[3]\n y = (frameHeight * point[1]) / out.shape[2]\n # Add a point if it's confidence is higher than threshold.\n points.append((int(x), int(y)) if conf > thr else None)\n \n for pair in POSE_PAIRS:\n partFrom = pair[0]\n partTo = pair[1]\n assert(partFrom in BODY_PARTS)\n assert(partTo in BODY_PARTS)\n\n idFrom = BODY_PARTS[partFrom]\n idTo = BODY_PARTS[partTo]\n\n if points[idFrom] and points[idTo]:\n cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)\n cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)\n cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)\n \n t, _ = net.getPerfProfile()\n freq = cv.getTickFrequency() / 1000\n cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))\n \n cv.imshow('Pose Estimtion Tutorial',frame)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b1067bbd9ef85390b826b1f4f9c5dd924f8480 | 88,243 | ipynb | Jupyter Notebook | notebooks/Educational-Data-Log-Analysis.ipynb | Bessy-Mukaria/Educational-Data-Log-Analysis | 56432364e3a4400b53f7396d43fab8f6657e40d8 | [
"MIT"
] | 1 | 2021-04-26T18:15:29.000Z | 2021-04-26T18:15:29.000Z | notebooks/Educational-Data-Log-Analysis.ipynb | Bessy-Mukaria/Educational-Data-Log-Analysis | 56432364e3a4400b53f7396d43fab8f6657e40d8 | [
"MIT"
] | null | null | null | notebooks/Educational-Data-Log-Analysis.ipynb | Bessy-Mukaria/Educational-Data-Log-Analysis | 56432364e3a4400b53f7396d43fab8f6657e40d8 | [
"MIT"
] | null | null | null | 27.319814 | 1,121 | 0.364799 | [
[
[
"## Moodle Database: Educational Data Log Analysis \nThe Moodle LMS is a free and open-source learning management system written in PHP and distributed under the GNU General Public License. It is used for blended learning, distance education, flipped classroom and other e-learning projects in schools, universities, workplaces and other sectors. With customizable management features, it is used to create private websites with online courses for educators and trainers to achieve learning goals. Moodle allows for extending and tailoring learning environments using community-sourced plugins .\n\nIn this notebokk we are going to explore the 10 Academy Moodle logs stored in the database together with many other relevant tables. ",
"_____no_output_____"
],
[
"# Table of content\n1. Installing the required libraries \n2. Importing the required libraries \n3. Moodle database understanding \n4. Data Extraction Transformation and Loading (ETL)",
"_____no_output_____"
],
[
"### Installing the necessary libraries",
"_____no_output_____"
]
],
[
[
"#!pip install ipython-sql\n#!pip install sqlalchemy\n#!pip install psycopg2",
"_____no_output_____"
]
],
[
[
"### Importing necessary libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom sqlalchemy import create_engine\nimport psycopg2\nimport logging\nfrom IPython.display import display",
"_____no_output_____"
],
[
"#allowing connection to the database\n%load_ext sql",
"The sql extension is already loaded. To reload it, use:\n %reload_ext sql\n"
],
[
"#ipython-sql\n%sql postgresql://bessy:Streetdance53@localhost/moodle",
"_____no_output_____"
],
[
"#sqlalchemy\nengine = create_engine('postgresql://bessy:Streetdance53@localhost/moodle')",
"_____no_output_____"
]
],
[
[
"### Moodle database Understanding.\nNow, let's have a glance of how some of the tables look like.We will consider the following tables;\n`mdl_logstore_standard_log`,\n`mdl_context`,\n`mdl_user`,\n`mdl_course`,\n`mdl_modules `,\n`mdl_course_modules`,\n`mdl_course_modules_completion`, \n`mdl_grade_items`,\n`mdl_grade_grades`,\n`mdl_grade_categories`,\n`mdl_grade_items_history`,\n`mdl_grade_grades_history`,\n`mdl_grade_categories_history`,\n`mdl_forum`,\n`mdl_forum_discussions`,\n`mdl_forum_posts`.",
"_____no_output_____"
],
[
"`Table:mdl_logstore_standard_log`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT *FROM mdl_logstore_standard_log LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`Table: mdl_context`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_context LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_course`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_course LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_user`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_user LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_modules`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_modules LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_course_modules`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_course_modules LIMIT 3;",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_course_modules_completion`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_course_modules_completion LIMIT 3",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"`mdl_grade_grades`",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT * FROM mdl_grade_grades LIMIT 3",
" * postgresql://bessy:***@localhost/moodle\n3 rows affected.\n"
]
],
[
[
"### Number of tables in the database;",
"_____no_output_____"
]
],
[
[
"%%sql\nSELECT COUNT(*) FROM information_schema.tables",
" * postgresql://bessy:***@localhost/moodle\n1 rows affected.\n"
]
],
[
[
"### Number of records in the following tables; \n",
"_____no_output_____"
]
],
[
[
"mit = ['mdl_logstore_standard_log', 'mdl_context', 'mdl_user', 'mdl_course', 'mdl_modules' , 'mdl_course_modules', 'mdl_course_modules_completion',\n 'mdl_grade_items', 'mdl_grade_grades', 'mdl_grade_categories', 'mdl_grade_items_history', 'mdl_grade_grades_history', \n 'mdl_grade_categories_history', 'mdl_forum', 'mdl_forum_discussions', 'mdl_forum_posts']\n\n# fetches and returns number of records of a given table in a moodle database\ndef table_count(table):\n count = %sql SELECT COUNT(*) as {table}_count from {table}\n return count\n\nfor table in mit:\n display(table_count(table))",
" * postgresql://bessy:***@localhost/moodle\n1 rows affected.\n"
]
],
[
[
"### Number of quiz submission by time",
"_____no_output_____"
]
],
[
[
"%%sql\nselect date_part('hour', timestamp with time zone 'epoch' + timefinish * interval '1 second') as hour, count(1)\nfrom mdl_quiz_attempts qa\nwhere qa.preview = 0 and qa.timefinish <> 0\ngroup by date_part('hour', timestamp with time zone 'epoch' + timefinish * interval '1 second')\norder by hour",
" * postgresql://bessy:***@localhost/moodle\n24 rows affected.\n"
],
[
"%%sql\nSELECT COUNT(id), EXTRACT(HOUR FROM to_timestamp(timecreated)) FROM mdl_logstore_standard_log WHERE action ='submitted' AND component='mod_quiz'\ngroup by EXTRACT(HOUR FROM to_timestamp(timecreated));",
" * postgresql://bessy:***@localhost/moodle\n24 rows affected.\n"
]
],
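[
[
"# A pandas cross-check of the hourly submission counts above (a sketch that\n# reuses the sqlalchemy `engine` defined earlier; variable and column names\n# here are our own choices).\nquiz_df = pd.read_sql(\"SELECT timecreated FROM mdl_logstore_standard_log \"\n                      \"WHERE action = 'submitted' AND component = 'mod_quiz'\", engine)\nquiz_df['hour'] = pd.to_datetime(quiz_df['timecreated'], unit='s').dt.hour\nquiz_df.groupby('hour').size()",
"_____no_output_____"
]
],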
[
[
"## Monthly usage time of learners who have confirmed and are not deleted",
"_____no_output_____"
]
],
[
[
"%%sql\nselect extract(month from to_timestamp(mdl_stats_user_monthly.timeend)) as calendar_month,\n count(distinct mdl_stats_user_monthly.userid) as total_users\nfrom mdl_stats_user_monthly\n inner join mdl_role_assignments on mdl_stats_user_monthly.userid = mdl_role_assignments.userid\n inner join mdl_context on mdl_role_assignments.contextid = mdl_context.id\nwhere mdl_stats_user_monthly.stattype = 'activity'\n and mdl_stats_user_monthly.courseid <>1\ngroup by extract(month from to_timestamp(mdl_stats_user_monthly.timeend))\norder by extract(month from to_timestamp(mdl_stats_user_monthly.timeend))",
" * postgresql://bessy:***@localhost/moodle\n2 rows affected.\n"
],
[
"%%sql \nSELECT COUNT(lastaccess - firstaccess) AS usagetime, EXTRACT (MONTH FROM to_timestamp(firstaccess)) AS month \nFROM mdl_user WHERE confirmed = 1 AND deleted = 0 GROUP BY EXTRACT (MONTH FROM to_timestamp(firstaccess))",
" * postgresql://bessy:***@localhost/moodle\n7 rows affected.\n"
]
],
[
[
"## Count of log events per user",
"_____no_output_____"
]
],
[
[
"actions = ['loggedin', 'viewed', 'started', 'submitted', 'uploaded', 'updated', 'searched', \n 'answered', 'attempted', 'abandoned']\n\n# fetch and return count of log events of an action per user\ndef event_count(action):\n count = %sql SELECT userid, COUNT(action) AS {action}_count FROM mdl_logstore_standard_log WHERE action='{action}' GROUP BY userid limit 5\n return count\n\nfor action in actions:\n display(event_count(action))",
" * postgresql://bessy:***@localhost/moodle\n5 rows affected.\n"
]
],
[
[
"### python class to pull \n* Overall grade of learners \n* Number of forum posts\n",
"_____no_output_____"
]
],
[
[
"class PullGrade():\n def __init__(self):\n pass\n \n def open_db(self, **kwargs):\n # extract args, if they are not provided assign a default value\n user = kwargs.get('user', 'briodev')\n password = kwargs.get('password', '14ConnectPsq')\n db = kwargs.get('db', 'moodle')\n \n # make a connection to PostgreSQL\n # use exception to show error message if failed to connect\n try:\n params = dict(user=user, \n password=password,\n host=\"127.0.0.1\",\n port = \"5432\",\n database = db)\n proot = 'postgresql://{user}@{host}:5432/{database}'.format(**params)\n logging.info('Connecting to the PostgreSQL database... using sqlalchemy engine')\n engine = create_engine(proot)\n except (Exception, psycopg2.Error) as error:\n logging.error(r\"Error while connecting to PostgreSQL {error}\")\n \n return engine\n \n # fetch and return number of forum posts\n def forum_posts(self):\n count = %sql SELECT COUNT(*) from mdl_forum_posts\n return count\n \n # fetch and return overall grade of learners\n def overall_grade(self):\n overall = %sql SELECT userid, round(SUM(finalgrade)/count(*), 2) as overall_grade from mdl_grade_grades WHERE finalgrade is not null group by userid LIMIT 10\n return overall",
"_____no_output_____"
],
[
"db = PullGrade()\ndb.open_db()",
"_____no_output_____"
],
[
"#Forum_posts\ndb.forum_posts()",
" * postgresql://bessy:***@localhost/moodle\n1 rows affected.\n"
],
[
"#Overall grade.\ndb.overall_grade()",
" * postgresql://bessy:***@localhost/moodle\n10 rows affected.\n"
]
],
[
[
"### Data Extraction Transformation and Loading (ETL)",
"_____no_output_____"
]
],
[
[
"#reading the mdl_logstore_standard_log\nlog_df = pd.read_sql(\"select * from mdl_logstore_standard_log\", engine)",
"_____no_output_____"
],
[
"def top_x(df, percent):\n total_len = df.shape[0]\n top = int((total_len * percent)/100)\n return df.iloc[:top,]",
"_____no_output_____"
]
],
[
[
"### Login count",
"_____no_output_____"
]
],
[
[
"log_df_logged_in = log_df[log_df.action == 'loggedin'][['userid', 'action']]\nlogin_by_user = log_df_logged_in.groupby('userid').count().sort_values('action', ascending=False)",
"_____no_output_____"
],
[
"login_by_user.columns = [\"login_count\"]\ntop_x(login_by_user, 1)",
"_____no_output_____"
]
],
[
[
"### Activity count",
"_____no_output_____"
]
],
[
[
"activity_log = log_df[['userid', 'action']]\nactivity_log_by_user = activity_log.groupby('userid').count().sort_values('action', ascending=False)",
"_____no_output_____"
],
[
"activity_log_by_user.columns = ['activity_count']\ntop_x(activity_log_by_user, 1)",
"_____no_output_____"
],
[
"log_in_out = log_df[(log_df.action == \"loggedin\") | (log_df.action == \"loggedout\")]",
"_____no_output_____"
],
[
"user_id = log_df.userid.unique()\n\nd_times = {}\n\nfor user in user_id:\n log_user = log_df[log_df.userid == user].sort_values('timecreated')\n \n d_time = 0 \n isLoggedIn = 0\n loggedIn_timecreated = 0\n \n for i in range(len(log_user)): \n row = log_user.iloc[i,]\n \n row_next = log_user.iloc[i+1,] if i+1 < len(log_user) else row\n \n if(row.action == \"loggedin\"): \n isLoggedIn = 1\n loggedIn_timecreated = row.timecreated\n\n if( (i+1 == len(log_user)) | ( (row_next.action == \"loggedin\") & (isLoggedIn == 1) ) ):\n d_time += row.timecreated - loggedIn_timecreated\n isLoggedIn = 0\n\n d_times[user] = d_time\n",
"_____no_output_____"
],
[
"\ndedication_time_df = pd.DataFrame({'userid':list(d_times.keys()),\n 'dedication_time':list(d_times.values())})",
"_____no_output_____"
],
[
"dedication_time_df",
"_____no_output_____"
],
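[
"# Dedication time above is in seconds (epoch-timestamp differences); a more\n# readable view in hours -- the new column name is our own choice.\ndedication_time_df['dedication_hours'] = (dedication_time_df['dedication_time'] / 3600).round(2)\ndedication_time_df.head()",
"_____no_output_____"
],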
[
"top_x(dedication_time_df.sort_values('dedication_time', ascending=False), 35)",
"_____no_output_____"
]
],
[
[
"### References\n*\thttps://docs.moodle.org/39/en/Custom_SQL_queries_report\n*\thttps://docs.moodle.org/39/en/ad-hoc_contributed_reports\n*\thttps://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.667&rep=rep1&type=pdf\n*\thttp://informatics.ue-varna.bg/conference19/Conf.proceedings_Informatics-50.years%20177-187.pdf\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0b1354d9162ab1a8d07dd2dd7778abe28e31107 | 968 | ipynb | Jupyter Notebook | lectures/4-notes.ipynb | JakobGM/machine-learning-course | 657e692e438ce997ed937544ba559f3790edba4e | [
"MIT"
] | 1 | 2020-02-28T09:07:25.000Z | 2020-02-28T09:07:25.000Z | lectures/4-notes.ipynb | berild/machine-learning-course | 657e692e438ce997ed937544ba559f3790edba4e | [
"MIT"
] | 4 | 2020-03-24T17:31:18.000Z | 2021-08-23T20:19:17.000Z | lectures/4-notes.ipynb | berild/machine-learning-course | 657e692e438ce997ed937544ba559f3790edba4e | [
"MIT"
] | 2 | 2019-09-13T07:30:13.000Z | 2020-02-04T10:36:13.000Z | 24.820513 | 118 | 0.582645 | [
[
[
"# Lecture 4\n\n* Optidigits notebook in day 2 directory should be looked at.\n* Slide \"Other tools\" feature index vs row index, `x.value_counts()`, `x.isnull()` should be used in report\n* `pd.scatter_matrix(df)`, `plt.matshow(df.corr(df))`. Group features.\n* Use test set without labels in exploratory data analysis.\n* NA values should be replaced with the `MICE` R-package, subshelling out.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0b139ecea88011812cc8728d2a37a6c3265afed | 19,369 | ipynb | Jupyter Notebook | examples/catboost_examples/regression.ipynb | beyondacm/hyperparameter_hunter | f243ad95deeb08db73bc1b1b8ed66fed9c761ea5 | [
"MIT"
] | 688 | 2018-06-01T23:43:28.000Z | 2022-03-23T06:37:20.000Z | examples/catboost_examples/regression.ipynb | beyondacm/hyperparameter_hunter | f243ad95deeb08db73bc1b1b8ed66fed9c761ea5 | [
"MIT"
] | 188 | 2018-07-09T23:22:31.000Z | 2021-04-01T07:43:46.000Z | examples/catboost_examples/regression.ipynb | beyondacm/hyperparameter_hunter | f243ad95deeb08db73bc1b1b8ed66fed9c761ea5 | [
"MIT"
] | 100 | 2018-08-28T03:30:47.000Z | 2022-01-25T04:37:11.000Z | 40.94926 | 501 | 0.49827 | [
[
[
"# Format DataFrame",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom sklearn.datasets import make_regression\n\ndata = make_regression(n_samples=600, n_features=50, noise=0.1, random_state=42)\ntrain_df = pd.DataFrame(data[0], columns=[\"x_{}\".format(_) for _ in range(data[0].shape[1])])\ntrain_df[\"target\"] = data[1]\n\nprint(train_df.shape)\ntrain_df.head()",
"(600, 51)\n"
]
],
[
[
"# Set Up Environment",
"_____no_output_____"
]
],
[
[
"from hyperparameter_hunter import Environment, CVExperiment\nfrom sklearn.metrics import explained_variance_score\n\nenv = Environment(\n train_dataset=train_df,\n results_path=\"HyperparameterHunterAssets\",\n metrics=dict(evs=explained_variance_score),\n cv_type=\"KFold\",\n cv_params=dict(n_splits=3, shuffle=True, random_state=1337),\n runs=2,\n)",
"Cross-Experiment Key: 'Sxkz36nLbTyi4QJM7w6wCWkbIucXm0RbzMsirLPuYmw='\n"
]
],
[
[
"Now that HyperparameterHunter has an active `Environment`, we can do two things:\n\n# 1. Perform Experiments\n\n*Note: If this is your first HyperparameterHunter example, the CatBoost classification example may be a better starting point.*\n\nIn this Experiment, we're also going to use `model_extra_params` to provide arguments to `CatBoostRegressor`'s `fit` method, just like we would if we weren't using HyperparameterHunter.\n\nWe'll be using the `verbose` argument to print evaluations of our `CatBoostRegressor` every 50 iterations, and we'll also be using the dataset sentinels offered by `Environment`. You can read more about the exciting thing you can do with the `Environment` sentinels in the documentation and in the example dedicated to them. For now, though, we'll be using them to provide each fold's `env.validation_input`, and `env.validation_target` to `CatBoostRegressor.fit` via its `eval_set` argument.\n\nYou could also easily add `CatBoostRegressor.fit`'s `early_stopping_rounds` argument to `model_extra_params[\"fit\"]` to use early stopping, but doing so here with only `iterations=100` doesn't make much sense.",
"_____no_output_____"
]
],
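[
[
"For longer runs, early stopping is a one-line addition to the same `fit` parameters -- a sketch under the assumption of a larger `iterations` budget (the `20`-round patience is our own illustrative choice, and `env` is the `Environment` defined above):\n\n```Python\nmodel_extra_params = dict(\n    fit=dict(\n        verbose=50,\n        early_stopping_rounds=20,\n        eval_set=[(env.validation_input, env.validation_target)],\n    ),\n)\n```",
"_____no_output_____"
]
],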
[
[
"from catboost import CatBoostRegressor\n\nexperiment = CVExperiment(\n model_initializer=CatBoostRegressor,\n model_init_params=dict(\n iterations=100,\n learning_rate=0.05,\n depth=5,\n bootstrap_type=\"Bayesian\",\n save_snapshot=False,\n allow_writing_files=False,\n ),\n model_extra_params=dict(\n fit=dict(\n verbose=50,\n eval_set=[(env.validation_input, env.validation_target)],\n ),\n ),\n)",
"<22:13:22> Validated Environment: 'Sxkz36nLbTyi4QJM7w6wCWkbIucXm0RbzMsirLPuYmw='\n<22:13:22> Initialized Experiment: 'df2982f6-ab54-4c74-80b6-ad1c1e49fe45'\n<22:13:22> Hyperparameter Key: '913_iDDPY_PMp5ulOyK251mgLCZ2qau7kjOvDdy1rz8='\n<22:13:22> \n0:\tlearn: 181.0383680\ttest: 179.9132919\tbest: 179.9132919 (0)\ttotal: 63ms\tremaining: 6.24s\n50:\tlearn: 97.8034574\ttest: 110.2078251\tbest: 110.2078251 (50)\ttotal: 427ms\tremaining: 410ms\n99:\tlearn: 65.3572502\ttest: 86.2235253\tbest: 86.2235253 (99)\ttotal: 783ms\tremaining: 0us\n\nbestTest = 86.22352531\nbestIteration = 99\n\n<22:13:22> F0/R0 | OOF(evs=0.77530) | Time Elapsed: 0.81737 s\n0:\tlearn: 180.3273426\ttest: 178.7632887\tbest: 178.7632887 (0)\ttotal: 7.5ms\tremaining: 742ms\n50:\tlearn: 95.3065291\ttest: 108.1124182\tbest: 108.1124182 (50)\ttotal: 373ms\tremaining: 358ms\n99:\tlearn: 63.7322170\ttest: 85.4546004\tbest: 85.4546004 (99)\ttotal: 724ms\tremaining: 0us\n\nbestTest = 85.45460041\nbestIteration = 99\n\n<22:13:23> F0/R1 | OOF(evs=0.77929) | Time Elapsed: 0.73847 s\n<22:13:23> F0.0 AVG: OOF(evs=0.77998) | Time Elapsed: 1.56024 s\n0:\tlearn: 183.5077544\ttest: 173.1917854\tbest: 173.1917854 (0)\ttotal: 7.24ms\tremaining: 717ms\n50:\tlearn: 100.5844263\ttest: 108.1710145\tbest: 108.1710145 (50)\ttotal: 375ms\tremaining: 360ms\n99:\tlearn: 67.5414394\ttest: 85.5053407\tbest: 85.5053407 (99)\ttotal: 734ms\tremaining: 0us\n\nbestTest = 85.5053407\nbestIteration = 99\n\n<22:13:24> F1/R0 | OOF(evs=0.76335) | Time Elapsed: 0.7484 s\n0:\tlearn: 183.4412848\ttest: 172.7748236\tbest: 172.7748236 (0)\ttotal: 7.37ms\tremaining: 730ms\n50:\tlearn: 98.9207169\ttest: 107.3348632\tbest: 107.3348632 (50)\ttotal: 372ms\tremaining: 357ms\n99:\tlearn: 67.1194369\ttest: 84.1135630\tbest: 84.1135630 (99)\ttotal: 728ms\tremaining: 0us\n\nbestTest = 84.11356298\nbestIteration = 99\n\n<22:13:25> F1/R1 | OOF(evs=0.77169) | Time Elapsed: 0.74292 s\n<22:13:25> F0.1 AVG: OOF(evs=0.77061) | Time Elapsed: 1.49607 s\n0:\tlearn: 176.0961712\ttest: 188.8149185\tbest: 188.8149185 (0)\ttotal: 6.9ms\tremaining: 683ms\n50:\tlearn: 94.2771704\ttest: 120.4426175\tbest: 120.4426175 (50)\ttotal: 375ms\tremaining: 360ms\n99:\tlearn: 67.9729670\ttest: 99.9417843\tbest: 99.9417843 (99)\ttotal: 730ms\tremaining: 0us\n\nbestTest = 99.94178427\nbestIteration = 99\n\n<22:13:25> F2/R0 | OOF(evs=0.72638) | Time Elapsed: 0.74454 s\n0:\tlearn: 177.0392612\ttest: 189.8521136\tbest: 189.8521136 (0)\ttotal: 6.98ms\tremaining: 691ms\n50:\tlearn: 95.5295037\ttest: 121.3560567\tbest: 121.3560567 (50)\ttotal: 377ms\tremaining: 362ms\n99:\tlearn: 66.3592641\ttest: 99.0826473\tbest: 99.0826473 (99)\ttotal: 736ms\tremaining: 0us\n\nbestTest = 99.08264731\nbestIteration = 99\n\n<22:13:26> F2/R1 | OOF(evs=0.73186) | Time Elapsed: 0.75069 s\n<22:13:26> F0.2 AVG: OOF(evs=0.73123) | Time Elapsed: 1.49972 s\n<22:13:26> \n<22:13:26> FINAL: OOF(evs=0.75971) | Time Elapsed: 4.56196 s\n<22:13:26> \n<22:13:26> Saving results for Experiment: 'df2982f6-ab54-4c74-80b6-ad1c1e49fe45'\n"
]
],
[
[
"Notice above that CatBoost printed scores for our `eval_set` every 50 iterations just like we said in `model_extra_params[\"fit\"]`; although, it made our results rather difficult to read, so we'll switch back to `verbose=False` during optimization.\n\n# 2. Hyperparameter Optimization\n\nNotice below that `optimizer` still recognizes the results of `experiment` as valid learning material even though their `verbose` values differ. This is because it knows that `verbose` has no effect on actual results.",
"_____no_output_____"
]
],
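[
[
"A note on the optimizer choice: `DummyOptPro` samples the search space at random, which is fine for a demo. If the installed version exposes a model-based optimizer (recent releases ship `BayesianOptPro`), swapping it in is a one-line change -- a sketch, keeping the same `forge_experiment` call as below:\n\n```Python\nfrom hyperparameter_hunter import BayesianOptPro\n\noptimizer = BayesianOptPro(iterations=10, random_state=777)\n```",
"_____no_output_____"
]
],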
[
[
"from hyperparameter_hunter import DummyOptPro, Real, Integer, Categorical\n\noptimizer = DummyOptPro(iterations=10, random_state=777)\n\noptimizer.forge_experiment(\n model_initializer=CatBoostRegressor,\n model_init_params=dict(\n iterations=100,\n learning_rate=Real(0.001, 0.2),\n depth=Integer(3, 7),\n bootstrap_type=Categorical([\"Bayesian\", \"Bernoulli\"]),\n save_snapshot=False,\n allow_writing_files=False,\n ),\n model_extra_params=dict(\n fit=dict(\n verbose=False,\n eval_set=[(env.validation_input, env.validation_target)],\n ),\n ),\n)\n\noptimizer.go()",
"Validated Environment with key: \"Sxkz36nLbTyi4QJM7w6wCWkbIucXm0RbzMsirLPuYmw=\"\n\u001b[31mSaved Result Files\u001b[0m\n\u001b[31m_________________________________________________________________________________________\u001b[0m\n Step | ID | Time | Value | bootstrap_type | depth | learning_rate | \nExperiments matching cross-experiment key/algorithm: 1\nExperiments fitting in the given space: 1\nExperiments matching current guidelines: 1\n 0 | df2982f6 | 00m00s | \u001b[35m 0.75971\u001b[0m | \u001b[32m Bayesian\u001b[0m | \u001b[32m 5\u001b[0m | \u001b[32m 0.0500\u001b[0m | \n\u001b[31mHyperparameter Optimization\u001b[0m\n\u001b[31m_________________________________________________________________________________________\u001b[0m\n Step | ID | Time | Value | bootstrap_type | depth | learning_rate | \n 1 | e21de31a | 00m19s | \u001b[35m 0.86756\u001b[0m | \u001b[32m Bayesian\u001b[0m | \u001b[32m 7\u001b[0m | \u001b[32m 0.1719\u001b[0m | \n 2 | ea45250b | 00m02s | \u001b[35m 0.89994\u001b[0m | \u001b[32m Bayesian\u001b[0m | \u001b[32m 4\u001b[0m | \u001b[32m 0.1510\u001b[0m | \n 3 | dc911fbd | 00m19s | 0.86487 | Bernoulli | 7 | 0.1858 | \n 4 | a344b5f7 | 00m04s | 0.89580 | Bayesian | 5 | 0.1656 | \n 5 | 186a787e | 00m04s | 0.89318 | Bernoulli | 5 | 0.1329 | \n 6 | 30d43916 | 00m02s | \u001b[35m 0.90540\u001b[0m | \u001b[32m Bernoulli\u001b[0m | \u001b[32m 4\u001b[0m | \u001b[32m 0.1422\u001b[0m | \n 7 | bfe06f05 | 00m08s | 0.40699 | Bayesian | 6 | 0.0109 | \n 8 | 514bf91a | 00m19s | 0.87122 | Bernoulli | 7 | 0.1991 | \n 9 | 2109f8fb | 00m08s | 0.84663 | Bayesian | 6 | 0.0910 | \n 10 | ebd045eb | 00m02s | \u001b[35m 0.92369\u001b[0m | \u001b[32m Bernoulli\u001b[0m | \u001b[32m 4\u001b[0m | \u001b[32m 0.1999\u001b[0m | \nOptimization loop completed in 0:01:34.529025\nBest score was 0.9236872318041063 from Experiment \"ebd045eb-1224-49f7-89f3-9ff37bd0fdb7\"\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b13cbd361ec38d6bc8a8ac4a77bcb88bc1db5e | 4,677 | ipynb | Jupyter Notebook | Untitled.ipynb | ieferrari/python-3D-meshes | 84469525f4b6167dba08a425556441c50beceb11 | [
"MIT"
] | null | null | null | Untitled.ipynb | ieferrari/python-3D-meshes | 84469525f4b6167dba08a425556441c50beceb11 | [
"MIT"
] | null | null | null | Untitled.ipynb | ieferrari/python-3D-meshes | 84469525f4b6167dba08a425556441c50beceb11 | [
"MIT"
] | null | null | null | 29.415094 | 435 | 0.545863 | [
[
[
"#!/usr/bin/env python\n\"\"\"File format conversion\ncategory: vtk, file conversion, tomb\"\"\"\nimport os, sys\nimport vtk\n\ninvtkfile='./tubes.vtk'\noutvtpfile ='tubes.vtp'\n\ndef vtk2vtp(invtkfile, outvtpfile, binary=False):\n \"\"\"What it says on the label\"\"\"\n reader = vtk.vtkPolyDataReader()\n reader.SetFileName(invtkfile)\n\n writer = vtk.vtkXMLPolyDataWriter()\n writer.SetFileName(outvtpfile)\n if binary:\n writer.SetFileTypeToBinary()\n writer.SetInput(reader.GetOutput())\n writer.Update()\n\nif __name__ == '__main__':\n args = sys.argv\n binary = False\n if '-b' in args:\n args.remove('-b')\n binary = True\n if len(args) < 2:\n print('Batch converts vtk files to vtp files.\\nUsage:\\n vtk2vtp.py model1.vtk model2.vtk ...')\n print(' [-b] causes output to be in binary format, much smaller vtp file size, if it happens to work')\n sys.exit()\n infiles = args[1:]\n for vtkfile in infiles:\n if vtkfile[-4:] != '.vtk':\n print (vtkfile, \"doesn't look like a vtk file, won't convert\")\n continue\n vtk2vtp(vtkfile, vtkfile[:-4]+'.vtp', binary=binary)",
"-f doesn't look like a vtk file, won't convert\n/home/user/.local/share/jupyter/runtime/kernel-3ceff57c-4ef5-4097-bce5-9b9b56b9707c.json doesn't look like a vtk file, won't convert\n"
],
[
"invtkfile='./tubes.vtk'\noutvtpfile ='./tubes.vtp'\nreader = vtk.vtkPolyDataReader()\nreader.SetFileName(invtkfile)\nreader.Update()  # execute the reader so GetOutput() is populated for the next cell",
"_____no_output_____"
],
[
"writer = vtk.vtkXMLPolyDataWriter()\nwriter.SetFileName(outvtpfile)\nwriter.SetInputData(reader.GetOutput())\nwriter.Update()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0b13d454ed54410a93ea184b173921b0b2b65ac | 203,287 | ipynb | Jupyter Notebook | notebooks/Durables-vs-Nondurables-At-Low-And-High-Frequencies.ipynb | llorracc/Jupyter | b75b59f210e93dfc32b1a98f2df4f9da8ca3ca71 | [
"Apache-2.0"
] | null | null | null | notebooks/Durables-vs-Nondurables-At-Low-And-High-Frequencies.ipynb | llorracc/Jupyter | b75b59f210e93dfc32b1a98f2df4f9da8ca3ca71 | [
"Apache-2.0"
] | 4 | 2018-12-07T04:17:58.000Z | 2019-10-09T02:50:36.000Z | notebooks/Durables-vs-Nondurables-At-Low-And-High-Frequencies.ipynb | llorracc/Jupyter | b75b59f210e93dfc32b1a98f2df4f9da8ca3ca71 | [
"Apache-2.0"
] | 3 | 2019-02-19T20:00:40.000Z | 2020-05-27T15:47:30.000Z | 614.160121 | 121,764 | 0.945107 | [
[
[
"# Durables vs Non Durables At Low And High Frequencies",
"_____no_output_____"
]
],
[
[
"!pip install numpy\n!pip install matplotlib \n!pip install pandas\n!pip install pandas_datareader\n!pip install datetime\n!pip install seaborn\n\n# Some initial setup\nfrom matplotlib import pyplot as plt\nimport numpy as np\nplt.style.use('seaborn-darkgrid')\nimport pandas as pd\nimport pandas_datareader.data as web\nimport datetime\nimport seaborn as sns",
"Requirement already satisfied: numpy in c:\\users\\mateo\\anaconda3\\lib\\site-packages (1.18.5)\nRequirement already satisfied: matplotlib in c:\\users\\mateo\\anaconda3\\lib\\site-packages (3.2.2)\nRequirement already satisfied: numpy>=1.11 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib) (1.18.5)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib) (1.2.0)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib) (2.8.1)\nRequirement already satisfied: six in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from cycler>=0.10->matplotlib) (1.15.0)\nRequirement already satisfied: pandas in c:\\users\\mateo\\anaconda3\\lib\\site-packages (1.0.5)\nRequirement already satisfied: pytz>=2017.2 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas) (2020.1)\nRequirement already satisfied: numpy>=1.13.3 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas) (1.18.5)\nRequirement already satisfied: python-dateutil>=2.6.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas) (2.8.1)\nRequirement already satisfied: six>=1.5 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from python-dateutil>=2.6.1->pandas) (1.15.0)\nRequirement already satisfied: pandas_datareader in c:\\users\\mateo\\anaconda3\\lib\\site-packages (0.9.0)\nRequirement already satisfied: pandas>=0.23 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas_datareader) (1.0.5)\nRequirement already satisfied: requests>=2.19.0 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas_datareader) (2.24.0)\nRequirement already satisfied: lxml in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas_datareader) (4.5.2)\nRequirement already satisfied: python-dateutil>=2.6.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas>=0.23->pandas_datareader) (2.8.1)\nRequirement already satisfied: numpy>=1.13.3 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas>=0.23->pandas_datareader) (1.18.5)\nRequirement already satisfied: pytz>=2017.2 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas>=0.23->pandas_datareader) (2020.1)\nRequirement already satisfied: chardet<4,>=3.0.2 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from requests>=2.19.0->pandas_datareader) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from requests>=2.19.0->pandas_datareader) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from requests>=2.19.0->pandas_datareader) (2020.6.20)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from requests>=2.19.0->pandas_datareader) (1.25.9)\nRequirement already satisfied: six>=1.5 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from python-dateutil>=2.6.1->pandas>=0.23->pandas_datareader) (1.15.0)\nRequirement already satisfied: datetime in c:\\users\\mateo\\anaconda3\\lib\\site-packages (4.3)\nRequirement already satisfied: pytz in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from datetime) (2020.1)\nRequirement already 
satisfied: zope.interface in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from datetime) (4.7.1)\nRequirement already satisfied: setuptools in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from zope.interface->datetime) (49.2.0.post20200714)\nRequirement already satisfied: seaborn in c:\\users\\mateo\\anaconda3\\lib\\site-packages (0.10.1)\nRequirement already satisfied: scipy>=1.0.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from seaborn) (1.5.0)\nRequirement already satisfied: matplotlib>=2.1.2 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from seaborn) (3.2.2)\nRequirement already satisfied: numpy>=1.13.3 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from seaborn) (1.18.5)\nRequirement already satisfied: pandas>=0.22.0 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from seaborn) (1.0.5)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib>=2.1.2->seaborn) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib>=2.1.2->seaborn) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib>=2.1.2->seaborn) (1.2.0)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from matplotlib>=2.1.2->seaborn) (0.10.0)\nRequirement already satisfied: pytz>=2017.2 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from pandas>=0.22.0->seaborn) (2020.1)\nRequirement already satisfied: six>=1.5 in c:\\users\\mateo\\anaconda3\\lib\\site-packages (from python-dateutil>=2.1->matplotlib>=2.1.2->seaborn) (1.15.0)\n"
],
[
"# Import Quarterly data from Fred using Data Reader\n\nstart = datetime.datetime(1947, 1, 1) #beginning of series\nstart1 = datetime.datetime(1956, 10, 1) #beginning of series\n\nend = datetime.datetime(2018, 4, 1) #end of series\n\nPCDG = web.DataReader('PCDG', 'fred', start, end) #loads your durable goods quarterly series data\nPCND= web.DataReader('PCND', 'fred', start, end) #Loads your non durable goods quarterly series data\nPCDG1 = web.DataReader('PCDG', 'fred', start1, end) #loads your durable goods quarterly series data, helps in having time series of identical length\nPCND1= web.DataReader('PCND', 'fred', start1, end) #Loads your non durable goods quarterly series data, , helps in having time series of identical length",
"_____no_output_____"
],
[
"# Constructing PCDG and PCND growth series () \nz1=PCDG.pct_change(periods=40)# 10*4\nz2=PCND.pct_change(periods=40)#10*4\nz3=PCDG1.pct_change(periods=1)# \nz4=PCND1.pct_change(periods=1)#\ns1=z1*100 #(In percentage terms)\ns2=z2*100 #(In percentage terms)\ns3=z3*100 #(In percentage terms)\ns4=z4*100 #(In percentage terms)",
"_____no_output_____"
],
[
"# Plotting the growth rates\nplt.figure(figsize=((14,8))) # set the plot size\nplt.title('Durables vs Non Durables Growth 10 year vs Quarterly')\nplt.xlabel('Year')\nplt.ylabel(' Growth (Percentage Terms)')\nplt.plot(s1,label=\"PCDG 10 year growth\")\nplt.plot(s2,label=\"PCND 10 year growth\")\nplt.plot(s3,label=\"PCDG quarterly growth\")\nplt.plot(s4,label=\"PCND quarterly growth\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"# Drops the missing NAN observations\n\na1=s1.dropna()#Drops the missing values from s1 series\na2=s2.dropna()#Drops the missing values from s2 series\na3=s3.dropna()#Drops the missing values from s3 series\na4=s4.dropna()#Drops the missing values from s4 series",
"_____no_output_____"
],
[
"# concatate (merge) the two series\nc1=pd.concat([a1, a2], axis=1)\nc2=pd.concat([a3, a4], axis=1)",
"_____no_output_____"
],
[
"#Pairwise Plotting for the 10 year growth series\n\nsns.pairplot(c1)\nplt.suptitle('10 Year Growth Rates')\nplt.show()",
"_____no_output_____"
],
[
"#Pairwise Plotting for the quarterly growth series\n\nsns.pairplot(c2)\nplt.suptitle('1 Quarter Growth Rates')\nplt.show()",
"_____no_output_____"
]
],
[
[
"For each frequency [quarterly|10-year] each moment of time would correspond to a single point (x=nondurables growth, y=durables growth). Such a plot shows that at the 10 year frequency, there is a very strong relationship between the two growth rates, and at the 1 quarter frequency, much much less.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0b154b1804ee3d7ced7746003796185e16a6f60 | 5,008 | ipynb | Jupyter Notebook | doc/source/od/methods/iforest.ipynb | arnaudvl/alibi-detect | 573ef3be3435c834489a7b4f2d23e580c8a0a2a2 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2022-02-22T03:02:42.000Z | 2022-02-22T03:02:42.000Z | doc/source/od/methods/iforest.ipynb | arnaudvl/alibi-detect | 573ef3be3435c834489a7b4f2d23e580c8a0a2a2 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-11-08T09:29:08.000Z | 2021-11-09T11:44:05.000Z | doc/source/od/methods/iforest.ipynb | arnaudvl/alibi-detect | 573ef3be3435c834489a7b4f2d23e580c8a0a2a2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | 34.068027 | 722 | 0.611821 | [
[
[
"[source](../../api/alibi_detect.od.isolationforest.rst)",
"_____no_output_____"
],
[
"# Isolation Forest",
"_____no_output_____"
],
[
"## Overview\n\n[Isolation forests](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf) (IF) are tree based models specifically used for outlier detection. The IF isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. The number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node. This path length, averaged over a forest of random trees, is a measure of normality and is used to define an anomaly score. Outliers can typically be isolated quicker, leading to shorter paths. The algorithm is suitable for low to medium dimensional tabular data.",
"_____no_output_____"
],
[
"## Usage\n\n### Initialize\n\nParameters:\n\n* `threshold`: threshold value for the outlier score above which the instance is flagged as an outlier.\n\n* `n_estimators`: number of base estimators in the ensemble. Defaults to 100.\n\n* `max_samples`: number of samples to draw from the training data to train each base estimator. If *int*, draw `max_samples` samples. If *float*, draw `max_samples` *times number of features* samples. If *'auto'*, `max_samples` = min(256, number of samples).\n\n* `max_features`: number of features to draw from the training data to train each base estimator. If *int*, draw `max_features` features. If float, draw `max_features` *times number of features* features.\n\n* `bootstrap`: whether to fit individual trees on random subsets of the training data, sampled with replacement.\n\n* `n_jobs`: number of jobs to run in parallel for `fit` and `predict`.\n\n* `data_type`: can specify data type added to metadata. E.g. *'tabular'* or *'image'*.\n\nInitialized outlier detector example:\n\n```python\nfrom alibi_detect.od import IForest\n\nod = IForest(\n threshold=0.,\n n_estimators=100\n)\n```",
"_____no_output_____"
],
[
"### Fit\n\nWe then need to train the outlier detector. The following parameters can be specified:\n\n* `X`: training batch as a numpy array.\n\n* `sample_weight`: array with shape *(batch size,)* used to assign different weights to each instance during training. Defaults to *None*.\n\n```python\nod.fit(\n X_train\n)\n```\n\nIt is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:\n\n```python\nod.infer_threshold(\n X, \n threshold_perc=95\n)\n```",
"_____no_output_____"
],
[
"### Detect\n\nWe detect outliers by simply calling `predict` on a batch of instances `X` to compute the instance level outlier scores. We can also return the instance level outlier score by setting `return_instance_score` to True.\n\nThe prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:\n\n* `is_outlier`: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape *(batch size,)*.\n\n* `instance_score`: contains instance level scores if `return_instance_score` equals True.\n\n\n```python\npreds = od.predict(\n X,\n return_instance_score=True\n)\n```",
"_____no_output_____"
],
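[
"A short sketch (hedged; it assumes the `preds` dictionary from the call above and the batch `X`) of pulling the flagged instances back out:\n\n```python\nimport numpy as np\n\nmask = np.asarray(preds['data']['is_outlier']).astype(bool)\nprint(f'{mask.sum()} outliers out of {mask.size} instances')\n\nX_outliers = X[mask]                       # the flagged rows\nscores = preds['data']['instance_score']   # present since return_instance_score=True\n```",
"_____no_output_____"
],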
[
"## Examples\n\n### Tabular\n\n[Outlier detection on KDD Cup 99](../../examples/od_if_kddcup.nblink)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0b15e0f023be4312338a99124f9d8a0262fbac6 | 77,132 | ipynb | Jupyter Notebook | En_category_classification.ipynb | Nomanciit/English-Category-Classification | b330b079ba0b964a287a7be765b5c260b9b3fdfb | [
"Apache-2.0"
] | null | null | null | En_category_classification.ipynb | Nomanciit/English-Category-Classification | b330b079ba0b964a287a7be765b5c260b9b3fdfb | [
"Apache-2.0"
] | null | null | null | En_category_classification.ipynb | Nomanciit/English-Category-Classification | b330b079ba0b964a287a7be765b5c260b9b3fdfb | [
"Apache-2.0"
] | null | null | null | 77,132 | 77,132 | 0.668361 | [
[
[
"from google.colab import drive\ndrive.mount('/content/drive')\nimport warnings\nwarnings.filterwarnings('ignore')",
"Mounted at /content/drive\n"
],
[
"# !pip install tensorflow_text \n!pip install transformers emoji\n# !pip install ktrain",
"Collecting transformers\n Downloading transformers-4.10.0-py3-none-any.whl (2.8 MB)\n\u001b[K |████████████████████████████████| 2.8 MB 5.4 MB/s \n\u001b[?25hCollecting emoji\n Downloading emoji-1.4.2.tar.gz (184 kB)\n\u001b[K |████████████████████████████████| 184 kB 49.7 MB/s \n\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.0)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.6.4)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers) (21.0)\nCollecting huggingface-hub>=0.0.12\n Downloading huggingface_hub-0.0.16-py3-none-any.whl (50 kB)\n\u001b[K |████████████████████████████████| 50 kB 5.7 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nCollecting tokenizers<0.11,>=0.10.1\n Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)\n\u001b[K |████████████████████████████████| 3.3 MB 31.8 MB/s \n\u001b[?25hCollecting sacremoses\n Downloading sacremoses-0.0.45-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 33.4 MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nCollecting pyyaml>=5.1\n Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)\n\u001b[K |████████████████████████████████| 636 kB 42.3 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from huggingface-hub>=0.0.12->transformers) (3.7.4.3)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.5.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.5.30)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nBuilding wheels for collected packages: emoji\n Building wheel for emoji (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for emoji: filename=emoji-1.4.2-py3-none-any.whl size=186469 sha256=8ba5b5c09c979a1b3981450a8e7eea569942b19925ff33996b1b652da84cd70b\n Stored in directory: /root/.cache/pip/wheels/e4/61/e7/2fc1ac8f306848fc66c6c013ab511f0a39ef4b1825b11363b2\nSuccessfully built emoji\nInstalling collected packages: tokenizers, sacremoses, pyyaml, huggingface-hub, transformers, emoji\n Attempting uninstall: pyyaml\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\nSuccessfully installed emoji-1.4.2 huggingface-hub-0.0.16 pyyaml-5.4.1 sacremoses-0.0.45 tokenizers-0.10.3 transformers-4.10.0\n"
],
[
"from transformers import AutoTokenizer\n",
"_____no_output_____"
],
[
"import pandas as pd\ndataset = pd.read_excel(\"/content/drive/MyDrive/English Category Transformer Model/en_category_model_data.xlsx\")\ndataset = dataset.sample(frac=1, axis=1).sample(frac=1).reset_index(drop=True) \ndataset = dataset[['p_message','Category']]\ndataset.head()",
"_____no_output_____"
],
[
"dataset.groupby('Category').size()",
"_____no_output_____"
],
[
"def to_int_sentiment(label):\n if label == \"business\":\n return 0\n elif label == \"education\":\n return 1\n elif label == 'entertainment':\n return 2\n elif label == 'fashion':\n return 3\n elif label == 'food':\n return 4\n elif label == 'health':\n return 5\n elif label == 'politics':\n return 6\n elif label == 'sports':\n return 7\n elif label == 'technology':\n return 8\n elif label == 'telecom':\n return 9\n elif label == 'tourism':\n return 10\n elif label == 'transport':\n return 11\n elif label == 'weather':\n return 12\n\ndataset['Category'] = dataset.Category.apply(to_int_sentiment)\ndataset = dataset.dropna()\ndataset['Category'] = dataset.Category.apply(int)",
"_____no_output_____"
],
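[
"# Hedged helper (not in the original notebook): the inverse of the mapping above,\n# handy for turning predicted class ids back into readable category names.\nint_to_category = {0: 'business', 1: 'education', 2: 'entertainment', 3: 'fashion',\n                   4: 'food', 5: 'health', 6: 'politics', 7: 'sports',\n                   8: 'technology', 9: 'telecom', 10: 'tourism', 11: 'transport',\n                   12: 'weather'}\nprint(int_to_category[0])",
"_____no_output_____"
],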
[
"dataset.head()",
"_____no_output_____"
],
[
"from preprocessing import CleaningText\nen_text_clean = CleaningText()\n\n",
"_____no_output_____"
],
[
"dataset['p_message'] = dataset['p_message'].apply(str)\ndataset['p_message'] = dataset['p_message'].apply(en_text_clean.text_preprocessing)\ndataset.head()\n",
"_____no_output_____"
],
[
"dataset.groupby('Category').size()",
"_____no_output_____"
],
[
"\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score\nimport torch\nfrom transformers import TrainingArguments, Trainer\nfrom transformers import BertTokenizer, BertForSequenceClassification\nfrom transformers import EarlyStoppingCallback\n\n\n# Read data\n# dataset = pd.read_excel(\"/content/drive/MyDrive/ar_general_sentiment_data.xlsx\")\n# dataset = dataset[['text','Sentiment']]\n\n# Define pretrained tokenizer and model\nmodel_name = \"prajjwal1/bert-small\"\ntokenizer = BertTokenizer.from_pretrained(model_name)\nmodel = BertForSequenceClassification.from_pretrained(model_name, num_labels=13)\n\n# ----- 1. Preprocess data -----#\n# Preprocess data\nX = list(dataset[\"p_message\"])\ny = list(dataset[\"Category\"])\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)\nX_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=512)\nX_val_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=512)\n\n# Create torch dataset\nclass Dataset(torch.utils.data.Dataset):\n def __init__(self, encodings, labels=None):\n self.encodings = encodings\n self.labels = labels\n\n def __getitem__(self, idx):\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\n if self.labels:\n item[\"labels\"] = torch.tensor(self.labels[idx])\n return item\n\n def __len__(self):\n return len(self.encodings[\"input_ids\"])\n\ntrain_dataset = Dataset(X_train_tokenized, y_train)\nval_dataset = Dataset(X_val_tokenized, y_val)\n\n\n\n",
"_____no_output_____"
],
[
"# ----- 2. Fine-tune pretrained model -----#\n# Define Trainer parameters\ndef compute_metrics(p):\n pred, labels = p\n pred = np.argmax(pred, axis=1)\n\n accuracy = accuracy_score(y_true=labels, y_pred=pred)\n recall = recall_score(y_true=labels, y_pred=pred,average='micro')\n precision = precision_score(y_true=labels, y_pred=pred,average='micro')\n f1 = f1_score(y_true=labels, y_pred=pred,average='micro')\n\n return {\"accuracy\": accuracy, \"precision\": precision, \"recall\": recall, \"f1\": f1}\n\n# Define Trainer\nargs = TrainingArguments(\n output_dir=\"output\",\n evaluation_strategy=\"steps\",\n eval_steps=500,\n per_device_train_batch_size=8,\n per_device_eval_batch_size=8,\n num_train_epochs=5,\n seed=0,\n load_best_model_at_end=True,\n)\ntrainer = Trainer(\n model=model,\n args=args,\n train_dataset=train_dataset,\n eval_dataset=val_dataset,\n compute_metrics=compute_metrics,\n # callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],\n)\n\n# Train pre-trained model\ntrainer.train()\n",
"***** Running training *****\n Num examples = 25391\n Num Epochs = 5\n Instantaneous batch size per device = 8\n Total train batch size (w. parallel, distributed & accumulation) = 8\n Gradient Accumulation steps = 1\n Total optimization steps = 15870\n"
],
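[
"# Hedged aside (not part of the original run): after training, the metrics defined\n# in compute_metrics can be recomputed on the validation split with Trainer.evaluate().\nprint(trainer.evaluate())",
"_____no_output_____"
],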
[
"model_path = \"/content/drive/MyDrive/English Category Transformer Model/en_category_model\"\ntrainer.save_model(model_path)",
"Saving model checkpoint to /content/drive/MyDrive/English Category Transformer Model/en_category_model\nConfiguration saved in /content/drive/MyDrive/English Category Transformer Model/en_category_model/config.json\nModel weights saved in /content/drive/MyDrive/English Category Transformer Model/en_category_model/pytorch_model.bin\n"
],
[
"\nX_test_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=512) \n# Create torch dataset\ntest_dataset = Dataset(X_test_tokenized) \n# Load trained model\n# model_path = \"sentiment_model\"\nmodel = BertForSequenceClassification.from_pretrained(model_path, num_labels=13) \n# Define test trainer\ntest_trainer = Trainer(model) \n# Make prediction\nraw_pred, _, _ = test_trainer.predict(test_dataset) \n# Preprocess raw predictions\ny_pred = np.argmax(raw_pred, axis=1)",
"loading configuration file /content/drive/MyDrive/English Category Transformer Model/en_category_model/config.json\nModel config BertConfig {\n \"_name_or_path\": \"prajjwal1/bert-small\",\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"attention_probs_dropout_prob\": 0.1,\n \"classifier_dropout\": null,\n \"gradient_checkpointing\": false,\n \"hidden_act\": \"gelu\",\n \"hidden_dropout_prob\": 0.1,\n \"hidden_size\": 512,\n \"id2label\": {\n \"0\": \"LABEL_0\",\n \"1\": \"LABEL_1\",\n \"2\": \"LABEL_2\",\n \"3\": \"LABEL_3\",\n \"4\": \"LABEL_4\",\n \"5\": \"LABEL_5\",\n \"6\": \"LABEL_6\",\n \"7\": \"LABEL_7\",\n \"8\": \"LABEL_8\",\n \"9\": \"LABEL_9\",\n \"10\": \"LABEL_10\",\n \"11\": \"LABEL_11\",\n \"12\": \"LABEL_12\"\n },\n \"initializer_range\": 0.02,\n \"intermediate_size\": 2048,\n \"label2id\": {\n \"LABEL_0\": 0,\n \"LABEL_1\": 1,\n \"LABEL_10\": 10,\n \"LABEL_11\": 11,\n \"LABEL_12\": 12,\n \"LABEL_2\": 2,\n \"LABEL_3\": 3,\n \"LABEL_4\": 4,\n \"LABEL_5\": 5,\n \"LABEL_6\": 6,\n \"LABEL_7\": 7,\n \"LABEL_8\": 8,\n \"LABEL_9\": 9\n },\n \"layer_norm_eps\": 1e-12,\n \"max_position_embeddings\": 512,\n \"model_type\": \"bert\",\n \"num_attention_heads\": 8,\n \"num_hidden_layers\": 4,\n \"pad_token_id\": 0,\n \"position_embedding_type\": \"absolute\",\n \"problem_type\": \"single_label_classification\",\n \"torch_dtype\": \"float32\",\n \"transformers_version\": \"4.10.0\",\n \"type_vocab_size\": 2,\n \"use_cache\": true,\n \"vocab_size\": 30522\n}\n\nloading weights file /content/drive/MyDrive/English Category Transformer Model/en_category_model/pytorch_model.bin\nAll model checkpoint weights were used when initializing BertForSequenceClassification.\n\nAll the weights of BertForSequenceClassification were initialized from the model checkpoint at /content/drive/MyDrive/English Category Transformer Model/en_category_model.\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.\nNo `TrainingArguments` passed, using `output_dir=tmp_trainer`.\nPyTorch: setting up devices\nThe default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).\n***** Running Prediction *****\n Num examples = 6348\n Batch size = 8\n"
],
[
"y_pred",
"_____no_output_____"
],
[
"from sklearn.metrics import classification_report, confusion_matrix\nclass_names = [\"0\",\"1\",\"2\",\"3\",\"4\",\"5\",\"6\",\"7\",\"8\",\"9\",\"10\",\"11\",\"12\"]\nprint(classification_report(y_val, y_pred, target_names=class_names))",
" precision recall f1-score support\n\n 0 0.91 0.91 0.91 527\n 1 0.94 0.93 0.94 495\n 2 0.96 0.93 0.95 526\n 3 0.98 0.99 0.98 438\n 4 0.98 0.94 0.96 491\n 5 0.91 0.95 0.93 522\n 6 0.96 0.92 0.94 554\n 7 0.97 0.97 0.97 449\n 8 0.90 0.92 0.91 537\n 9 0.95 0.97 0.96 578\n 10 0.96 0.96 0.96 292\n 11 0.94 0.95 0.95 523\n 12 0.96 0.97 0.96 416\n\n accuracy 0.95 6348\n macro avg 0.95 0.95 0.95 6348\nweighted avg 0.95 0.95 0.95 6348\n\n"
],
[
" print(confusion_matrix(y_val, y_pred))",
"[[480 5 2 1 3 6 11 0 8 3 2 4 2]\n [ 5 459 5 1 1 7 1 1 8 5 1 1 0]\n [ 2 5 490 3 1 6 2 2 5 3 0 5 2]\n [ 0 1 0 434 0 0 0 1 0 0 0 1 1]\n [ 3 2 2 0 462 10 0 1 0 2 1 5 3]\n [ 1 3 4 3 2 496 1 1 7 1 1 1 1]\n [ 16 4 0 0 1 2 509 2 10 2 2 2 4]\n [ 0 1 3 1 0 3 1 436 0 0 0 3 1]\n [ 9 5 1 0 1 8 1 1 492 13 0 4 2]\n [ 2 0 0 0 1 1 0 0 13 560 0 0 1]\n [ 8 0 0 0 0 0 0 0 0 0 279 5 0]\n [ 3 1 2 0 0 5 0 3 5 1 3 499 1]\n [ 1 0 1 1 0 0 4 2 1 1 1 0 404]]\n"
],
[
"df = pd.DataFrame(X_val,columns =['p_message'])\ndf.head()",
"_____no_output_____"
],
[
"df['prediction'] = y_pred",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.groupby('prediction').size()",
"_____no_output_____"
],
[
"from transformers import BertTokenizer, BertForSequenceClassification\nfrom transformers import EarlyStoppingCallback\nfrom transformers import AutoTokenizer\nfrom preprocessing import CleaningText\n\nmodel_name = \"prajjwal1/bert-small\"\nmodel_path = \"en_category\"",
"_____no_output_____"
]
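,
[
"# Hedged sketch (assumed names, not from the original notebook): loading the saved\n# weights and classifying a single new message end to end. Assumes model_path points\n# at the saved model directory and en_text_clean is available from the earlier cell.\nimport torch\n\ntokenizer = BertTokenizer.from_pretrained(model_name)\nmodel = BertForSequenceClassification.from_pretrained(model_path, num_labels=13)\nmodel.eval()\n\ntext = en_text_clean.text_preprocessing('Stock markets rallied after the earnings report')\ninputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)\nwith torch.no_grad():\n    logits = model(**inputs).logits\nprint(int(logits.argmax(dim=-1)))  # integer class id, e.g. 0 == business",
"_____no_output_____"
]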
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b16111de7808bc10e589be1f9c959975270ebd | 844,542 | ipynb | Jupyter Notebook | P1.ipynb | kpilkk/CarND-LaneLines-P1 | 2b67c5ff68584d81bfec15c37e090c16054ce262 | [
"MIT"
] | null | null | null | P1.ipynb | kpilkk/CarND-LaneLines-P1 | 2b67c5ff68584d81bfec15c37e090c16054ce262 | [
"MIT"
] | null | null | null | P1.ipynb | kpilkk/CarND-LaneLines-P1 | 2b67c5ff68584d81bfec15c37e090c16054ce262 | [
"MIT"
] | 1 | 2020-01-20T12:17:55.000Z | 2020-01-20T12:17:55.000Z | 808.948276 | 128,632 | 0.950136 | [
[
[
"# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---",
"_____no_output_____"
],
[
"**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>",
"_____no_output_____"
],
[
"**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** ",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Read in an Image",
"_____no_output_____"
]
],
[
[
"#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"## Ideas for Lane Detection Pipeline",
"_____no_output_____"
],
[
"**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images \n`cv2.cvtColor()` to grayscale or change color \n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**",
"_____no_output_____"
],
[
"## Helper Functions",
"_____no_output_____"
],
[
"Below are some helper functions to help get you started. They should look familiar from the lesson!",
"_____no_output_____"
]
],
[
[
"import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n `vertices` should be a numpy array of integer points.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\n# def draw_lines(img, lines, color=[255, 0, 0], thickness=10):\n# \"\"\"\n# NOTE: this is the function you might want to use as a starting point once you want to \n# average/extrapolate the line segments you detect to map out the full\n# extent of the lane (going from the result shown in raw-lines-example.mp4\n# to that shown in P1_example.mp4). \n \n# Think about things like separating line segments by their \n# slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n# line vs. the right line. Then, you can average the position of each of \n# the lines and extrapolate to the top and bottom of the lane.\n \n# This function draws `lines` with `color` and `thickness`. 
\n# Lines are drawn on the image inplace (mutates the image).\n# If you want to make the lines semi-transparent, think about combining\n# this function with the weighted_img() function below\n# \"\"\"\n# for line in lines:\n# for x1,y1,x2,y2 in line:\n# cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n \n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + γ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)",
"_____no_output_____"
]
],
[
[
"New draw_lines function to extrapolate the line<br/>\nHere fraction of shape of image i.e **fraction*img.shape[0]** is taken to make it work with optoional problem as its image has different dimensions.",
"_____no_output_____"
]
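,
[
"As a quick sanity check of the point-slope extrapolation used below (illustrative numbers only): with an average slope m = 0.6 through the average point (x_avg, y_avg) = (700, 400), solving y - y_avg = m(x - x_avg) at the image bottom y = 540 gives x = (540 - 400)/0.6 + 700 ≈ 933, which is exactly how `bottom_x_left`/`bottom_x_right` are computed.",
"_____no_output_____"
]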
],
[
[
"def draw_lines(img, lines, color=[255, 0, 0], thickness=10):\n left_slope = 0\n right_slope = 0\n x_left = 0\n y_left = 0\n x_right = 0\n y_right = 0\n count_left = 0 # count of left linesegments for average calculation\n count_right = 0 # count of right linesegments for average calculation\n \n for line in lines:\n for x1,y1,x2,y2 in line:\n slope = (y2-y1)/(x2-x1)\n # slope boundary is taken almost tan(30) as from camera point of view it has slope arount the same\n if slope>0.5: #Left lane\n # Adding all values of slope and average positions of a line\n left_slope += slope\n x_left += (x1+x2)/2\n y_left += (y1+y2)/2\n count_left += 1\n \n elif slope<-0.5: # right lane\n # Adding all values of slope and average positions of a line\n right_slope += slope\n x_right += (x1+x2)/2\n y_right += (y1+y2)/2\n count_right += 1\n \n # Left lane - averaging all slopes, x co-ordinates and y co-ordinates\n if count_left>0: # if left lane has been detected\n avg_left_slope = left_slope/count_left\n avg_left_x = x_left/count_left\n avg_left_y = y_left/count_left\n # Calculate bottom x and top x assuming fixed positions for corresponding y\n # It has been calculated based on slope formula y = mx+c then x = (y-c)/m\n bottom_x_left = int(((int(img.shape[0])-avg_left_y)/avg_left_slope) + avg_left_x)\n top_x_left = int(((int(0.60*img.shape[0])-avg_left_y)/avg_left_slope)+ avg_left_x)\n \n else: # If Left lane is not detected - best guess positions of bottom x and top x\n bottom_x_left = int(0.21*img.shape[1])\n top_x_left = int(0.43*img.shape[1])\n \n # Draw a line\n cv2.line(img, (top_x_left, int(0.60*img.shape[0])), (bottom_x_left, int(img.shape[0])), color, thickness)\n \n #Right lane - Average across all slope and intercepts\n if count_right>0: # If right lane is detected\n avg_right_slope = right_slope/count_right\n avg_right_x = x_right/count_right\n avg_right_y = y_right/count_right\n # Calculate bottom x and top x assuming fixed positions for corresponding y\n # It has been calculated based on slope formula y = mx+c then x = (y-c)/m\n bottom_x_right = int(((int(img.shape[0])-avg_right_y)/avg_right_slope) + avg_right_x)\n top_x_right = int(((int(0.60*img.shape[0])-avg_right_y)/avg_right_slope)+ avg_right_x)\n \n else: # If right lane is not detected - best guess positions of bottom x and top x\n bottom_x_right = int(0.89*img.shape[1])\n top_x_right = int(0.53*img.shape[1])\n \n # Draw a line \n cv2.line(img, (top_x_right, int(0.60*img.shape[0])), (bottom_x_right, int(img.shape[0])), color, thickness)",
"_____no_output_____"
]
],
[
[
"## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your pipeline works well on these images before you try the videos.**",
"_____no_output_____"
]
],
[
[
"import os\nos.listdir(\"test_images/\")",
"_____no_output_____"
],
[
"image_file = ['test_images/whiteCarLaneSwitch.jpg','test_images/solidWhiteCurve.jpg', 'test_images/solidWhiteRight.jpg', 'test_images/solidYellowCurve.jpg', 'test_images/solidYellowCurve2.jpg', 'test_images/solidYellowLeft.jpg']",
"_____no_output_____"
]
],
[
[
"## Build a Lane Finding Pipeline\n\n",
"_____no_output_____"
],
[
"Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.",
"_____no_output_____"
]
],
[
[
"# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images_output directory.\ndef lane_finding_pipeline(img, vertices):\n# image = mpimg.imread(img)\n gray = grayscale(img)\n kernel_size = 5\n blur_gray = gaussian_blur(gray, kernel_size)\n low_threshold = 50\n high_threshold = 150\n edges = canny(blur_gray, low_threshold, high_threshold)\n masked_edges = region_of_interest(edges, vertices)\n rho = 1 # distance resolution in pixels of the Hough grid\n theta = np.pi/180 # angular resolution in radians of the Hough grid\n threshold = 10 # minimum number of votes (intersections in Hough grid cell)\n min_line_len = 60 #minimum number of pixels making up a line\n max_line_gap = 30 # maximum gap in pixels between connectable line segments\n final = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)\n # color_edges = np.dstack((edges, edges, edges))\n final_image = weighted_img(final, img)\n return final_image",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[0])\nvertices = np.array([[(150,img.shape[0]),(450, 323), (500, 323), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/whiteCarLaneSwitch.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[1])\nvertices = np.array([[(150,img.shape[0]),(450, 323), (500, 323), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/solidWhiteCurve.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[2])\nvertices = np.array([[(150,img.shape[0]),(450, 323), (500, 323), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/solidWhiteRight.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[3])\nvertices = np.array([[(150,img.shape[0]),(450, 323), (500, 323), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/solidYellowCurve.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[4])\nvertices = np.array([[(150,img.shape[0]),(450, 323), (500, 323), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/solidYellowCurve2.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
],
[
"img = mpimg.imread(image_file[5])\nvertices = np.array([[(150,img.shape[0]),(450, 305), (500, 305), (img.shape[1],img.shape[0])]], dtype=np.int32)\ncv2.imwrite('test_images_output/solidYellowLeft.png', cv2.cvtColor(lane_finding_pipeline(img, vertices), cv2.COLOR_BGR2RGB))\nplt.imshow(lane_finding_pipeline(img, vertices))",
"_____no_output_____"
]
],
[
[
"## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**",
"_____no_output_____"
]
],
[
[
"# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"def process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n gray = grayscale(image)\n kernel_size = 5\n blur_gray = gaussian_blur(gray, kernel_size)\n low_threshold = 50\n high_threshold = 150\n edges = canny(blur_gray, low_threshold, high_threshold)\n vertices = np.array([[(150,image.shape[0]),(445, 320), (500, 320), (image.shape[1],image.shape[0])]], dtype=np.int32)\n masked_edges = region_of_interest(edges, vertices)\n rho = 1 # distance resolution in pixels of the Hough grid\n theta = np.pi/180 # angular resolution in radians of the Hough grid\n threshold = 10 # minimum number of votes (intersections in Hough grid cell)\n min_line_len = 60 #minimum number of pixels making up a line\n max_line_gap = 30 # maximum gap in pixels between connectable line segments\n final = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)\n # color_edges = np.dstack((edges, edges, edges))\n result = weighted_img(final, image)\n# plt.imshow(result)\n return result",
"_____no_output_____"
]
],
[
[
"Let's try the one with the solid white lane on the right first ...",
"_____no_output_____"
]
],
[
[
"white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4\n[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4\n"
]
],
[
[
"Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.",
"_____no_output_____"
]
],
[
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))",
"_____no_output_____"
]
],
[
[
"## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**",
"_____no_output_____"
],
[
"Now for the one with the solid yellow lane on the left. This one's more tricky!",
"_____no_output_____"
]
],
[
[
"yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)",
"[MoviePy] >>>> Building video test_videos_output/solidYellowLeft.mp4\n[MoviePy] Writing video test_videos_output/solidYellowLeft.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))",
"_____no_output_____"
]
],
[
[
"## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n",
"_____no_output_____"
],
[
"## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!",
"_____no_output_____"
]
],
[
[
"def process_image_1(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n gray = grayscale(image)\n kernel_size = 5\n blur_gray = gaussian_blur(gray, kernel_size)\n low_threshold = 50\n high_threshold = 150\n edges = canny(blur_gray, low_threshold, high_threshold)\n# vertices = np.array([[(220,690),(550, 450), (740, 450), (1150,690), (1000, 650),(750, 500),(600, 500),(400, 650),(700, 550)]], dtype=np.int32)\n vertices = np.array([[(220,690),(550, 450), (740, 450), (1150,690)]], dtype=np.int32)\n masked_edges = region_of_interest(edges, vertices)\n rho = 1 # distance resolution in pixels of the Hough grid\n theta = np.pi/180 # angular resolution in radians of the Hough grid\n threshold = 10 # minimum number of votes (intersections in Hough grid cell)\n min_line_len = 20 #minimum number of pixels making up a line\n max_line_gap = 7 # maximum gap in pixels between connectable line segments\n final = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)\n # color_edges = np.dstack((edges, edges, edges))\n result = weighted_img(final, image)\n# plt.imshow(image)\n return result",
"_____no_output_____"
],
[
"challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image_1)\n%time challenge_clip.write_videofile(challenge_output, audio=False)",
"[MoviePy] >>>> Building video test_videos_output/challenge.mp4\n[MoviePy] Writing video test_videos_output/challenge.mp4\n"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0b16160850152b5e0a126d799a4e4795acb3be1 | 31,952 | ipynb | Jupyter Notebook | Notebook/Classification_models.ipynb | IBMDeveloperUK/Classification-Models-using-Python-and-Scikit-Learn | 37be3ecba6a6137e468c02b50f72111147f55b58 | [
"Apache-2.0"
] | 2 | 2021-06-10T09:23:53.000Z | 2021-06-10T11:44:12.000Z | Notebook/Classification_models.ipynb | IBMDeveloperUK/Classification-Models-using-Python-and-Scikit-Learn | 37be3ecba6a6137e468c02b50f72111147f55b58 | [
"Apache-2.0"
] | null | null | null | Notebook/Classification_models.ipynb | IBMDeveloperUK/Classification-Models-using-Python-and-Scikit-Learn | 37be3ecba6a6137e468c02b50f72111147f55b58 | [
"Apache-2.0"
] | 3 | 2021-06-10T09:46:13.000Z | 2021-09-14T09:04:11.000Z | 36.309091 | 846 | 0.62234 | [
[
[
"# Classification models using python and scikit-learn\n\nThere are many users of online trading platforms and these companies would like to run analytics on and predict churn based on user activity on the platform. Keeping customers happy so they do not move their investments elsewhere is key to maintaining profitability.\n\nIn this notebook, we'll use scikit-learn to predict classes. scikit-learn provides implementations of many classification algorithms. In here, we have chosen the random forest classification algorithm to walk through all the different steps.\n\n\n<a id=\"top\"></a>\n## Table of Contents\n\n1. [Load libraries](#load_libraries)\n2. [Data exploration](#explore_data)\n3. [Prepare data for building classification model](#prepare_data)\n4. [Split data into train and test sets](#split_data)\n5. [Helper methods for graph generation](#helper_methods)\n6. [Prepare Random Forest classification model](#prepare_model)\n7. [Train Random Forest classification model](#train_model)\n8. [Test Random Forest classification model](#test_model)\n9. [Evaluate Random Forest classification model](#evaluate_model)\n10.[Build K-Nearest classification model](#model_knn)\n11. [Comparative study of both classification algorithms](#compare_classification)",
"_____no_output_____"
],
[
"### Quick set of instructions to work through the notebook\n\nIf you are new to Notebooks, here's a quick overview of how to work in this environment.\n\n1. The notebook has 2 types of cells - markdown (text) such as this and code such as the one below. \n2. Each cell with code can be executed independently or together (see options under the Cell menu). When working in this notebook, we will be running one cell at a time.\n3. To run the cell, position cursor in the code cell and click the Run (arrow) icon. The cell is running when you see the * next to it. Some cells have printable output.\n4. Work through this notebook by reading the instructions and executing code cell by cell. Some cells will require modifications before you run them. ",
"_____no_output_____"
],
[
"<a id=\"load_libraries\"></a>\n## 1. Load libraries\n[Top](#top)\n\n\nInstall python modules\nNOTE! Some pip installs require a kernel restart.\nThe shell command pip install is used to install Python modules. Some installs require a kernel restart to complete. To avoid confusing errors, run the following cell once and then use the Kernel menu to restart the kernel before proceeding.",
"_____no_output_____"
]
],
[
[
"!pip install pandas==0.24.2\n!pip install --user pandas_ml==0.6.1\n#downgrade matplotlib to bypass issue with confusion matrix being chopped out\n!pip install matplotlib==3.1.0\n!pip install --user scikit-learn==0.21.3\n!pip install -q scikit-plot",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder, OneHotEncoder\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.compose import ColumnTransformer, make_column_transformer\nfrom sklearn.pipeline import Pipeline\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, classification_report\n\nimport pandas as pd, numpy as np\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.colors as mcolors\nimport matplotlib.patches as mpatches\nimport scikitplot as skplt",
"_____no_output_____"
]
],
[
[
"<a id=\"explore_data\"></a>\n## 2. Data exploration\n[Top](#top)\n\nIn this tutorial, we use a data set that contains information about customers of an online trading platform to classify whether a given customer’s probability of churn will be high, medium, or low. This provides a good example to learn how a classification model is built from start to end.",
"_____no_output_____"
]
],
[
[
"df_churn_pd = pd.read_csv(\"https://raw.githubusercontent.com/IBM/ml-learning-path-assets/master/data/mergedcustomers_missing_values_GENDER.csv\")\ndf_churn_pd.head()",
"_____no_output_____"
]
],
[
[
"We use numpy and matplotlib to get some statistics and visualize data.",
"_____no_output_____"
],
[
"print(\"The dataset contains columns of the following data types : \\n\" +str(df_churn_pd.dtypes))",
"_____no_output_____"
],
[
"Notice below that Gender has three missing values. This will be handled in one of the preprocessing steps that is to follow. ",
"_____no_output_____"
]
],
[
[
"print(\"The dataset contains following number of records for each of the columns : \\n\" +str(df_churn_pd.count()))",
"_____no_output_____"
]
],
[
[
"If we are not satisfied with the representational data, now is the time to get more data to be used for training and testing.",
"_____no_output_____"
]
],
[
[
"print( \"Each category within the churnrisk column has the following count : \")\nprint(df_churn_pd.groupby(['CHURNRISK']).size())\n#bar chart to show split of data\nindex = ['High','Medium','Low']\nchurn_plot = df_churn_pd['CHURNRISK'].value_counts(sort=True, ascending=False).plot(kind='bar',\n figsize=(4,4),title=\"Total number for occurences of churn risk \" \n + str(df_churn_pd['CHURNRISK'].count()), color=['#BB6B5A','#8CCB9B','#E5E88B'])\nchurn_plot.set_xlabel(\"Churn Risk\")\nchurn_plot.set_ylabel(\"Frequency\")",
"_____no_output_____"
]
],
[
[
"<a id=\"prepare_data\"></a>\n## 3. Data preparation\n[Top](#top)\n\nData preparation is a very important step in machine learning model building. This is because the model can perform well only when the data it is trained on is good and well prepared. Hence, this step consumes the bulk of a data scientist's time spent building models.\n\nDuring this process, we identify categorical columns in the dataset. Categories need to be indexed, which means the string labels are converted to label indices. These label indices are encoded using One-hot encoding to a binary vector with at most a single value indicating the presence of a specific feature value from among the set of all feature values. This encoding allows algorithms which expect continuous features to use categorical features.\n",
"_____no_output_____"
]
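,
[
"A tiny, self-contained illustration of the index-then-encode idea (toy values, not from the churn data):\n\n```python\nimport numpy as np\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\n\nstatus = np.array(['Married', 'Single', 'Married', 'Divorced'])\nidx = LabelEncoder().fit_transform(status)                 # e.g. [1 2 1 0]\nonehot = OneHotEncoder(sparse=False).fit_transform(idx.reshape(-1, 1))\nprint(onehot)   # one binary column per category value\n```",
"_____no_output_____"
]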
],
[
[
"\n#remove columns that are not required\ndf_churn_pd = df_churn_pd.drop(['ID'], axis=1)\n\ndf_churn_pd.head()\n",
"_____no_output_____"
]
],
[
[
"\n### [Preprocessing Data](https://scikit-learn.org/stable/modules/preprocessing.html)",
"_____no_output_____"
],
[
"Scikit-learn provides a method to fill empty values with something that would be applicable in its context. We used the <i><b> SimpleImputer <b></i> class that is provided by Sklearn and filled the missing values with the most frequent value in the column.",
"_____no_output_____"
],
[
"### [One Hot Encoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)",
"_____no_output_____"
]
],
[
[
"# Defining the categorical columns \ncategoricalColumns = ['GENDER', 'STATUS', 'HOMEOWNER']\n\nprint(\"Categorical columns : \" )\nprint(categoricalColumns)\n\nimpute_categorical = SimpleImputer(strategy=\"most_frequent\")\n\nonehot_categorical = OneHotEncoder(handle_unknown='ignore')\n\ncategorical_transformer = Pipeline(steps=[('impute',impute_categorical),('onehot',onehot_categorical)])\n",
"_____no_output_____"
]
],
[
[
"The numerical columns from the data set are identified, and StandardScaler is applied to each of the columns. This way, each value is subtracted with the mean of its column and divided by its standard deviation.<br>\n### [Standard Scaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)",
"_____no_output_____"
]
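,
[
"For example (toy numbers): a value of 50 in a column with mean 40 and standard deviation 5 is scaled to (50 - 40) / 5 = 2.0.",
"_____no_output_____"
]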
],
[
[
"# Defining the numerical columns \nnumericalColumns = df_churn_pd.select_dtypes(include=[np.float,np.int]).columns\n\nprint(\"Numerical columns : \" )\nprint(numericalColumns)\n\nscaler_numerical = StandardScaler()\n\nnumerical_transformer = Pipeline(steps=[('scale',scaler_numerical)])\n",
"_____no_output_____"
]
],
[
[
"The preprocessing techniques that are applied must be customized for each of the columns. Sklearn provides a library called the [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html?highlight=columntransformer#sklearn.compose.ColumnTransformer), which allows a sequence of these techniques to be applied to selective columns using a pipeline.\n\nOnly the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped. By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through",
"_____no_output_____"
]
],
[
[
"preprocessorForCategoricalColumns = ColumnTransformer(transformers=[('cat', categorical_transformer, \n categoricalColumns)],\n remainder=\"passthrough\")\npreprocessorForAllColumns = ColumnTransformer(transformers=[('cat', categorical_transformer, categoricalColumns),\n ('num',numerical_transformer,numericalColumns)],\n remainder=\"passthrough\")\n\n",
"_____no_output_____"
]
],
[
[
"Machine learning algorithms cannot use simple text. We must convert the data from text to a number. Therefore, for each string that is a class we assign a label that is a number. For example, in the customer churn data set, the CHURNRISK output label is classified as high, medium, or low and is assigned labels 0, 1, or 2. We use the [LabelEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html?highlight=labelencoder#sklearn.preprocessing.LabelEncoder) class provided by Sklearn for this.",
"_____no_output_____"
]
],
[
[
"# prepare data frame for splitting data into train and test datasets\n\nfeatures = []\nfeatures = df_churn_pd.drop(['CHURNRISK'], axis=1)\n\nlabel_churn = pd.DataFrame(df_churn_pd, columns = ['CHURNRISK']) \nlabel_encoder = LabelEncoder()\nlabel = df_churn_pd['CHURNRISK']\n\nlabel = label_encoder.fit_transform(label)\nprint(\"Encoded value of Churnrisk after applying label encoder : \" + str(label))\n",
"_____no_output_____"
]
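,
[
"# Hedged check (illustrative): LabelEncoder assigns indices alphabetically, so the\n# round trip back to the string labels can be verified with inverse_transform.\nprint(label_encoder.inverse_transform([0, 1, 2]))  # expected: ['High' 'Low' 'Medium']",
"_____no_output_____"
]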
],
[
[
"### These are some of the popular preprocessing steps that are applied on the data sets. Look at [Data Processing in detail](https://developer.ibm.com/articles/data-preprocessing-in-detail/) for more information",
"_____no_output_____"
]
],
[
[
"area = 75\nx = df_churn_pd['ESTINCOME']\ny = df_churn_pd['DAYSSINCELASTTRADE']\nz = df_churn_pd['TOTALDOLLARVALUETRADED']\n\npop_a = mpatches.Patch(color='#BB6B5A', label='High')\npop_b = mpatches.Patch(color='#E5E88B', label='Medium')\npop_c = mpatches.Patch(color='#8CCB9B', label='Low')\ndef colormap(risk_list):\n cols=[]\n for l in risk_list:\n if l==0:\n cols.append('#BB6B5A')\n elif l==2:\n cols.append('#E5E88B')\n elif l==1:\n cols.append('#8CCB9B')\n return cols\n\nfig = plt.figure(figsize=(12,6))\nfig.suptitle('2D and 3D view of churnrisk data')\n\n# First subplot\nax = fig.add_subplot(1, 2,1)\n\nax.scatter(x, y, alpha=0.8, c=colormap(label), s= area)\nax.set_ylabel('DAYS SINCE LAST TRADE')\nax.set_xlabel('ESTIMATED INCOME')\n\nplt.legend(handles=[pop_a,pop_b,pop_c])\n\n# Second subplot\nax = fig.add_subplot(1,2,2, projection='3d')\n\nax.scatter(z, x, y, c=colormap(label), marker='o')\n\nax.set_xlabel('TOTAL DOLLAR VALUE TRADED')\nax.set_ylabel('ESTIMATED INCOME')\nax.set_zlabel('DAYS SINCE LAST TRADE')\n\nplt.legend(handles=[pop_a,pop_b,pop_c])\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"\n<a id=\"split_data\"></a>\n## 4. Split data into test and train\n[Top](#top)\n\nScikit-learn provides in built API to split the original dataset into train and test datasets. random_state is set to a number to be able to reproduce the same data split combination through multiple runs. \n\n[Split arrays or matrices into random train and test subsets](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)",
"_____no_output_____"
]
],
[
[
"\nX_train, X_test, y_train, y_test = train_test_split(features,label , random_state=0)\nprint(\"Dimensions of datasets that will be used for training : Input features\"+str(X_train.shape)+ \n \" Output label\" + str(y_train.shape))\nprint(\"Dimensions of datasets that will be used for testing : Input features\"+str(X_test.shape)+ \n \" Output label\" + str(y_test.shape))\n",
"_____no_output_____"
]
],
[
[
"<a id=\"helper_methods\"></a>\n## 5. Helper methods for graph generation\n[Top](#top)\n",
"_____no_output_____"
]
],
[
[
"def colormap(risk_list):\n cols=[]\n for l in risk_list:\n if l==0:\n cols.append('#BB6B5A')\n elif l==2:\n cols.append('#E5E88B')\n elif l==1:\n cols.append('#8CCB9B')\n return cols\n\ndef two_d_compare(y_test,y_pred,model_name):\n #y_pred = label_encoder.fit_transform(y_pred)\n #y_test = label_encoder.fit_transform(y_test)\n area = (12 * np.random.rand(40))**2 \n plt.subplots(ncols=2, figsize=(10,4))\n plt.suptitle('Actual vs Predicted data : ' +model_name + '. Accuracy : %.2f' % accuracy_score(y_test, y_pred))\n\n plt.subplot(121)\n plt.scatter(X_test['ESTINCOME'], X_test['DAYSSINCELASTTRADE'], alpha=0.8, c=colormap(y_test))\n plt.title('Actual')\n plt.legend(handles=[pop_a,pop_b,pop_c])\n\n plt.subplot(122)\n plt.scatter(X_test['ESTINCOME'], X_test['DAYSSINCELASTTRADE'],alpha=0.8, c=colormap(y_pred))\n plt.title('Predicted')\n plt.legend(handles=[pop_a,pop_b,pop_c])\n\n plt.show()\n \nx = X_test['TOTALDOLLARVALUETRADED']\ny = X_test['ESTINCOME']\nz = X_test['DAYSSINCELASTTRADE']\n\npop_a = mpatches.Patch(color='#BB6B5A', label='High')\npop_b = mpatches.Patch(color='#E5E88B', label='Medium')\npop_c = mpatches.Patch(color='#8CCB9B', label='Low')\n\ndef three_d_compare(y_test,y_pred,model_name):\n fig = plt.figure(figsize=(12,10))\n fig.suptitle('Actual vs Predicted (3D) data : ' +model_name + '. Accuracy : %.2f' % accuracy_score(y_test, y_pred))\n \n ax = fig.add_subplot(121, projection='3d')\n ax.scatter(x, y, z, c=colormap(y_test), marker='o')\n ax.set_xlabel('TOTAL DOLLAR VALUE TRADED')\n ax.set_ylabel('ESTIMATED INCOME')\n ax.set_zlabel('DAYS SINCE LAST TRADE')\n plt.legend(handles=[pop_a,pop_b,pop_c])\n plt.title('Actual')\n\n ax = fig.add_subplot(122, projection='3d')\n ax.scatter(x, y, z, c=colormap(y_pred), marker='o')\n ax.set_xlabel('TOTAL DOLLAR VALUE TRADED')\n ax.set_ylabel('ESTIMATED INCOME')\n ax.set_zlabel('DAYS SINCE LAST TRADE')\n plt.legend(handles=[pop_a,pop_b,pop_c])\n plt.title('Predicted')\n\n plt.show()\n \n\ndef model_metrics(y_test,y_pred):\n print(\"Decoded values of Churnrisk after applying inverse of label encoder : \" + str(np.unique(y_pred)))\n\n skplt.metrics.plot_confusion_matrix(y_test,y_pred,text_fontsize=\"small\",cmap='Greens',figsize=(6,4))\n plt.show()\n \n print(\"The classification report for the model : \\n\\n\"+ classification_report(y_test, y_pred))\n",
"_____no_output_____"
]
],
[
[
"<a id=\"prepare_model\"></a>\n## 6. Prepare Random Forest classification model\n[Top](#top)",
"_____no_output_____"
],
[
"We instantiate a decision-tree based classification algorithm, namely, RandomForestClassifier. Next we define a pipeline to chain together the various transformers and estimators defined during the data preparation step before. \nScikit-learn provides APIs that make it easier to combine multiple algorithms into a single pipeline.\n\nWe fit the pipeline to training data and apply the trained model to transform test data and generate churn risk class prediction.",
"_____no_output_____"
],
[
"[Understanding Random Forest Classifier](https://towardsdatascience.com/understanding-random-forest-58381e0602d2)",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\nmodel_name = \"Random Forest Classifier\"\n\nrandomForestClassifier = RandomForestClassifier(n_estimators=100, max_depth=2,random_state=0)\n",
"_____no_output_____"
]
],
[
[
"Pipelines are a convenient way of designing your data processing in a machine learning flow. The following code example shows how pipelines are set up using sklearn.\n\nRead more [Here](https://scikit-learn.org/stable/modules/classes.html?highlight=pipeline#module-sklearn.pipeline)",
"_____no_output_____"
]
],
[
[
"rfc_model = Pipeline(steps=[('preprocessorAll',preprocessorForAllColumns),('classifier', randomForestClassifier)])",
"_____no_output_____"
]
],
[
[
"<a id=\"train_model\"></a>\n## 7. Train Random Forest classification model\n[Top](#top)",
"_____no_output_____"
]
],
[
[
"# Build models\n\nrfc_model.fit(X_train,y_train)\n",
"_____no_output_____"
]
],
[
[
"<a id=\"test_model\"></a>\n## 8. Test Random Forest classification model\n[Top](#top)",
"_____no_output_____"
]
],
[
[
"\ny_pred_rfc = rfc_model.predict(X_test)\n",
"_____no_output_____"
]
],
[
[
"<a id=\"evaluate_model\"></a>\n## 9. Evaluate Random Forest classification model\n[Top](#top)",
"_____no_output_____"
],
[
"### Model results\n\nIn a supervised classification problem such as churn risk classification, we have a true output and a model-generated predicted output for each data point. For this reason, the results for each data point can be assigned to one of four categories:\n\n1. True Positive (TP) - label is positive and prediction is also positive\n2. True Negative (TN) - label is negative and prediction is also negative\n3. False Positive (FP) - label is negative but prediction is positive\n4. False Negative (FN) - label is positive but prediction is negative\n\nThese four numbers are the building blocks for most classifier evaluation metrics. A fundamental point when considering classifier evaluation is that pure accuracy (i.e. was the prediction correct or incorrect) is not generally a good metric. The reason for this is because a dataset may be highly unbalanced. For example, if a model is designed to predict fraud from a dataset where 95% of the data points are not fraud and 5% of the data points are fraud, then a naive classifier that predicts not fraud, regardless of input, will be 95% accurate. For this reason, metrics like precision and recall are typically used because they take into account the type of error. In most applications there is some desired balance between precision and recall, which can be captured by combining the two into a single metric, called the F-measure.\n\n",
"_____no_output_____"
]
],
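[
[
"A quick, hedged illustration (added here for clarity; the TP/FP/FN counts below are made up and not taken from the project data): computing precision, recall and the F-measure directly from the four building blocks described above.",
"_____no_output_____"
]
],
[
[
"# Made-up counts for a single class, purely for illustration\ntp, fp, fn = 40, 10, 5\n\nprecision = tp / (tp + fp) # of the points predicted positive, how many were correct\nrecall = tp / (tp + fn) # of the truly positive points, how many we found\nf_measure = 2 * precision * recall / (precision + recall)\n\nprint(\"precision=%.2f recall=%.2f F=%.2f\" % (precision, recall, f_measure))",
"_____no_output_____"
]
],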
[
[
"two_d_compare(y_test,y_pred_rfc,model_name)\n\n#three_d_compare(y_test,y_pred_rfc,model_name)",
"_____no_output_____"
]
],
[
[
"### Confusion matrix \n\nIn the graph below we have printed a confusion matrix and a self-explanotary classification report.\n\nThe confusion matrix shows that, 42 mediums were wrongly predicted as high, 2 mediums were wrongly predicted as low and 52 mediums were accurately predicted as mediums.",
"_____no_output_____"
]
],
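[
[
"As a small, self-contained illustration (the labels below are invented, not the project data), here is how a confusion matrix pairs actual classes (rows) with predicted classes (columns):",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix\n\nactual = [\"High\", \"Medium\", \"Medium\", \"Low\", \"High\"]\npredicted = [\"High\", \"High\", \"Medium\", \"Low\", \"High\"]\n\n# rows = actual class, columns = predicted class\nprint(confusion_matrix(actual, predicted, labels=[\"High\", \"Medium\", \"Low\"]))",
"_____no_output_____"
]
],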
[
[
"y_test = label_encoder.inverse_transform(y_test)\ny_pred_rfc = label_encoder.inverse_transform(y_pred_rfc)\nmodel_metrics(y_test,y_pred_rfc)",
"_____no_output_____"
]
],
[
[
"[Precision Recall Fscore support](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html)",
"_____no_output_____"
],
[
"[Understanding the Confusion Matrix](https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826)",
"_____no_output_____"
],
[
"### Comparative study\nIn the bar chart below, we have compared the random forest classification algorithm output classes against the actual values. ",
"_____no_output_____"
]
],
[
[
"uniqueValues, occurCount = np.unique(y_test, return_counts=True)\nfrequency_actual = (occurCount[0],occurCount[2],occurCount[1])\n\nuniqueValues, occurCount = np.unique(y_pred_rfc, return_counts=True)\nfrequency_predicted_rfc = (occurCount[0],occurCount[2],occurCount[1])\n\nn_groups = 3\nfig, ax = plt.subplots(figsize=(10,5))\nindex = np.arange(n_groups)\nbar_width = 0.1\nopacity = 0.8\n\nrects1 = plt.bar(index, frequency_actual, bar_width,\nalpha=opacity,\ncolor='g',\nlabel='Actual')\n\nrects6 = plt.bar(index + bar_width, frequency_predicted_rfc, bar_width,\nalpha=opacity,\ncolor='purple',\nlabel='Random Forest - Predicted')\n\nplt.xlabel('Churn Risk')\nplt.ylabel('Frequency')\nplt.title('Actual vs Predicted frequency.')\nplt.xticks(index + bar_width, ('High', 'Medium', 'Low'))\nplt.legend()\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"<a id=\"model_knn\"></a>\n## 10. Build K-Nearest classification model\n[Top](#top)\n\nK number of nearest points around the data point to be predicted are taken into consideration. These K points at this time, already belong to a class. The data point under consideration, is said to belong to the class with which most number of points from these k points belong to. ",
"_____no_output_____"
]
],
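[
[
"Before fitting the real model, here is a toy sketch (with invented neighbor labels) of the majority vote that K-Nearest Neighbors performs for a single data point:",
"_____no_output_____"
]
],
[
[
"from collections import Counter\n\n# hypothetical labels of the k=5 nearest training points around one test point\nk_nearest_labels = [\"High\", \"Low\", \"High\", \"Medium\", \"High\"]\n\n# the majority vote among the k neighbors decides the predicted class\npredicted_class = Counter(k_nearest_labels).most_common(1)[0][0]\nprint(predicted_class) # -> \"High\"",
"_____no_output_____"
]
],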
[
[
"from sklearn.neighbors import KNeighborsClassifier\n\nmodel_name = \"K-Nearest Neighbor Classifier\"\n\nknnClassifier = KNeighborsClassifier(n_neighbors = 5, metric='minkowski', p=2)\n\nknn_model = Pipeline(steps=[('preprocessorAll',preprocessorForAllColumns),('classifier', knnClassifier)]) \n\nknn_model.fit(X_train,y_train)\n\ny_pred_knn = knn_model.predict(X_test)\n",
"_____no_output_____"
],
[
"y_test = label_encoder.transform(y_test)\ntwo_d_compare(y_test,y_pred_knn,model_name)",
"_____no_output_____"
],
[
"y_test = label_encoder.inverse_transform(y_test)\ny_pred_knn = label_encoder.inverse_transform(y_pred_knn)\nmodel_metrics(y_test,y_pred_knn)",
"_____no_output_____"
]
],
[
[
"<a id=\"compare_classification\"></a>\n## 11. Comparative study of both classification algorithms. \n[Top](#top)\n\n\n ",
"_____no_output_____"
]
],
[
[
"uniqueValues, occurCount = np.unique(y_test, return_counts=True)\nfrequency_actual = (occurCount[0],occurCount[2],occurCount[1])\n\nuniqueValues, occurCount = np.unique(y_pred_rfc, return_counts=True)\nfrequency_predicted_rfc = (occurCount[0],occurCount[2],occurCount[1])\n\nuniqueValues, occurCount = np.unique(y_pred_knn, return_counts=True)\nfrequency_predicted_knn = (occurCount[0],occurCount[2],occurCount[1])\n\nn_groups = 3\nfig, ax = plt.subplots(figsize=(10,5))\nindex = np.arange(n_groups)\nbar_width = 0.1\nopacity = 0.8\n\nrects1 = plt.bar(index, frequency_actual, bar_width,\nalpha=opacity,\ncolor='g',\nlabel='Actual')\n\n\nrects6 = plt.bar(index + bar_width*2, frequency_predicted_rfc, bar_width,\nalpha=opacity,\ncolor='purple',\nlabel='Random Forest - Predicted')\n\nrects4 = plt.bar(index + bar_width*4, frequency_predicted_knn, bar_width,\nalpha=opacity,\ncolor='b',\nlabel='K-Nearest Neighbor - Predicted')\n\nplt.xlabel('Churn Risk')\nplt.ylabel('Frequency')\nplt.title('Actual vs Predicted frequency.')\nplt.xticks(index + bar_width, ('High', 'Medium', 'Low'))\nplt.legend()\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Until evaluation provides satisfactory scores, you would repeat the data preprocessing through evaluating steps by tuning what are called the hyperparameters.",
"_____no_output_____"
],
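[
"# Hedged sketch of hyperparameter tuning (illustrative only: the parameter grid\n# below is invented and this cell is not meant to reproduce any original result).\n# GridSearchCV refits the pipeline for every parameter combination and keeps the\n# best scoring one; pipeline parameters are addressed as <step_name>__<param>.\nfrom sklearn.model_selection import GridSearchCV\n\nparam_grid = {\n \"classifier__n_estimators\": [50, 100, 200],\n \"classifier__max_depth\": [2, 4, 8],\n}\nsearch = GridSearchCV(rfc_model, param_grid, cv=3)\n# search.fit(X_train, y_train)\n# search.best_params_",
"_____no_output_____"
],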
[
"[Choosing the right estimator](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)",
"_____no_output_____"
],
[
"### For a comparative study of some of the current most popular algorithms Please refer to this [tutorial](https://developer.ibm.com/tutorials/learn-classification-algorithms-using-python-and-scikit-learn/)",
"_____no_output_____"
],
[
"<p><font size=-1 color=gray>\n© Copyright 2019 IBM Corp. All Rights Reserved.\n<p>\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file\nexcept in compliance with the License. You may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the\nLicense is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\nexpress or implied. See the License for the specific language governing permissions and\nlimitations under the License.\n</font></p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0b1753b311ae515d9180d99794205c96188e95a | 10,664 | ipynb | Jupyter Notebook | mission_to_mars.ipynb | arlawrenc/web-scraping-challenge | f781b23bac6d2af7df9f751d9692e4977d0be4eb | [
"ADSL"
] | null | null | null | mission_to_mars.ipynb | arlawrenc/web-scraping-challenge | f781b23bac6d2af7df9f751d9692e4977d0be4eb | [
"ADSL"
] | null | null | null | mission_to_mars.ipynb | arlawrenc/web-scraping-challenge | f781b23bac6d2af7df9f751d9692e4977d0be4eb | [
"ADSL"
] | null | null | null | 34.849673 | 257 | 0.550638 | [
[
[
"from bs4 import BeautifulSoup as bs\nfrom splinter import Browser\nimport pandas as pd",
"_____no_output_____"
],
[
"with Browser(\"chrome\") as browser:\n # Visit URL\n url = \"https://mars.nasa.gov/news/\"\n browser.visit(url)\n browser.fill('search', 'splinter - python acceptance testing for web applications')\n # Find and click the 'search' button\n button = browser.find_by_css('input.search_submit')\n print(button[0])\n # Interact with elements\n button[0].click()\n if browser.is_text_present('There are no items matching these criteria.'):\n print(\"Yes, the official website was found!\")\n else:\n print(\"No, it wasn't found... We need to improve our SEO techniques\")",
"<splinter.driver.webdriver.WebDriverElement object at 0x0000027A80EFAF88>\nNo, it wasn't found... We need to improve our SEO techniques\n"
],
[
"#Scrape the [NASA Mars News Site](https://mars.nasa.gov/news/) and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.\n\n#```python\n# Example:\nnews_title = \"NASA's Next Mars Mission to Investigate Interior of Red Planet\"\n\nnews_p = \"Preparation of NASA's next spacecraft to Mars, InSight, has ramped up this summer, on course for launch next May from Vandenberg Air Force Base in central California -- the first interplanetary launch in history from America's West Coast.\"",
"_____no_output_____"
],
[
"with Browser(\"chrome\") as browser:\n url = \"https://mars.nasa.gov/news/\"\n browser.visit(url)\n html = bs(browser.html, 'html.parser')\n \n body = html.body\n# body.strippedtext\n# print(body.a)\n\n print(body.find_all(\"div\", class_='content_title')[1].getText())\n# titles = body.find_all(\"div\", class_='content_title')\n# first = titles[1].getText()\n# print(first)\n ",
"\n\nAlabama High School Student Names NASA's Mars Helicopter\n\n\n"
],
[
"# Use splinter to navigate the site and find the image url for the current Featured Mars Image and \n#assign the url string to a variable called `featured_image_url`.\n\n# * Make sure to find the image url to the full size `.jpg` image.\n\n\n# * Make sure to save a complete url string for this image.\n\n# ```python\n# # Example:\n# featured_image_url = 'https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA16225_hires.jpg'\n\nbase_url = \"https://www.jpl.nasa.gov\"\nsearch_url = '/spaceimages/?search=&category=Mars'\nurl = base_url + search_url\nfeatured_image_url = None\nwith Browser(\"chrome\") as browser:\n browser.visit(url)\n image = browser.find_by_id('full_image')[0]\n featured_image_url = base_url + image['data-fancybox-href']\n print(featured_image_url)\n",
"https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA17838_ip.jpg\n"
],
[
"#Visit the Mars Weather twitter account [here](https://twitter.com/marswxreport?lang=en) and \n#scrape the latest Mars weather tweet from the page. Save the tweet text for the weather \n#report as a variable called `mars_weather`.\n#Note: Be sure you are not signed in to twitter, or scraping may become more difficult.**\nimport time\nwith Browser(\"chrome\") as browser:\n url_weather = \"https://twitter.com/marswxreport?lang=en\"\n browser.visit(url_weather) \n html_weather = browser.html\n time.sleep(5)\n soup = bs(html_weather, \"html.parser\")\n main = soup.main\n temp = main.find_all('section', attrs={\"aria-labelledby\": \"accessible-list-0\"})\n elements = soup.find_all(\"section\", class_=\"css-1dbjc4n\")\n for e in elements:\n print(e)\n print(temp.text)\n print(temp[0])\n# url= \"https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars\"\n# with Browser(\"chrome\") as browser:\n\n# browser.visit(url) \n# html = browser.html\n# soup = bs(html, 'html.parser')\n# xpath = \"//div[@class='description']//a[@class='itemLink product-item']/h3\"\n# results = browser.find_by_xpath(xpath)\n# hemisphere_image_urls = []\n# for i in range(4):\n# html = browser.html\n# soup = bs(html, 'html.parser')\n# # find the new Splinter elements\n# results = browser.find_by_xpath(xpath)\n# # save name of the hemisphere\n# header = results[i].html\n# # go to hemisphere details page \n# details_link = results[i]\n# details_link.click()\n\n# html = browser.html\n# soup = bs(html, 'html.parser')\n# # Save the image url\n# hemisphere_image_urls.append({\"title\": header, \"image_url\": soup.find(\"div\", class_=\"downloads\").a[\"href\"]})\n# # Go back to the original page\n# browser.back()\n\n# print(hemisphere_image_urls)\n",
"_____no_output_____"
],
[
"#Visit the Mars Facts webpage [here](https://space-facts.com/mars/) and use Pandas to scrape the table \n#containing facts about the planet including Diameter, Mass, etc.\nurl = \"https://space-facts.com/mars/\"\ndata = pd.read_html(url)\ndf = data[0]\n\ndf.columns = ['Description', 'Value']\ndf.set_index('Description', inplace=True)\nmars_html_table = df.to_html()\nprint(df)\n\n\n#Use Pandas to convert the data to a HTML table string.\n",
"_____no_output_____"
],
[
"# * Visit the USGS Astrogeology site [here](https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars)\n# to obtain high resolution images for each of Mar's hemispheres.\n\n# * You will need to click each of the links to the hemispheres in order to find \n# the image url to the full resolution image.\n\n# * Save both the image url string for the full resolution hemisphere image, and \n# the Hemisphere title containing the hemisphere name. Use a Python dictionary to store \n#the data using the keys `img_url` and `title`.\n\n# * Append the dictionary with the image url string and the hemisphere title to a list.\n# This list will contain one dictionary for each hemisphere.\n\n# ```python\n# # Example:\n# hemisphere_image_urls = [\n# {\"title\": \"Valles Marineris Hemisphere\", \"img_url\": \"...\"},\n# {\"title\": \"Cerberus Hemisphere\", \"img_url\": \"...\"},\n# {\"title\": \"Schiaparelli Hemisphere\", \"img_url\": \"...\"},\n# {\"title\": \"Syrtis Major Hemisphere\", \"img_url\": \"...\"},\n# ]\n# ```\n\nurl= \"https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars\"\nwith Browser(\"chrome\") as browser:\n\n browser.visit(url) \n html = browser.html\n soup = bs(html, 'html.parser')\n xpath = \"//div[@class='description']//a[@class='itemLink product-item']/h3\"\n results = browser.find_by_xpath(xpath)\n hemisphere_image_urls = []\n for i in range(4):\n html = browser.html\n soup = bs(html, 'html.parser')\n # find the new Splinter elements\n results = browser.find_by_xpath(xpath)\n # save name of the hemisphere\n header = results[i].html\n # go to hemisphere details page \n details_link = results[i]\n details_link.click()\n\n html = browser.html\n soup = bs(html, 'html.parser')\n # Save the image url\n hemisphere_image_urls.append({\"title\": header, \"image_url\": soup.find(\"div\", class_=\"downloads\").a[\"href\"]})\n # Go back to the original page\n browser.back()\n\n print(hemisphere_image_urls)\n #to make it prettier\n for i in hemisphere_image_urls: print(i)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b179fae28d13c1f4d354eef1c03cf2c6fa1b0c | 6,836 | ipynb | Jupyter Notebook | Intro-to-Python/00.ipynb | koshikraj/PythonLectures | c274c5a62cfb4c90942a4756acd1b9f3c3a538f4 | [
"CC-BY-3.0"
] | null | null | null | Intro-to-Python/00.ipynb | koshikraj/PythonLectures | c274c5a62cfb4c90942a4756acd1b9f3c3a538f4 | [
"CC-BY-3.0"
] | null | null | null | Intro-to-Python/00.ipynb | koshikraj/PythonLectures | c274c5a62cfb4c90942a4756acd1b9f3c3a538f4 | [
"CC-BY-3.0"
] | null | null | null | 46.189189 | 645 | 0.665594 | [
[
[
"<small><small><i>\nIntroduction to Python - available from https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git\n\nThe original version was written by Rajath Kumar and is available at https://github.com/rajathkumarmp/Python-Lectures.\nThe notes have been updated for Python 3 and amended for use in Monash University mathematics courses by [Andreas Ernst](http://users.monash.edu.au/~andreas) \n</small></small></i>",
"_____no_output_____"
],
[
"# Python-Lectures",
"_____no_output_____"
],
[
"## Introduction\n\nPython is a modern, robust, high level programming language. It is very easy to pick up even if you are completely new to programming. \n\nPython, similar to other languages like matlab or R, is interpreted hence runs slowly compared to C++, Fortran or Java. However writing programs in Python is very quick. Python has a very large collection of libraries for everything from scientific computing to web services. It caters for object oriented and functional programming with module system that allows large and complex applications to be developed in Python. \n\nThese lectures are using jupyter notebooks which mix Python code with documentation. The python notebooks can be run on a webserver or stand-alone on a computer.\n\nTo give an indication of what Python code looks like, here is a simple bit of code that defines a set $N=\\{1,3,4,5,7\\}$ and calculates the sum of the squared elements of this set: $$\\sum_{i\\in N} i^2=100$$",
"_____no_output_____"
]
],
[
[
"N={1,3,4,5,7,8}\nprint('The sum of ∑_i∈N i*i =',sum( i**2 for i in N ) )",
"The sum of ∑_i∈N i*i = 164\n"
]
],
[
[
"## Contents\n\nThis course is broken up into a number of notebooks (chapters).\n\n* [00](00.ipynb) This introduction with additional information below on how to get started in running python\n* [01](01.ipynb) Basic data types and operations (numbers, strings) \n* [02](02.ipynb) String manipulation \n* [03](03.ipynb) Data structures: Lists and Tuples\n* [04](04.ipynb) Data structures (continued): dictionaries\n* [05](05.ipynb) Control statements: if, for, while, try statements\n* [06](06.ipynb) Functions\n* [07](07.ipynb) Classes and basic object oriented programming\n* [08](08.ipynb) Scipy: libraries for arrays (matrices) and plotting\n* [09](09.ipynb) Mixed Integer Linear Programming using the mymip library.\n* [10](10.ipynb) Networks and graphs under python - a very brief introduction\n* [11](11.ipynb) Using the numba library for fast numerical computing.\n\nThis is a tutorial style introduction to Python. For a quick reminder / summary of Python syntax the following [Quick Reference Card](http://www.cs.put.poznan.pl/csobaniec/software/python/py-qrc.html) may be useful. A longer and more detailed tutorial style introduction to python is available from the python site at: https://docs.python.org/3/tutorial/\n",
"_____no_output_____"
],
[
"## Installation\n\n### Loging into the web server\nThe easiest way to run this and other notebooks for staff and students at Monash University is to log into the Jupyter server at [https://sci-web17-v01.ocio.monash.edu.au/hub](https://sci-web17-v01.ocio.monash.edu.au/hub). The steps for running notebooks are:\n* Log in using your monash email address. The first time you log in an empty account will automatically be set up for you.\n* Press the start button (if prompted by the system)\n* Use the menu of the jupyter system to upload a .ipynb python notebook file or to start a new notebook.\n\n### Installing \n\nPython runs on windows, linux, mac and other environments. There are many python distributions available. However the recommended way to install python under Microsoft Windows or Linux is to use the Anaconda distribution available at [https://www.continuum.io/downloads](https://www.continuum.io/downloads). Make sure to get the Python *3.6* version, not 2.7. This distribution comes with the [SciPy](https://www.scipy.org/) collection of scientific python tools as well as the iron python notebook. For developing python code without notebooks consider using [spyder](https://github.com/spyder-ide/spyder) (also included with Anaconda)\n\nTo open a notebook with anaconda installed, from the terminal run:\n\n ipython notebook\n\nNote that for the Monash University optimisation course additional modules relating to the commercial optimisation library [CPLEX](http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/index.html) and possibly [Gurobi](http://www.gurobi.com/) will be used. These libraries are not available as part of any standard distribution but are available under academic licence. Cplex is included on the [Monash server](https://sci-web17-v01.ocio.monash.edu.au/hub).",
"_____no_output_____"
],
[
"## How to learn from this resource?\n\nDownload all the notebooks from Moodle or https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git\n\nUpload them to the monash server and lauch them or launch ipython notebook from the folder which contains the notebooks. Open each one of them\n\nCell > All Output > Clear\n\nThis will clear all the outputs and now you can understand each statement and learn interactively.\n",
"_____no_output_____"
],
[
"## License\nThis work is licensed under the Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0b181da7c65358acb2ebd6afc94957e4c658458 | 1,648 | ipynb | Jupyter Notebook | notebooks/20201012-earthquake-dmg-us-test-func.ipynb | saritepe/earthquake_dmg-drivendata | e0ee9fe839bb0fa643e56736e5561758b21abff1 | [
"MIT"
] | 1 | 2020-10-14T09:16:42.000Z | 2020-10-14T09:16:42.000Z | notebooks/20201012-earthquake-dmg-us-test-func.ipynb | saritepe/earthquake_dmg-drivendata | e0ee9fe839bb0fa643e56736e5561758b21abff1 | [
"MIT"
] | null | null | null | notebooks/20201012-earthquake-dmg-us-test-func.ipynb | saritepe/earthquake_dmg-drivendata | e0ee9fe839bb0fa643e56736e5561758b21abff1 | [
"MIT"
] | null | null | null | 30.518519 | 443 | 0.574636 | [
[
[
"from ..src.data.get_config import get_config",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0b1855dc1c9c28956e3acc4aeddb9dd728f33bf | 313,105 | ipynb | Jupyter Notebook | Course2week5-Lasso/Overfitting_Demo_Ridge_Lasso.ipynb | bluove/Machine-Learning-Specialization | 2be733fe2b5a0b2848001d851632f413c30ab93d | [
"Apache-2.0"
] | null | null | null | Course2week5-Lasso/Overfitting_Demo_Ridge_Lasso.ipynb | bluove/Machine-Learning-Specialization | 2be733fe2b5a0b2848001d851632f413c30ab93d | [
"Apache-2.0"
] | null | null | null | Course2week5-Lasso/Overfitting_Demo_Ridge_Lasso.ipynb | bluove/Machine-Learning-Specialization | 2be733fe2b5a0b2848001d851632f413c30ab93d | [
"Apache-2.0"
] | null | null | null | 229.213031 | 20,766 | 0.894182 | [
[
[
"# Overfitting demo\n\n## Create a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:",
"_____no_output_____"
]
],
[
[
"import graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"Create random values for x in interval [0,1)",
"_____no_output_____"
]
],
[
[
"random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()",
"2016-04-25 03:15:49,473 [INFO] graphlab.cython.cy_server, 176: GraphLab Create v1.8.5 started. Logging: C:\\Users\\PHILIP~1\\AppData\\Local\\Temp\\graphlab_server_1461579348.log.0\n"
]
],
[
[
"Compute y",
"_____no_output_____"
]
],
[
[
"y = x.apply(lambda x: math.sin(4*x))",
"_____no_output_____"
]
],
[
[
"Add random Gaussian noise to y",
"_____no_output_____"
]
],
[
[
"random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e",
"_____no_output_____"
]
],
[
[
"### Put data into an SFrame to manipulate later",
"_____no_output_____"
]
],
[
[
"data = graphlab.SFrame({'X1':x,'Y':y})\ndata",
"_____no_output_____"
]
],
[
[
"### Create a function to plot the data, since we'll do it many times",
"_____no_output_____"
]
],
[
[
"def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)",
"_____no_output_____"
]
],
[
[
"## Define some useful polynomial regression functions",
"_____no_output_____"
],
[
"Define a function to create our features for a polynomial regression model of any degree:",
"_____no_output_____"
]
],
[
[
"def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy",
"_____no_output_____"
]
],
[
[
"Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":",
"_____no_output_____"
]
],
[
[
"def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model",
"_____no_output_____"
]
],
[
[
"Define function to plot data and predictions made, since we are going to use it many times.",
"_____no_output_____"
]
],
[
[
"def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])",
"_____no_output_____"
]
],
[
[
"Create a function that prints the polynomial coefficients in a pretty way :)",
"_____no_output_____"
]
],
[
[
"def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)",
"_____no_output_____"
]
],
[
[
"## Fit a degree-2 polynomial",
"_____no_output_____"
],
[
"Fit our degree-2 polynomial to the data generated above:",
"_____no_output_____"
]
],
[
[
"model = polynomial_regression(data, deg=2)",
"_____no_output_____"
]
],
[
[
"Inspect learned parameters",
"_____no_output_____"
]
],
[
[
"print_coefficients(model)",
"Learned polynomial for degree 2:\n 2\n-5.129 x + 4.147 x + 0.07471\n"
]
],
[
[
"Form and plot our predictions along a grid of x values:",
"_____no_output_____"
]
],
[
[
"plot_poly_predictions(data,model)",
"_____no_output_____"
]
],
[
[
"## Fit a degree-4 polynomial",
"_____no_output_____"
]
],
[
[
"model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)",
"Learned polynomial for degree 4:\n 4 3 2\n23.87 x - 53.82 x + 35.23 x - 6.828 x + 0.7755\n"
]
],
[
[
"## Fit a degree-16 polynomial",
"_____no_output_____"
]
],
[
[
"model = polynomial_regression(data, deg=16)\nprint_coefficients(model)",
"Learned polynomial for degree 16:\n 16 15 14 13\n-4.537e+05 x + 1.129e+06 x + 4.821e+05 x - 3.81e+06 x \n 12 11 10 9\n + 3.536e+06 x + 5.753e+04 x - 1.796e+06 x + 2.178e+06 x\n 8 7 6 5 4\n - 3.662e+06 x + 4.442e+06 x - 3.13e+06 x + 1.317e+06 x - 3.356e+05 x\n 3 2\n + 5.06e+04 x - 4183 x + 160.8 x - 1.621\n"
]
],
[
[
"###Woah!!!! Those coefficients are *crazy*! On the order of 10^6.",
"_____no_output_____"
]
],
[
[
"plot_poly_predictions(data,model)",
"_____no_output_____"
]
],
[
[
"### Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.",
"_____no_output_____"
],
[
"# ",
"_____no_output_____"
],
[
"# ",
"_____no_output_____"
],
[
" # ",
"_____no_output_____"
],
[
" # ",
"_____no_output_____"
],
[
"# Ridge Regression",
"_____no_output_____"
],
[
"Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controled by a parameter lambda (here called \"L2_penalty\").",
"_____no_output_____"
],
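[
"# Illustrative only (toy numbers, not from the fits below): ridge minimizes\n# RSS(w) + l2_penalty * ||w||_2^2, so fits with large coefficients pay a price.\nw_toy = numpy.array([2.0, -3.0, 0.5])\nrss_toy = 10.0 # pretend residual sum of squares for some fit\nl2_penalty_toy = 0.1\nprint 'ridge cost =', rss_toy + l2_penalty_toy * numpy.sum(w_toy ** 2)",
"_____no_output_____"
],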
[
"Define our function to solve the ridge objective for a polynomial regression model of any degree:",
"_____no_output_____"
]
],
[
[
"def polynomial_ridge_regression(data, deg, l2_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n return model",
"_____no_output_____"
]
],
[
[
"## Perform a ridge fit of a degree-16 polynomial using a *very* small penalty strength",
"_____no_output_____"
]
],
[
[
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)",
"Learned polynomial for degree 16:\n 16 15 14 13\n-4.537e+05 x + 1.129e+06 x + 4.821e+05 x - 3.81e+06 x \n 12 11 10 9\n + 3.536e+06 x + 5.753e+04 x - 1.796e+06 x + 2.178e+06 x\n 8 7 6 5 4\n - 3.662e+06 x + 4.442e+06 x - 3.13e+06 x + 1.317e+06 x - 3.356e+05 x\n 3 2\n + 5.06e+04 x - 4183 x + 160.8 x - 1.621\n"
],
[
"plot_poly_predictions(data,model)",
"_____no_output_____"
]
],
[
[
"## Perform a ridge fit of a degree-16 polynomial using a very large penalty strength",
"_____no_output_____"
]
],
[
[
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)",
"Learned polynomial for degree 16:\n 16 15 14 13 12 11\n-0.301 x - 0.2802 x - 0.2604 x - 0.2413 x - 0.2229 x - 0.205 x \n 10 9 8 7 6 5\n - 0.1874 x - 0.1699 x - 0.1524 x - 0.1344 x - 0.1156 x - 0.09534 x\n 4 3 2\n - 0.07304 x - 0.04842 x - 0.02284 x - 0.002257 x + 0.6416\n"
],
[
"plot_poly_predictions(data,model)",
"_____no_output_____"
]
],
[
[
"## Let's look at fits for a sequence of increasing lambda values",
"_____no_output_____"
]
],
[
[
"for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:\n model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n print 'lambda = %.2e' % l2_penalty\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)",
"lambda = 1.00e-25\nLearned polynomial for degree 16:\n 16 15 14 13\n-4.537e+05 x + 1.129e+06 x + 4.821e+05 x - 3.81e+06 x \n 12 11 10 9\n + 3.536e+06 x + 5.753e+04 x - 1.796e+06 x + 2.178e+06 x\n 8 7 6 5 4\n - 3.662e+06 x + 4.442e+06 x - 3.13e+06 x + 1.317e+06 x - 3.356e+05 x\n 3 2\n + 5.06e+04 x - 4183 x + 160.8 x - 1.621\n\n\nlambda = 1.00e-10\nLearned polynomial for degree 16:\n 16 15 14 13\n4.975e+04 x - 7.821e+04 x - 2.265e+04 x + 3.949e+04 x \n 12 11 10 9 8\n + 4.366e+04 x + 3074 x - 3.332e+04 x - 2.786e+04 x + 1.032e+04 x\n 7 6 5 4 3 2\n + 2.962e+04 x - 1440 x - 2.597e+04 x + 1.839e+04 x - 5596 x + 866.1 x - 65.19 x + 2.159\n\n\nlambda = 1.00e-06\nLearned polynomial for degree 16:\n 16 15 14 13 12 11\n329.1 x - 356.4 x - 264.2 x + 33.8 x + 224.7 x + 210.8 x \n 10 9 8 7 6 5 4\n + 49.62 x - 122.4 x - 178 x - 79.13 x + 84.89 x + 144.9 x + 5.123 x\n 3 2\n - 156.9 x + 88.21 x - 14.82 x + 1.059\n\n\nlambda = 1.00e-03\nLearned polynomial for degree 16:\n 16 15 14 13 12 11\n6.364 x - 1.596 x - 4.807 x - 4.778 x - 2.776 x + 0.1238 x \n 10 9 8 7 6 5\n + 2.977 x + 4.926 x + 5.203 x + 3.248 x - 0.9291 x - 6.011 x\n 4 3 2\n - 8.395 x - 2.655 x + 9.861 x - 2.225 x + 0.5636\n\n\nlambda = 1.00e+02\nLearned polynomial for degree 16:\n 16 15 14 13 12 11\n-0.301 x - 0.2802 x - 0.2604 x - 0.2413 x - 0.2229 x - 0.205 x \n 10 9 8 7 6 5\n - 0.1874 x - 0.1699 x - 0.1524 x - 0.1344 x - 0.1156 x - 0.09534 x\n 4 3 2\n - 0.07304 x - 0.04842 x - 0.02284 x - 0.002257 x + 0.6416\n\n\n"
],
[
"data",
"_____no_output_____"
]
],
[
[
"## Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength",
"_____no_output_____"
],
[
"We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.",
"_____no_output_____"
]
],
[
[
"# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features\n data = polynomial_features(data, deg)\n \n # Create as many folds for cross validatation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty",
"_____no_output_____"
]
],
[
[
"Run LOO cross validation for \"num\" values of lambda, on a log scale",
"_____no_output_____"
]
],
[
[
"l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)",
"_____no_output_____"
]
],
[
[
"Plot results of estimating LOO for each value of lambda",
"_____no_output_____"
]
],
[
[
"plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\ell_2$ penalty')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')",
"_____no_output_____"
]
],
[
[
"Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit",
"_____no_output_____"
]
],
[
[
"best_l2_penalty",
"_____no_output_____"
],
[
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)",
"Learned polynomial for degree 16:\n 16 15 14 13 12 11\n1.345 x + 1.141 x + 0.9069 x + 0.6447 x + 0.3569 x + 0.04947 x \n 10 9 8 7 6 5\n - 0.2683 x - 0.5821 x - 0.8701 x - 1.099 x - 1.216 x - 1.145 x\n 4 3 2\n - 0.7837 x - 0.07406 x + 0.7614 x + 0.7703 x + 0.3918\n"
],
[
"plot_poly_predictions(data,model)",
"_____no_output_____"
]
],
[
[
"# ",
"_____no_output_____"
],
[
"# ",
"_____no_output_____"
],
[
"# ",
"_____no_output_____"
],
[
"# ",
"_____no_output_____"
],
[
"# Lasso Regression",
"_____no_output_____"
],
[
"Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.",
"_____no_output_____"
],
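[
"# Illustrative only (same toy numbers as the ridge sketch): lasso minimizes\n# RSS(w) + l1_penalty * ||w||_1, and the L1 cost is what pushes some\n# coefficients exactly to zero.\nw_toy = numpy.array([2.0, -3.0, 0.5])\nrss_toy = 10.0\nl1_penalty_toy = 0.1\nprint 'lasso cost =', rss_toy + l1_penalty_toy * numpy.sum(numpy.abs(w_toy))",
"_____no_output_____"
],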
[
"Define our function to solve the lasso objective for a polynomial regression model of any degree:",
"_____no_output_____"
]
],
[
[
"def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model",
"_____no_output_____"
]
],
[
[
"## Explore the lasso solution as a function of a few different penalty strengths",
"_____no_output_____"
],
[
"We refer to lambda in the lasso case below as \"l1_penalty\"",
"_____no_output_____"
]
],
[
[
"for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))",
"l1_penalty = 1.000000e-04\nnumber of nonzeros = 17\nLearned polynomial for degree 16:\n 16 15 14 13 12 11\n29.02 x + 1.35 x - 12.72 x - 16.93 x - 13.82 x - 6.698 x \n 10 9 8 7 6 5\n + 1.407 x + 8.939 x + 12.88 x + 11.44 x + 3.759 x - 8.062 x\n 4 3 2\n - 16.28 x - 7.682 x + 17.86 x - 4.384 x + 0.685\n\n\nl1_penalty = 1.000000e-02\nnumber of nonzeros = 14\nLearned polynomial for degree 16:\n 16 15 11 10 9 8\n-1.18 x - 0.001318 x + 0.08745 x + 0.7389 x + 3.828 x + 0.4761 x\n 7 6 5 4 3 2\n + 0.1282 x + 0.001952 x - 0.6151 x - 10.11 x - 0.0003954 x + 6.686 x - 1.28 x + 0.5056\n\n\nl1_penalty = 1.000000e-01\nnumber of nonzeros = 5\nLearned polynomial for degree 16:\n 16 6 5\n2.21 x - 1.002 x - 2.962 x + 1.216 x + 0.3473\n\n\nl1_penalty = 1.000000e+01\nnumber of nonzeros = 2\nLearned polynomial for degree 16:\n 9\n-1.526 x + 0.5755\n\n\n"
]
],
[
[
"Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0b19a5b9a8a04776169ee952d1aa4f2180bb67c | 38,333 | ipynb | Jupyter Notebook | Chapter02/Activity02_01/Activity02_01.ipynb | adityashah95/The-Reinforcement-Learning-Workshop | 6efe78c68379dd27df6ff56846df49eb60ac81f1 | [
"MIT"
] | null | null | null | Chapter02/Activity02_01/Activity02_01.ipynb | adityashah95/The-Reinforcement-Learning-Workshop | 6efe78c68379dd27df6ff56846df49eb60ac81f1 | [
"MIT"
] | null | null | null | Chapter02/Activity02_01/Activity02_01.ipynb | adityashah95/The-Reinforcement-Learning-Workshop | 6efe78c68379dd27df6ff56846df49eb60ac81f1 | [
"MIT"
] | null | null | null | 113.411243 | 16,404 | 0.84812 | [
[
[
"# Gridworld",
"_____no_output_____"
],
[
"\n\nThe Gridworld environment (inspired from Sutton and Barto, Reinforcement Learning: an Introduction) is represented in figure. The environment is a finite MDP in which states are represented by grid cells. The available actions are 4: left, right, up, down. Actions move the current state in the action direction and the associated reward is 0 for all actions. Exceptions are:\n\n- Border cells: if the action brings the agent outside of the grid the agent state does not change and the agent receives a reward of -1.\n\n- Good cells: $G_1$ and $G_2$ are special cells. For these cells each action brings the agent in state $G_1$' and $G_2$' respectively. The associated reward is +10 for going outside state $G_1$ and +5 for going outside state $G_2$.\n\n- Bad cells: $B_1$ and $B_2$ are bad cells. For these cells the associated reward is -1 for all actions.\n\nThe goal of the activity is to calculate and represent visually the state values for the random policy, in which the agent selects each action with equal probability (1/4) in all states. The discount factor is assumed to be equal to 0.9.",
"_____no_output_____"
],
[
"## Solution",
"_____no_output_____"
],
[
"Imports:",
"_____no_output_____"
]
],
[
[
"from enum import Enum, auto\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import linalg\nfrom typing import Tuple",
"_____no_output_____"
]
],
[
[
"Visualization function:",
"_____no_output_____"
]
],
[
[
"# helper function\ndef vis_matrix(M, cmap=plt.cm.Blues):\n fig, ax = plt.subplots()\n ax.matshow(M, cmap=cmap)\n for i in range(M.shape[0]):\n for j in range(M.shape[1]):\n c = M[j, i]\n ax.text(i, j, \"%.2f\" % c, va=\"center\", ha=\"center\")",
"_____no_output_____"
],
[
"# Define the actions\nclass Action(Enum):\n UP = auto()\n DOWN = auto()\n LEFT = auto()\n RIGHT = auto()\n\n\n# Agent Policy, random\nclass Policy:\n def __init__(self):\n self._possible_actions = [action for action in Action]\n self._action_probs = {\n a: 1 / len(self._possible_actions) for a in self._possible_actions\n }\n\n def __call__(self, state: Tuple[int, int], action: Action) -> float:\n \"\"\"\n Returns the action probability\n \"\"\"\n assert action in self._possible_actions\n # state is unused for this policy\n return self._action_probs[action]",
"_____no_output_____"
],
[
"class Environment:\n def __init__(self):\n self.grid_width = 5\n self.grid_height = 5\n self._good_state1 = (0, 1)\n self._good_state2 = (0, 3)\n self._to_state1 = (4, 2)\n self._to_state2 = (2, 3)\n self._bad_state1 = (1, 1)\n self._bad_state2 = (4, 4)\n self._bad_states = [self._bad_state1, self._bad_state2]\n self._good_states = [self._good_state1, self._good_state2]\n self._to_states = [self._to_state1, self._to_state2]\n self._good_rewards = [10, 5]\n\n def step(self, state, action):\n i, j = state\n for good_state, reward, to_state in zip(\n self._good_states, self._good_rewards, self._to_states\n ):\n if (i, j) == good_state:\n return (to_state, reward)\n reward = 0\n if state in self._bad_states:\n reward = -1\n if action == Action.LEFT:\n j_next = max(j - 1, 0)\n i_next = i\n if j - 1 < 0:\n reward = -1\n elif action == Action.RIGHT:\n j_next = min(j + 1, self.grid_width - 1)\n i_next = i\n if j + 1 > self.grid_width - 1:\n reward = -1\n elif action == Action.UP:\n j_next = j\n i_next = max(i - 1, 0)\n if i - 1 < 0:\n reward = -1\n elif action == Action.DOWN:\n j_next = j\n i_next = min(i + 1, self.grid_height - 1)\n if i + 1 > self.grid_height - 1:\n reward = -1\n else:\n raise ValueError(\"Invalid action\")\n return ((i_next, j_next), reward)",
"_____no_output_____"
]
],
[
[
"Probability and reward matrix:",
"_____no_output_____"
]
],
[
[
"pi = Policy()\nenv = Environment()\n\n# setup probability matrix and reward matrix\nP = np.zeros((env.grid_width * env.grid_height, env.grid_width * env.grid_height))\nR = np.zeros_like(P)\npossible_actions = [action for action in Action]\n\n# Loop for all states and fill up P and R\nfor i in range(env.grid_height):\n for j in range(env.grid_width):\n state = (i, j)\n # loop for all action and setup P and R\n for action in possible_actions:\n next_state, reward = env.step(state, action)\n (i_next, j_next) = next_state\n P[i * env.grid_width + j, i_next * env.grid_width + j_next] += pi(\n state, action\n )\n # the reward depends only on the starting state and the final state\n R[i * env.grid_width + j, i_next * env.grid_width + j_next] = reward",
"_____no_output_____"
],
[
"# check the correctness\nassert((np.sum(P, axis=1) == 1).all())",
"_____no_output_____"
],
[
"# expected reward for each state\nR_expected = np.sum(P * R, axis=1, keepdims=True)",
"_____no_output_____"
],
[
"# reshape the state values in a matrix\nR_square = R_expected.reshape((env.grid_height,env.grid_width))\n# Visualize\nvis_matrix(R_square, cmap=plt.cm.Reds)",
"_____no_output_____"
]
],
[
[
"The previous figure is a color representation of the expected reward associated to each state considering the current policy. Notice the expected reward of bad states is exactly equal to -1. The expected reward of good states is exactly equal to 10 and 5 respectively.",
"_____no_output_____"
]
],
[
[
"# define the discount factor\ngamma = 0.9",
"_____no_output_____"
],
[
"# Now it is possible to solve the Bellman Equation\nA = np.eye(env.grid_width*env.grid_height) - gamma * P\nB = R_expected",
"_____no_output_____"
],
[
"# solve using scipy linalg\nV = linalg.solve(A, B)",
"_____no_output_____"
],
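[
"# Sanity check (added for illustration): the solution should satisfy the\n# Bellman equation V = R_expected + gamma * P V up to numerical precision.\nprint(np.allclose(V, R_expected + gamma * P @ V))",
"_____no_output_____"
],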
[
"# reshape the state values in a matrix\nV_square = V.reshape((env.grid_height,env.grid_width))\n# visualize results\nvis_matrix(V_square, cmap=plt.cm.Reds)",
"_____no_output_____"
]
],
[
[
"Notice that the value of good states is less than the expected reward from those states. This is because landing states have an expected reward that is negative or because landing states are close to states for which the reward is negative. You can notice that the state with higher value is state $G_1$, followed by state $G_2$. It is also interesting to notice the high value of state in position (1, 2), being close to good states.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0b1ab7564f60832d9801e28ca3552a1cbe1054c | 101,595 | ipynb | Jupyter Notebook | notebook/reward_result.ipynb | kuto5046/kaggle-football | 8e51a5b98548a9ea3c4f16064eed228b0f6ce9b4 | [
"MIT"
] | null | null | null | notebook/reward_result.ipynb | kuto5046/kaggle-football | 8e51a5b98548a9ea3c4f16064eed228b0f6ce9b4 | [
"MIT"
] | 4 | 2020-12-01T10:15:16.000Z | 2020-12-01T10:15:46.000Z | notebook/reward_result.ipynb | kuto5046/kaggle-football | 8e51a5b98548a9ea3c4f16064eed228b0f6ce9b4 | [
"MIT"
] | null | null | null | 84.733111 | 55,198 | 0.650544 | [
[
[
"## evaluation episodeのrewardsの結果を見るnotebook",
"_____no_output_____"
]
],
[
[
"# import csv\n# def text_csv_converter(datas):\n# file_csv = datas.replace(\"txt\", \"csv\")\n# with open(datas) as rf:\n# with open(file_csv, \"w\") as wf:\n# readfile = rf.readlines()\n# for read_text in readfile:\n# read_text = read_text.split()\n# writer = csv.writer(wf, delimiter=',')\n# writer.writerow(read_text)\n\n# filename = \"/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent2/1-7results/scores.txt\"\n# text_csv_converter(filename)",
"_____no_output_____"
],
[
"import pandas as pd\ndf1 = pd.read_csv(\"/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent/1-6results/scores.csv\")\ndf2 = pd.read_csv(\"/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent/7-10results/scores.csv\")",
"_____no_output_____"
],
[
"df1 = df1[df1[\"episodes\"] < 207]\ndf1",
"_____no_output_____"
],
[
"import numpy as np\ndf2 = df2.iloc[6:]\nepi = np.arange(207, len(df2)+207)\ndf2[\"episodes\"] = epi\ndf2",
"_____no_output_____"
],
[
"df = pd.concat([df1, df2])\ndf.to_csv('/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent/scores.csv')",
"_____no_output_____"
],
[
"df = pd.read_csv(\"/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent2/scores.csv\")\ndf",
"_____no_output_____"
],
[
"df['roll_median'] = df['median'].rolling(5).mean()\ndf",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n\n# visualize reward each episodes\nfig = plt.figure(figsize=(7, 5))\nax1 = fig.add_subplot(111)\nax1.set_title(\"median reward\")\nsns.lineplot(x=\"episodes\", y=\"roll_median\", data=df, ax=ax1, color=(0, 0, 1, 1))\nsns.lineplot(x=\"episodes\", y=\"median\", data=df, ax=ax1, color=(0, 0, 1, 0.2))\nplt.tight_layout()\nplt.savefig(\"/content/drive/My Drive/Gfootball/kaggle_simulations/rainbow/agent2/result.png\")\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b1abb96bc9e05817dc10efb8240d71ebe6a8d5 | 7,620 | ipynb | Jupyter Notebook | 28x28MNIST_example.ipynb | ninomiyalab/Memory-LessMomentumQuasi-Newton | 8e0f1d832028eafa29628f4cf94ed84b81a40801 | [
"MIT"
] | null | null | null | 28x28MNIST_example.ipynb | ninomiyalab/Memory-LessMomentumQuasi-Newton | 8e0f1d832028eafa29628f4cf94ed84b81a40801 | [
"MIT"
] | null | null | null | 28x28MNIST_example.ipynb | ninomiyalab/Memory-LessMomentumQuasi-Newton | 8e0f1d832028eafa29628f4cf94ed84b81a40801 | [
"MIT"
] | null | null | null | 45.903614 | 163 | 0.462073 | [
[
[
"!git clone https://github.com/ninomiyalab/Memory_Less_Momentum_Quasi_Newton",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport tensorflow.keras\nfrom tensorflow.keras.models import Model, load_model\nfrom tensorflow.keras.layers import Input, Dense, Activation, Conv2D, Flatten\nfrom tensorflow.keras import optimizers\nfrom Memory_Less_Momentum_Quasi_Newton.MLQN import *\nfrom Memory_Less_Momentum_Quasi_Newton.MLMoQ import *\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport csv\n\ndef compare_MNIST(i = 0):\n np.random.seed(i)\n \n mnist = tf.keras.datasets.mnist\n (x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n \n x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0\n y_train, y_test = tf.keras.utils.to_categorical(y_train), tf.keras.utils.to_categorical(y_test)\n\n #defined Neural Network Model\n def My_Model(input_shape, Output_dim):\n inputs = Input(shape = (input_shape))\n x = Flatten()(inputs)\n x = Dense(10, activation=\"sigmoid\")(x)\n outputs = Dense(Output_dim, activation=\"softmax\")(x)\n model = Model(inputs = [inputs], outputs = [outputs])\n return model\n \n model = My_Model(x_train.shape[1:], Output_dim=10)\n \n model.save(\"model.h5\")\n \n loss_fn = tf.keras.losses.CategoricalCrossentropy()\n\n epochs = 2000\n\n #if verbose is True, the results of each iteration will be printed.\n verbose = True\n\n #if graph is True, the results of all algorithms will be plotted in the graph.\n graph = True\n\n # MLQN Training \n # --------------------------------------------------------------------------------------\n model = load_model(\"model.h5\")\n\n optimizer = MLQN( )\n\n model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])\n\n MLQN_history = model.fit(x_train, y_train, epochs = epochs, verbose = verbose, batch_size = x_train.shape[0], validation_data = (x_test, y_test))\n # --------------------------------------------------------------------------------------\n\n # MLMoQ Training\n # --------------------------------------------------------------------------------------\n model = load_model(\"model.h5\")\n\n optimizer = MLMoQ()\n\n model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])\n\n MLMoQ_history = model.fit(x_train, y_train, epochs = epochs, verbose = verbose, batch_size = x_train.shape[0], validation_data = (x_test, y_test))\n # --------------------------------------------------------------------------------------\n\n # Adam Training \n # --------------------------------------------------------------------------------------\n model = load_model(\"model.h5\")\n\n optimizer = tf.keras.optimizers.Adam()\n \n model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])\n\n Adam_history = model.fit(x_train, y_train, epochs = epochs, verbose = verbose, batch_size = x_train.shape[0], validation_data = (x_test, y_test))\n # --------------------------------------------------------------------------------------\n\n if graph:\n fig, (axL, axR) = plt.subplots(ncols=2, figsize=(10,4))\n #Train Loss vs. Iteration graph\n axL.set_title(\"Train_Loss\")\n axL.plot(MLQN_history.history['loss'],color=\"blue\", label=\"MLQN\")\n axL.plot(MLMoQ_history.history['loss'], color=\"m\",label=\"MLMoQ\")\n axL.plot(Adam_history.history['loss'], color=\"orange\",label=\"Adam\")\n axL.set_xlabel('Iterations')\n axL.set_ylabel('Train Loss')\n axL.legend(bbox_to_anchor=(0, -0.2), loc='upper left', borderaxespad=0)\n axL.legend()\n #Train Accuracy vs. 
Iteration graph\n axR.set_title(\"Train_Accuracy\")\n axR.plot(MLQN_history.history['accuracy'],color=\"blue\", label=\"MLQN\")\n axR.plot(MLMoQ_history.history['accuracy'], color=\"m\",label=\"MLMoQ\")\n axR.plot(Adam_history.history['accuracy'],color=\"orange\", label=\"Adam\")\n axR.set_xlabel('Iterations')\n axR.set_ylabel('Train Accuracy')\n axR.legend(bbox_to_anchor=(0, -0.2), loc='upper left', borderaxespad=0)\n axR.legend()\n plt.show()\n \n fig, (axL, axR) = plt.subplots(ncols=2, figsize=(10,4))\n #Test Loss vs. Iteration graph\n axL.set_title(\"Test_Loss\")\n axL.plot(MLQN_history.history['val_loss'],color=\"blue\", label=\"MLQN\")\n axL.plot(MLMoQ_history.history['val_loss'],color=\"m\", label=\"MLMoQ\")\n axL.plot(Adam_history.history['val_loss'],color=\"orange\",label=\"Adam\")\n axL.set_xlabel('Iterations')\n axL.set_ylabel('Test Loss')\n axL.legend(bbox_to_anchor=(0, -0.2), loc='upper left', borderaxespad=0)\n axL.legend()\n\n #Test Accuracy vs. Iteration graph\n axR.set_title(\"Test_Accuracy\")\n axR.plot(MLQN_history.history['val_accuracy'],color=\"blue\",label=\"MLQN\")\n axR.plot(MLMoQ_history.history['val_accuracy'],color=\"m\", label=\"MLMoQ\")\n axR.plot(Adam_history.history['val_accuracy'],color=\"orange\", label=\"Adam\")\n axR.set_xlabel('Iterations')\n axR.set_ylabel('Test Accuracy')\n axR.legend(bbox_to_anchor=(0, -0.2), loc='upper left', borderaxespad=0)\n axR.legend()\n plt.show()\n \n \n\nfor i in range(10):\n print(i + 1)\n compare_MNIST(i)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0b1bd50970e6ed9595dc393b0ecb7b572e09a71 | 86,066 | ipynb | Jupyter Notebook | Copy_of_LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | zevan07/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments | 1fca12212a9abd1ddfa1736a380b4a4fa6db51ae | [
"MIT"
] | null | null | null | Copy_of_LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | zevan07/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments | 1fca12212a9abd1ddfa1736a380b4a4fa6db51ae | [
"MIT"
] | null | null | null | Copy_of_LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb | zevan07/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments | 1fca12212a9abd1ddfa1736a380b4a4fa6db51ae | [
"MIT"
] | null | null | null | 120.036262 | 56,032 | 0.811586 | [
[
[
"<a href=\"https://colab.research.google.com/github/zevan07/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/Copy_of_LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Lambda School Data Science Module 142\n## Sampling, Confidence Intervals, and Hypothesis Testing",
"_____no_output_____"
],
[
"## Prepare - examine other available hypothesis tests\n\nIf you had to pick a single hypothesis test in your toolbox, t-test would probably be the best choice - but the good news is you don't have to pick just one! Here's some of the others to be aware of:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy.stats import chisquare # One-way chi square test\n\n# Chi square can take any crosstab/table and test the independence of rows/cols\n# The null hypothesis is that the rows/cols are independent -> low chi square\n# The alternative is that there is a dependence -> high chi square\n# Be aware! Chi square does *not* tell you direction/causation\n\nind_obs = np.array([[1, 1], [2, 2]]).T\nprint(ind_obs)\nprint(chisquare(ind_obs, axis=None))\n\ndep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T\nprint(dep_obs)\nprint(chisquare(dep_obs, axis=None))",
"[[1 2]\n [1 2]]\nPower_divergenceResult(statistic=0.6666666666666666, pvalue=0.8810148425137847)\n[[16 32]\n [18 24]\n [16 16]\n [14 28]\n [12 20]\n [12 24]]\nPower_divergenceResult(statistic=23.31034482758621, pvalue=0.015975692534127565)\n"
],
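[
"# Added sketch (not part of the original lesson): computing the one-way chi-square\n# statistic by hand to show what chisquare() does under the hood:\n# sum((observed - expected)**2 / expected), where expected defaults to the grand mean.\nobserved = dep_obs.flatten()\nexpected = observed.mean()\nchi2_stat = ((observed - expected)**2 / expected).sum()\nprint(chi2_stat)  # should match the statistic from chisquare(dep_obs, axis=None) above",
"_____no_output_____"
],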
[
"# Distribution tests:\n# We often assume that something is normal, but it can be important to *check*\n\n# For example, later on with predictive modeling, a typical assumption is that\n# residuals (prediction errors) are normal - checking is a good diagnostic\n\nfrom scipy.stats import normaltest\n# Poisson models arrival times and is related to the binomial (coinflip)\nsample = np.random.poisson(5, 1000)\nprint(normaltest(sample)) # Pretty clearly not normal",
"NormaltestResult(statistic=38.69323106073592, pvalue=3.961609200867749e-09)\n"
],
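[
"# Added contrast (sketch): the same test on a genuinely normal sample should\n# usually *fail* to reject normality (large p-value); exact numbers vary per run.\nnormal_sample = np.random.normal(loc=5, scale=2, size=1000)\nprint(normaltest(normal_sample))",
"_____no_output_____"
],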
[
"# Kruskal-Wallis H-test - compare the median rank between 2+ groups\n# Can be applied to ranking decisions/outcomes/recommendations\n# The underlying math comes from chi-square distribution, and is best for n>5\nfrom scipy.stats import kruskal\n\nx1 = [1, 3, 5, 7, 9]\ny1 = [2, 4, 6, 8, 10]\nprint(kruskal(x1, y1)) # x1 is a little better, but not \"significantly\" so\n\nx2 = [1, 1, 1]\ny2 = [2, 2, 2]\nz = [2, 2] # Hey, a third group, and of different size!\nprint(kruskal(x2, y2, z)) # x clearly dominates",
"KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895)\nKruskalResult(statistic=7.0, pvalue=0.0301973834223185)\n"
]
],
[
[
"And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important.",
"_____no_output_____"
],
[
"## Live Lecture - let's explore some more of scipy.stats\n\nCandidate topics to explore:\n\n- `scipy.stats.chi2` - the Chi-squared distribution, which we can use to reproduce the Chi-squared test\n- Calculate the Chi-Squared test statistic \"by hand\" (with code), and feed it into `chi2`\n- Build a confidence interval with `stats.t.ppf`, the t-distribution percentile point function (the inverse of the CDF) - we can write a function to return a tuple of `(mean, lower bound, upper bound)` that you can then use for the assignment (visualizing confidence intervals)",
"_____no_output_____"
]
],
[
[
"# Taking requests! Come to lecture with a topic or problem and we'll try it.",
"_____no_output_____"
]
],
[
[
"## Assignment - Build a confidence interval\n\nA confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.\n\n52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. \"95% confidence\" means a p-value $\\leq 1 - 0.95 = 0.05$.\n\nIn this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.\n\nBut providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying \"fail to reject the null hypothesis\" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.\n\nHow is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is \"if we were to repeat this experiment 100 times, we would expect the average result to lie in this interval ~95 times.\"\n\nFor a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.\n\nDifferent distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.\n\nYour assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):\n\n1. Generate and numerically represent a confidence interval\n2. Graphically (with a plot) represent the confidence interval\n3. Interpret the confidence interval - what does it tell you about the data and its distribution?\n\nStretch goals:\n\n1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).\n2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.\n3. Refactor your code so it is elegant, readable, and can be easily run for all issues.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom scipy import stats\nfrom scipy.stats import normaltest\nfrom scipy.stats import kruskal\nfrom random import randint\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import figure",
"_____no_output_____"
],
[
"# the data file does not have a header so we'll need to create one\n# attribute info copy and pasted from name file\nattribute_info = '''1. Class-Name: 2 (democrat, republican)\n 2. handicapped-infants: 2 (y,n)\n 3. water-project-cost-sharing: 2 (y,n)\n 4. adoption-of-the-budget-resolution: 2 (y,n)\n 5. physician-fee-freeze: 2 (y,n)\n 6. el-salvador-aid: 2 (y,n)\n 7. religious-groups-in-schools: 2 (y,n)\n 8. anti-satellite-test-ban: 2 (y,n)\n 9. aid-to-nicaraguan-contras: 2 (y,n)\n 10. mx-missile: 2 (y,n)\n 11. immigration: 2 (y,n)\n 12. synfuels-corporation-cutback: 2 (y,n)\n 13. education-spending: 2 (y,n)\n 14. superfund-right-to-sue: 2 (y,n)\n 15. crime: 2 (y,n)\n 16. duty-free-exports: 2 (y,n)\n 17. export-administration-act-south-africa: 2 (y,n)'''\n\n# clean up attribute info to use for column headers\nnames = (attribute_info.replace(': 2 (y,n)', ' ')\n .replace(': 2 (democrat, republican)', ' ')\n .replace('.', ' ')\n .split())\n\n# finish cleaning by getting rid of numbers\nfor x in names:\n nums = [str(x) for x in range(0, 18)]\n if x in nums:\n names.remove(x)\n# import the csv without the first row as a header\ndf = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data', header=None)\n\n# add header (names)\ndf.columns = names\n\n# replace all 'y', 'n', and '?' values with python friendly values\n# replaced '?' with random numbers to avoid NaNs\ndf = df.replace({'y': 1, 'n': 0, '?': randint(0,1)})\nprint(df.shape)\n\n# create dataframes for each party\nrep = df[df['Class-Name'] == 'republican']\ndem = df[df['Class-Name'] == 'democrat']",
"(435, 17)\n"
],
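[
"# Caveat on the cleaning step above: randint(0,1) is evaluated once, so every '?'\n# was replaced by the *same* 0 or 1, not an independent coin flip per cell.\n# A per-cell imputation would look roughly like this (hypothetical sketch, not re-run\n# here because df has already been transformed):\n# df = df.applymap(lambda v: np.random.randint(0, 2) if v == '?' else v)\nprint(df.isin(['?']).sum().sum())  # confirm no '?' values remain",
"_____no_output_____"
],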
[
"# create a function to get mean, confidence interval, and the interval (for use in graphing)\ndef confidence_interval(data, confidence = 0.95):\n data = np.array(data)\n mean = np.mean(data)\n n = len(data)\n stderr = stats.sem(data)\n interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n - 1)\n return (mean, mean - interval, mean + interval, interval)\n\n# create a reporter for all of the values calculated with the above function\ndef report_confidence_interval(confidence_interval):\n print('Mean: {}'.format(confidence_interval[0]))\n print('Lower bound: {}'.format(confidence_interval[1]))\n print('Upper bound: {}'.format(confidence_interval[2]))\n s = \"our mean lies in the interval [{:.5}, {:.5}]\".format(confidence_interval[1], confidence_interval[2])\n return s, confidence_interval[0]",
"_____no_output_____"
],
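[
"# Added check (sketch): the t multiplier used above, stats.t.ppf((1+0.95)/2, n-1),\n# approaches the familiar normal value of ~1.96 as n grows, since the interval is\n# mean +/- t * s / sqrt(n).\nfor n in [5, 30, 267]:  # 267 is roughly the number of Democrats in this dataset\n    print(n, stats.t.ppf(0.975, n - 1))",
"_____no_output_____"
],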
[
"dem_means = []\nrep_means = []\n\ndem_er = []\nrep_er = []\n\n\nfor name in names[1:]:\n print(name)\n print('Democrats')\n dem_means.append(confidence_interval(dem[name])[0])\n dem_er.append(confidence_interval(dem[name])[3])\n print(report_confidence_interval(confidence_interval(dem[name])))\n print('Republicans')\n rep_means.append(confidence_interval(rep[name])[0])\n rep_er.append(confidence_interval(rep[name])[3])\n print(report_confidence_interval(confidence_interval(rep[name])))\n print(' ')",
"handicapped-infants\nDemocrats\nMean: 0.6179775280898876\nLower bound: 0.5593207017612246\nUpper bound: 0.6766343544185506\n('our mean lies in the interval [0.55932, 0.67663]', 0.6179775280898876)\nRepublicans\nMean: 0.20238095238095238\nLower bound: 0.14100035693170826\nUpper bound: 0.2637615478301965\n('our mean lies in the interval [0.141, 0.26376]', 0.20238095238095238)\n \nwater-project-cost-sharing\nDemocrats\nMean: 0.5543071161048689\nLower bound: 0.4943030277521611\nUpper bound: 0.6143112044575767\n('our mean lies in the interval [0.4943, 0.61431]', 0.5543071161048689)\nRepublicans\nMean: 0.5654761904761905\nLower bound: 0.4897471468537114\nUpper bound: 0.6412052340986696\n('our mean lies in the interval [0.48975, 0.64121]', 0.5654761904761905)\n \nadoption-of-the-budget-resolution\nDemocrats\nMean: 0.8913857677902621\nLower bound: 0.8538224468450312\nUpper bound: 0.9289490887354931\n('our mean lies in the interval [0.85382, 0.92895]', 0.8913857677902621)\nRepublicans\nMean: 0.15476190476190477\nLower bound: 0.09950709529949271\nUpper bound: 0.21001671422431684\n('our mean lies in the interval [0.099507, 0.21002]', 0.15476190476190477)\n \nphysician-fee-freeze\nDemocrats\nMean: 0.08239700374531835\nLower bound: 0.04920214032735521\nUpper bound: 0.11559186716328149\n('our mean lies in the interval [0.049202, 0.11559]', 0.08239700374531835)\nRepublicans\nMean: 0.9880952380952381\nLower bound: 0.9715257809044495\nUpper bound: 1.0046646952860268\n('our mean lies in the interval [0.97153, 1.0047]', 0.9880952380952381)\n \nel-salvador-aid\nDemocrats\nMean: 0.250936329588015\nLower bound: 0.19859690995467824\nUpper bound: 0.30327574922135175\n('our mean lies in the interval [0.1986, 0.30328]', 0.250936329588015)\nRepublicans\nMean: 0.9523809523809523\nLower bound: 0.9198464458225006\nUpper bound: 0.984915458939404\n('our mean lies in the interval [0.91985, 0.98492]', 0.9523809523809523)\n \nreligious-groups-in-schools\nDemocrats\nMean: 0.4943820224719101\nLower bound: 0.4340246461260167\nUpper bound: 0.5547393988178035\n('our mean lies in the interval [0.43402, 0.55474]', 0.4943820224719101)\nRepublicans\nMean: 0.8988095238095238\nLower bound: 0.8527359210232621\nUpper bound: 0.9448831265957855\n('our mean lies in the interval [0.85274, 0.94488]', 0.8988095238095238)\n \nanti-satellite-test-ban\nDemocrats\nMean: 0.7790262172284644\nLower bound: 0.7289381611781973\nUpper bound: 0.8291142732787316\n('our mean lies in the interval [0.72894, 0.82911]', 0.7790262172284644)\nRepublicans\nMean: 0.26785714285714285\nLower bound: 0.20020243047504438\nUpper bound: 0.3355118552392413\n('our mean lies in the interval [0.2002, 0.33551]', 0.26785714285714285)\n \naid-to-nicaraguan-contras\nDemocrats\nMean: 0.8314606741573034\nLower bound: 0.7862689149634521\nUpper bound: 0.8766524333511547\n('our mean lies in the interval [0.78627, 0.87665]', 0.8314606741573034)\nRepublicans\nMean: 0.20833333333333334\nLower bound: 0.146289434341848\nUpper bound: 0.2703772323248187\n('our mean lies in the interval [0.14629, 0.27038]', 0.20833333333333334)\n \nmx-missile\nDemocrats\nMean: 0.7752808988764045\nLower bound: 0.7248917176706724\nUpper bound: 0.8256700800821366\n('our mean lies in the interval [0.72489, 0.82567]', 0.7752808988764045)\nRepublicans\nMean: 0.13095238095238096\nLower bound: 0.07941444662385228\nUpper bound: 0.18249031528090964\n('our mean lies in the interval [0.079414, 0.18249]', 0.13095238095238096)\n \nimmigration\nDemocrats\nMean: 0.4794007490636704\nLower bound: 0.41909081017353317\nUpper 
bound: 0.5397106879538076\n('our mean lies in the interval [0.41909, 0.53971]', 0.4794007490636704)\nRepublicans\nMean: 0.5654761904761905\nLower bound: 0.4897471468537114\nUpper bound: 0.6412052340986696\n('our mean lies in the interval [0.48975, 0.64121]', 0.5654761904761905)\n \nsynfuels-corporation-cutback\nDemocrats\nMean: 0.5280898876404494\nLower bound: 0.4678240312506821\nUpper bound: 0.5883557440302167\n('our mean lies in the interval [0.46782, 0.58836]', 0.5280898876404494)\nRepublicans\nMean: 0.17857142857142858\nLower bound: 0.12006017401576093\nUpper bound: 0.23708268312709624\n('our mean lies in the interval [0.12006, 0.23708]', 0.17857142857142858)\n \neducation-spending\nDemocrats\nMean: 0.20224719101123595\nLower bound: 0.1537559627103129\nUpper bound: 0.250738419312159\n('our mean lies in the interval [0.15376, 0.25074]', 0.20224719101123595)\nRepublicans\nMean: 0.8809523809523809\nLower bound: 0.831477461593508\nUpper bound: 0.9304273003112539\n('our mean lies in the interval [0.83148, 0.93043]', 0.8809523809523809)\n \nsuperfund-right-to-sue\nDemocrats\nMean: 0.3295880149812734\nLower bound: 0.2728408257690887\nUpper bound: 0.38633520419345807\n('our mean lies in the interval [0.27284, 0.38634]', 0.3295880149812734)\nRepublicans\nMean: 0.8690476190476191\nLower bound: 0.8175096847190904\nUpper bound: 0.9205855533761478\n('our mean lies in the interval [0.81751, 0.92059]', 0.8690476190476191)\n \ncrime\nDemocrats\nMean: 0.37453183520599254\nLower bound: 0.31610198955968966\nUpper bound: 0.4329616808522954\n('our mean lies in the interval [0.3161, 0.43296]', 0.37453183520599254)\nRepublicans\nMean: 0.9821428571428571\nLower bound: 0.9619107163315306\nUpper bound: 1.0023749979541836\n('our mean lies in the interval [0.96191, 1.0024]', 0.9821428571428571)\n \nduty-free-exports\nDemocrats\nMean: 0.6591760299625468\nLower bound: 0.6019552815869864\nUpper bound: 0.7163967783381071\n('our mean lies in the interval [0.60196, 0.7164]', 0.6591760299625468)\nRepublicans\nMean: 0.15476190476190477\nLower bound: 0.09950709529949271\nUpper bound: 0.21001671422431684\n('our mean lies in the interval [0.099507, 0.21002]', 0.15476190476190477)\n \nexport-administration-act-south-africa\nDemocrats\nMean: 0.9550561797752809\nLower bound: 0.9300448249904457\nUpper bound: 0.9800675345601161\n('our mean lies in the interval [0.93004, 0.98007]', 0.9550561797752809)\nRepublicans\nMean: 0.7023809523809523\nLower bound: 0.6325311397787999\nUpper bound: 0.7722307649831047\n('our mean lies in the interval [0.63253, 0.77223]', 0.7023809523809523)\n \n"
],
[
"# bar heights (with a subset of the data)\npart_dem_means = dem_means[:5]\npart_rep_means = rep_means[:5]\n\n# we need to cut down the names to fit\npart_names = names [1:6]\n\n# error bars (with a subset of the data)\npart_dem_ers = dem_er[:5]\npart_rep_ers = rep_er[:5]",
"_____no_output_____"
],
[
"# plot a bar graph\nplt.style.use('fivethirtyeight')\n\nbarWidth = 0.4\nr1 = np.arange(len(part_dem_means))\nr2 = [x + barWidth for x in r1]\nplt.bar(r1, part_dem_means, width = barWidth, color = 'blue', edgecolor = 'black', yerr = part_dem_ers, capsize = 4, label = 'Democrats')\nplt.bar(r2, part_rep_means, width = barWidth, color = 'red', edgecolor = 'black', yerr = part_rep_ers, capsize = 4, label = 'Republicans')\n\nplt.title('Support for bills by party')\nplt.legend()\n\nplt.xticks([r + barWidth for r in range(len(part_dem_means))], names[1:6], rotation = 45, ha=\"right\");",
"_____no_output_____"
]
],
[
[
"## Interpretation\n\nMost of the confidence intervals are pretty large. If you were trying to extrapolate this data to a population (sort of a nonsensical situation, because congress is the population), you might find a value much different from what you predicted. \n\nUsing the handicapped infants bill as an example, the predicted outcome would be ~62%, but because the confidence interval is ~6%, the actual value could be expected to be anywhere between ~56% and ~68%.",
"_____no_output_____"
]
],
[
[
"print(dem_means[0])\nprint(dem_er[0])",
"0.6179775280898876\n0.117313652657326\n"
]
],
[
[
"## Resources\n\n- [Interactive visualize the Chi-Squared test](https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html)\n- [Calculation of Chi-Squared test statistic](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)\n- [Visualization of a confidence interval generated by R code](https://commons.wikimedia.org/wiki/File:Confidence-interval.svg)\n- [Expected value of a squared standard normal](https://math.stackexchange.com/questions/264061/expected-value-calculation-for-squared-normal-distribution) (it's 1 - which is why the expected value of a Chi-Squared with $n$ degrees of freedom is $n$, as it's the sum of $n$ squared standard normals)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0b1d0ddd866b58182a87662ed4199fecda2277f | 1,783 | ipynb | Jupyter Notebook | notebooks/book1/21/agglomDemo.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | null | null | null | notebooks/book1/21/agglomDemo.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | 1 | 2022-03-27T04:59:50.000Z | 2022-03-27T04:59:50.000Z | notebooks/book1/21/agglomDemo.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | 2 | 2022-03-26T11:52:36.000Z | 2022-03-27T05:17:48.000Z | 28.301587 | 136 | 0.496354 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b1d8cf0eef0ce39d599304bd34d6edddd4fe43 | 100,575 | ipynb | Jupyter Notebook | HW3/m380hw3.ipynb | ZhihaoAi/MATH-380-Assignments | 17595db9759115281e95c51d4e40c7b71e337de2 | [
"MIT"
] | null | null | null | HW3/m380hw3.ipynb | ZhihaoAi/MATH-380-Assignments | 17595db9759115281e95c51d4e40c7b71e337de2 | [
"MIT"
] | null | null | null | HW3/m380hw3.ipynb | ZhihaoAi/MATH-380-Assignments | 17595db9759115281e95c51d4e40c7b71e337de2 | [
"MIT"
] | null | null | null | 216.756466 | 17,440 | 0.925648 | [
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('seaborn-whitegrid')\n\nfrom sklearn.linear_model import LinearRegression",
"_____no_output_____"
],
[
"popa = pd.read_csv('population.txt', header = None, names = ['t', 'P'])",
"_____no_output_____"
],
[
"plt.scatter(x=popa['t'], y=popa['P'], marker='o')",
"_____no_output_____"
],
[
"sns.lmplot(x='t', y='P', data=popa, ci=None, fit_reg=True);",
"_____no_output_____"
],
[
"(133-8)/(21-7)",
"_____no_output_____"
],
[
"lma = LinearRegression().fit(popa['t'].ravel().reshape(-1,1), popa['P'])",
"_____no_output_____"
],
[
"lma.coef_",
"_____no_output_____"
],
[
"popb = pd.read_csv('population.txt', header = None, names = ['t', 'P'])",
"_____no_output_____"
],
[
"popb['ln P'] = np.log(popb['P'])",
"_____no_output_____"
],
[
"sns.lmplot(x='t', y='ln P', data=popb, ci=None, fit_reg=True);",
"_____no_output_____"
],
[
"lmb = LinearRegression().fit(popb['t'].ravel().reshape(-1,1), popb['ln P'])",
"_____no_output_____"
],
[
"lmb.coef_",
"_____no_output_____"
],
[
"lmb.intercept_",
"_____no_output_____"
],
[
"np.exp(lmb.intercept_)",
"_____no_output_____"
],
[
"sns.lmplot(x='t', y='ln P', data=popb, ci=None, fit_reg=False);\nplt.plot(np.arange(5,42,0.1), np.exp(lmb.intercept_)*np.exp(lmb.coef_[0]*np.arange(5,42,0.1)))\nplt.ylabel('P')",
"_____no_output_____"
],
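[
"# Added summary (sketch): the linear fit on ln(P) corresponds to the exponential\n# model P(t) = a*exp(b*t), with a = exp(intercept) and b = slope.\na, b = np.exp(lmb.intercept_), lmb.coef_[0]\npred = a * np.exp(b * popb['t'])\nprint('P(t) ~ {:.3f} * exp({:.4f} t)'.format(a, b))\nprint('max relative error:', np.max(np.abs(pred - popb['P']) / popb['P']))",
"_____no_output_____"
],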
[
"kepler = pd.read_csv('kepler.txt', header = None, names = ['T', 'r'])",
"_____no_output_____"
],
[
"sns.lmplot(x='r', y='T', data=kepler, ci=None, fit_reg=False);",
"_____no_output_____"
],
[
"kepler['ln T'] = np.log(kepler['T'])\nkepler['ln r'] = np.log(kepler['r'])",
"_____no_output_____"
],
[
"sns.lmplot(x='ln r', y='ln T', data=kepler, ci=None, fit_reg=True);",
"_____no_output_____"
],
[
"lm7b = LinearRegression().fit(kepler['ln r'].ravel().reshape(-1,1), kepler['ln T'])",
"_____no_output_____"
],
[
"lm7b.coef_",
"_____no_output_____"
],
[
"lm7b.intercept_",
"_____no_output_____"
],
[
"np.exp(lm7b.intercept_)",
"_____no_output_____"
],
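[
"# Added check (sketch): a log-log slope near 1.5 recovers Kepler's third law,\n# T^2 proportional to r^3, i.e. ln T = 1.5*ln r + const, so T ~ C * r^1.5.\nk = lm7b.coef_[0]\nC = np.exp(lm7b.intercept_)\nprint('T ~ {:.4f} * r^{:.4f} (exponent expected near 1.5)'.format(C, k))",
"_____no_output_____"
],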
[
"test322 = pd.read_csv('3_2-2.txt', header = None, names = ['x', 'y'])",
"_____no_output_____"
],
[
"sns.lmplot(x='x', y='y', data=test322, ci=None, fit_reg=False);\nplt.plot(np.arange(25,200,0.1), 0.00164038*np.arange(25,200,0.1)+0.00295615)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b1dd7f49d8e234355a77d1dcafed0e2fb5e934 | 328,669 | ipynb | Jupyter Notebook | Applied Math/Y2S2/.ipynb_checkpoints/R Test-checkpoint.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
] | 1 | 2018-08-28T12:16:12.000Z | 2018-08-28T12:16:12.000Z | Applied Math/Y2S2/.ipynb_checkpoints/R Test-checkpoint.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
] | null | null | null | Applied Math/Y2S2/.ipynb_checkpoints/R Test-checkpoint.ipynb | darkeclipz/jupyter-notebooks | 5de784244ad9db12cfacbbec3053b11f10456d7e | [
"Unlicense"
] | null | null | null | 350.767343 | 63,748 | 0.923151 | [
[
[
"# Fitting distribution with R",
"_____no_output_____"
]
],
[
[
"x.norm <- rnorm(n=200,m=10,sd=2)\nhist(x.norm,main=\"Histogram of observed data\")",
"_____no_output_____"
],
[
"plot(density(x.norm),main=\"Density estimate of data\")",
"_____no_output_____"
],
[
"plot(ecdf(x.norm),main=\"Empirical cumulative distribution function\")",
"_____no_output_____"
],
[
"z.norm <- (x.norm-mean(x.norm))/sd(x.norm) # standardize data\nqqnorm(z.norm) ## drawing the QQplot\nabline(0,1) ## drawing a 45-degree reference line",
"_____no_output_____"
]
],
[
[
"If data differ from a normal distribution (i.e. data belonging from a Weibull pdf) we cna use `qqplot()` in this way:",
"_____no_output_____"
]
],
[
[
"x.wei <- rweibull(n=200,shape=2.1,scale=1.1) ## sampling from a Weibull\nx.teo <- rweibull(n=200,shape=2,scale=1) ## theoretical quantiles from a Weibull population\nqqplot(x.teo,x.wei,main=\"QQ-plot distr. Weibull\")\nabline(0,1)",
"_____no_output_____"
]
],
[
[
"# Model choice",
"_____no_output_____"
],
[
"Dealing with discrete data we can refer to Poisson's distribution with probability mass function:\n\n$$ f(x,\\lambda)=e^{-\\lambda\\dfrac{\\lambda^x}{x!}} \\quad \\text{where } x=0,1,2,\\ldots$$ ",
"_____no_output_____"
]
],
[
[
"x.poi <- rpois(n=200, lambda=2.5)\nhist(x.poi, main=\"Poisson distribution\")",
"_____no_output_____"
]
],
[
[
"As concern continuous data we have the normal (gaussian) dsitrubition:\n\n$$ f(x,\\lambda,\\sigma)=\\dfrac{1}{\\sqrt{2\\pi}\\sigma} e^{\\dfrac{1(x-\\mu)^2}{2\\sigma^2}} $$\n\nwith $x \\in \\mathbb{R}$.",
"_____no_output_____"
]
],
[
[
"curve(dnorm(x,m=10,sd=2),from=0,to=20,main=\"Normal distribution\")",
"_____no_output_____"
]
],
[
[
"Gamma distribution:\n\n$$ f(x,\\alpha,\\lambda)=\\dfrac{\\lambda^\\alpha}{\\gamma(\\alpha)}x^{\\alpha-1}e^{-\\lambda x} $$\n\nwith $x \\in \\mathbb{R}^+$.",
"_____no_output_____"
]
],
[
[
"curve(dgamma(x, scale=1.5, shape=2), from=0, to=15, main=\"Gamma distribution\")",
"_____no_output_____"
]
],
[
[
"Weibull distribition:\n\n$$ f(x,\\alpha,\\beta)=\\alpha\\beta^{-\\alpha}x^{\\alpha-1}e^{-\\left[\\left(\\dfrac{x}{\\beta}\\right)^\\alpha\\right]} $$",
"_____no_output_____"
]
],
[
[
"curve(dweibull(x, scale=2.5, shape=1.5), from=0, to=15, main=\"Weibull distribution\")",
"_____no_output_____"
],
[
"h<-hist(x.norm,breaks=15)\nxhist<-c(min(h$breaks),h$breaks)\nyhist<-c(0,h$density,0)\nxfit<-seq(min(x.norm),max(x.norm),length=40)\nyfit<-dnorm(xfit,mean=mean(x.norm),sd=sd(x.norm))\nplot(xhist,yhist,type=\"s\",ylim=c(0,max(yhist,yfit)), main=\"Normal pdf and histogram\")\nlines(xfit,yfit, col=\"red\")",
"_____no_output_____"
],
[
"yfit",
"_____no_output_____"
],
[
"yhist",
"_____no_output_____"
],
[
"ks.test(yfit,yhist)",
"Warning message in ks.test(yfit, yhist):\n\"cannot compute exact p-value with ties\""
]
],
[
[
"# StackOverflow example",
"_____no_output_____"
],
[
"The following is from this StackOverflow example: https://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best\n\nThis requires you to install the following packages with the R package manager: `fitdistrplus` and `logspline`.",
"_____no_output_____"
]
],
[
[
"library(fitdistrplus)\nlibrary(logspline)\n\nx <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,\n38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,\n42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,\n49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,\n45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,\n36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,\n38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34)",
"_____no_output_____"
],
[
"descdist(x, discrete = FALSE)",
"_____no_output_____"
],
[
"fit.weibull <- fitdist(x, \"weibull\")\nfit.norm <- fitdist(x, \"norm\")",
"_____no_output_____"
],
[
"plot(fit.norm)",
"_____no_output_____"
],
[
"plot(fit.weibull)",
"_____no_output_____"
],
[
"fit.weibull$aic",
"_____no_output_____"
],
[
"fit.norm$aic",
"_____no_output_____"
]
],
[
[
"## Kolmogorov-Smirnov test simulation",
"_____no_output_____"
]
],
[
[
"n.sims <- 5e4\n\nstats <- replicate(n.sims, { \n r <- rweibull(n = length(x)\n , shape= fit.weibull$estimate[\"shape\"]\n , scale = fit.weibull$estimate[\"scale\"]\n )\n as.numeric(ks.test(r\n , \"pweibull\"\n , shape= fit.weibull$estimate[\"shape\"]\n , scale = fit.weibull$estimate[\"scale\"])$statistic\n ) \n})",
"_____no_output_____"
],
[
"plot(ecdf(stats), las = 1, main = \"KS-test statistic simulation (CDF)\", col = \"darkorange\", lwd = 1.7)\ngrid()",
"_____no_output_____"
],
[
"fit <- logspline(stats)\n\n1 - plogspline(ks.test(x\n , \"pweibull\"\n , shape= fit.weibull$estimate[\"shape\"]\n , scale = fit.weibull$estimate[\"scale\"])$statistic\n , fit\n)",
"Warning message in ks.test(x, \"pweibull\", shape = fit.weibull$estimate[\"shape\"], :\n\"ties should not be present for the Kolmogorov-Smirnov test\""
],
[
"xs <- seq(10, 65, len=500)\n\ntrue.weibull <- rweibull(1e6, shape= fit.weibull$estimate[\"shape\"]\n , scale = fit.weibull$estimate[\"scale\"])\n\nboot.pdf <- sapply(1:1000, function(i) {\n xi <- sample(x, size=length(x), replace=TRUE)\n MLE.est <- suppressWarnings(fitdist(xi, distr=\"weibull\")) \n dweibull(xs, shape=MLE.est$estimate[\"shape\"], scale = MLE.est$estimate[\"scale\"])\n}\n)\n\nboot.cdf <- sapply(1:1000, function(i) {\n xi <- sample(x, size=length(x), replace=TRUE)\n MLE.est <- suppressWarnings(fitdist(xi, distr=\"weibull\")) \n pweibull(xs, shape= MLE.est$estimate[\"shape\"], scale = MLE.est$estimate[\"scale\"])\n}\n) \n\n#-----------------------------------------------------------------------------\n# Plot PDF\n#-----------------------------------------------------------------------------\n\npar(bg=\"white\", las=1, cex=1.2)\nplot(xs, boot.pdf[, 1], type=\"l\", col=rgb(.6, .6, .6, .1), ylim=range(boot.pdf),\n xlab=\"x\", ylab=\"Probability density\")\nfor(i in 2:ncol(boot.pdf)) lines(xs, boot.pdf[, i], col=rgb(.6, .6, .6, .1))\n\n# Add pointwise confidence bands\n\nquants <- apply(boot.pdf, 1, quantile, c(0.025, 0.5, 0.975))\nmin.point <- apply(boot.pdf, 1, min, na.rm=TRUE)\nmax.point <- apply(boot.pdf, 1, max, na.rm=TRUE)\nlines(xs, quants[1, ], col=\"red\", lwd=1.5, lty=2)\nlines(xs, quants[3, ], col=\"red\", lwd=1.5, lty=2)\nlines(xs, quants[2, ], col=\"darkred\", lwd=2)",
"_____no_output_____"
],
[
"#-----------------------------------------------------------------------------\n# Plot CDF\n#-----------------------------------------------------------------------------\n\npar(bg=\"white\", las=1, cex=1.2)\nplot(xs, boot.cdf[, 1], type=\"l\", col=rgb(.6, .6, .6, .1), ylim=range(boot.cdf),\n xlab=\"x\", ylab=\"F(x)\")\nfor(i in 2:ncol(boot.cdf)) lines(xs, boot.cdf[, i], col=rgb(.6, .6, .6, .1))\n\n# Add pointwise confidence bands\n\nquants <- apply(boot.cdf, 1, quantile, c(0.025, 0.5, 0.975))\nmin.point <- apply(boot.cdf, 1, min, na.rm=TRUE)\nmax.point <- apply(boot.cdf, 1, max, na.rm=TRUE)\nlines(xs, quants[1, ], col=\"red\", lwd=1.5, lty=2)\nlines(xs, quants[3, ], col=\"red\", lwd=1.5, lty=2)\nlines(xs, quants[2, ], col=\"darkred\", lwd=2)\n#lines(xs, min.point, col=\"purple\")\n#lines(xs, max.point, col=\"purple\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0b1e63042c715564956adf75a8aa61faaf10e86 | 303,999 | ipynb | Jupyter Notebook | EDA-dataset10.ipynb | Rahulraj31/Spotify-Hit-Flop-Predictor-1960-2019 | d760caf96b7453ed3d62a7b95a330e0234669024 | [
"MIT"
] | 1 | 2021-07-17T23:06:58.000Z | 2021-07-17T23:06:58.000Z | EDA-dataset10.ipynb | Rahulraj31/Spotify-Hit-Flop-Predictor-1960-2019 | d760caf96b7453ed3d62a7b95a330e0234669024 | [
"MIT"
] | null | null | null | EDA-dataset10.ipynb | Rahulraj31/Spotify-Hit-Flop-Predictor-1960-2019 | d760caf96b7453ed3d62a7b95a330e0234669024 | [
"MIT"
] | null | null | null | 121.115139 | 173,180 | 0.811927 | [
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"data = pd.read_csv('dataset-of-10s.csv')",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"# checking basic integrity",
"_____no_output_____"
]
],
[
[
"data.shape",
"_____no_output_____"
],
[
"data.info() ",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6398 entries, 0 to 6397\nData columns (total 19 columns):\ntrack 6398 non-null object\nartist 6398 non-null object\nuri 6398 non-null object\ndanceability 6398 non-null float64\nenergy 6398 non-null float64\nkey 6398 non-null int64\nloudness 6398 non-null float64\nmode 6398 non-null int64\nspeechiness 6398 non-null float64\nacousticness 6398 non-null float64\ninstrumentalness 6398 non-null float64\nliveness 6398 non-null float64\nvalence 6398 non-null float64\ntempo 6398 non-null float64\nduration_ms 6398 non-null int64\ntime_signature 6398 non-null int64\nchorus_hit 6398 non-null float64\nsections 6398 non-null int64\ntarget 6398 non-null int64\ndtypes: float64(10), int64(6), object(3)\nmemory usage: 949.8+ KB\n"
]
],
[
[
"# no. of rows = non null values for each column -> no null value",
"_____no_output_____"
]
],
[
[
"data.head()",
"_____no_output_____"
]
],
[
[
"# checking unique records using uri",
"_____no_output_____"
]
],
[
[
"# extracting exact id\ndef extract(x):\n splited_list = x.split(':') # spliting text at colons\n return splited_list[2] # returning third element\n \ndata['uri'] = data['uri'].apply(extract) ",
"_____no_output_____"
],
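[
"# Equivalent vectorized idiom (shown for reference, not re-run here since 'uri'\n# has already been transformed above):\n# data['uri'] = data['uri'].str.split(':').str[2]\ndata['uri'].str.len().unique()  # Spotify track ids are typically 22 characters",
"_____no_output_____"
],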
[
"data.head() #successfully extracted the id",
"_____no_output_____"
]
],
[
[
"# checking for duplicate rows",
"_____no_output_____"
]
],
[
[
"data['uri'].nunique(), ",
"_____no_output_____"
],
[
"data['uri'].value_counts()\n",
"_____no_output_____"
],
[
"data['uri'].value_counts().unique() ",
"_____no_output_____"
],
[
"dupe_mask = data['uri'].value_counts()==2",
"_____no_output_____"
],
[
"dupe_ids = dupe_mask[dupe_mask]\n\ndupe_ids.value_counts, dupe_ids.shape ",
"_____no_output_____"
],
[
"#converting duplicate ids into a list\ndupe_ids = dupe_ids.index\ndupe_ids = dupe_ids.tolist()\ndupe_ids",
"_____no_output_____"
],
[
"duplicate_index = data.loc[data['uri'].isin(dupe_ids),:].index # all the duplicted records\nduplicate_index = duplicate_index.tolist()",
"_____no_output_____"
]
],
[
[
"# We will be removing all the duplication as they are few compared to data",
"_____no_output_____"
]
],
[
[
"data.drop(duplicate_index,axis=0,inplace=True)\ndata.shape",
"_____no_output_____"
],
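[
"# Design note (sketch): the index bookkeeping above can also be done with a single\n# pandas idiom; keep=False drops every copy of a duplicated uri, matching what was\n# done here, while keep='first' would instead retain one copy of each:\n# data = data.drop_duplicates(subset='uri', keep=False)\ndata['uri'].duplicated().sum()  # confirm no duplicated ids remain",
"_____no_output_____"
],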
[
"data.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 6358 entries, 0 to 6397\nData columns (total 19 columns):\ntrack 6358 non-null object\nartist 6358 non-null object\nuri 6358 non-null object\ndanceability 6358 non-null float64\nenergy 6358 non-null float64\nkey 6358 non-null int64\nloudness 6358 non-null float64\nmode 6358 non-null int64\nspeechiness 6358 non-null float64\nacousticness 6358 non-null float64\ninstrumentalness 6358 non-null float64\nliveness 6358 non-null float64\nvalence 6358 non-null float64\ntempo 6358 non-null float64\nduration_ms 6358 non-null int64\ntime_signature 6358 non-null int64\nchorus_hit 6358 non-null float64\nsections 6358 non-null int64\ntarget 6358 non-null int64\ndtypes: float64(10), int64(6), object(3)\nmemory usage: 993.4+ KB\n"
],
[
"print(\"shape of data\",data.shape )\nprint(\"no. of unique rows\",data['uri'].nunique()) # no duplicates",
"shape of data (6358, 19)\nno. of unique rows 6358\n"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"# now we will be dropping all the unnecessary columns which contain string which cant be eficiently converted into numerics",
"_____no_output_____"
]
],
[
[
"data.drop(['track','artist','uri'],axis=1,inplace=True)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
]
],
[
[
"# Univariate analysis",
"_____no_output_____"
]
],
[
[
"#analysing class imbalance\nsns.countplot(data=data,x='target') ",
"_____no_output_____"
],
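[
"# Added numeric view of the class balance behind the plot above.\ndata['target'].value_counts(normalize=True)  # fraction of hits (1) vs flops (0)",
"_____no_output_____"
],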
[
"data.columns",
"_____no_output_____"
],
[
"# checking appropriate data type\ndata[['danceability', 'energy', 'key', 'loudness']].info() # every feature have appropriate datatype",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 6358 entries, 0 to 6397\nData columns (total 4 columns):\ndanceability 6358 non-null float64\nenergy 6358 non-null float64\nkey 6358 non-null int64\nloudness 6358 non-null float64\ndtypes: float64(3), int64(1)\nmemory usage: 568.4 KB\n"
],
[
"# checking range of first 4 features \ndata[['danceability', 'energy', 'key', 'loudness']].describe()\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplt.subplot(2,2,1)\ndata['danceability'].plot()\nplt.subplot(2,2,2)\nplt.plot(data['energy'],color='red')\nplt.subplot(2,2,3)\nplt.plot(data[['key','loudness']])\n",
"_____no_output_____"
]
],
[
[
"# danceabilty is well inside the range(0,1)\n# energy is well inside the range(0,1)\n# there's no -1 for keys-> every track has been assigned respective keys\n# loudness values are out of range(0,-60)db",
"_____no_output_____"
]
],
[
[
"loudness_error_idnex = data[data['loudness']>0].index\nloudness_error_idnex",
"_____no_output_____"
],
[
" # removing rows with out of range values in loudness column\ndata.drop(loudness_error_idnex,axis=0, inplace=True)",
"_____no_output_____"
],
[
"data.shape # record is removed ",
"_____no_output_____"
],
[
"# checking appropriate datatype for next 5 columns\ndata[['mode', 'speechiness',\n 'acousticness', 'instrumentalness', 'liveness',]].info() # datatypes are in acoordance with provided info",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 6358 entries, 0 to 6397\nData columns (total 5 columns):\nmode 6358 non-null int64\nspeechiness 6358 non-null float64\nacousticness 6358 non-null float64\ninstrumentalness 6358 non-null float64\nliveness 6358 non-null float64\ndtypes: float64(4), int64(1)\nmemory usage: 298.0 KB\n"
],
[
"data[['mode', 'speechiness',\n 'acousticness', 'instrumentalness', 'liveness',]].describe() # every feautre is within range",
"_____no_output_____"
],
[
"sns.countplot(x=data['mode']) # have only two possible values 0 and 1, no noise in the feature",
"_____no_output_____"
],
[
"data[['valence', 'tempo',\n 'duration_ms', 'time_signature', 'chorus_hit', 'sections']].info() # data type is in accordance with provided info",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 6358 entries, 0 to 6397\nData columns (total 6 columns):\nvalence 6358 non-null float64\ntempo 6358 non-null float64\nduration_ms 6358 non-null int64\ntime_signature 6358 non-null int64\nchorus_hit 6358 non-null float64\nsections 6358 non-null int64\ndtypes: float64(3), int64(3)\nmemory usage: 667.7 KB\n"
],
[
"data[['valence', 'tempo',\n 'duration_ms', 'time_signature', 'chorus_hit', 'sections']].describe() # all the data are in specified range",
"_____no_output_____"
]
],
[
[
"# Performing F-test to know the relation between every feature and target",
"_____no_output_____"
]
],
[
[
"data.head()",
"_____no_output_____"
],
[
"x = data.iloc[:,:-1].values\ny = data.iloc[:,-1].values\nx.shape,y.shape",
"_____no_output_____"
],
[
"from sklearn.feature_selection import f_classif\nf_stat,p_value = f_classif(x,y) ",
"_____no_output_____"
],
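[
"# Added cross-check (sketch): for a binary target, f_classif is a one-way ANOVA\n# per feature, so scipy.stats.f_oneway on a single column should agree with it.\nfrom scipy.stats import f_oneway\ng0 = data.loc[data['target'] == 0, 'danceability']\ng1 = data.loc[data['target'] == 1, 'danceability']\nprint(f_oneway(g0, g1))",
"_____no_output_____"
],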
[
"feat_list = data.iloc[:,:-1].columns.tolist()",
"_____no_output_____"
],
[
"# making a dataframe\ndict = {'Features':feat_list,'f_statistics':f_stat,'p_value':p_value}\nrelation = pd.DataFrame(dict)\nrelation.sort_values(by='p_value')",
"_____no_output_____"
]
],
[
[
"# Multivariate analysis",
"_____no_output_____"
]
],
[
[
"correlation = data.corr()",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,12))\nsns.heatmap(correlation, annot=True)\nplt.tight_layout",
"_____no_output_____"
]
],
[
[
"# strong features(accordance with f-test) --> \ndanceability, loudness, acousticness, instrumentalness, valence\n\n# less imortant feature(accordance with f-test)-->\nduration, section, mode, time_signature, chorus hit\n\n# least imortant--> \nenergy,key,speecheness,liveliness,tempo",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0b1fbccd35965e39dcdaeb10604bbb6784c2c74 | 31,339 | ipynb | Jupyter Notebook | ch02.ipynb | RusWhang/pydata-book | 3d39a31faff4d76c3083b2643cb18268e942d1f2 | [
"MIT"
] | null | null | null | ch02.ipynb | RusWhang/pydata-book | 3d39a31faff4d76c3083b2643cb18268e942d1f2 | [
"MIT"
] | null | null | null | ch02.ipynb | RusWhang/pydata-book | 3d39a31faff4d76c3083b2643cb18268e942d1f2 | [
"MIT"
] | null | null | null | 18.844859 | 87 | 0.451674 | [
[
[
"# Python Language Basics, IPython, and Jupyter Notebooks",
"_____no_output_____"
]
],
[
[
"import numpy as np\nnp.random.seed(12345)\nnp.set_printoptions(precision=4, suppress=True)",
"_____no_output_____"
]
],
[
[
"## The Python Interpreter",
"_____no_output_____"
],
[
"```python\n$ python\nPython 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)\n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> a = 5\n>>> print(a)\n5\n```",
"_____no_output_____"
],
[
"```python\nprint('Hello world')\n```",
"_____no_output_____"
],
[
"```python\n$ python hello_world.py\nHello world\n```",
"_____no_output_____"
],
[
"```shell\n$ ipython\nPython 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)\nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 5.1.0 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object', use 'object??' for extra details.\n\nIn [1]: %run hello_world.py\nHello world\n\nIn [2]:\n```",
"_____no_output_____"
],
[
"## IPython Basics",
"_____no_output_____"
],
[
"### Running the IPython Shell",
"_____no_output_____"
],
[
"$ ",
"_____no_output_____"
]
],
[
[
"import numpy as np\ndata = {i : np.random.randn() for i in range(7)}\ndata",
"_____no_output_____"
]
],
[
[
">>> from numpy.random import randn\n>>> data = {i : randn() for i in range(7)}\n>>> print(data)\n{0: -1.5948255432744511, 1: 0.10569006472787983, 2: 1.972367135977295,\n3: 0.15455217573074576, 4: -0.24058577449429575, 5: -1.2904897053651216,\n6: 0.3308507317325902}",
"_____no_output_____"
],
[
"### Running the Jupyter Notebook",
"_____no_output_____"
],
[
"```shell\n$ jupyter notebook\n[I 15:20:52.739 NotebookApp] Serving notebooks from local directory:\n/home/wesm/code/pydata-book\n[I 15:20:52.739 NotebookApp] 0 active kernels\n[I 15:20:52.739 NotebookApp] The Jupyter Notebook is running at:\nhttp://localhost:8888/\n[I 15:20:52.740 NotebookApp] Use Control-C to stop this server and shut down\nall kernels (twice to skip confirmation).\nCreated new window in existing browser session.\n```",
"_____no_output_____"
],
[
"### Tab Completion",
"_____no_output_____"
],
[
"```\nIn [1]: an_apple = 27\n\nIn [2]: an_example = 42\n\nIn [3]: an\n```",
"_____no_output_____"
],
[
"```\nIn [3]: b = [1, 2, 3]\n\nIn [4]: b.\n```",
"_____no_output_____"
],
[
"```\nIn [1]: import datetime\n\nIn [2]: datetime.\n```",
"_____no_output_____"
],
[
"```\nIn [7]: datasets/movielens/\n```",
"_____no_output_____"
],
[
"### Introspection",
"_____no_output_____"
],
[
"```\nIn [8]: b = [1, 2, 3]\n\nIn [9]: b?\nType: list\nString Form:[1, 2, 3]\nLength: 3\nDocstring:\nlist() -> new empty list\nlist(iterable) -> new list initialized from iterable's items\n\nIn [10]: print?\nDocstring:\nprint(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n\nPrints the values to a stream, or to sys.stdout by default.\nOptional keyword arguments:\nfile: a file-like object (stream); defaults to the current sys.stdout.\nsep: string inserted between values, default a space.\nend: string appended after the last value, default a newline.\nflush: whether to forcibly flush the stream.\nType: builtin_function_or_method\n```",
"_____no_output_____"
],
[
"```python\ndef add_numbers(a, b):\n \"\"\"\n Add two numbers together\n\n Returns\n -------\n the_sum : type of arguments\n \"\"\"\n return a + b\n```",
"_____no_output_____"
],
[
"```python\nIn [11]: add_numbers?\nSignature: add_numbers(a, b)\nDocstring:\nAdd two numbers together\n\nReturns\n-------\nthe_sum : type of arguments\nFile: <ipython-input-9-6a548a216e27>\nType: function\n```",
"_____no_output_____"
],
[
"```python\nIn [12]: add_numbers??\nSignature: add_numbers(a, b)\nSource:\ndef add_numbers(a, b):\n \"\"\"\n Add two numbers together\n\n Returns\n -------\n the_sum : type of arguments\n \"\"\"\n return a + b\nFile: <ipython-input-9-6a548a216e27>\nType: function\n```",
"_____no_output_____"
],
[
"```python\nIn [13]: np.*load*?\nnp.__loader__\nnp.load\nnp.loads\nnp.loadtxt\nnp.pkgload\n```",
"_____no_output_____"
],
[
"### The %run Command",
"_____no_output_____"
],
[
"```python\ndef f(x, y, z):\n return (x + y) / z\n\na = 5\nb = 6\nc = 7.5\n\nresult = f(a, b, c)\n```",
"_____no_output_____"
],
[
"```python\nIn [14]: %run ipython_script_test.py\n```",
"_____no_output_____"
],
[
"```python\nIn [15]: c\nOut [15]: 7.5\n\nIn [16]: result\nOut[16]: 1.4666666666666666\n```",
"_____no_output_____"
],
[
"```python\n>>> %load ipython_script_test.py\n\n def f(x, y, z):\n return (x + y) / z\n\n a = 5\n b = 6\n c = 7.5\n\n result = f(a, b, c)\n```",
"_____no_output_____"
],
[
"#### Interrupting running code",
"_____no_output_____"
],
[
"### Executing Code from the Clipboard",
"_____no_output_____"
],
[
"```python\nx = 5\ny = 7\nif x > 5:\n x += 1\n\n y = 8\n```",
"_____no_output_____"
],
[
"```python\nIn [17]: %paste\nx = 5\ny = 7\nif x > 5:\n x += 1\n\n y = 8\n## -- End pasted text --\n```",
"_____no_output_____"
],
[
"```python\nIn [18]: %cpaste\nPasting code; enter '--' alone on the line to stop or use Ctrl-D.\n:x = 5\n:y = 7\n:if x > 5:\n: x += 1\n:\n: y = 8\n:--\n```",
"_____no_output_____"
],
[
"### Terminal Keyboard Shortcuts",
"_____no_output_____"
],
[
"### About Magic Commands",
"_____no_output_____"
],
[
"```python\nIn [20]: a = np.random.randn(100, 100)\n\nIn [20]: %timeit np.dot(a, a)\n10000 loops, best of 3: 20.9 µs per loop\n```",
"_____no_output_____"
],
[
"```python\nIn [21]: %debug?\nDocstring:\n::\n\n %debug [--breakpoint FILE:LINE] [statement [statement ...]]\n\nActivate the interactive debugger.\n\nThis magic command support two ways of activating debugger.\nOne is to activate debugger before executing code. This way, you\ncan set a break point, to step through the code from the point.\nYou can use this mode by giving statements to execute and optionally\na breakpoint.\n\nThe other one is to activate debugger in post-mortem mode. You can\nactivate this mode simply running %debug without any argument.\nIf an exception has just occurred, this lets you inspect its stack\nframes interactively. Note that this will always work only on the last\ntraceback that occurred, so you must call this quickly after an\nexception that you wish to inspect has fired, because if another one\noccurs, it clobbers the previous one.\n\nIf you want IPython to automatically do this on every exception, see\nthe %pdb magic for more details.\n\npositional arguments:\n statement Code to run in debugger. You can omit this in cell\n magic mode.\n\noptional arguments:\n --breakpoint <FILE:LINE>, -b <FILE:LINE>\n Set break point at LINE in FILE.\n\n``` ",
"_____no_output_____"
],
[
"```python\nIn [22]: %pwd\nOut[22]: '/home/wesm/code/pydata-book\n\nIn [23]: foo = %pwd\n\nIn [24]: foo\nOut[24]: '/home/wesm/code/pydata-book'\n```",
"_____no_output_____"
],
[
"### Matplotlib Integration",
"_____no_output_____"
],
[
"```python\nIn [26]: %matplotlib\nUsing matplotlib backend: Qt4Agg\n```",
"_____no_output_____"
],
[
"```python\nIn [26]: %matplotlib inline\n```",
"_____no_output_____"
],
[
"## Python Language Basics",
"_____no_output_____"
],
[
"### Language Semantics",
"_____no_output_____"
],
[
"#### Indentation, not braces",
"_____no_output_____"
],
[
"```python\nfor x in array:\n if x < pivot:\n less.append(x)\n else:\n greater.append(x)\n```",
"_____no_output_____"
],
[
"```python\na = 5; b = 6; c = 7\n```",
"_____no_output_____"
],
[
"#### Everything is an object",
"_____no_output_____"
],
[
"#### Comments",
"_____no_output_____"
],
[
"```python\nresults = []\nfor line in file_handle:\n # keep the empty lines for now\n # if len(line) == 0:\n # continue\n results.append(line.replace('foo', 'bar'))\n```",
"_____no_output_____"
],
[
"```python\nprint(\"Reached this line\") # Simple status report\n```",
"_____no_output_____"
],
[
"#### Function and object method calls",
"_____no_output_____"
],
[
"```\nresult = f(x, y, z)\ng()\n```",
"_____no_output_____"
],
[
"```\nobj.some_method(x, y, z)\n```",
"_____no_output_____"
],
[
"```python\nresult = f(a, b, c, d=5, e='foo')\n```",
"_____no_output_____"
],
[
"#### Variables and argument passing",
"_____no_output_____"
]
],
[
[
"a = [1, 2, 3]",
"_____no_output_____"
],
[
"b = a",
"_____no_output_____"
],
[
"a.append(4)\nb",
"_____no_output_____"
]
],
[
[
"```python\ndef append_element(some_list, element):\n some_list.append(element)\n```",
"_____no_output_____"
],
[
"```python\nIn [27]: data = [1, 2, 3]\n\nIn [28]: append_element(data, 4)\n\nIn [29]: data\nOut[29]: [1, 2, 3, 4]\n```",
"_____no_output_____"
],
[
"#### Dynamic references, strong types",
"_____no_output_____"
]
],
[
[
"a = 5\ntype(a)\na = 'foo'\ntype(a)",
"_____no_output_____"
],
[
"'5' + 5",
"_____no_output_____"
],
[
"a = 4.5\nb = 2\n# String formatting, to be visited later\nprint('a is {0}, b is {1}'.format(type(a), type(b)))\na / b",
"_____no_output_____"
],
[
"a = 5\nisinstance(a, int)",
"_____no_output_____"
],
[
"a = 5; b = 4.5\nisinstance(a, (int, float))\nisinstance(b, (int, float))",
"_____no_output_____"
]
],
[
[
"#### Attributes and methods",
"_____no_output_____"
],
[
"```python\nIn [1]: a = 'foo'\n\nIn [2]: a.<Press Tab>\na.capitalize a.format a.isupper a.rindex a.strip\na.center a.index a.join a.rjust a.swapcase\na.count a.isalnum a.ljust a.rpartition a.title\na.decode a.isalpha a.lower a.rsplit a.translate\na.encode a.isdigit a.lstrip a.rstrip a.upper\na.endswith a.islower a.partition a.split a.zfill\na.expandtabs a.isspace a.replace a.splitlines\na.find a.istitle a.rfind a.startswith\n```",
"_____no_output_____"
]
],
[
[
"a = 'foo'",
"_____no_output_____"
],
[
"getattr(a, 'split')",
"_____no_output_____"
]
],
[
[
"#### Duck typing",
"_____no_output_____"
]
],
[
[
"def isiterable(obj):\n try:\n iter(obj)\n return True\n except TypeError: # not iterable\n return False",
"_____no_output_____"
],
[
"isiterable('a string')\nisiterable([1, 2, 3])\nisiterable(5)",
"_____no_output_____"
]
],
[
[
"if not isinstance(x, list) and isiterable(x):\n x = list(x)",
"_____no_output_____"
],
[
"#### Imports",
"_____no_output_____"
],
[
"```python\n# some_module.py\nPI = 3.14159\n\ndef f(x):\n return x + 2\n\ndef g(a, b):\n return a + b\n```",
"_____no_output_____"
],
[
"import some_module\nresult = some_module.f(5)\npi = some_module.PI",
"_____no_output_____"
],
[
"from some_module import f, g, PI\nresult = g(5, PI)",
"_____no_output_____"
],
[
"import some_module as sm\nfrom some_module import PI as pi, g as gf\n\nr1 = sm.f(pi)\nr2 = gf(6, pi)",
"_____no_output_____"
],
[
"#### Binary operators and comparisons",
"_____no_output_____"
]
],
[
[
"5 - 7\n12 + 21.5\n5 <= 2",
"_____no_output_____"
],
[
"a = [1, 2, 3]\nb = a\nc = list(a)\na is b\na is not c",
"_____no_output_____"
],
[
"a == c",
"_____no_output_____"
],
[
"a = None\na is None",
"_____no_output_____"
]
],
[
[
"#### Mutable and immutable objects",
"_____no_output_____"
]
],
[
[
"a_list = ['foo', 2, [4, 5]]\na_list[2] = (3, 4)\na_list",
"_____no_output_____"
],
[
"a_tuple = (3, 5, (4, 5))\na_tuple[1] = 'four'",
"_____no_output_____"
]
],
[
[
"### Scalar Types",
"_____no_output_____"
],
[
"#### Numeric types",
"_____no_output_____"
]
],
[
[
"ival = 17239871\nival ** 6",
"_____no_output_____"
],
[
"fval = 7.243\nfval2 = 6.78e-5",
"_____no_output_____"
],
[
"3 / 2",
"_____no_output_____"
],
[
"3 // 2",
"_____no_output_____"
]
],
[
[
"#### Strings",
"_____no_output_____"
],
[
"a = 'one way of writing a string'\nb = \"another way\"",
"_____no_output_____"
]
],
[
[
"c = \"\"\"\nThis is a longer string that\nspans multiple lines\n\"\"\"",
"_____no_output_____"
],
[
"c.count('\\n')",
"_____no_output_____"
],
[
"a = 'this is a string'\na[10] = 'f'\nb = a.replace('string', 'longer string')\nb",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a = 5.6\ns = str(a)\nprint(s)",
"_____no_output_____"
],
[
"s = 'python'\nlist(s)\ns[:3]",
"_____no_output_____"
],
[
"s = '12\\\\34'\nprint(s)",
"_____no_output_____"
],
[
"s = r'this\\has\\no\\special\\characters'\ns",
"_____no_output_____"
],
[
"a = 'this is the first half '\nb = 'and this is the second half'\na + b",
"_____no_output_____"
],
[
"template = '{0:.2f} {1:s} are worth US${2:d}'",
"_____no_output_____"
],
[
"template.format(4.5560, 'Argentine Pesos', 1)",
"_____no_output_____"
]
],
[
[
"#### Bytes and Unicode",
"_____no_output_____"
]
],
[
[
"val = \"español\"\nval",
"_____no_output_____"
],
[
"val_utf8 = val.encode('utf-8')\nval_utf8\ntype(val_utf8)",
"_____no_output_____"
],
[
"val_utf8.decode('utf-8')",
"_____no_output_____"
],
[
"val.encode('latin1')\nval.encode('utf-16')\nval.encode('utf-16le')",
"_____no_output_____"
],
[
"bytes_val = b'this is bytes'\nbytes_val\ndecoded = bytes_val.decode('utf8')\ndecoded # this is str (Unicode) now",
"_____no_output_____"
]
],
[
[
"#### Booleans",
"_____no_output_____"
]
],
[
[
"True and True\nFalse or True",
"_____no_output_____"
]
],
[
[
"#### Type casting",
"_____no_output_____"
]
],
[
[
"s = '3.14159'\nfval = float(s)\ntype(fval)\nint(fval)\nbool(fval)\nbool(0)",
"_____no_output_____"
]
],
[
[
"#### None",
"_____no_output_____"
]
],
[
[
"a = None\na is None\nb = 5\nb is not None",
"_____no_output_____"
]
],
[
[
"def add_and_maybe_multiply(a, b, c=None):\n result = a + b\n\n if c is not None:\n result = result * c\n\n return result",
"_____no_output_____"
]
],
[
[
"type(None)",
"_____no_output_____"
]
],
[
[
"#### Dates and times",
"_____no_output_____"
]
],
[
[
"from datetime import datetime, date, time\ndt = datetime(2011, 10, 29, 20, 30, 21)\ndt.day\ndt.minute",
"_____no_output_____"
],
[
"dt.date()\ndt.time()",
"_____no_output_____"
],
[
"dt.strftime('%m/%d/%Y %H:%M')",
"_____no_output_____"
],
[
"datetime.strptime('20091031', '%Y%m%d')",
"_____no_output_____"
],
[
"dt.replace(minute=0, second=0)",
"_____no_output_____"
],
[
"dt2 = datetime(2011, 11, 15, 22, 30)\ndelta = dt2 - dt\ndelta\ntype(delta)",
"_____no_output_____"
],
[
"dt\ndt + delta",
"_____no_output_____"
]
],
[
[
"### Control Flow",
"_____no_output_____"
],
[
"#### if, elif, and else",
"_____no_output_____"
],
[
"if x < 0:\n print('It's negative')",
"_____no_output_____"
],
[
"if x < 0:\n print('It's negative')\nelif x == 0:\n print('Equal to zero')\nelif 0 < x < 5:\n print('Positive but smaller than 5')\nelse:\n print('Positive and larger than or equal to 5')",
"_____no_output_____"
]
],
[
[
"a = 5; b = 7\nc = 8; d = 4\nif a < b or c > d:\n print('Made it')",
"_____no_output_____"
],
[
"4 > 3 > 2 > 1",
"_____no_output_____"
]
],
[
[
"#### for loops",
"_____no_output_____"
],
[
"for value in collection:\n # do something with value",
"_____no_output_____"
],
[
"sequence = [1, 2, None, 4, None, 5]\ntotal = 0\nfor value in sequence:\n if value is None:\n continue\n total += value",
"_____no_output_____"
],
[
"sequence = [1, 2, 0, 4, 6, 5, 2, 1]\ntotal_until_5 = 0\nfor value in sequence:\n if value == 5:\n break\n total_until_5 += value",
"_____no_output_____"
]
],
[
[
"for i in range(4):\n for j in range(4):\n if j > i:\n break\n print((i, j))",
"_____no_output_____"
]
],
[
[
"for a, b, c in iterator:\n # do something",
"_____no_output_____"
],
[
"#### while loops",
"_____no_output_____"
],
[
"x = 256\ntotal = 0\nwhile x > 0:\n if total > 500:\n break\n total += x\n x = x // 2",
"_____no_output_____"
],
[
"#### pass",
"_____no_output_____"
],
[
"if x < 0:\n print('negative!')\nelif x == 0:\n # TODO: put something smart here\n pass\nelse:\n print('positive!')",
"_____no_output_____"
],
[
"#### range",
"_____no_output_____"
]
],
[
[
"range(10)\nlist(range(10))",
"_____no_output_____"
],
[
"list(range(0, 20, 2))\nlist(range(5, 0, -1))",
"_____no_output_____"
]
],
[
[
"seq = [1, 2, 3, 4]\nfor i in range(len(seq)):\n val = seq[i]",
"_____no_output_____"
],
[
"sum = 0\nfor i in range(100000):\n # % is the modulo operator\n if i % 3 == 0 or i % 5 == 0:\n sum += i",
"_____no_output_____"
],
[
"#### Ternary expressions",
"_____no_output_____"
],
[
"value = ",
"_____no_output_____"
],
[
"if ",
"_____no_output_____"
]
],
[
[
"x = 5\n'Non-negative' if x >= 0 else 'Negative'",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0b20093de2f760582395136a067e769e5e9bd83 | 805,137 | ipynb | Jupyter Notebook | code/.ipynb_checkpoints/Bathymetry and Coord Grid Generator (Py3)-checkpoint.ipynb | goldford/NEMO-Salish-Sea-2021 | c6a78956293f1741ff9537747dd02ce37c20a4d3 | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Bathymetry and Coord Grid Generator (Py3)-checkpoint.ipynb | goldford/NEMO-Salish-Sea-2021 | c6a78956293f1741ff9537747dd02ce37c20a4d3 | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Bathymetry and Coord Grid Generator (Py3)-checkpoint.ipynb | goldford/NEMO-Salish-Sea-2021 | c6a78956293f1741ff9537747dd02ce37c20a4d3 | [
"MIT"
] | null | null | null | 163.712281 | 342,239 | 0.834006 | [
[
[
"# last edited Apr 4, 2021, by GO. \n# to do: \n \n\n################################################################################\n# script uses 'seagrid' E grid (500 m) and a bathymetric data file to generate \n# new bathymetric .nc file at coarser resolutions (multiples of 500 m). \n# Based on original code provided by M Dunphy. \n# Assumes input bathymetric file is on exact grid as 'seagrid'.\n# Output can be used by NEMO to generate a 'mesh mask'\n\n# in: \n# coordinates_seagrid_SalishSea2.nc - 500m coordinates, same region\n# bathymetry_201702.nc - 500m bathymetry, same region\n#\n# out: \n# coordinates_seagrid_SalishSea_1500m.nc \n# bathymetry_201702.nc\n\n# change log: \n# - Apr 3 2021 - \n# issue with np.nan vs np.NaN or similar causing issues with XIOS. \n# Set all land values to zero instead of nan.\n# - Feb 5, 2021, by GO (previously called 'Working Grid Generator')\n# - fixes to the Fraser river extension and southeast corner\n#\n# - mike made improvements - Dec 29, 2020\n# - to 'decimante' a 500 m Arakawa E grid to 1 km, 1.5 km, 2 km etc grids \n# (factors of 500 m) we can extract by skipping n cells and taking coords \n# from previous 500 m Arakawa E grid. \n# However, the point we extract from the 500 m grid depends on the new grid \n# - it's either an 'f' point (even number of skipped cells, n; e.g., 1 km = \n# 2 x 500 m = even) or a 't' point (if n is odd). \n\n################################################################################\n%matplotlib notebook\n\nimport netCDF4 as nc\nimport numpy as np\nfrom helpers import writebathy, expandf # custom helper fns from MD, MEOPAR \nfrom helpers import gete1, gete2, writecoords, t2u, t2v, t2f\nimport matplotlib.pyplot as plt\n\nres = \"1500m\"\nkm = \"1500m\"\ngridfilename = \"..//data//grid//coordinates_salishsea_{}.nc\".format(res) # in \nn = 3 # e.g., 500m x n = new res\ndatetag = \"20210406\"\nbathyout_filename = \"..//data//bathymetry//bathy_salishsea_{}_{}.nc\".format(res,datetag)\nbathyout_filename_preedits = \"..//data//bathymetry//bathy_salishsea_{}_before_manual_edits.nc\".format(res)\n\ndef loadreduce_md(pt, n):\n c0 = '..//data//grid//etc//coordinates_seagrid_SalishSea2.nc'\n\n with nc.Dataset(c0) as ncid:\n if pt=='t':\n glam = ncid.variables[\"glamt\"][0, 1::n, 1::n].filled()\n gphi = ncid.variables[\"gphit\"][0, 1::n, 1::n].filled()\n if pt=='u':\n glam = ncid.variables[\"glamu\"][0, 1::n, 2::n].filled()\n gphi = ncid.variables[\"gphiu\"][0, 1::n, 2::n].filled()\n if pt=='v':\n glam = ncid.variables[\"glamv\"][0, 2::n, 1::n].filled()\n gphi = ncid.variables[\"gphiv\"][0, 2::n, 1::n].filled()\n if pt=='f':\n glam = ncid.variables[\"glamf\"][0, 2::n, 2::n].filled()\n gphi = ncid.variables[\"gphif\"][0, 2::n, 2::n].filled()\n return glam, gphi",
"_____no_output_____"
],
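[
"# illustrative aside: how the '1::n' / '2::n' slicing above decimates the 500 m grid\n# (relies on np imported in the previous cell)\ndemo = np.arange(12)\nprint(demo[1::3])  # -> [ 1  4  7 10] : start index used for t/u/v points when n is odd\nprint(demo[2::3])  # -> [ 2  5  8 11] : start index used for f points",
"_____no_output_____"
],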
[
"#########################################################\n######### DECIMATE GRID FROM 500m to 1500m ##############\n\n# Since we're doing a 3-way reduction, we can re-use the original points and not calculate new ones\nglamt, gphit = loadreduce_md('t', n)\nglamu, gphiu = loadreduce_md('u', n)\nglamv, gphiv = loadreduce_md('v', n)\nglamf, gphif = loadreduce_md('f', n)\n\n# Compute scaling factors (with extrapolation for the left/bottom most scaling factor)\ne1t = gete1(glamu,gphiu,expandleft=True) # Need a left u point\ne1u = gete1(glamt,gphit)\ne1v = gete1(glamf,gphif,expandleft=True) # Need a left f point\ne1f = gete1(glamv,gphiv)\n#\ne2t = gete2(glamv,gphiv,expanddown=True) # Need a lower v point\ne2u = gete2(glamf,gphif,expanddown=True) # Need a lower f point\ne2v = gete2(glamt,gphit)\ne2f = gete2(glamu,gphiu)\n\n# Output slices\nNY,NX = glamt.shape\nJ,I = slice(0,NY), slice(0,NX-1)\n\n\nwritecoords(gridfilename,\n glamt[J,I],glamu[J,I],glamv[J,I],glamf[J,I],\n gphit[J,I],gphiu[J,I],gphiv[J,I],gphif[J,I],\n e1t[J,I],e1u[J,I],e1v[J,I],e1f[J,I],\n e2t[J,I],e2u[J,I],e2v[J,I],e2f[J,I])",
"_____no_output_____"
],
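[
"# quick sanity check (illustrative): the decimated grid spacing should be ~1500 m\nprint('mean e1t:', e1t[J,I].mean(), 'm')\nprint('mean e2t:', e2t[J,I].mean(), 'm')",
"_____no_output_____"
],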
[
"##########################################################\n############### DECIMATE AND EDIT BATHY ##################\n\n# --------------------------------------------------------------------------------\n# 1) get the grid centres (t points) for new grid\nwith nc.Dataset(gridfilename) as ncid:\n glamt = ncid.variables[\"glamt\"][0, :, :].filled()\n gphit = ncid.variables[\"gphit\"][0, :, :].filled()\n\n# --------------------------------------------------------------------------------\n# 2) get depths from 500 m bathy file\nwith nc.Dataset('..//data//bathymetry//etc//bathymetry_201702.nc') as nc_b_file:\n a = nc_b_file.variables[\"Bathymetry\"][:, :].filled()\n\n# --------------------------------------------------------------------------------\n# 3) 'land mask' from 500 m bathy\nmask = a.copy()\nmask[mask > 0] = 1\n\n# --------------------------------------------------------------------------------\n# 4) create new grid taking mean of surrounding cells\na2 = np.zeros(glamt.shape)\nm2 = np.zeros(glamt.shape)\nfor j in range(a2.shape[0]):\n for i in range(a2.shape[1]):\n i1, i2 = 3*i, 3*i+3\n j1, j2 = 3*j, 3*j+3\n\n bvals = a[j1:j2, i1:i2] # extract 3x3 box of bathy values\n a2[j,i] = np.mean(bvals)\n mvals = mask[j1:j2, i1:i2]\n m2[j,i] = np.mean(mvals)\n\n# --------------------------------------------------------------------------------\n# 5) filter new bathy grid based on % land\n # (m2 is the % of the new 1500m cell that was land in 500m version)\na2[m2 < 0.5] = 0\n\n# --------------------------------------------------------------------------------\n# 6) set min depth\na2[(a2 > 0) & (a2 < 4)] = 4\n\n# --------------------------------------------------------------------------------\n# 6a) write to file pre-edits\nwritebathy(bathyout_filename_preedits,glamt,gphit,a2)\n\ndef manualedits(a, n):\n # a = array of depths, 1.5 km grid\n \n # manual edits for 1.5 km bathy\n if n == 3:\n # north to south\n a[296,57] = 40 #northern fjord\n a[296,54] = 60 #northern fjord\n a[296,53] = 60 #northern fjord\n a[295,52] = 150 #northern fjord\n a[286,44] = 200 #northern fjord\n a[289,32] = 20 #hardwick\n a[289,33] = 20 #hardwick\n a[286,33] = 20 #hardwick\n a[285,30] = 20 #hardwick\n a[284,41] = 20 #west thurlow\n a[284,42] = 20 #west thurlow\n a[283,43] = 20 #west thurlow\n a[282,43] = 20 #west thurlow\n a[281,44] = 20 #west thurlow\n a[279,43] = 20 #west thurlow\n a[279,45] = 20 #west thurlow\n a[269,59] = 20 #sonora\n a[265,54] = 10 #maurelle\n a[265,57] = 20 #maurelle\n a[266,52] = 20 #quadra\n a[268,46] = 20 #quadra\n a[259,53] = 20 #quadra\n a[260,57] = 20 #read\n a[254,63] = 20 #cortes\n a[254,62] = 20 #cortes\n a[255,62] = 20 #cortes\n a[254,72] = 6 #redonda\n a[254,73] = 20 #redonda\n a[252,72] = 20 #redonda\n a[252,71] = 20 #redonda\n a[251,71] = 20 #redonda\n a[197,82] = 30 #nelson\n a[197,84] = 60 #nelson\n a[199,86] = 60 #nelson\n a[200,86] = 60 #nelson\n a[156,73] = 30 #gabriola\n a[132,72] = 100 #salt spring\n a[128,71] = 50 #salt spring\n a[123,86] = 30 #mayne\n a[146,112] = 0.0 #north fraser\n a[146,114] = 0.0 #north fraser\n a[146,113] = 0.0 #north fraser\n a[146,108] = 6 #north fraser\n a[146,109] = 6 #north fraser\n a[145,112] = 0.0 #north fraser\n a[145,115] = 0.0 #north fraser\n #a[145,108] = 10 #north fraser\n a[144,108] = 6 #north fraser\n a[144,109] = 6 #north fraser\n a[144,110] = 6 #north fraser\n a[144,111] = 6 #north fraser\n a[144,112] = 6 #north fraser\n a[144,115] = 0.0 #north fraser\n a[145,110] = 0.0 #north fraser\n a[145,111] = 0.0 #north fraser\n a[145,114] = 0.0 
#north fraser\n a[144,107] = 6 #north fraser\n a[143,112] = 6 #north fraser\n a[143,113] = 6 #north fraser\n a[143,115] = 0.0 #north fraser\n a[143,116] = 0.0 #north fraser\n a[142,113] = 6 #north fraser\n a[142,114] = 6 #north fraser\n a[142,116] = 0.0 #north fraser\n a[141,116] = 0.0 #north fraser\n a[141,118] = 6 #north fraser\n a[141,120] = 6 #north fraser\n a[142,120] = 6 #north fraser\n a[136,103] = 6 #south fraser\n a[137,104] = 6 #south fraser\n a[138,109] = 10 #south fraser\n a[138,112] = 12 #south fraser\n a[139,104] = 10 #south fraser\n a[139,113] = 10 #south fraser\n a[139,114] = 10 #south fraser\n a[138,113] = 0.0 #south fraser\n a[138,117] = 0.0 #south fraser\n a[137,107] = 0.0 #south fraser\n a[137,109] = 0.0 #south fraser\n a[137,110] = 0.0 #south fraser\n a[137,113] = 0.0 #south fraser\n a[137,114] = 0.0 #south fraser\n a[137,115] = 0.0 #south fraser\n a[137,116] = 10 #south fraser\n a[137,117] = 10 #south fraser\n a[137,118] = 12 #south fraser\n a[136,116] = 0.0 #south fraser\n a[136,117] = 0.0 #south fraser\n a[136,118] = 0.0 #south fraser\n a[136,119] = 0.0 #south fraser\n a[140,119] = 6 # fraser\n a[140,125] = 0.0 # fraser\n a[141,117] = 6 # fraser\n a[141,118] = 0.0 # fraser\n a[142,118] = 0.0 # fraser\n a[142,119] = 0.0 # fraser\n a[142,120] = 0.0 # fraser\n a[141,120] = 0.0 # fraser\n a[140,120] = 0.0 # fraser\n a[140,121] = 0.0 # fraser\n a[141,121] = 0.0 # fraser\n a[140,123] = 0.0 # fraser\n a[141,123] = 0.0 # fraser\n a[142,123] = 0.0 # fraser\n a[142,122] = 0.0 # fraser\n a[141,122] = 0.0 # fraser\n a[141,124] = 0.0 # fraser\n a[140,124] = 0.0 # fraser\n a[140,125] = 0.0 # fraser\n a[141,115] = 6 # fraser\n a[141,116] = 6 # fraser\n a[142,115] = 6 # fraser\n a[140,115] = 0.0 # fraser\n a[140,116] = 0.0 # fraser\n a[140,117] = 6 # fraser\n a[140,118] = 6 # fraser\n a[138,114] = 10 # fraser\n a[138,115] = 10 # fraser\n a[138,116] = 10 # fraser\n a[138,118] = 6 # fraser\n a[138,127] = 10 # fraser\n a[138,128] = 10 # fraser\n a[138,129] = 10 # fraser\n a[138,130] = 0.0 # fraser\n a[137,129] = 10 # fraser\n a[136,129] = 10 # fraser\n a[135,129] = 0 # fraser\n a[135,130] = 10 # fraser\n a[135,131] = 10 # fraser\n a[136,131] = 10 # fraser\n a[137,131] = 10 # fraser\n a[139,112] = 10 # fraser\n a[139,117] = 0.0 # fraser\n a[139,118] = 0.0 # fraser\n a[139,121] = 10 # fraser\n a[139,122] = 10 # fraser\n a[139,123] = 10 # fraser\n a[139,124] = 10 # fraser\n a[139,125] = 10 # fraser\n a[139,126] = 10 # fraser\n a[139,130] = 6 #artificial frsr riv extension\n a[140,130] = 6 #artificial frsr riv extension\n a[141,130] = 6 #artificial frsr riv extension\n a[142,130] = 6 #artificial frsr riv extension\n a[143,130] = 6 #artificial frsr riv extension\n a[144,130] = 6 #artificial frsr riv extension\n a[145,130] = 6 #artificial frsr riv extension\n a[146,130] = 6 #artificial frsr riv extension\n a[147,130] = 6 #artificial frsr riv extension \n a[148,130] = 6 #artificial frsr riv extension\n a[149,130] = 6 #artificial frsr riv extension\n a[150,130] = 6 #artificial frsr riv extension\n a[151,130] = 6 #artificial frsr riv extension\n a[140,129] = 6 #artificial frsr riv extension\n a[139,129] = 6 #artificial frsr riv extension\n a[136,129] = 0.0 #artificial frsr riv extension\n a[135,131] = 0.0 #artificial frsr riv extension\n a[136,131] = 0.0 #artificial frsr riv extension\n a[137,131] = 0.0 #artificial frsr riv extension\n a[138,131] = 0.0 #artificial frsr riv extension\n a[135,130] = 0.0 #artificial frsr riv extension\n a[137,129] = 0.0 #artificial frsr riv extension\n a[100,93] 
= 15 #shaw is\n a[99,94] = 15 #shaw is \n a[102,81] = 15 #san juan is \n a[102,82] = 10 #san juan is \n a[103,82] = 10 #san juan is \n a[104,82] = 10 #san juan is \n a[104,83] = 10 #san juan is \n a[94,98] = 30 #lopez is\n a[82,101] = 20 #rosario b\n a[81,101] = 20 #rosario b\n a[81,102] = 10 #rosario b\n a[33,43] = 200 #hood cnl\n a[34,44] = 200 #hood cnl\n a[24,39] = 30 #hood cnl\n a[23,41] = 30 #hood cnl\n a[23,48] = 30 #hood cnl\n a[7,59] = 30 #tacoma\n a[5,58] = 30 #tacoma\n a[8,54] = 20 #fox is\n a[28,71] = 10 #bremerton\n a[28,72] = 10 #bremerton\n a[30,72] = 10 #bremerton\n a[27,68] = 20 #bremerton\n a[26,68] = 30 #bremerton\n a[26,66] = 30 #bremerton\n a[4,31] = 30 #southwest\n a[4,32] = 30 #southwest\n a[4,33] = 30 #southwest\n a[9,31] = 10 #southwest\n a[11,39] = 10 #southwest\n a[12,39] = 10 #southwest\n a[13,40] = 10 #southwest\n a[14,41] = 10 #southwest\n a[6,35] = 10 #southwest\n a[8,35] = 10 #southwest\n a[19,50] = 10 #southwest\n a[19,49] = 10 #southwest\n a[10,68] = 10 #southwest\n a[8,30] = 10 #southwest\n a[8,29] = 10 #southwest\n a[1,46] = 10 #southwest - MD fix 20210208\n return a\n\n# --------------------------------------------------------------------------------\n# 7) apply manual edits\na3 = manualedits(a2, n)\n\n\n",
"_____no_output_____"
],
[
"# Apr 6 --------------------------------------------------------------------------\n# 8) replace all np.nan and nan with 0.0\n\n \n \n",
"_____no_output_____"
],
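[
"# verify no NaNs remain before writing out (illustrative check)\nassert not np.isnan(a3).any(), 'bathymetry still contains NaN values'\nprint('min wet depth:', a3[a3 > 0].min(), ' max depth:', a3.max())",
"_____no_output_____"
],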
[
"# --------------------------------------------------------------------------------\n# 9) write to file\nwritebathy(bathyout_filename,glamt,gphit,a3)\n\nprint(\"success\")",
"_____no_output_____"
]
],
[
[
"# Plots to check channels etc\n- took from old checkbathy and plotgrids files",
"_____no_output_____"
]
],
[
[
"import scipy.io as sio\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\"))\n\nfrom helpers import expandf, grid_angle\n\n# grid\ndef load1(f):\n with nc.Dataset(f) as ncid:\n glamt = ncid.variables[\"glamt\"][0, :, :].filled()\n gphit = ncid.variables[\"gphit\"][0, :, :].filled()\n glamu = ncid.variables[\"glamu\"][0, :, :].filled()\n gphiu = ncid.variables[\"gphiu\"][0, :, :].filled()\n glamv = ncid.variables[\"glamv\"][0, :, :].filled()\n gphiv = ncid.variables[\"gphiv\"][0, :, :].filled()\n glamf = ncid.variables[\"glamf\"][0, :, :].filled()\n gphif = ncid.variables[\"gphif\"][0, :, :].filled()\n return glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif\n\n#\ndef load2(f):\n with nc.Dataset(f) as ncid:\n e1t = ncid.variables[\"e1t\"][0, :, :].filled()\n e1u = ncid.variables[\"e1u\"][0, :, :].filled()\n e1v = ncid.variables[\"e1v\"][0, :, :].filled()\n e1f = ncid.variables[\"e1f\"][0, :, :].filled()\n e2t = ncid.variables[\"e2t\"][0, :, :].filled()\n e2u = ncid.variables[\"e2u\"][0, :, :].filled()\n e2v = ncid.variables[\"e2v\"][0, :, :].filled()\n e2f = ncid.variables[\"e2f\"][0, :, :].filled()\n return e1t,e1u,e1v,e1f,e2t,e2u,e2v,e2f\n\ndef load3(f):\n with nc.Dataset(f) as ncid:\n depth = ncid.variables[\"Bathymetry\"][:, :].filled()\n latt = ncid.variables[\"nav_lat\"][:, :].filled()\n lont = ncid.variables[\"nav_lon\"][:, :].filled()\n\n return depth, latt, lont\n\n# for rivers - GO\ndef load4(f):\n with nc.Dataset(f) as ncid:\n rorunoff = ncid.variables[\"rorunoff\"][6, :, :].filled()\n latt = ncid.variables[\"nav_lat\"][:, :].filled()\n lont = ncid.variables[\"nav_lon\"][:, :].filled()\n\n return rorunoff, latt, lont\n\n# grid\ndef plotgrid1(f):\n glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif = load1(f)\n\n plt.figure(figsize=(7,5)); plt.clf()\n\n # Draw sides of every box\n glamfe, gphife = expandf(glamf, gphif)\n NY,NX = glamfe.shape\n print(glamt.shape)\n print(glamu.shape)\n print(glamf.shape)\n for j in range(NY):\n plt.plot(glamfe[j,:],gphife[j,:], 'k')\n for i in range(NX):\n plt.plot(glamfe[:,i],gphife[:,i], 'k')\n\n # Plot t, u, v, f points in red, green, blue, magenta \n plt.plot(glamt, gphit, 'r.')\n plt.plot(glamu, gphiu, 'g.')\n plt.plot(glamv, gphiv, 'b.')\n plt.plot(glamf, gphif, 'm.')\n\n plt.tight_layout()\n plt.xlim([-123.5,-123.3])\n plt.ylim([46.84,46.95])\n\n #plt.savefig(f.replace(\".nc\",\"_gridpts.png\"))\n\n# grid\ndef plotgrid2(f):\n glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif = load1(f)\n e1t,e1u,e1v,e1f,e2t,e2u,e2v,e2f = load2(f)\n glamfe, gphife = expandf(glamf, gphif)\n A = grid_angle(f)\n \n plt.figure(figsize=(12,4))\n\n plt.subplot(1,3,1)\n plt.pcolormesh(glamfe,gphife,e1t); plt.colorbar(); plt.title(\"e1t (m)\")\n plt.subplot(1,3,2)\n plt.pcolormesh(glamfe,gphife,e2t); plt.colorbar(); plt.title(\"e2t (m)\")\n plt.subplot(1,3,3)\n plt.pcolormesh(glamf,gphif,A); plt.colorbar(); plt.title(\"angle (deg)\")\n \n plt.tight_layout()\n plt.savefig(f.replace(\".nc\",\"_resolution_angle.png\"))\n \n# bathy\ndef plotgrid3(f):\n depth, latt, lont = load3(f)\n \n depth[depth==0]=np.nan\n depth[depth>0]=1\n #print(depth.shape)\n \n # can do edits below \n # made permanent in the main create bathy above\n # north to south\n #depth[178,128] = 400 #northern fjord\n# depth[296,54] = 60 #northern fjord\n# depth[296,53] = 60 #northern fjord\n\n plt.figure(figsize=(8,8))\n\n plt.subplot(1,1,1)\n plt.pcolormesh(depth, cmap=plt.plasma()); plt.colorbar(); plt.title(\"depth\")\n 
#plt.pcolormesh(depth); plt.colorbar(); plt.title(\"depth\")\n #plt.pcolormesh(ma_rorunoff, cmap=plt.pink()); plt.title(\"rodepth\")\n \n plt.tight_layout()\n plt.savefig(f.replace(\".nc\",\"_bathycheck.png\"))\n\n# runoff / rivers\ndef plotgrid4(f):\n depth, latt, lont = load3(f)\n \n # added for river runoff overlay\n rorunoff, latt2, lontt2 = load4('c:/temp/runofftools/rivers_month_202101GO.nc')\n #rorunoff[rorunoff==0]=np.nan\n #print(rorunoff.shape)\n ma_rorunoff = np.ma.masked_array(rorunoff, rorunoff == 0)\n \n depth[depth==0]=np.nan\n depth[depth>0]=1\n \n #print(depth.shape)\n\n plt.figure(figsize=(8,8))\n\n plt.subplot(1,1,1)\n plt.pcolormesh(depth, cmap=plt.plasma()); plt.colorbar(); plt.title(\"depth\")\n #plt.pcolormesh(depth); plt.colorbar(); plt.title(\"depth\")\n #plt.pcolormesh(ma_rorunoff, cmap=plt.pink()); plt.title(\"rodepth\")\n \n plt.tight_layout()\n plt.savefig(\"C:/temp/runofftools/runoffcheck2.png\")\n",
"_____no_output_____"
],
[
"# #################################################################\n# #################### BASIC PLOT OF BATHY ########################\n\ngridfilename = '..//data//grid//coordinates_salishsea_1500m.nc'\n#bathyfilename = 'bathy_salishsea_1500m_before_manual_edits.nc'\n#bathyfilename = '..//data//bathymetry//bathy_salishsea_1500m_Dec30.nc'\n\nwith nc.Dataset(gridfilename) as ncid:\n glamt = ncid.variables[\"glamt\"][0, :, :].filled()\n gphit = ncid.variables[\"gphit\"][0, :, :].filled()\n glamf = ncid.variables[\"glamf\"][0, :, :].filled()\n gphif = ncid.variables[\"gphif\"][0, :, :].filled()\nglamfe,gphife=expandf(glamf,gphif)\n\nwith nc.Dataset(bathyout_filename) as nc_b_file:\n bathy = nc_b_file.variables[\"Bathymetry\"][:, :].filled()\n\nbb=np.copy(bathy); bb[bb==0]=np.nan\nplt.figure(figsize=(8,8))\nplt.subplot(1,1,1)\nplt.pcolormesh(glamfe,gphife,bb); plt.colorbar()\n# Coastlines\nmfile = sio.loadmat('..//data//reference//PNW.mat')\nncst = mfile['ncst']\nplt.plot(ncst[:,0],ncst[:,1],'k')\n\nmfile2 = sio.loadmat('..//data//reference//PNWrivers.mat')\nncst2 = mfile2['ncst']\nplt.plot(ncst2[:,0],ncst2[:,1],'k')",
"_____no_output_____"
],
[
"##########################################################\n############### PLOTS TO CHECK BATHY ETC #################\n\n# plotgrid1('coordinates_seagrid_SalishSea2.nc')\n#plotgrid1('coordinates_salishsea_1km.nc')\n#plotgrid1('coordinates_salishsea_1500m.nc')\n#plotgrid1('coordinates_salishsea_2km.nc')\n#plotgrid2('coordinates_seagrid_SalishSea2.nc')\n# plotgrid2('coordinates_salishsea_1km.nc')\n#plotgrid2('coordinates_salishsea_2km.nc')\n#plotgrid2('coordinates_salishsea_1p5km.nc')\n#plotgrid3('bathy_salishsea_1500m_Dec21.nc')\nplotgrid3(bathyout_filename)\n#plotgrid3('bathy_salishsea_2km.nc')",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:103: RuntimeWarning: invalid value encountered in greater\n"
],
[
"# junk code below",
"_____no_output_____"
],
[
"a = range(24)\nb = a[::3]\nlist(b)",
"_____no_output_____"
],
[
"my_list[0] = [_ for _ in 'abcdefghi']\nmy_list[1] = [_ for _ in 'abcdefghi']",
"_____no_output_____"
],
[
"my_list[0:-1]",
"_____no_output_____"
],
[
"glamu.shape",
"_____no_output_____"
],
[
"a[296,10]",
"_____no_output_____"
],
[
"############################################################\n### EXPLORE TWO MESHES - NEMO ORAS5 and SS1500 #############\n### Apr 2021\nimport sys\n# load mask (tmask)\ndef loadmask(f):\n with nc.Dataset(f) as ncid:\n tmaskutil = ncid.variables[\"tmaskutil\"][0,:, :].filled()\n latt = ncid.variables[\"nav_lat\"][:, :].filled()\n lont = ncid.variables[\"nav_lon\"][:, :].filled()\n e1t = ncid.variables[\"e1t\"][0,:, :].filled()\n e2t = ncid.variables[\"e2t\"][0,:, :].filled()\n\n return tmaskutil, latt, lont, e1t, e2t\n\ndef plot_two_grids(f,g):\n\n # load ss1500mask\n tmask, latt, lont, e1t, e2t = loadmask(f)\n\n # load ORAS5\n tmask2, latt2, lont2, e1t2, e2t2 = loadmask(g)\n\n #print(tmask[:,])\n #plt.subplot(1,1,1)\n #plt.figure(figsize=(7,5)); plt.clf()\n \n plt.scatter(lont, latt, tmask)\n plt.scatter(lont2, latt2, tmask2)\n\n\n # Draw sides of every box\n #glamfe, gphife = expandf(glamf, gphif)\n #NY,NX = glamfe.shape\n\n #for j in range(NY):\n # plt.plot(glamfe[j,:],gphife[j,:], 'k')\n #for i in range(NX):\n # plt.plot(glamfe[:,i],gphife[:,i], 'k')\n\n # Plot t, u, v, f points in red, green, blue, magenta \n #plt.plot(glamt, gphit, 'r.')\n #plt.plot(glamu, gphiu, 'g.')\n #plt.plot(glamv, gphiv, 'b.')\n #plt.plot(glamf, gphif, 'm.')\n\n #plt.plot(glamt_2, gphit_2, 'b.')\n\n #plt.plot(glamu, gphiu, 'g.')\n #plt.plot(glamv, gphiv, 'b.')\n #plt.plot(glamf, gphif, 'm.')\n \n plt.tight_layout()\n plt.xlim([-126.2,-122.1])\n plt.ylim([46.84,52])\n\n #plt.savefig(f.replace(\".nc\",\"_gridpts.png\"))\n \nres = \"1500m\"\nss1500grid = \"..//data//grid//coordinates_salishsea_{}.nc\".format(res) # in \ndatetag = \"20210406\"\noras5grid = \"..//data//reference//ORAS5 Mask and Bathy//mesh_mask.nc\"\nss1500meshmask = \"..//data//mesh mask//mesh_mask_20210406.nc\"\n\nnp.set_printoptions(threshold=sys.maxsize)\nplot_two_grids(ss1500meshmask, oras5grid)",
"_____no_output_____"
],
[
"\n\n tmask, latt, lont, e1t, e2t = load2(f)\n \n plt.figure(figsize=(8,8))\n plt.subplot(1,1,1)\n plt.pcolormesh(tmask[:,:], cmap=plt.pink()); plt.title(\"model_mask\")\n\n plt.tight_layout()\n\n",
"(299, 132)\n"
],
[
"plt.figure(figsize=(7,5)); plt.clf()\nplt.plot(tmaskutil[0,:],tmaskutil[:,0], 'r.')",
"_____no_output_____"
],
[
"with nc.Dataset(ss1500meshmask) as ncid:\n print(tmaskutil[:,0])\n",
"[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0]\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b20725f5a2d3545159b10c6ea8675adf4c0c7b | 495,808 | ipynb | Jupyter Notebook | notebooks/part3-neptune-graph.ipynb | aws-samples/aws-video-metadata-knowledge-graph-workshop | 88fb549d36e39222eb60ddf62f4468431dd32d29 | [
"MIT-0"
] | 6 | 2021-12-04T07:58:35.000Z | 2022-02-28T06:28:54.000Z | notebooks/part3-neptune-graph.ipynb | aws-samples/aws-video-metadata-knowledge-graph-workshop | 88fb549d36e39222eb60ddf62f4468431dd32d29 | [
"MIT-0"
] | null | null | null | notebooks/part3-neptune-graph.ipynb | aws-samples/aws-video-metadata-knowledge-graph-workshop | 88fb549d36e39222eb60ddf62f4468431dd32d29 | [
"MIT-0"
] | null | null | null | 316.002549 | 122,244 | 0.922018 | [
[
[
"# PART 3 - Metadata Knowledge Graph creation in Amazon Neptune.\n\nAmazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine. This engine is optimized for storing billions of relationships and querying the graph with milliseconds latency. Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C’s SPARQL, enabling you to build queries that efficiently navigate highly connected datasets.\n\nhttps://docs.aws.amazon.com/neptune/latest/userguide/feature-overview.html\n\nIn that section we're going to use TinkerPop Gremlin as the language to create and query our graph.",
"_____no_output_____"
],
[
"### Important\nWe need to downgrade the tornado library for the gremlin libraries to work in our notebook.\n\nWithout doing this, you'll most likely run into the following error when executing some gremlin queries: \n\"RuntimeError: Cannot run the event loop while another loop is running\"",
"_____no_output_____"
]
],
[
[
"!pip install --upgrade tornado==4.5.3",
"Looking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: tornado==4.5.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (4.5.3)\n"
]
],
[
[
"### Restart your kernel\nBecause the notebook itself has some dependencies with the tornado library, we need to restart the kernel before proceeding.\n\nTo do so, go to the top menu > Kernel > Restart Kernel.. > Restart\n\nThen proceed and execute the following cells.",
"_____no_output_____"
]
],
[
[
"!pip install pandas\n!pip install jsonlines\n!pip install gremlinpython\n!pip install networkx\n!pip install matplotlib",
"Looking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.1.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (2.8.2)\nRequirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (2021.3)\nRequirement already satisfied: numpy>=1.15.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (1.19.5)\nRequirement already satisfied: six>=1.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from python-dateutil>=2.7.3->pandas) (1.16.0)\nLooking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: jsonlines in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (2.0.0)\nLooking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: gremlinpython in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (3.5.1)\nRequirement already satisfied: isodate<1.0.0,>=0.6.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (0.6.0)\nRequirement already satisfied: aiohttp<=3.7.4,>=3.7.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (3.7.4)\nRequirement already satisfied: nest-asyncio in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (1.4.3)\nRequirement already satisfied: aenum<3.0.0,>=1.4.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (2.2.6)\nRequirement already satisfied: six<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (1.16.0)\nRequirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (1.6.3)\nRequirement already satisfied: typing-extensions>=3.6.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (3.10.0.2)\nRequirement already satisfied: chardet<4.0,>=2.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (3.0.4)\nRequirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (1.1.0)\nRequirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (5.1.0)\nRequirement already satisfied: async-timeout<4.0,>=3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (3.0.1)\nRequirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from aiohttp<=3.7.4,>=3.7.0->gremlinpython) (21.2.0)\nRequirement already satisfied: idna>=2.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from idna-ssl>=1.0->aiohttp<=3.7.4,>=3.7.0->gremlinpython) (3.3)\nLooking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: networkx in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (2.5)\nRequirement already 
satisfied: decorator>=4.3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from networkx) (4.4.2)\nLooking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com\nRequirement already satisfied: matplotlib in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (3.3.4)\nRequirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (1.3.1)\nRequirement already satisfied: pillow>=6.2.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (8.4.0)\nRequirement already satisfied: numpy>=1.15 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (1.19.5)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (2.8.2)\nRequirement already satisfied: six in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from cycler>=0.10->matplotlib) (1.16.0)\n"
],
[
"import os\nimport jsonlines\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
],
[
"#load stored variable from previous notebooks\n%store -r",
"_____no_output_____"
]
],
[
[
"Loading the Gremlin libraries and connecting to our Neptune instance",
"_____no_output_____"
]
],
[
[
"from gremlin_python import statics\nfrom gremlin_python.process.anonymous_traversal import traversal\nfrom gremlin_python.process.graph_traversal import __\nfrom gremlin_python.process.strategies import *\nfrom gremlin_python.driver.driver_remote_connection import DriverRemoteConnection\nfrom gremlin_python.process.traversal import T\nfrom gremlin_python.process.traversal import Order\nfrom gremlin_python.process.traversal import Cardinality\nfrom gremlin_python.process.traversal import Column\nfrom gremlin_python.process.traversal import Direction\nfrom gremlin_python.process.traversal import Operator\nfrom gremlin_python.process.traversal import P\nfrom gremlin_python.process.traversal import Pop\nfrom gremlin_python.process.traversal import Scope\nfrom gremlin_python.process.traversal import Barrier\nfrom gremlin_python.process.traversal import Bindings\nfrom gremlin_python.process.traversal import WithOptions\nfrom gremlin_python.structure.graph import Graph\n\ngraph = Graph()\n\ndef start_remote_connection_neptune():\n remoteConn = DriverRemoteConnection(your_neptune_endpoint_url,'g')\n g = graph.traversal().withRemote(remoteConn)\n return g\n\n# g is the traversal source to use to query the graph \ng = start_remote_connection_neptune()",
"_____no_output_____"
]
],
[
[
"<b>IMPORTANT:</b>\n- Note that the remote connection will time out after few minutes if unused so if you're encountering exceptions after having paused the notebook execution for a while, please re-run the above cell.\n- <b>Make sure your Neptune DB is created for the sole purpose of this labs as we'll be cleaning it before starting.</b>",
"_____no_output_____"
]
],
[
[
"#CAREFUL - the below line of code empties your graph. Again, make sure you're using a dedicated instance for this workshop\ng.V().drop().iterate()",
"_____no_output_____"
]
],
[
[
"## A note on Gremlin\n\nGremlin is a functional, data-flow language that enables users to succinctly express complex traversals on (or queries of) their application's property graph. Every Gremlin traversal is composed of a sequence of (potentially nested) steps. A step performs an atomic operation on the data stream. Every step is either a map-step (transforming the objects in the stream), a filter-step (removing objects from the stream), or a sideEffect-step (computing statistics about the stream).\n\nMore info here: https://tinkerpop.apache.org/gremlin.html",
"_____no_output_____"
],
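[
"# anatomy of a traversal (illustrative; runs fine even on the empty graph we just dropped):\n(g.V()                  # start from every vertex in the graph\n  .hasLabel('video')    # filter step: keep only 'video' vertices\n  .valueMap()           # map step: turn each vertex into its property map\n  .toList())            # terminal step: execute remotely and collect the results",
"_____no_output_____"
],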
[
"The image below is an extract from:\nhttps://tinkerpop.apache.org/docs/3.5.1/tutorials/getting-started/#_the_next_fifteen_minutes\n\nI highly recommend you to be familiar with the concepts of Vertex and Edges at the very minimum before proceeding with the notebook.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## Vertices and Edges names\n\nSee below the variables containing the labels for our vertices and edges that we'll create across the notebook.",
"_____no_output_____"
]
],
[
[
"#Vertex representing a Video\nV_VIDEO = \"video\"\n\n#Vertex representing a \"scene\" e.g. SHOT, TECHNICAL_CUE\nV_VIDEO_SCENE = \"video_scene\"\n\n#Vertex representing a Video segment. we arbitrary split our video into 1min segments and attach metadata to the segments itselves\nV_VIDEO_SEGMENT = 'video_segment'\n\n#Edge between VIDEO and SEGMENT\nE_HAS_SEGMENT = 'contains_segment'\n\n#Edge between VIDEO and SCENE\nE_HAS_SCENE = 'contains_scene'\n\n#Edge between Scene and Segment\nE_BELONG_TO_SEGMENT = 'belong_to_segment'\n\n#Vertex representing a label extracted by Rekognition from the video\nV_LABEL = 'label'\n\n#Edge between SEGMENT and LABEL\nE_HAS_LABEL = 'has_label'\n\n#Edge between parent LABEL and child LABEL e.g. construction -> bulldozer\nE_HAS_CHILD_LABEL = 'has_child_label'\n\n#Vertex representing the NER\nV_ENTITY = 'entities'\n\n#Vertex representing the type of NER\nV_ENTITY_TYPE = 'entity_type'\n\n#Edge between ENTITY and ENTITY_TYPE\nE_IS_OF_ENTITY_TYPE = 'is_of_entity_type'\n\n#Edge between SEGMENT and ENTITY\nE_HAS_ENTITY = 'has_entity'\n\n#Vertex representing a TOPIC\nV_TOPIC = 'topic'\n\n#Vertex representing a TOPIC_TERM\nV_TOPIC_TERM = 'topic_term'\n\n#Edge between a VIDEO_SEGMENT and a TOPIC\nE_HAS_TOPIC = 'has_topic'\n\n#Edge between a TOPIC and a TOPIC_TERM\nE_HAS_TERM = 'has_term'\n\n#Vertex representing a TERM\nV_TERM = 'term'",
"_____no_output_____"
]
],
[
[
"## We start by adding our video to the Graph\n\nNote how I start with g, our traversal graph, then call the addV (V for Vertex) method and then attach properties to the new vertex. I end the line with \".next()\" which will return the newly created node (similar to how an iterator would work). all method are \"chained\" together in one expression.",
"_____no_output_____"
]
],
[
[
"sample_video_vertex = g.addV(V_VIDEO).property(\"name\", video_name).property(\"filename\", video_file) .property('description', 'description of the video').next() ",
"_____no_output_____"
]
],
[
[
"[QUERY] We're listing all the vertices in the graph with their metadata. At this stage, we only have one.\n\nExplanation: g.V() gets us all vertices in the graph, the .hasLabel() filters the vertices based on the vertex label(=type), the .valueMap() returns all properties for all vertices and the .toList() returns the full list. Note that you can use .next() instead of toList() to just return the next element in the list.",
"_____no_output_____"
]
],
[
[
"g.V().hasLabel(V_VIDEO).valueMap().toList()",
"_____no_output_____"
]
],
[
[
"[QUERY] Below is a different way to precisely return a vertex based on its name. \n\nExplanation: g.V() gives us all the vertices, .has() allows us to filter based on the name of the vertex and .next() returns the first (and only) item from the iterator. note that we haven't used .valueMap() so what is returned is the ID of the vertex.",
"_____no_output_____"
]
],
[
[
"g.V().has('name', video_name).next()",
"_____no_output_____"
]
],
[
[
"## Creating 1min segments vertices in Neptune \nAs mentioned in the previous notebook, we are creating metadata segments that we'll use to store labels and other information related to those 1min video segments. \nThis will give us a more fine grained view of the video's topics and metadata.",
"_____no_output_____"
]
],
[
[
"print(segment_size_ms)",
"60000\n"
],
[
"#get the video duration by looking at the end of the last segment.\ndef get_video_duration_in_ms(segment_detection_output):\n return segment_detection_output['Segments'][-1]['EndTimestampMillis']\n\n#create a new segment vertex and connect it to the video\ndef add_segment_vertex(video_name, start, end, g):\n #retrieving the video vertex\n video_vertex = g.V().has(V_VIDEO, 'name', video_name).next()\n\n #generating a segment ID\n segment_id = video_name + '-' + str(start) + '-' + str(end)\n \n #creating a new vertex for the segment\n new_segment_vert = g.addV(V_VIDEO_SEGMENT).property(\"name\", segment_id).property('StartTimestampMillis', start).property('EndTimestampMillis', end).next()\n\n #connecting the video vertex to the segment vertex\n g.V(video_vertex).addE(E_HAS_SEGMENT).to(new_segment_vert).iterate()\n\n#generate segment vertices of a specific duration (default 60s) for a specific video\ndef generate_segment_vertices(video_name, g, duration_in_millisecs, segment_size_in_millisecs=60000):\n #retrieve the mod\n modulo = duration_in_millisecs % segment_size_in_millisecs\n\n #counter that we'll increment by segment_size_in_millisecs steps\n counter = 0\n \n while ((counter + segment_size_in_millisecs) < duration_in_millisecs) : \n start = counter\n end = counter + segment_size_in_millisecs\n\n add_segment_vertex(video_name, start, end, g)\n \n counter += segment_size_in_millisecs\n \n #adding the segment vertex to the video vertex\n add_segment_vertex(video_name, duration_in_millisecs - modulo, duration_in_millisecs, g) \n \n#add a vertex if it doesn't already exist\ndef add_vertex(vertex_label, vertex_name, g): \n g.V().has(vertex_label,'name', vertex_name).fold().coalesce(__.unfold(), __.addV(vertex_label).property('name',vertex_name)).iterate()\n \n#add an edge between 2 vertices\ndef add_edge(vertex_label_from, vertex_label_to, vertex_name_from, vertex_name_to, edge_name, g, weight=None):\n if weight == None:\n g.V().has(vertex_label_to, 'name', vertex_name_to).as_('v1').V().has(vertex_label_from, 'name', vertex_name_from).coalesce(__.outE(edge_name).where(__.inV().as_('v1')), __.addE(edge_name).to('v1')).iterate()\n else:\n g.V().has(vertex_label_to, 'name', vertex_name_to).as_('v1').V().has(vertex_label_from, 'name', vertex_name_from).coalesce(__.outE(edge_name).where(__.inV().as_('v1')), __.addE(edge_name).property('weight', weight).to('v1')).iterate()",
"_____no_output_____"
]
],
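[
[
"# illustrative usage of the idempotent helpers above (safe to re-run):\n# coalesce() only creates the vertex / edge when it does not already exist.\n# 'demo_label' is a throwaway name, dropped again at the end.\nadd_vertex(V_LABEL, 'demo_label', g)\nadd_edge(V_VIDEO, V_LABEL, video_name, 'demo_label', E_HAS_LABEL, g)\nprint(g.V().has(V_LABEL, 'name', 'demo_label').count().next())  # -> 1, even if run twice\ng.V().has(V_LABEL, 'name', 'demo_label').drop().iterate()  # clean up",
"_____no_output_____"
]
],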
[
[
"Note: remember, the SegmentDetectionOutput object contains the output of the Amazon Rekognition segment (=scene) detection job",
"_____no_output_____"
]
],
[
[
"duration = get_video_duration_in_ms(SegmentDetectionOutput)\ngenerate_segment_vertices(video_name, g, duration, segment_size_ms)",
"_____no_output_____"
]
],
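[
[
"# quick sanity check (illustrative): the number of 1-min segment vertices\n# should match ceil(video duration / segment size)\nimport math\nprint('expected segments:', math.ceil(duration / segment_size_ms))\nprint('created segments :', g.V().hasLabel(V_VIDEO_SEGMENT).count().next())",
"_____no_output_____"
]
],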
[
[
"[QUERY] Let's retrieve the segments that are connected to the video vertex via an edge, ordered by StartTimestampMillis. In that case we limit the result set to 5 items.\n\nExplanation: g.V() get us all vertices, .has(V_VIDEO, 'name', video_name) filters on the video vertices with name=video_name, .out() gives us all vertices connected to this vertex by an outgoing edge, .hasLabel(V_VIDEO_SEGMENT) filters the vertices to video segments only, .order().by() orders the vertices by StartTimestampMillis, .valueMap() gives us all properties for those vertices, .limit(5) reduces the results to 5 items, .toList() gives us the list of items.",
"_____no_output_____"
]
],
[
[
"list_of_segments = g.V().has(V_VIDEO, 'name', video_name).out().hasLabel(V_VIDEO_SEGMENT) \\\n .order().by('StartTimestampMillis', Order.asc).valueMap().limit(5).toList()\n\nlist_of_segments",
"_____no_output_____"
]
],
[
[
"## Graph Visualisation\nThe networkx library alongside with matplotlib allows us to draw visually the graph.\n\nLet's draw our vertex video and the 1min segments we just created.",
"_____no_output_____"
]
],
[
[
"#Function printing the graph from a start vertex and a list of edges that will be traversed/displayed.\ndef print_graph(start_vertex_label, start_vertex_name, list_edges, displayLabels=True, node_size=2000, node_limit=200):\n\n #getting the paths between vertices\n paths = g.V().has(start_vertex_label, 'name', start_vertex_name)\n \n #adding the edges that we want to traverse\n for edge in list_edges:\n paths = paths.out(edge)\n paths = paths.path().toList()\n \n #creating graph object\n G=nx.DiGraph() \n \n #counters to limit the number of nodes being displayed.\n limit_nodes_counter = 0\n\n #creating the graph by iterating over the paths\n for p in paths:\n #depth of the graph\n depth = len(p)\n \n #we build our graph\n for i in range(0, depth -1):\n label1 = g.V(p[i]).valueMap().next()['name'][0]\n label2 = g.V(p[i+1]).valueMap().next()['name'][0]\n\n if limit_nodes_counter < node_limit:\n G.add_edge(label1, label2)\n limit_nodes_counter += 1\n \n plt.figure(figsize=(12,7))\n nx.draw(G, node_size=node_size, with_labels=displayLabels)\n plt.show()",
"_____no_output_____"
],
[
"#please note that we limit the number of nodes being displayed\nprint_graph(V_VIDEO, video_name, [E_HAS_SEGMENT], node_limit=15)",
"_____no_output_____"
]
],
[
[
"# Add the scenes into our graph\n\nIn the below steps we're connecting the scenes to the video itself and not the segments as we want to be able to search and list the different types of scenes at the video level. However, note that we're not going to attach any specific metadata at the scene level, only at the segment level.",
"_____no_output_____"
]
],
[
[
"def store_video_segment(original_video_name, json_segment_detection_output, orig_video_vertex):\n \n shot_counter = 0\n tech_cue_counter = 0\n \n for technicalCue in json_segment_detection_output['Segments']:\n #start\n frameStartValue = technicalCue['StartTimestampMillis'] / 1000\n #end\n frameEndValue = technicalCue['EndTimestampMillis'] / 1000\n \n #SHOT or TECHNICAL_CUE\n segment_type = technicalCue['Type']\n \n counter = -1\n if (segment_type == 'SHOT'):\n shot_counter += 1\n counter = shot_counter\n elif (segment_type == 'TECHNICAL_CUE'):\n tech_cue_counter += 1\n counter = tech_cue_counter\n\n segment_id = original_video_name + '-' + segment_type + '-' + str(counter)\n \n #creating the vertex for the video segment with all the metadata extracted from the segment generation job\n new_vert = g.addV(V_VIDEO_SCENE).property(\"name\", segment_id).property(\"type\", segment_type) \\\n .property('StartTimestampMillis', technicalCue['StartTimestampMillis']).property('EndTimestampMillis', technicalCue['EndTimestampMillis']) \\\n .property('StartFrameNumber', technicalCue['StartFrameNumber']).property('EndFrameNumber', technicalCue['EndFrameNumber']) \\\n .property('DurationFrames', technicalCue['DurationFrames']).next()\n \n #creating the edge between the original video vertex and the segment vertex with the type as a property of the relationship\n g.V(orig_video_vertex).addE(E_HAS_SCENE).to(new_vert).properties(\"type\", segment_type).iterate()",
"_____no_output_____"
],
[
"store_video_segment(video_name, SegmentDetectionOutput, sample_video_vertex)",
"_____no_output_____"
]
],
[
[
"[QUERY] We're retrieving the list of edges/branches created between the video and the scenes.\n\n\nExplanation: g.V() returns all vertices, .has(V_VIDEO, 'name', video_name) returns the V_VIDEO vertex with name=video_name, .out(E_HAS_SCENE) returns the list of vertices that are connected to the V_VIDEO vertex by a E_HAS_SCENE edge, toList() returns the list of items.",
"_____no_output_____"
]
],
[
[
"list_of_edges = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SCENE).toList()\nprint(f\"the sample video vertex has now {len(list_of_edges)} edges connecting to the scenes vertices\")",
"the sample video vertex has now 133 edges connecting to the scenes vertices\n"
]
],
[
[
"[QUERY] Let's search for the technical cues (black and fix screens) at the end of the video.\n\nExplanation: g.V() returns all vertices, .has(V_VIDEO, 'name', video_name) returns the V_VIDEO vertex with name=video_name, .out(E_HAS_SCENE) returns the list of vertices that are connected to the V_VIDEO vertex by a E_HAS_SCENE edge, .has('type', 'TECHNICAL_CUE') filters the list on type=TECHNICAL_CUE, the rest was seen above already.",
"_____no_output_____"
]
],
[
[
"g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SCENE) \\\n .has('type', 'TECHNICAL_CUE') \\\n .order().by('EndTimestampMillis', Order.desc) \\\n .limit(5).valueMap().toList() ",
"_____no_output_____"
]
],
[
[
"</br>\nLet's print the graph for those newly created SCENE vertices",
"_____no_output_____"
]
],
[
[
"#please note that we limit the number of nodes being displayed\nprint_graph(V_VIDEO, video_name, [E_HAS_SCENE], node_limit=15)",
"_____no_output_____"
]
],
[
[
"## Create the labels vertices and link them to the segments\nWe're now going to create vertices to represent the labels in our graph and connect them to the 1min segments",
"_____no_output_____"
]
],
[
[
"def create_label_vertices(LabelDetectionOutput, video_name, g, confidence_threshold=80):\n\n labels = LabelDetectionOutput['Labels']\n \n for instance in labels:\n #keeping only the labels with high confidence\n label_details_obj = instance['Label']\n confidence = label_details_obj['Confidence']\n if confidence > confidence_threshold:\n \n #adding then main label name to the list\n label_name = str(label_details_obj['Name']).lower()\n \n #adding the label vertex\n add_vertex(V_LABEL, label_name, g)\n\n #adding the link between video and label\n add_edge(V_VIDEO, V_LABEL, video_name, label_name, E_HAS_LABEL, g, weight=None)\n \n \n #adding parent labels too\n parents = label_details_obj['Parents']\n if len(parents) > 0:\n for parent in parents:\n #create parent vertex if it doesn't exist\n parent_label_name = str(parent['Name']).lower()\n add_vertex(V_LABEL, parent_label_name, g)\n \n #create the relationship between parent and children if it doesn't already exist\n add_edge(V_LABEL, V_LABEL, parent_label_name, label_name, E_HAS_CHILD_LABEL, g, weight=None)\n ",
"_____no_output_____"
],
[
"create_label_vertices(LabelDetectionOutput, video_name, g, 80)",
"_____no_output_____"
]
],
[
[
"[QUERY] Let's list the labels vertices to see what was created above.\n\nExplanation: g.V() returns all vertices, .hasLabel(V_LABEL) returns only the vertices of label/type V_LABEL, .valueMap().limit(20).toList() gives us the list with properties for the first 20 items.",
"_____no_output_____"
]
],
[
[
"#retrieving a list of the first 20 labels\nlabel_list = g.V().hasLabel(V_LABEL).valueMap().limit(20).toList()\nlabel_list",
"_____no_output_____"
]
],
[
[
"Let's display a graph with our video's labels and the child labels relationships in between labels.",
"_____no_output_____"
]
],
[
[
"print_graph(V_VIDEO, video_name, [E_HAS_LABEL, E_HAS_CHILD_LABEL], node_limit=15)",
"_____no_output_____"
]
],
[
[
"[QUERY] A typical query would be to search for videos who have a specific label.\n\nExplanation: g.V().has(V_LABEL, 'name', ..) returns the first label vertex from the previous computed list, .in_(E_HAS_LABEL) returns all vertices who have an incoming edge (inE) pointing to this label vertex, .valueMap().toList() returns the list with properties. \n\nnote that in_(E_HAS_LABEL) is equivalent to .inE(E_HAS_LABEL).outV() where .inE(E_HAS_LABEL) returns all incoming edges with the specified label and .outV() will traverse to the vertices attached to that edge.\n\nObviously we only have the one result as we've only processed one video so far.",
"_____no_output_____"
]
],
[
[
"g.V().has(V_LABEL, 'name', label_list[0]['name'][0]).in_(E_HAS_LABEL).valueMap().toList()",
"_____no_output_____"
]
],
[
[
"## Create the topics and associated topic terms vertices\nWe are going to re-arrange a bit the raw results from the topic modeling job to make it more readable",
"_____no_output_____"
]
],
[
[
"comprehend_topics_df.head()",
"_____no_output_____"
]
],
[
[
"We extract the segment id/number from the docname column in a separate column, cast it to numeric values, drop the docname column and sort by segment_id",
"_____no_output_____"
]
],
[
[
"comprehend_topics_df['segment_id'] = comprehend_topics_df['docname'].apply(lambda x: x.split(':')[-1])\ncomprehend_topics_df['segment_id'] = pd.to_numeric(comprehend_topics_df['segment_id'], errors='coerce')\ncomprehend_topics_df = comprehend_topics_df.drop('docname', axis=1)\ncomprehend_topics_df = comprehend_topics_df.sort_values(by='segment_id')",
"_____no_output_____"
],
[
"comprehend_topics_df.head(5)",
"_____no_output_____"
]
],
[
[
"Looks better!\n\nNote that:\n- a segment_id can belong to several topics\n- proportion = the proportion of the document that is concerned with the topic",
"_____no_output_____"
],
[
"Let's now create our topic vertices",
"_____no_output_____"
]
],
[
[
"def create_topic_vertices(topics_df, terms_df, video_name, g):\n #retrieve all segments for the video\n segments_vertex_list = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SEGMENT).order().by('StartTimestampMillis', Order.asc).valueMap().toList()\n \n for index, row in topics_df.iterrows():\n \n topic = row['topic']\n segment_id = int(row['segment_id'])\n \n #string formating to use as name for our vertices\n topic_str = str(int(row['topic']))\n \n #adding terms vertices that are associated with that topic and create the topic -> term edge\n list_of_terms = terms_df[comprehend_terms_df['topic'] == topic]\n \n #getting the segment name\n segment_name = segments_vertex_list[segment_id]['name'][0]\n \n #adding the topic vertex\n add_vertex(V_TOPIC, topic_str, g)\n \n #adding the link between entity and entity_type\n add_edge(V_VIDEO_SEGMENT, V_TOPIC, segment_name, topic_str, E_HAS_TOPIC, g, weight=None)\n \n \n \n #looping across all \n for index2, row2 in list_of_terms.iterrows():\n term = row2['term']\n weight = row2['weight']\n add_vertex(V_TERM, term, g)\n add_edge(V_TOPIC, V_TERM, topic_str, term, E_HAS_TERM, g, weight=weight)",
"_____no_output_____"
],
[
"create_topic_vertices(comprehend_topics_df, comprehend_terms_df, video_name, g)",
"_____no_output_____"
]
],
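[
[
"# illustrative: list the terms attached to topic '0' together with the 'weight'\n# property stored on the has_term edge (the Comprehend term weight).\n# Assumes topic '0' exists in the topic-modeling output.\ng.V().has(V_TOPIC, 'name', '0') \\\n    .outE(E_HAS_TERM).as_('e').inV().as_('t') \\\n    .select('e', 't').by(__.values('weight')).by(__.values('name')) \\\n    .limit(10).toList()",
"_____no_output_____"
]
],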
[
[
"Let's display our video, few segments and their associated topics",
"_____no_output_____"
]
],
[
[
"#please note that we limit the number of nodes being displayed\nprint_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_TOPIC], node_limit=10)",
"_____no_output_____"
]
],
[
[
"Let's display a partial graph showing relationships between the video -> segment -> topic -> term",
"_____no_output_____"
]
],
[
[
"print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_TOPIC, E_HAS_TERM], node_limit=20)",
"_____no_output_____"
]
],
[
[
"[QUERY] We're now listing all the segments that are in topic 2 (try different topic numbers if you want)\n\nExplanation: g.V().has(V_TOPIC, 'name', '2') returns the topic vertex with name=2, .in_(E_HAS_TOPIC) returns all vertices that have a edge pointing into that topic vertex, .valueMap().toList() returns the list of items with their properties",
"_____no_output_____"
]
],
[
[
"g.V().has(V_TOPIC, 'name', '2').in_(E_HAS_TOPIC).valueMap().toList()",
"_____no_output_____"
]
],
[
[
"## Create the NER vertices and link them to the segments",
"_____no_output_____"
]
],
[
[
"#create the entity and entity_type vertices including the related edges\ndef create_ner_vertices(ner_job_data, video_name, g, score_threshold=0.8):\n \n #retrieve all segments for the video\n segments_vertex_list = g.V().has(V_VIDEO, 'name', video_name).out(E_HAS_SEGMENT).order().by('StartTimestampMillis', Order.asc).valueMap().toList()\n counter_vertex = 0\n for doc in ner_job_data:\n \n #each jsonline from the ner job is already segmented by 1min chunks, so we're just matching them to our ordered segments list.\n segment_vertex_name = segments_vertex_list[counter_vertex]['name'][0]\n \n for entity in doc:\n \n text = entity['Text']\n type_ = entity['Type']\n score = entity['Score']\n\n if score > score_threshold:\n #adding the entity type vertex\n entity_type_vertex = g.V().has(V_ENTITY_TYPE,'name', type_).fold().coalesce(__.unfold(), __.addV(V_ENTITY_TYPE).property('name',type_)).iterate()\n\n #adding the entity type vertex\n entity_vertex = g.V().has(V_ENTITY,'name', text).fold().coalesce(__.unfold(), __.addV(V_ENTITY).property('name',text)).iterate()\n\n #adding the link between entity and entity_type\n entity_entity_type_edge = g.V().has(V_ENTITY_TYPE, 'name', type_).as_('v1').V().has(V_ENTITY, 'name', text).coalesce(__.outE(E_IS_OF_ENTITY_TYPE).where(__.inV().as_('v1')), __.addE(E_IS_OF_ENTITY_TYPE).to('v1')).iterate()\n \n #adding the edge between entity and segment\n segment_entity_edge = g.V().has(V_ENTITY,'name', text).as_('v1').V().has(V_VIDEO_SEGMENT, 'name', segment_vertex_name).coalesce(__.outE(E_HAS_ENTITY).where(__.inV().as_('v1')), __.addE(E_HAS_ENTITY).to('v1')).iterate()\n #print(f\"attaching entity: {text} to segment: {segment_vertex_name}\")\n\n counter_vertex += 1",
"_____no_output_____"
],
[
"create_ner_vertices(ner_job_data, video_name, g, 0.8)",
"_____no_output_____"
]
],
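[
[
"A note on the idiom used twice in create_ner_vertices: fold().coalesce(__.unfold(), __.addV(...)) is Gremlin's standard get-or-create (upsert) pattern -- fold() collapses the possibly-empty match into a list so coalesce can either unfold the existing vertex or add a new one. A minimal, re-runnable sketch with a hypothetical 'demo' vertex:",
"_____no_output_____"
]
],
[
[
"#get-or-create sketch (hypothetical 'demo' vertex; safe to run repeatedly)\ng.V().has('demo', 'name', 'upsert-example').fold().coalesce(\n    __.unfold(),\n    __.addV('demo').property('name', 'upsert-example')).iterate()\n\n#running it twice still leaves exactly one vertex\nprint(g.V().has('demo', 'name', 'upsert-example').count().next())",
"_____no_output_____"
]
],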
[
[
"[QUERY] Let's get a list of the first 20 entities\n\nExplanation: g.V().hasLabel(V_ENTITY) returns all vertices of label/type V_ENTITY, .valueMap().limit(20).toList() returns the list of the first 20 items with their properties (just name in that case).",
"_____no_output_____"
]
],
[
[
"entities_list = g.V().hasLabel(V_ENTITY).valueMap().limit(20).toList()\nentities_list",
"_____no_output_____"
]
],
[
[
"[QUERY] Let's now look up the first entity of the previous entities_list and check its type\n\nExplanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .out(E_IS_OF_ENTITY_TYPE) returns vertices connected to this V_ENTITY vertex by a E_IS_OF_ENTITY_TYPE edge.",
"_____no_output_____"
]
],
[
[
"g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).out(E_IS_OF_ENTITY_TYPE).valueMap().toList()",
"_____no_output_____"
]
],
[
[
"[QUERY] Let's see now which video segments contains that entity\n\nExplanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .in_(E_HAS_ENTITY) returns all vertices that have an incoming edge into that V_ENTITY vertex and .valueMap().toList() returns the list with properties.",
"_____no_output_____"
]
],
[
[
"g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).in_(E_HAS_ENTITY).valueMap().toList()",
"_____no_output_____"
]
],
[
[
"[QUERY] Similar query but this time we traverse further the graph and only return the list of videos which have this specific entity.\n\nExplanation: g.V().has(V_ENTITY, 'name', ...) return the first V_ENTITY vertex of the entities_list list, .in_(E_HAS_ENTITY) returns the V_VIDEO_SEGMENT vertices that have an incoming edge into that V_ENTITY vertex, .in_(E_HAS_SEGMENT) returns the V_VIDEO vertices that have an incoming edge into those V_VIDEO_SEGMENT vertices and .valueMap().toList() returns the list with properties.\n\nNote how by chaining the .in_() methods we are able to traverse the graph from one type of vertex to the other.",
"_____no_output_____"
]
],
[
[
"g.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).in_(E_HAS_ENTITY).in_(E_HAS_SEGMENT).dedup().valueMap().toList()",
"_____no_output_____"
]
],
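[
[
"To make the individual hops explicit, Gremlin's path() step returns the whole traversal route instead of only the endpoints -- a hedged variant of the query above:",
"_____no_output_____"
]
],
[
[
"#entity -> segment -> video, shown as full paths labelled by vertex name\ng.V().has(V_ENTITY, 'name', entities_list[0]['name'][0]).in_(E_HAS_ENTITY).in_(E_HAS_SEGMENT).path().by('name').toList()",
"_____no_output_____"
]
],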
[
[
"</br>\nLet's now display a graph showing the relationship between Video -> Segment -> Entity",
"_____no_output_____"
]
],
[
[
"print_graph(V_VIDEO, video_name, [E_HAS_SEGMENT, E_HAS_ENTITY], node_size=800, node_limit=30)",
"_____no_output_____"
]
],
[
[
"# Summary",
"_____no_output_____"
],
[
"This notebook only touched the surface of what you can do with Graph databases but it should give you an idea of how powerful they are at modeling highly dimensional relationships between entities. This specific architecture allows them to be especially scalable and performing even with billions of vertices and edges. \n\nGremlin is the most widely used query language for graph DB and provides quite an intuitive way to traverse/query those graphs by chaining those instructions but if you want a more traditional SQL language, you can also look into SPARQL as an alternative. \nhttps://graphdb.ontotext.com/documentation/free/devhub/sparql.html#using-sparql-in-graphdb ",
"_____no_output_____"
]
]
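,
[
[
"For a rough feel of the SPARQL alternative linked above (illustrative only -- a property graph does not map one-to-one onto RDF triples), a pattern such as `SELECT ?segment WHERE { ?segment :hasTopic :topic2 }` plays the same role as the `g.V().has(V_TOPIC, 'name', '2').in_(E_HAS_TOPIC)` traversal run earlier in this notebook.",
"_____no_output_____"
]
]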
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0b2097f41e27437e4f4440e1da0b82d56751492 | 69,658 | ipynb | Jupyter Notebook | EDA/Plotly.ipynb | cdvv7788/experiments | 2151c27cb814cd56131884708ff8d41730114e87 | [
"MIT"
] | 1 | 2018-08-10T17:43:48.000Z | 2018-08-10T17:43:48.000Z | EDA/Plotly.ipynb | cdvv7788/experiments | 2151c27cb814cd56131884708ff8d41730114e87 | [
"MIT"
] | 9 | 2020-01-28T22:25:20.000Z | 2021-12-13T19:49:51.000Z | EDA/Plotly.ipynb | cdvv7788/experiments | 2151c27cb814cd56131884708ff8d41730114e87 | [
"MIT"
] | null | null | null | 44.767352 | 17,379 | 0.353814 | [
[
[
"import pandas as pd\nfrom plotly.offline import iplot, init_notebook_mode\nimport plotly.graph_objs as go\nimport plotly.figure_factory as ff\ninit_notebook_mode(connected=True)",
"_____no_output_____"
],
[
"target_url = 'https://raw.githubusercontent.com/plotly/datasets/master/school_earnings.csv'\ndf = pd.read_csv(target_url)",
"_____no_output_____"
],
[
"table = ff.create_table(df)\niplot(table)",
"_____no_output_____"
],
[
"data = [go.Bar(x=df.School, y=df.Gap)]\niplot(data)",
"_____no_output_____"
],
[
"data = [go.Box(y=df.Women)]\niplot(data)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0b217b2fefa4f3e7006aaa6e54d7324a4f12d1e | 26,715 | ipynb | Jupyter Notebook | 18cse042_Assignment-2(pandas).ipynb | Subr1ata/DMDW-Lab | 9543f1167a93df0d1cc18c1eca27bff7843e91f5 | [
"Apache-2.0"
] | null | null | null | 18cse042_Assignment-2(pandas).ipynb | Subr1ata/DMDW-Lab | 9543f1167a93df0d1cc18c1eca27bff7843e91f5 | [
"Apache-2.0"
] | null | null | null | 18cse042_Assignment-2(pandas).ipynb | Subr1ata/DMDW-Lab | 9543f1167a93df0d1cc18c1eca27bff7843e91f5 | [
"Apache-2.0"
] | null | null | null | 26.688312 | 238 | 0.347969 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"data=[6,7,8,9,10]\ndta=pd.DataFrame(data)\ndta",
"_____no_output_____"
],
[
"data=[['Alex1',101],['Bob2',152],['Marley3',203]]\ndta=pd.DataFrame(data)\ndta",
"_____no_output_____"
],
[
"a=pd.Series([0,2,4,np.nan,7,9])\na",
"_____no_output_____"
],
[
"dates = pd.date_range('20130101', periods=6)\ndates",
"_____no_output_____"
],
[
"df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))\ndf",
"_____no_output_____"
],
[
"df2 = pd.DataFrame({'A': 1.,'B': pd.Timestamp('20130102'),'C': pd.Series(1, index=list(range(4)), dtype='float32'),'D': np.array([3] * 4, dtype='int32'),'E': pd.Categorical([\"test\", \"train\", \"test\", \"train\"]),'F': 'foo'})\ndf2",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.tail(3)",
"_____no_output_____"
],
[
"df.index",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.to_numpy()",
"_____no_output_____"
],
[
"df2.to_numpy()",
"_____no_output_____"
],
[
"df2.describe()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df.T",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b217e7614a19643fbfd1bdc95cd56adf61f39d | 152,903 | ipynb | Jupyter Notebook | test/validate_data_loaders.ipynb | akbokha/image-colorization | eb0dd370f42c5f7fe8fefc76513d8c3428cfc68e | [
"MIT"
] | null | null | null | test/validate_data_loaders.ipynb | akbokha/image-colorization | eb0dd370f42c5f7fe8fefc76513d8c3428cfc68e | [
"MIT"
] | 9 | 2019-03-27T21:59:58.000Z | 2019-03-27T22:02:12.000Z | test/validate_data_loaders.ipynb | akbokha/image-colorization | eb0dd370f42c5f7fe8fefc76513d8c3428cfc68e | [
"MIT"
] | null | null | null | 349.892449 | 14,920 | 0.932683 | [
[
[
"import os\nimport pickle\nimport sys\n\nimport numpy as np\nimport torch\nimport torch.utils.data\nfrom skimage.color import lab2rgb, rgb2lab, rgb2gray\nfrom torchvision import datasets, transforms\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"class CIFAR10ImageDataSet(torch.utils.data.Dataset):\n def __init__(self, data, transforms=None):\n self.data = data\n self.transforms = transforms\n\n def __getitem__(self, index):\n img = self.data[index]\n\n img_original = transforms.functional.to_pil_image(torch.from_numpy(img.astype(np.uint8)))\n\n if self.transforms is not None:\n img_original = self.transforms(img_original)\n\n img_original = np.asarray(img_original)\n\n img_original = img_original / 255\n\n img_lab = rgb2lab(img_original)\n img_lab = (img_lab + 128) / 255\n\n img_ab = img_lab[:, :, 1:3]\n img_ab = torch.from_numpy(img_ab.transpose((2, 0, 1))).float()\n\n img_gray = rgb2gray(img_original)\n img_gray = torch.from_numpy(img_gray).unsqueeze(0).float()\n\n return img_gray, img_ab, img_original\n\n def __len__(self):\n return self.data.shape[0]",
"_____no_output_____"
],
[
"def unpickle_cifar10(file):\n with open(file, 'rb') as fo:\n dict = pickle.load(fo, encoding='bytes')\n return dict[b\"data\"]\n\n\ndef get_cifar10_loaders(dataset_path, batch_size):\n \"\"\"\n Get CIFAR-10 data set loaders\n \"\"\"\n\n '''\n Process training data into a DataLoader object\n '''\n train_transforms = transforms.Compose([\n transforms.RandomHorizontalFlip()\n ])\n\n train_set = datasets.CIFAR10(root=dataset_path, train=True, download=True)\n num_training_points = train_set.__len__()\n num_points_training_batch = int(num_training_points / batch_size)\n\n train_data = np.array([]).reshape(0, 3, 32, 32)\n\n data_batch_name = 'cifar-10-batches-py/data_batch_{}'\n for batch_num in range(1, 6):\n data_batch = data_batch_name.format(batch_num)\n batch_dir = os.path.join(dataset_path, data_batch)\n train_data = np.append(train_data, np.reshape(unpickle_cifar10(batch_dir),\n (num_points_training_batch, 3, 32, 32)), 0)\n\n train_lab_data = CIFAR10ImageDataSet(train_data, transforms=train_transforms)\n train_loader = torch.utils.data.DataLoader(train_lab_data, batch_size=batch_size, shuffle=True, num_workers=1)\n\n '''\n Process validation data into a DataLoader object\n '''\n val_transforms = transforms.Compose([\n transforms.Scale(32)\n ])\n\n val_set_name = 'cifar-10-batches-py/test_batch'\n val_dir = os.path.join(dataset_path, val_set_name)\n val_data = unpickle_cifar10(val_dir)\n num_points_val_batch = val_data.shape[0]\n\n val_data = np.reshape(val_data, (num_points_val_batch, 3, 32, 32))\n\n val_lab_data = CIFAR10ImageDataSet(val_data, transforms=val_transforms)\n val_loader = torch.utils.data.DataLoader(val_lab_data, batch_size=1, shuffle=False, num_workers=1)\n\n return train_loader, val_loader",
"_____no_output_____"
],
[
"train_loader, val_loader = get_cifar10_loaders('../data/cifar10', 5)",
"Files already downloaded and verified\n"
],
[
"def to_rgb(grayscale_input, ab_input, colour):\n plt.clf() # clear matplotlib \n \n color_image = torch.cat((grayscale_input, ab_input), 0).numpy() # combine channels\n color_image = color_image.transpose((1, 2, 0)) # rescale for matplotlib\n color_image[:, :, 0:1] = color_image[:, :, 0:1] * 100\n color_image[:, :, 1:3] = color_image[:, :, 1:3] * 255 - 128 \n color_image = lab2rgb(color_image.astype(np.float64))\n grayscale_input = grayscale_input.squeeze().numpy()\n \n f, axarr = plt.subplots(1, 3)\n axarr[0].imshow(grayscale_input, cmap='gray')\n axarr[1].imshow(color_image)\n axarr[2].imshow(colour)\n axarr[0].axis('off'), axarr[1].axis('off'), axarr[2].axis('off')\n plt.show();",
"_____no_output_____"
],
[
"for i, (input_gray, input_ab, colour) in enumerate(val_loader):\n for j in range(1):\n to_rgb(input_gray[j].cpu(), input_ab[j].cpu(), colour[j].cpu())\n if i == 10: break",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b218ed2a5f48df58a0a4597d266495615bd9f6 | 298,750 | ipynb | Jupyter Notebook | Colab Notebooks/Untitled1.ipynb | DishenMakwana/Python-DS | 5aa34ba9fba1eeb96a67614c78275fc7c3dca705 | [
"MIT"
] | 1 | 2021-04-29T19:01:07.000Z | 2021-04-29T19:01:07.000Z | Colab Notebooks/Untitled1.ipynb | DishenMakwana/Python-DS | 5aa34ba9fba1eeb96a67614c78275fc7c3dca705 | [
"MIT"
] | null | null | null | Colab Notebooks/Untitled1.ipynb | DishenMakwana/Python-DS | 5aa34ba9fba1eeb96a67614c78275fc7c3dca705 | [
"MIT"
] | null | null | null | 298,750 | 298,750 | 0.886269 | [
[
[
"import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'Map': [0,0,0,1,1,2,2], 'Values': [1,2,3,5,4,2,5]})\ndf['S'] = df.groupby('Map')['Values'].transform(np.sum)\ndf['M'] = df.groupby('Map')['Values'].transform(np.mean)\ndf['V'] = df.groupby('Map')['Values'].transform(np.var)\nprint (df)",
" Map Values S M V\n0 0 1 6 2.0 1.0\n1 0 2 6 2.0 1.0\n2 0 3 6 2.0 1.0\n3 1 5 9 4.5 0.5\n4 1 4 9 4.5 0.5\n5 2 2 7 3.5 4.5\n6 2 5 7 3.5 4.5\n"
],
[
"import numpy as np\nimport pandas as pd\ndf = pd.DataFrame({'A': [2,3,1], 'B': [1,2,3], 'C': [5,3,4]})\ndf = df.drop(df.index[[1]])\nprint (df)\ndf = df.drop('B', 1)\nprint (df)",
" A B C\n0 2 1 5\n2 1 3 4\n A C\n0 2 5\n2 1 4\n"
],
[
"import pandas as pd\r\ndf = pd.DataFrame({'A': [0,0,0,0,0,1,1], 'B': [1,2,3,5,4,2,5],\r\n'C': [5,3,4,1,1,2,3]})\r\na_group_desc = df.groupby('A').describe()\r\nprint (a_group_desc)",
" B ... C \n count mean std min 25% 50% ... std min 25% 50% 75% max\nA ... \n0 5.0 3.0 1.581139 1.0 2.00 3.0 ... 1.788854 1.0 1.00 3.0 4.00 5.0\n1 2.0 3.5 2.121320 2.0 2.75 3.5 ... 0.707107 2.0 2.25 2.5 2.75 3.0\n\n[2 rows x 16 columns]\n"
],
[
"unstacked = a_group_desc.unstack()\r\nprint (unstacked) ",
" A\nB count 0 5.000000\n 1 2.000000\n mean 0 3.000000\n 1 3.500000\n std 0 1.581139\n 1 2.121320\n min 0 1.000000\n 1 2.000000\n 25% 0 2.000000\n 1 2.750000\n 50% 0 3.000000\n 1 3.500000\n 75% 0 4.000000\n 1 4.250000\n max 0 5.000000\n 1 5.000000\nC count 0 5.000000\n 1 2.000000\n mean 0 2.800000\n 1 2.500000\n std 0 1.788854\n 1 0.707107\n min 0 1.000000\n 1 2.000000\n 25% 0 1.000000\n 1 2.250000\n 50% 0 3.000000\n 1 2.500000\n 75% 0 4.000000\n 1 2.750000\n max 0 5.000000\n 1 3.000000\ndtype: float64\n"
],
[
"import pandas as pd\r\nimport numpy as np\r\ns = pd.Series([1, 2, 3, np.NaN, 5, 6, None])\r\nprint (s.isnull())\r\nprint (s[s.isnull()]) ",
"0 False\n1 False\n2 False\n3 True\n4 False\n5 False\n6 True\ndtype: bool\n3 NaN\n6 NaN\ndtype: float64\n"
],
[
"import pandas as pd\r\nimport numpy as np\r\ns = pd.Series([1, 2, 3, np.NaN, 5, 6, None])\r\nprint (s.fillna(int(s.mean())))\r\nprint (s.dropna())",
"0 1.0\n1 2.0\n2 3.0\n3 3.0\n4 5.0\n5 6.0\n6 3.0\ndtype: float64\n0 1.0\n1 2.0\n2 3.0\n4 5.0\n5 6.0\ndtype: float64\n"
],
[
"x = np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9],], [[11,12,13], [14,15,16], [17,18,19],],\r\n[[21,22,23], [24,25,26], [27,28,29]]])\r\nprint(x[[[0]]])",
"[[[1 2 3]\n [4 5 6]\n [7 8 9]]]\n"
],
[
"values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7]\r\nimport matplotlib.pyplot as plt\r\nplt.plot(range(1,11), values)\r\nplt.savefig('Image.jpeg', format='jpeg')",
"_____no_output_____"
],
[
"values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7]\r\nimport matplotlib.pyplot as plt\r\nplt.plot(range(1,11), values)\r\nplt.savefig('MySamplePlot.png', format='png')",
"_____no_output_____"
],
[
"values = [1, 5, 8, 9, 2, 0, 3, 10, 4, 7]\r\nimport matplotlib.pyplot as plt\r\nplt.plot(range(1,11), values)\r\nplt.savefig('plt.pdf', format='pdf')",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx1 = 50 * np.random.rand(40)\r\nx2 = 25 * np.random.rand(40) + 25\r\nx = np.concatenate((x1, x2))\r\ny1 = 25 * np.random.rand(40)\r\ny2 = 50 * np.random.rand(40) + 25\r\ny = np.concatenate((y1, y2))\r\nplt.scatter(x, y, s=[100], marker='^', c='m')\r\nplt.show() ",
"_____no_output_____"
],
[
"pip install matplotlib",
"Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.3.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)\nRequirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.19.5)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib) (1.15.0)\n"
],
[
"pip install --upgrade matplotlib",
"Collecting matplotlib\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d2/43/2bd63467490036697e7be71444fafc7b236923d614d4521979a200c6b559/matplotlib-3.3.3-cp36-cp36m-manylinux1_x86_64.whl (11.6MB)\n\u001b[K |████████████████████████████████| 11.6MB 5.5MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.3.1)\nRequirement already satisfied, skipping upgrade: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1)\nRequirement already satisfied, skipping upgrade: numpy>=1.15 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.19.5)\nRequirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)\nRequirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)\nRequirement already satisfied, skipping upgrade: pillow>=6.2.0 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (7.0.0)\nRequirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib) (1.15.0)\n\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\nInstalling collected packages: matplotlib\n Found existing installation: matplotlib 3.2.2\n Uninstalling matplotlib-3.2.2:\n Successfully uninstalled matplotlib-3.2.2\nSuccessfully installed matplotlib-3.3.3\n"
],
[
"pip install mpl_toolkits",
"\u001b[31mERROR: Could not find a version that satisfies the requirement mpl_toolkits (from versions: none)\u001b[0m\n\u001b[31mERROR: No matching distribution found for mpl_toolkits\u001b[0m\n"
],
[
"pip install basemap",
"\u001b[31mERROR: Could not find a version that satisfies the requirement basemap (from versions: none)\u001b[0m\n\u001b[31mERROR: No matching distribution found for basemap\u001b[0m\n"
],
[
"from mpl_toolkits.basemap import Basemap\r\nimport matplotlib.pyplot as plt\r\n\r\nm = Basemap(projection='mill')\r\nm.drawcoastlines()\r\nplt.show()",
"_____no_output_____"
],
[
"conda install basemap",
"_____no_output_____"
],
[
"pip install mpltoolkits.basemap",
"\u001b[31mERROR: Could not find a version that satisfies the requirement mpltoolkits.basemap (from versions: none)\u001b[0m\n\u001b[31mERROR: No matching distribution found for mpltoolkits.basemap\u001b[0m\n"
],
[
"per = (11438/500)*100\r\nx = 'result = {r:3.2f}%'.format(r=per)\r\nx",
"_____no_output_____"
],
[
"coords = {'lat':'37.25N','long':'-115.45W'}\r\n'Coords : {long}, {lat}'.format(**coords)",
"_____no_output_____"
],
[
"l = list(x for x in range(1,20))\r\nl",
"_____no_output_____"
],
[
"x = [2,4,8,6,3,1,7,9]\r\nx.sort()\r\nx.reverse()\r\nx",
"_____no_output_____"
],
[
"l = [(1,2,3), (4,5,6), (7,8,9)]\r\nfor x in l:\r\n for y in x:\r\n print(y)",
"1\n2\n3\n4\n5\n6\n7\n8\n9\n"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx = 20 * np.random.randint(1,10,10000)\r\nplt.hist(x, 25,histtype='stepfilled', align='mid', color='g',label='TestData')\r\nplt.legend()\r\nplt.title('Step Filled Histogram')\r\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\ndata = 50 * np.random.rand(100) - 25\r\nplt.boxplot(data)\r\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx1 = 5 * np.random.rand(40)\r\nx2 = 5 * np.random.rand(40) + 25\r\nx3 = 25 * np.random.rand(20)\r\nx = np.concatenate((x1, x2, x3))\r\ny1 = 5 * np.random.rand(40)\r\ny2 = 5 * np.random.rand(40) + 25\r\ny3 = 25 * np.random.rand(20)\r\ny = np.concatenate((y1, y2, y3))\r\nplt.scatter(x, y, s=[100], marker='^', c='m')\r\nplt.show() ",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx1 = 5 * np.random.rand(50)\r\nx2 = 5 * np.random.rand(50) + 25\r\nx3 = 30 * np.random.rand(25)\r\nx = np.concatenate((x1, x2, x3))\r\ny1 = 5 * np.random.rand(50)\r\ny2 = 5 * np.random.rand(50) + 25\r\ny3 = 30 * np.random.rand(25)\r\ny = np.concatenate((y1, y2, y3))\r\ncolor_array = ['b'] * 50 + ['g'] * 50 + ['r'] * 25\r\nplt.scatter(x, y, s=[50], marker='D', c=color_array)\r\nplt.show()\r\n",
"_____no_output_____"
],
[
"import networkx as nx\r\ng = nx.Graph()\r\ng.add_node(1)\r\ng.add_nodes_from([2,7])\r\ng.add_edge(1,2)\r\ng.add_edges_from([(2,3),(4,5),(6,7),(3,7),(2,5),(4,6)])\r\nnx.draw_networkx(g)\r\nnx.info(g)",
"_____no_output_____"
],
[
"import pandas as pd\r\ndf = pd.DataFrame({'A': [0,0,0,0,0,1,1], 'B': [1,2,3,5,4,2,5],\r\n'C': [5,3,4,1,1,2,3]})\r\na_group_desc = df.groupby('A').describe()\r\nprint (a_group_desc)",
" B ... C \n count mean std min 25% 50% ... std min 25% 50% 75% max\nA ... \n0 5.0 3.0 1.581139 1.0 2.00 3.0 ... 1.788854 1.0 1.00 3.0 4.00 5.0\n1 2.0 3.5 2.121320 2.0 2.75 3.5 ... 0.707107 2.0 2.25 2.5 2.75 3.0\n\n[2 rows x 16 columns]\n"
],
[
"unstacked = a_group_desc.unstack()\r\nprint (unstacked)",
" A\nB count 0 5.000000\n 1 2.000000\n mean 0 3.000000\n 1 3.500000\n std 0 1.581139\n 1 2.121320\n min 0 1.000000\n 1 2.000000\n 25% 0 2.000000\n 1 2.750000\n 50% 0 3.000000\n 1 3.500000\n 75% 0 4.000000\n 1 4.250000\n max 0 5.000000\n 1 5.000000\nC count 0 5.000000\n 1 2.000000\n mean 0 2.800000\n 1 2.500000\n std 0 1.788854\n 1 0.707107\n min 0 1.000000\n 1 2.000000\n 25% 0 1.000000\n 1 2.250000\n 50% 0 3.000000\n 1 2.500000\n 75% 0 4.000000\n 1 2.750000\n max 0 5.000000\n 1 3.000000\ndtype: float64\n"
],
[
"import nltk\r\nnltk.download()",
"_____no_output_____"
],
[
"from nltk.corpus import stopwords \r\nfrom nltk.tokenize import word_tokenize \r\n\r\nexample_sent = \"This is a sample sentence, showing off the stop words filtration.\"\r\n\r\nstop_words = set(stopwords.words('english'))\r\n\r\nword_tokens = word_tokenize(example_sent) \r\n\r\nfiltered_sentence = [w for w in word_tokens if not w in stop_words] \r\n\r\nprint(word_tokens) \r\nprint(filtered_sentence)",
"_____no_output_____"
],
[
"import networkx as nx\r\nG = nx.cycle_graph(10)\r\nA = nx.adjacency_matrix(G)\r\nprint(A.todense())\r\n",
"[[0 1 0 0 0 0 0 0 0 1]\n [1 0 1 0 0 0 0 0 0 0]\n [0 1 0 1 0 0 0 0 0 0]\n [0 0 1 0 1 0 0 0 0 0]\n [0 0 0 1 0 1 0 0 0 0]\n [0 0 0 0 1 0 1 0 0 0]\n [0 0 0 0 0 1 0 1 0 0]\n [0 0 0 0 0 0 1 0 1 0]\n [0 0 0 0 0 0 0 1 0 1]\n [1 0 0 0 0 0 0 0 1 0]]\n"
],
[
"import numpy as np \r\nimport pandas as pd \r\nc = pd.Series([\"a\", \"b\", \"d\", \"a\", \"d\"], dtype =\"category\") \r\nprint (\"\\nCategorical without pandas.Categorical() : \\n\", c) ",
"\nCategorical without pandas.Categorical() : \n 0 a\n1 b\n2 d\n3 a\n4 d\ndtype: category\nCategories (3, object): ['a', 'b', 'd']\n"
],
[
"c1 = pd.Categorical([1, 2, 3, 1, 2, 3]) \r\nprint (\"\\n\\nc1 : \", c1) ",
"\n\nc1 : [1, 2, 3, 1, 2, 3]\nCategories (3, int64): [1, 2, 3]\n"
],
[
"c2 = pd.Categorical(['e', 'm', 'f', 'i', 'f', 'e', 'h', 'm' ]) \r\nprint (\"\\nc2 : \", c2) ",
"\nc2 : ['e', 'm', 'f', 'i', 'f', 'e', 'h', 'm']\nCategories (5, object): ['e', 'f', 'h', 'i', 'm']\n"
],
[
"import sys\r\nsys.getdefaultencoding( )",
"_____no_output_____"
],
[
"from scipy.sparse import csc_matrix\r\nprint (csc_matrix([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0]))",
" (0, 0)\t1\n (0, 5)\t1\n (0, 16)\t1\n (0, 18)\t1\n"
],
[
"sklearn_hashing_trick = txt.HashingVectorizer( n_features=20, binary=True,norm=None)\r\ntext_vector = sklearn_hashing_trick.transform( ['Python for data science','Python for machine learning'])\r\ntext_vector",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer \r\n\r\ndocument = [\"One Geek helps Two Geeks\", \r\n\t\t\t\"Two Geeks help Four Geeks\", \r\n\t\t\t\"Each Geek helps many other Geeks at GeeksforGeeks\"] \r\n\r\n# Create a Vectorizer Object \r\nvectorizer = CountVectorizer() \r\n\r\nvectorizer.fit(document) \r\n\r\n# Printing the identified Unique words along with their indices \r\nprint(\"Vocabulary: \", vectorizer.vocabulary_) \r\n\r\n# Encode the Document \r\nvector = vectorizer.transform(document) \r\n\r\n# Summarizing the Encoded Texts \r\nprint(\"Encoded Document is:\") \r\nprint(vector.toarray())",
"Vocabulary: {'one': 9, 'geek': 3, 'helps': 7, 'two': 11, 'geeks': 4, 'help': 6, 'four': 2, 'each': 1, 'many': 8, 'other': 10, 'at': 0, 'geeksforgeeks': 5}\nEncoded Document is:\n[[0 0 0 1 1 0 0 1 0 1 0 1]\n [0 0 1 0 2 0 1 0 0 0 0 1]\n [1 1 0 1 1 1 0 1 1 0 1 0]]\n"
],
[
"from sklearn.feature_extraction.text import HashingVectorizer \r\n\r\ndocument = [\"One Geek helps Two Geeks\", \r\n\t\t\t\"Two Geeks help Four Geeks\", \r\n\t\t\t\"Each Geek helps many other Geeks at GeeksforGeeks\"] \r\n\r\n# Create a Vectorizer Object \r\nvectorizer = HashingVectorizer() \r\n\r\nvectorizer.fit(document) \r\n\r\n# Encode the Document \r\nvector = vectorizer.transform(document) \r\n\r\n# Summarizing the Encoded Texts \r\nprint(\"Encoded Document is:\") \r\nprint(vector.toarray())\r\n",
"Encoded Document is:\n[[0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]]\n"
],
[
"from sklearn.datasets import load_digits\r\ndigits = load_digits()\r\nX, y = digits.data,digits.target\r\nfrom sklearn.svm import SVC\r\nfrom sklearn.model_selection import cross_val_score\r\n%timeit single_core_learning = cross_val_score(SVC(), X,y, cv=20, n_jobs=1)\r\n%timeit multi_core_learning = cross_val_score(SVC(), X, y, cv=20, n_jobs=-1)from sklearn.datasets import load_iris\r\niris = load_iris()",
"1 loop, best of 3: 2.65 s per loop\n1 loop, best of 3: 1.7 s per loop\n"
],
[
"from sklearn.datasets import load_iris\r\niris = load_iris()",
"_____no_output_____"
],
[
"import pandas as pd\r\nimport numpy as np\r\niris_nparray = iris.data\r\n\r\niris_dataframe = pd.DataFrame(iris.data, columns=iris.feature_names)\r\niris_dataframe['group'] = pd.Series([iris.target_names[k] for k in iris.target],dtype=\"category\")",
"_____no_output_____"
],
[
"print (iris_dataframe.mean(numeric_only=True))\r\nprint (iris_dataframe.median(numeric_only=True))",
"sepal length (cm) 5.843333\nsepal width (cm) 3.057333\npetal length (cm) 3.758000\npetal width (cm) 1.199333\ndtype: float64\nsepal length (cm) 5.80\nsepal width (cm) 3.00\npetal length (cm) 4.35\npetal width (cm) 1.30\ndtype: float64\n"
],
[
"print (iris_dataframe.std())\r\nprint (iris_dataframe.max(numeric_only=True)-iris_dataframe.min(numeric_only=True) )",
"sepal length (cm) 0.828066\nsepal width (cm) 0.435866\npetal length (cm) 1.765298\npetal width (cm) 0.762238\ndtype: float64\nsepal length (cm) 3.6\nsepal width (cm) 2.4\npetal length (cm) 5.9\npetal width (cm) 2.4\ndtype: float64\n"
],
[
"print (iris_dataframe.quantile(np.array([0,.25,.50,.75,1])))",
" sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n0.00 4.3 2.0 1.00 0.1\n0.25 5.1 2.8 1.60 0.3\n0.50 5.8 3.0 4.35 1.3\n0.75 6.4 3.3 5.10 1.8\n1.00 7.9 4.4 6.90 2.5\n"
],
[
"pip install scipy",
"Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (1.4.1)\nRequirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy) (1.19.5)\n"
],
[
"from scipy.stats import kurtosis, kurtosistest\r\nk = kurtosis(iris_dataframe['petal length (cm)'])\r\nzscore, pvalue = kurtosistest(iris_dataframe['petal length (cm)'])\r\nprint ('Kurtosis %0.3f\\nz-score %0.3f\\np-value %0.3f' % (k, zscore, pvalue) )",
"Kurtosis -1.396\nz-score -14.823\np-value 0.000\n"
],
[
"from scipy.stats import skew, skewtest\r\ns = skew(iris_dataframe['petal length (cm)'])\r\nzscore, pvalue = skewtest(iris_dataframe['petal length (cm)'])\r\nprint ('Skewness %0.3f\\nz-score %0.3f\\np-value %0.3f' % (s, zscore, pvalue))",
"Skewness -0.272\nz-score -1.400\np-value 0.162\n"
],
[
"iris_binned = pd.concat([\r\npd.qcut(iris_dataframe.iloc[:,0], [0, .25, .5, .75, 1]),\r\npd.qcut(iris_dataframe.iloc[:,1], [0, .25, .5, .75, 1]),\r\npd.qcut(iris_dataframe.iloc[:,2], [0, .25, .5, .75, 1]),\r\npd.qcut(iris_dataframe.iloc[:,3], [0, .25, .5, .75, 1]),\r\n], join='outer', axis = 1)",
"_____no_output_____"
],
[
"print(iris_dataframe['group'].value_counts())\r\nprint(iris_binned['petal length (cm)'].value_counts())\r\nprint(iris_binned.describe())",
"virginica 50\nversicolor 50\nsetosa 50\nName: group, dtype: int64\n(0.999, 1.6] 44\n(4.35, 5.1] 41\n(5.1, 6.9] 34\n(1.6, 4.35] 31\nName: petal length (cm), dtype: int64\n sepal length (cm) ... petal width (cm)\ncount 150 ... 150\nunique 4 ... 4\ntop (4.2989999999999995, 5.1] ... (0.099, 0.3]\nfreq 41 ... 41\n\n[4 rows x 4 columns]\n"
],
[
"print (pd.crosstab(iris_dataframe['group'], iris_binned['petal length (cm)']) )",
"petal length (cm) (0.999, 1.6] (1.6, 4.35] (4.35, 5.1] (5.1, 6.9]\ngroup \nsetosa 44 6 0 0\nversicolor 0 25 25 0\nvirginica 0 0 16 34\n"
],
[
"boxplots = iris_dataframe.boxplot(return_type='axes')",
"_____no_output_____"
],
[
"from scipy.stats import ttest_ind\r\ngroup0 = iris_dataframe['group'] == 'setosa'\r\ngroup1 = iris_dataframe['group'] == 'versicolor'\r\ngroup2 = iris_dataframe['group'] == 'virginica'\r\nprint('var1 %0.3f var2 %03f' % (iris_dataframe['petal length (cm)'][group1].var(),iris_dataframe['petal length (cm)'][group2].var()))",
"var1 0.221 var2 0.304588\n"
],
[
"t, pvalue = ttest_ind(iris_dataframe['sepal width (cm)'][group1], iris_dataframe['sepal width (cm)'][group2], axis=0, equal_var=False)\r\nprint('t statistic %0.3f p-value %0.3f' % (t, pvalue))",
"t statistic -3.206 p-value 0.002\n"
],
[
"from scipy.stats import f_oneway\r\nf, pvalue = f_oneway(iris_dataframe['sepal width (cm)'][group0],iris_dataframe['sepal width (cm)'][group1],iris_dataframe['sepal width (cm)'][group2])\r\nprint(\"One-way ANOVA F-value %0.3f p-value %0.3f\" % (f,pvalue))",
"One-way ANOVA F-value 49.160 p-value 0.000\n"
],
[
"from pandas.plotting import parallel_coordinates\r\niris_dataframe['labels'] = [iris.target_names[k] for k in iris_dataframe['group']]\r\npll = parallel_coordinates(iris_dataframe,'labels')",
"_____no_output_____"
],
[
"densityplot = iris_dataframe[iris_dataframe.columns[:4]].plot(kind='density’)\r\nsingle_distribution = iris_dataframe['petal length (cm)'].plot(kind='hist')",
"_____no_output_____"
],
[
"simple_scatterplot = iris_dataframe.plot(kind='scatter', x='petal length (cm)', y='petal width (cm)')",
"_____no_output_____"
],
[
"from pandas import scatter_matrix\r\nmatrix_of_scatterplots = scatter_matrix(iris_dataframe, figsize=(6, 6),diagonal='kde') ",
"_____no_output_____"
],
[
"from sklearn.datasets import load_iris\r\niris = load_iris() ",
"_____no_output_____"
],
[
"import pandas as pd\r\nimport numpy as np\r\niris_nparray = iris.data\r\niris_dataframe = pd.DataFrame(iris.data, columns=iris.feature_names)\r\niris_dataframe['group'] = pd.Series([iris.target_names[k] for k in iris.target],\r\ndtype=\"category\") ",
"_____no_output_____"
],
[
"print(iris_dataframe['group'])",
"0 setosa\n1 setosa\n2 setosa\n3 setosa\n4 setosa\n ... \n145 virginica\n146 virginica\n147 virginica\n148 virginica\n149 virginica\nName: group, Length: 150, dtype: category\nCategories (3, object): ['setosa', 'versicolor', 'virginica']\n"
],
[
"from scipy.stats import spearmanr\r\nfrom scipy.stats.stats import pearsonr\r\nspearmanr_coef, spearmanr_p = spearmanr(iris_dataframe['sepal length (cm)'],iris_dataframe['sepal width (cm)'])\r\npearsonr_coef, pearsonr_p = pearsonr(iris_dataframe['sepal length (cm)'],iris_dataframe['sepal width (cm)'])\r\nprint ('Pearson correlation %0.3f | Spearman correlation %0.3f' % (pearsonr_coef,spearmanr_coef))",
"Pearson correlation -0.118 | Spearman correlation -0.167\n"
],
[
"from scipy.stats import chi2_contingency\r\ntable = pd.crosstab(iris_dataframe['group'], iris_binned['petal length (cm)'])\r\nchi2, p, dof, expected = chi2_contingency(table.values)\r\nprint('Chi-square %0.2f p-value %0.3f' % (chi2, p))",
"Chi-square 212.43 p-value 0.000\n"
],
[
"from sklearn.preprocessing import scale\r\nstand_sepal_width = scale(iris_dataframe['sepal width (cm)'])",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\r\nvalues = [5, 8, 9, 10, 4, 7]\r\ncolors = ['b', 'g', 'r', 'c', 'm', 'y']\r\nlabels = ['A', 'B', 'C', 'D', 'E', 'F']\r\nexplode = (0, 0.2, 0, 0, 0, 0)\r\nplt.pie(values, colors=colors, labels=labels, explode=explode, shadow=True, autopct='%1.2f%%')\r\nplt.title('Values')\r\nplt.show()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\r\nvalues = [5, 8, 9, 10, 4, 7]\r\nwidths = [0.7, 0.8, 0.7, 0.7, 0.7, 0.7]\r\ncolors = ['b', 'r', 'b', 'b', 'b', 'b']\r\nplt.bar(range(0, 6), values, width=widths,color=colors, align='center')\r\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx = 100 * np.random.randn(10000)\r\nplt.hist(x, histtype='stepfilled', color='g',label='TestData')\r\nplt.legend()\r\nplt.title('Step Filled Histogram')\r\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nx = 100 * np.random.randn(1000)\r\nplt.boxplot(x)\r\nplt.title('Step Filled Histogram')\r\nplt.show()",
"_____no_output_____"
],
[
"import numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport matplotlib.pylab as plb\r\nx1 = 15 * np.random.rand(50)\r\nx2 = 15 * np.random.rand(50) + 15\r\nx3 = 30 * np.random.rand(30)\r\nx = np.concatenate((x1, x2, x3))\r\ny1 = 15 * np.random.rand(50)\r\ny2 = 15 * np.random.rand(50) + 15\r\ny3 = 30 * np.random.rand(30)\r\ny = np.concatenate((y1, y2, y3))\r\ncolor_array = ['b'] * 50 + ['g'] * 50 + ['r'] * 30\r\nplt.scatter(x, y, s=[90], marker='*', c=color_array)\r\nz = np.polyfit(x, y, 1)\r\np = np.poly1d(z)\r\nplb.plot(x, p(x), 'm-')\r\nplt.show()\r\n",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\r\nimport datetime\r\nimport numpy as np\r\n\r\nx = np.array([datetime.datetime(2021, 1, 1, i, 0) for i in range(24)])\r\ny = np.random.randint(100, size=x.shape)\r\n\r\nplt.plot(x,y)\r\nplt.show()",
"_____no_output_____"
],
[
"from sklearn.datasets import fetch_20newsgroups\r\nimport sklearn.feature_extraction.text as ext\r\ncategories = ['sci.space']\r\ntwenty_train = fetch_20newsgroups(subset='train',categories=categories,remove=('headers', 'footers', 'quotes’), shuffle=True,random_state=42)",
"_____no_output_____"
],
[
"import pandas as pd\r\nimport numpy as np\r\ndf = pd.DataFrame({'A': [2,1,2,3,3,5,4], 'B': [1,2,3,5,4,2,5], 'C': [5,3,4,1,1,2,3]})\r\ndf = df.sort_index(by=['A', 'B'], ascending=[True, True])\r\ndf = df.reset_index(drop=True)\r\ndf",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b22d2c064eea7269ddd13619673b0d7dedf601 | 608,951 | ipynb | Jupyter Notebook | examples/LinearRadon_synth.ipynb | fercarozzi/myseismicjulia | a8b184af2dca29f36176e78128503d27411f2c28 | [
"MIT"
] | null | null | null | examples/LinearRadon_synth.ipynb | fercarozzi/myseismicjulia | a8b184af2dca29f36176e78128503d27411f2c28 | [
"MIT"
] | null | null | null | examples/LinearRadon_synth.ipynb | fercarozzi/myseismicjulia | a8b184af2dca29f36176e78128503d27411f2c28 | [
"MIT"
] | null | null | null | 4,684.238462 | 605,638 | 0.961411 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b26f37e1b458005f42d5062b370c27a280617b | 31,646 | ipynb | Jupyter Notebook | SQL_WORLD_SUICIDE_ANALYTICS.ipynb | allanstar-byte/ESTRELLA | 3445b14e7eb70c6d223970521c227cd71111b0e9 | [
"MIT",
"Unlicense"
] | 1 | 2020-09-24T10:07:26.000Z | 2020-09-24T10:07:26.000Z | SQL_WORLD_SUICIDE_ANALYTICS.ipynb | allanstar-byte/ESTRELLA | 3445b14e7eb70c6d223970521c227cd71111b0e9 | [
"MIT",
"Unlicense"
] | null | null | null | SQL_WORLD_SUICIDE_ANALYTICS.ipynb | allanstar-byte/ESTRELLA | 3445b14e7eb70c6d223970521c227cd71111b0e9 | [
"MIT",
"Unlicense"
] | null | null | null | 30.664729 | 249 | 0.344214 | [
[
[
"<a href=\"https://colab.research.google.com/github/allanstar-byte/ESTRELLA/blob/master/SQL_WORLD_SUICIDE_ANALYTICS.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **SQL DATA CLEANING, OUTLIERS AND ANALYTICS**",
"_____no_output_____"
],
[
"# **1. Connecting to our Database**",
"_____no_output_____"
]
],
[
[
"#loading the sql extension into our environment\n%load_ext sql\n\n# Then connect to our in memory sqlite database\n \n%sql sqlite://",
"_____no_output_____"
]
],
[
[
"# **2. Importing Data from CSV files**",
"_____no_output_____"
],
[
"The dataset we will use contains suicide cases from different countries in the world with different generations, age groups and other factors as outlined below.",
"_____no_output_____"
]
],
[
[
"# Importing the pandas library\n# We will use a function read_csv from pandas to read our datasets as shown\n#\nimport pandas as pd ",
"_____no_output_____"
],
[
"# Loading our table from the respective CSV files \nwith open('/content/Suicide.csv','r') as f:\n Suicide = pd.read_csv(f, index_col=0, encoding='utf-8')\n%sql DROP TABLE if EXISTS Suicide\n%sql PERSIST Suicide;\n%sql SELECT * FROM Suicide LIMIT 5;",
" * sqlite://\nDone.\n * sqlite://\n * sqlite://\nDone.\n"
]
],
[
[
"# **3. Analytics**",
"_____no_output_____"
]
],
[
[
"#1. identifying top 5 countries with the highest suicide cases in the world\n%%sql\nSELECT Country, \nSUM (Suicides_no) \nFROM Suicide \nGROUP BY Country\nORDER BY SUM (Suicides_no) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#2. identifying top 5 countries with the lowest suicide cases in the world\n%%sql\nSELECT Country, \nSUM (Suicides_no) \nFROM Suicide \nGROUP BY Country\nORDER BY SUM (Suicides_no) ASC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#3. identifying the generation with the highest suicide cases\n%%sql\nSELECT Generation, \nSUM (Suicide_rate) \nFROM Suicide \nGROUP BY Generation\nORDER BY SUM (Suicide_rate) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#4. identifying the generations with the lowest suicide cases\n%%sql\nSELECT Generation, \nSUM (Suicide_rate) \nFROM Suicide \nGROUP BY Generation\nORDER BY SUM (Suicide_rate) ASC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#5 Investigating which gender has more suicide rates compared to the other one\n%%sql\nSELECT Sex, \nSUM (Suicides_no) \nFROM Suicide \nGROUP BY Sex\nORDER BY SUM (Suicides_no) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#6. Knowing the age group which most people commit suicide\n%%sql\nSELECT Age, \nSUM (Suicides_no) \nFROM Suicide \nGROUP BY Age\nORDER BY SUM (Suicide_rate) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#7. Finding out the year where people committed suicide the most\n%%sql\nSELECT Year, \nSUM (Suicides_no) \nFROM Suicide \nGROUP BY Year\nORDER BY SUM (Suicides_no) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#8. Finding which countries has the most suicides comited at every 100,000\n%%sql\nSELECT Country, \nSUM (Suicides_per_hundred_thousand_pop) \nFROM Suicide \nGROUP BY Country\nORDER BY SUM (Suicides_per_hundred_thousand_pop) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
],
[
"#9. Finding which countries has the leas suicides comited at every 100,000\n%%sql\nSELECT Country, \nSUM (Suicides_per_hundred_thousand_pop) \nFROM Suicide \nGROUP BY Country\nORDER BY SUM (Suicides_per_hundred_thousand_pop) ASC\nlimit 7;",
" * sqlite://\nDone.\n"
],
[
"#10. Finding which Age groups has the most suicides commited at every 100,000\n%%sql\nSELECT Age, \nSUM (Suicides_per_hundred_thousand_pop) \nFROM Suicide \nGROUP BY Age\nORDER BY SUM (Suicides_per_hundred_thousand_pop) DESC\nlimit 5;",
" * sqlite://\nDone.\n"
]
]
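,
[
[
"As one more illustration (a hedged extra that simply combines two of the dimensions above), the per-100,000 rate can also be grouped by sex and age together:",
"_____no_output_____"
]
],
[
[
"%%sql\n-- #11. (extra) average suicides per 100,000 by sex and age group\nSELECT Sex, Age, \nAVG (Suicides_per_hundred_thousand_pop) \nFROM Suicide \nGROUP BY Sex, Age\nORDER BY AVG (Suicides_per_hundred_thousand_pop) DESC\nLIMIT 10;",
"_____no_output_____"
]
]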
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b27912f412eab3a7d47d12ec4009ecfb230e7e | 5,081 | ipynb | Jupyter Notebook | data-science.ipynb | linhbngo/presentations | 77f30ac6847871b089e46ab1014110986b27eb53 | [
"MIT"
] | null | null | null | data-science.ipynb | linhbngo/presentations | 77f30ac6847871b089e46ab1014110986b27eb53 | [
"MIT"
] | null | null | null | data-science.ipynb | linhbngo/presentations | 77f30ac6847871b089e46ab1014110986b27eb53 | [
"MIT"
] | null | null | null | 22.683036 | 159 | 0.539854 | [
[
[
"# <center> Data Wrangling </center>\n### <center> Clemson Infrastructure and Technology Integration (CITI) </center>\n##### <center> Linh B. Ngo </center>",
"_____no_output_____"
],
[
"**Wikipedia Definition:**\n- the process of manually converting or mapping data from one \"raw\" form into another format that allows for more convenient consumption of the data\n- done with the help of semi-automated tools. ",
"_____no_output_____"
],
[
"**Typical steps:**\n- Extracting the data in a raw form from the data source, \n- *munging* (transforming) the raw data using algorithms (e.g. sorting) or parsing the data into predefined data structures\n- depositing the resulting content into a data sink for storage and future use.",
"_____no_output_____"
],
[
"#### <center> Before you can ask the question, you have to understand the data! </center>",
"_____no_output_____"
],
[
"- What data are you using? (Where is the data coming from? Is it structured or unstructured? How many observations does it include?)\n- How do you need to transform the data to conduct your analysis?",
"_____no_output_____"
],
[
"**R**\n- Getting data\n- Cleaning data\n- Analyzing data\n- At Scale!!!",
"_____no_output_____"
],
[
"**Preliminary Tools**\n- `install.packages`\n - Simple, easy to use\n - Limited to certified CRAN packages only\n- `devtools`\n - Powerful, complex\n - Allow direct installation from github repositories!\n",
"_____no_output_____"
],
[
"**Getting Data**\n- `downloader`: direct download from URL\n- `RCurl`: interaction with websites (crawling) via HTTP requests\n- `twitteR`: interacting with Twitter API",
"_____no_output_____"
],
[
"**Importing Data**\n- `read.csv`: standrard R core library\n- `openxlsx, readxl`\n- `googlesheets`\n- `RMySQL, RPostgreSQL, RSQLite`\n- `rhdfs, rhbase, rhive`\n- `jsonlite, rjson`\n- `XML`\n- `quantmod`",
"_____no_output_____"
],
[
"**Cleaning Data (exploratory analysis)**\n- `sqldf`: SQL on data frames\n- `dplyr, plyr`: tools to interact with and manipulate data frames\n- `reshape2, tidyr`: tools to reshape data frames\n- `stringr` : text manipulation\n- `lubridate`: date manipulation\n- `zoo`: time series data\n",
"_____no_output_____"
],
[
"**Analyzing Data**\n- https://cran.r-project.org/web/views/MachineLearning.html\n- https://cran.r-project.org/web/views/TimeSeries.html\n- https://cran.r-project.org/web/views/SocialSciences.html\n",
"_____no_output_____"
],
[
"**At Scale!!!**\n- `parallel (snow and multicore)`: Palmetto\n- `rmr2, rhdfs, rhbase`\n- `sparkR`: Cypress",
"_____no_output_____"
],
[
"**One link to find them all:**\n- http://www.maths.lancs.ac.uk/~rowlings/R/TaskViews/",
"_____no_output_____"
],
[
"#### <center> EXAMPLES",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0b28ab41dee89e620277322e71c45699cb13b0d | 18,827 | ipynb | Jupyter Notebook | Copy of 01-Python Intro(1).ipynb | manjeetchavhan15/python | 3b7b9c88d6d6ba8011da1570e472e1678fa69a28 | [
"MIT"
] | null | null | null | Copy of 01-Python Intro(1).ipynb | manjeetchavhan15/python | 3b7b9c88d6d6ba8011da1570e472e1678fa69a28 | [
"MIT"
] | null | null | null | Copy of 01-Python Intro(1).ipynb | manjeetchavhan15/python | 3b7b9c88d6d6ba8011da1570e472e1678fa69a28 | [
"MIT"
] | null | null | null | 18,827 | 18,827 | 0.640782 | [
[
[
"# Introduction to Python\n\n##***Welcome to your first iPython Notebook.***\n\n\n\n## **About iPython Notebooks**\n\niPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the left bar of the cell.\n\n\n**In this notebook you will learn -**\n\n* Basic Syntax\n* Variables\n* Numbers\n* Casting\n* String\n\n\n\n\n\n\n\n\n\n\n\n",
"_____no_output_____"
],
[
"#Your First Program\n\n**Printing statements and numbers.**\n\nWe can use **print** function to display a string , integers, float, complex numbers.\n\n**Example:**\n",
"_____no_output_____"
]
],
[
[
"print(\"Hello Friend\")\nprint(30)\n",
"_____no_output_____"
]
],
[
[
"**Exercise 1.1:** \nDisplay \"Batman is the best superhero\" using print function.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (1 line of code)\nprint(\"Batman is the best superhero\")\n### END CODE HERE ###",
"Batman is the best superhero\n"
]
],
[
[
"\n\n**Expected Output: **\"Batman is the best superhero\"",
"_____no_output_____"
],
[
"#Python Variables\n",
"_____no_output_____"
],
[
"**Creating Variables :**\n\nUnlike other programming languages, Python has no command for declaring a variable.\n\nA variable is created the moment you first assign a value to it.\n\n**Example:**",
"_____no_output_____"
]
],
[
[
"x = 5\ny = \"Python\"\nprint(x)\nprint(y)",
"_____no_output_____"
]
],
[
[
"Variables do not need to be declared with any particular type and can even change type after they have been set.",
"_____no_output_____"
]
],
[
[
"x = 4 # x is of type int\ny = \"python\" # x is now of type str\nprint(x)\nprint(y)",
"_____no_output_____"
]
],
[
[
"**Variable Names :**\n\nA variable can have a short name (like x and y) or a more descriptive name (age, carname, total_volume). Rules for Python variables:\n\n\n\n* A variable name must start with a letter or the underscore character.\n* A variable name cannot start with a number.\n* A variable name can only contain alpha-numeric characters and underscores (A-z, 0-9, and _ ).\n* Variable names are case-sensitive (age, Age and AGE are three different variables).\n\n\n**NOTE:** Remember that variables are case-sensitive.",
"_____no_output_____"
],
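[
"For instance (a quick sketch of the rules above -- the last line shows case-sensitivity in action):",
"_____no_output_____"
],
[
"my_var = 1 # valid\n_my_var2 = 2 # valid\n# 2myvar = 3 would raise a SyntaxError (cannot start with a number)\nAge, age = 30, 40\nprint(Age, age) # 30 40 -- two different variables",
"_____no_output_____"
],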
[
"**Exercise 1.2:**\n\nCreate a variable **x** and assign value 10 to it. Create another variable **y** and assign the string **Hello there**. Print both variables.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (4 line of code)\nx = 10\ny = \"Hello there\"\nprint(x)\nprint(y)\n### END CODE HERE ### \n",
"10\nHello there\n"
]
],
[
[
"**Expected Output:**\n\n10\n\nHello there",
"_____no_output_____"
],
[
"**Exercise 1.3:**\n\nCreate a variable called **z**, assign** x + y** to it, and display the result.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### \nx = 5\ny = 15\nz = x + y\nprint(z)\n### END CODE HERE ### ",
"20\n"
]
],
[
[
"**Expected output: ** \n\n20",
"_____no_output_____"
],
[
"# Python Numbers\n**There are three numeric types in Python:**\n\n* ** int**\n* ** float**\n* **complex**\n",
"_____no_output_____"
],
[
"**Int **:\n\nInt, or integer, is a whole number, positive or negative, without decimals, of unlimited length.",
"_____no_output_____"
]
],
[
[
"x = 1\ny = 35656222554887711\nz = -3255522\n\nprint(type(x)) # To verify the type of any object in Python, use the type() function\nprint(type(y))\nprint(type(z))\n",
"_____no_output_____"
]
],
[
[
"**Float :**\n\nFloat, or \"floating point number\" is a number, positive or negative, containing one or more decimals.",
"_____no_output_____"
]
],
[
[
"x = 1.10\ny = 1.0\nz = -35.59\n\nprint(type(x))\nprint(type(y))\nprint(type(z))",
"_____no_output_____"
]
],
[
[
"**Complex :**\n\nComplex numbers are written with a \"j\" as the imaginary part.",
"_____no_output_____"
]
],
[
[
"x = 3+5j\ny = 5j\nz = -5j\n\nprint(type(x))\nprint(type(y))\nprint(type(z))",
"_____no_output_____"
]
],
[
[
"\n**Exercise 1.4:**\n\nFind whether E=3.4j is integer, float or complex.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (1 line of code)\nE=3.4j\nprint(type(E))\n### END CODE HERE ###",
"<class 'complex'>\n"
]
],
[
[
" **Expected output:** class 'complex'",
"_____no_output_____"
],
[
"# Python Casting",
"_____no_output_____"
],
[
"**Specify a Variable Type :**\n\nThere may be times when you want to specify a type on to a variable. This can be done with casting. Python is an object-orientated language, and as such it uses classes to define data types, including its primitive types.\n\nCasting in python is therefore done using constructor functions:\n\n A **literal** is a notation for representing a fixed value in source code.\n\n* **int()** - constructs an integer number from an integer literal, a float literal (by rounding down to the previous whole number), or a string literal (providing the string represents a whole number)\n* **float()** - constructs a float number from an integer literal, a float literal or a string literal (providing the string represents a float or an integer)\n* **str()** - constructs a string from a wide variety of data types, including strings, integer literals and float literals\n\n",
"_____no_output_____"
],
[
"**Integers**:",
"_____no_output_____"
]
],
[
[
"x = int(1) # x will be 1\ny = int(2.8) # y will be 2\nz = int(\"3\") # z will be 3",
"_____no_output_____"
]
],
[
[
"**Floats:**",
"_____no_output_____"
]
],
[
[
"x = float(1) # x will be 1.0\ny = float(2.8) # y will be 2.8\nz = float(\"3\") # z will be 3.0\nw = float(\"4.2\") # w will be 4.2",
"_____no_output_____"
]
],
[
[
"**Strings:**",
"_____no_output_____"
]
],
[
[
"x = str(\"s1\") # x will be 's1'\ny = str(2) # y will be '2'\nz = str(3.0) # z will be '3.0'",
"_____no_output_____"
]
],
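[
[
"One caveat worth knowing (a small sketch): int() will not parse a string that looks like a float, so such strings need to go through float() first.",
"_____no_output_____"
]
],
[
[
"# int(\"3.5\") would raise a ValueError -- convert via float() instead\nx = int(float(\"3.5\"))\nprint(x) # 3",
"_____no_output_____"
]
],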
[
[
"Main advantage of type casting is you can print and integer and a string in the same line.\n\n**Example: **",
"_____no_output_____"
]
],
[
[
"a = \" kingdoms in Westeros\"\nb = str(7)\n\nprint (b + a)",
"_____no_output_____"
]
],
[
[
"**Excercise 1.5:**\n\nCreate a variable **x** and assign the integer 3 to it. Create another variable **y** and assign string '4' to it. Add both variables using **int** function.\n",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 4 lines of code)\nx = 3\ny = \"4\"\nz = x + int(y)\nprint(z)\n### END CODE HERE ###",
"7\n"
]
],
[
[
"**Expected Output:**\n\n7\n",
"_____no_output_____"
],
[
"#Python Strings\n\n**String literals**:\n\nString literals in python are surrounded by either single quotation marks, or double quotation marks.\n\n'hello' is the same as \"hello\".\n\nLike many other popular programming languages, strings in Python are arrays of bytes representing unicode characters. Square brackets can be used to access elements of the string.",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(a[1]) # Gets the character at position 1 (remember that the first character has the position 0)",
"_____no_output_____"
],
[
"b = \"Hello, World!\"\nprint(b[2:5]) #Gets the characters from position 2 to position 5 (not included)",
"_____no_output_____"
]
],
[
[
"**The strip() method:**\n\nThe strip() method removes any whitespace from the beginning or the end:",
"_____no_output_____"
]
],
[
[
"a = \" Hello, World! \"\nprint(a.strip()) # returns \"Hello, World!\"",
"_____no_output_____"
]
],
[
[
"**The len() method:**\n\nThe len() method returns the length of a string",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(len(a))",
"_____no_output_____"
]
],
[
[
"**The lower() method:**\n\nThe lower() method returns the string in lower case",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(a.lower()) \nprint(a) #Orignal value of a is not changed\na=a.lower() #Orignal value of a is changed\nprint(a)",
"_____no_output_____"
]
],
[
[
"**The upper() method:**\n\nThe upper() method returns the string in upper case",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(a.upper())",
"_____no_output_____"
]
],
[
[
"**The replace() method :**\n\nThe replace() method replaces a string with another string.",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(a.replace(\"H\", \"J\"))",
"_____no_output_____"
]
],
[
[
"**The split() method :**\n\nThe split() method splits the string into substrings if it finds instances of the separator",
"_____no_output_____"
]
],
[
[
"a = \"Hello, World!\"\nprint(a.split(\",\")) # returns ['Hello', ' World!']",
"_____no_output_____"
]
],
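[
[
"Because each of these methods returns a new string, they chain naturally (a quick sketch):",
"_____no_output_____"
]
],
[
[
"a = \" Hello, World! \"\nprint(a.strip().lower().replace(\"world\", \"python\").split(\", \")) # ['hello', 'python!']",
"_____no_output_____"
]
],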
[
[
"**Exercise 1.6:**\n\nGet the first character of the string **str** and print it.\n",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 3 lines of code)\nstr=\"Learning python\"\nx = str[0]\nprint(x)\n### END CODE HERE ###",
"L\n"
]
],
[
[
"**Expected Output:**\n\nL",
"_____no_output_____"
],
[
"**Exercise 1.7:**\n\nGet the characters from position 3 to position 8 (not included) using strinf slicing method and print it.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 3 lines of code)\nstr=\"Learning python\"\nx = str[3:8]\nprint(x)\n### END CODE HERE ###",
"rning\n"
]
],
[
[
"**Expected Output:**\n\nrning",
"_____no_output_____"
],
[
"**Exercise 1.8:** \n\nFor E=\"HELLO FRIENS\" make the string lowercase, print, replace **s** by **d** and return the length of the string.",
"_____no_output_____"
]
],
[
[
"### START CODE HERE ### (≈ 4-5 lines of code)\nE = \"HELLO FRIENS\"\nE = E.lower()\nprint(E)\nE = E.replace(\"s\",\"d\")\nprint(E)\nprint(len(E))\n### END CODE HERE ###",
"hello friens\nhello friend\n12\n"
]
],
[
[
"**Expected Output:**\n\nhello friens\n\nhello friend\n\n12",
"_____no_output_____"
],
[
"# Great Job!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0b2a688007569fb72544ea585c78f95ffd87075 | 3,234 | ipynb | Jupyter Notebook | 0.14/_downloads/plot_gamma_map_inverse.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.14/_downloads/plot_gamma_map_inverse.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.14/_downloads/plot_gamma_map_inverse.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 59.888889 | 2,011 | 0.626469 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Compute a sparse inverse solution using the Gamma-Map empirical Bayesian method\n\n\nSee Wipf et al. \"A unified Bayesian framework for MEG/EEG source imaging.\"\nNeuroImage, vol. 44, no. 3, pp. 947?66, Mar. 2009.\n\n",
"_____no_output_____"
]
],
[
[
"# Author: Martin Luessi <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.inverse_sparse import gamma_map\nfrom mne.viz import plot_sparse_source_estimates\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nevoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'\ncov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'\n\n# Read the evoked response and crop it\ncondition = 'Left visual'\nevoked = mne.read_evokeds(evoked_fname, condition=condition,\n baseline=(None, 0))\nevoked.crop(tmin=-50e-3, tmax=300e-3)\n\n# Read the forward solution\nforward = mne.read_forward_solution(fwd_fname, surf_ori=True,\n force_fixed=False)\n\n# Read noise noise covariance matrix and regularize it\ncov = mne.read_cov(cov_fname)\ncov = mne.cov.regularize(cov, evoked.info)\n\n# Run the Gamma-MAP method\nalpha = 0.5\nstc, residual = gamma_map(evoked, forward, cov, alpha, xyz_same_gamma=True,\n return_residual=True)\n\n# View in 2D and 3D (\"glass\" brain like 3D plot)\n\n# Show the sources as spheres scaled by their strength\nscale_factors = np.max(np.abs(stc.data), axis=1)\nscale_factors = 0.5 * (1 + scale_factors / np.max(scale_factors))\n\nplot_sparse_source_estimates(\n forward['src'], stc, bgcolor=(1, 1, 1),\n modes=['sphere'], opacity=0.1, scale_factors=(scale_factors, None),\n fig_name=\"Gamma-MAP\")\n\n# Show the evoked response and the residual for gradiometers\nylim = dict(grad=[-120, 120])\nevoked.pick_types(meg='grad', exclude='bads')\nevoked.plot(titles=dict(grad='Evoked Response Gradiometers'), ylim=ylim,\n proj=True)\n\nresidual.pick_types(meg='grad', exclude='bads')\nresidual.plot(titles=dict(grad='Residuals Gradiometers'), ylim=ylim,\n proj=True)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b2af2606a0e61227564cf7936c79167743c835 | 18,236 | ipynb | Jupyter Notebook | source visualization.ipynb | parsafarinnia/rumor | c2748082a38713c8cc6eac6e5b882002624f02dd | [
"MIT"
] | 1 | 2020-12-14T07:16:18.000Z | 2020-12-14T07:16:18.000Z | source visualization.ipynb | parsafarinnia/rumor | c2748082a38713c8cc6eac6e5b882002624f02dd | [
"MIT"
] | null | null | null | source visualization.ipynb | parsafarinnia/rumor | c2748082a38713c8cc6eac6e5b882002624f02dd | [
"MIT"
] | null | null | null | 39.471861 | 114 | 0.401788 | [
[
[
"import os\nimport csv\nimport json\nimport pickle\nimport pandas as pd\ndf=pd.read_json(\"/Users/macbook/Desktop/reasearch/rumor/rumor/df_id_feature_class.json\",)\ndf",
"_____no_output_____"
],
[
"df_replies=pd.read_json(\"/Users/macbook/Desktop/reasearch/rumor/rumor/repliesdf_id_feature_class.json\")\ndf_replies",
"_____no_output_____"
],
[
"df_source=pd.read_json(\"/Users/macbook/Desktop/reasearch/rumor/rumor/sourcedf_id_feature_class.json\")\ndf_source",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0b2c7718ad0f7f60d149362d53143628eebc7e8 | 616 | ipynb | Jupyter Notebook | Notebooks/results.ipynb | sanjay920/metric-anomaly-poc | 48fd93cc4f211dd3dde7dd2d8a409aefb961c723 | [
"Apache-2.0"
] | null | null | null | Notebooks/results.ipynb | sanjay920/metric-anomaly-poc | 48fd93cc4f211dd3dde7dd2d8a409aefb961c723 | [
"Apache-2.0"
] | null | null | null | Notebooks/results.ipynb | sanjay920/metric-anomaly-poc | 48fd93cc4f211dd3dde7dd2d8a409aefb961c723 | [
"Apache-2.0"
] | null | null | null | 18.117647 | 48 | 0.543831 | [] | [] | [] |
d0b2e34e731714ed635fabd790fd1d4b52a0bf5f | 78,961 | ipynb | Jupyter Notebook | Taller_semana_7.ipynb | AngieCat26/MujeresDigitales | 64d0da0a4d31c60f70d4a3209d0fcb54884ea2b6 | [
"MIT"
] | null | null | null | Taller_semana_7.ipynb | AngieCat26/MujeresDigitales | 64d0da0a4d31c60f70d4a3209d0fcb54884ea2b6 | [
"MIT"
] | null | null | null | Taller_semana_7.ipynb | AngieCat26/MujeresDigitales | 64d0da0a4d31c60f70d4a3209d0fcb54884ea2b6 | [
"MIT"
] | null | null | null | 145.684502 | 38,670 | 0.83597 | [
[
[
"<a href=\"https://colab.research.google.com/github/AngieCat26/MujeresDigitales/blob/main/Taller_semana_7.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Introducción",
"_____no_output_____"
],
[
"**Contexto comercial.** Usted es un analista en una entidad bancaria, y se le proporciona un conjunto de datos de los clientes. Su jefe le pide que analice la información para determinar si existen similaridades entre grupos de clientes para lanzar una campaña de mercadeo.\n\n**Problema comercial.** Su tarea es **crear un modelo de clusterización para determinar si existen grupos de clientes similares**.\n\n**Contexto analítico.** Como científico de datos, se le pide realizar una clusterización de los clientes para identificar ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport scipy\nimport seaborn as sns\nimport sklearn # Paquete base de ML\n\nfrom scipy.stats import norm\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import MinMaxScaler, MaxAbsScaler, RobustScaler, StandardScaler\n\n%matplotlib inline",
"_____no_output_____"
],
[
"url = 'https://raw.githubusercontent.com/AngieCat26/MujeresDigitales/main/Lending_club_cleaned_2.csv'\ndf = pd.read_csv(url)\n",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"## Ejercicio 1:\n\nRealice una normalización de los datos numéricos es decir que los valores oscilen entre 0 y 1 en las columnas annual_inc y loan_amnt.\nConsejo: antes de realizar la normalización asegúrese de que el tipo de dichas columnas si sea numérico.",
"_____no_output_____"
]
],
[
[
"# Escriba aquí su codigo\ndef normalize(df):\n resultado = df.copy()\n for normalizado in df.columns:\n max = df[normalizado].max()\n min = df[normalizado].min()\n resultado[normalizado] = (df[normalizado] - min) / (max - min)\n return resultado\n\n",
"_____no_output_____"
],
[
"df_normalizado = normalize(df[['annual_inc', 'loan_amnt']])\ndf_normalizado.head(11)\n",
"_____no_output_____"
]
],
[
[
"## Ejercicio 2:\n\nEmplee el algoritmo de k-means para agrupar a los clientes usando un número de clusters de 4.",
"_____no_output_____"
]
],
[
[
"# Escriba aquí su codigo\nk = 4\nkmeans = KMeans(n_clusters = k, init='k-means++')\nkmeans.fit(df_normalizado)\n\nlabels = kmeans.predict(df_normalizado)\ncentroids = kmeans.cluster_centers_\ncentroids",
"_____no_output_____"
]
],
[
[
"## Ejercicio 3 (Opcional):\n\nRealice un gráfico de dispersión (scatter) para vizualizar los cluster que descubrió en el punto anterior (ejercicio 2). Usando colores diferentes para identificar los 4 cluster.",
"_____no_output_____"
]
],
[
[
"# Escriba aquí su codigo\nplt.figure(figsize=(6, 6))\ncolor_map = {1:'r', 2:'g', 3:'b' , 4:'c'}\ncolors = [color_map[x+1] for x in labels]\n\nplt.scatter(df_normalizado['annual_inc'], df_normalizado['loan_amnt'], color=colors, alpha=0.4, edgecolor='k')\nfor idx, centroid in enumerate(centroids):\n plt.scatter(*centroid, marker='*', edgecolor='k')\nplt.xlim(-0.25, 1.25)\nplt.xlabel('annual_inc', fontsize=12)\nplt.xticks(fontsize=12)\nplt.ylim(-0.25, 1.25)\nplt.ylabel('loan_amnt', fontsize=12)\nplt.yticks(fontsize=12)\nplt.title('annual_inc VS loan_amnt', fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Ejercicio 4 (Opcional):\n\nUse el método del codo para verificar cual es el número de clusters óptimo. Revise desde 1 clúster hasta 11 para realizar esta validación.",
"_____no_output_____"
]
],
[
[
"# Escriba aquí su codigo\nsum_sq_d = []\nK = range(1, 11)\nfor k in K:\n km = KMeans(n_clusters = k)\n km = km.fit(df_normalizado[['annual_inc', 'loan_amnt']])\n sum_sq_d.append(km.inertia_)\nplt.figure(figsize=(8,6))\nplt.plot(K, sum_sq_d, 'rx-.')\nplt.xlabel('Numero de Clusters, k', fontsize=12)\nplt.xticks(range(1,11), fontsize=12)\nplt.ylabel('suma de Distancias al Cuadradp', fontsize=12)\nplt.xticks(fontsize=12)\nplt.title('Metodo del Codo', fontsize=16)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b2e4844121a58d52f2d316933066d756399538 | 2,990 | ipynb | Jupyter Notebook | 4._Case_Study/4.0_Introduction.ipynb | YeoLab/single-cell-bioinformatics-scrm-2016 | 2ec3f4e8439b6574eee27a8cc0ed4bb8efbb1925 | [
"BSD-3-Clause"
] | 3 | 2016-10-23T09:17:16.000Z | 2018-06-07T22:38:43.000Z | 4._Case_Study/4.0_Introduction.ipynb | zqfang/single-cell-bioinformatics-scrm-2016 | 2ec3f4e8439b6574eee27a8cc0ed4bb8efbb1925 | [
"BSD-3-Clause"
] | null | null | null | 4._Case_Study/4.0_Introduction.ipynb | zqfang/single-cell-bioinformatics-scrm-2016 | 2ec3f4e8439b6574eee27a8cc0ed4bb8efbb1925 | [
"BSD-3-Clause"
] | 8 | 2016-06-16T12:50:40.000Z | 2021-03-02T21:50:48.000Z | 48.225806 | 417 | 0.686622 | [
[
[
"# Case Study: Macaulay & Svensson, *Cell Reports* (2016)\nOlga: print this paper for people to have\n\nThis \"Case Study\" is somewhat like a journal club except instead of just presenting the paper, we're going to re-work through some of these analyses ourselves and see how the interpretation of the data changes by using different algorithms\n\nWe will be using In this paper, co-authors Macaulay and Svensson studied haematopoesis in Zebrafish by doing...\n\n* Clustered cells into groups (Figure 1)\n* Ordered them by psuedotime (Figure 2)\n* Showed expression of known (Figure 3) and novel (Figure 4) lineage markers\n* Found a novel early committed group in the kidney before circulation in the blood (Figure 5)\n* Identified genes that were differentially regulated over pseudotime (Figure 6)\n* Investigated usage of duplicated genes during thrombopoeisis (Figure 7)\n\nThe computational scientist, [Valentine Svensson](http://nxn.se/), kindly provided the [notebooks and analyses on GitHub](https://github.com/Teichlab/spectrum-of-differentiation-supplements), and the exercises today are adapted from them. We're going to follow along, playing with different parameters as much as we can, until it gets to the very domain-specific analyses. The full set of notebooks are here:\n\n1. [Find state change clusters](macaulay2016/1. Find state change clusters.ipynb) (will do today, interactively)\n2. [Find cluster marker genes.ipynb](macaulay2016/2. Find cluster marker genes.ipynb)\n3. [Progression ordering and plots.ipynb](macaulay2016/3. Progression ordering and plots.ipynb) (will run today, but not suitable for interactivity)\n4. [Gaussian process pseudotime analysis.ipynb](macaulay2016/4. Gaussian process pseudotime analysis.ipynb)\n5. [Ohnolog taxonomy.ipynb](macaulay2016/5. Ohnolog taxonomy.ipynb)\n6. [Followup results.ipynb](macaulay2016/6. Followup results.ipynb)\n7. [Alternative pseudotime calculation.ipynb](macaulay2016/7. Alternative pseudotime calculation.ipynb)\n\n\n## [Click me to get started!](4.1_Case_Study.ipynb)\n\n## [Course feedback survey](https://docs.google.com/forms/d/1MY9ZHvMsYaORh7H_SAgSmCWlI0bDp28yfEj8kfr0kww/viewform)\n\nOnce we're done with the day, please fill out the survey above.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0b2f36482b313ecfd05a09e9ae5a202ea31d5de | 290,393 | ipynb | Jupyter Notebook | Pertemuan 7/Seleksi_Fitur.ipynb | DiendaRHR/Metodologi-Data-Science | c8258545b4429d60c986794ae1883f7260a29781 | [
"MIT"
] | null | null | null | Pertemuan 7/Seleksi_Fitur.ipynb | DiendaRHR/Metodologi-Data-Science | c8258545b4429d60c986794ae1883f7260a29781 | [
"MIT"
] | null | null | null | Pertemuan 7/Seleksi_Fitur.ipynb | DiendaRHR/Metodologi-Data-Science | c8258545b4429d60c986794ae1883f7260a29781 | [
"MIT"
] | null | null | null | 864.264881 | 267,082 | 0.943607 | [
[
[
"# Hands On: Seleksi Fitur",
"_____no_output_____"
],
[
"Seleksi fitur (feature selection) adalah proses memilih feature yang tepat untuk melatih model ML. \n\nUntuk melakukan feature selection, kita perlu memahami hubungan antara variables.\nHubungan antar dua random variables disebut correlation dan dapat dihitung dengan menggunakan correlation coefficient.\n\nRange nilai correlation coeficient adalah:\n\nPositif maks +1, korelasi positif, artinya kedua variable akan bergerak searah.\nNegatif maks -1, korelasi negatif, artinya kedua variable akan bergerak berlawanan.\nNol, menunjukan antara kedua variable tidak ada correlation.\nTeknik perhitungan correlation cukup banyak, berikut yang umum digunakan: Pearson, Kendall dan Spearman.\n\nA. Pearson\n* Paling umum digunakan.\n* Digunakan untuk numerical data.\n* Tidak bisa digunakan untuk ordinal data.\n* Mengukur linear data dengan asumsi data terdistribusi normal.\n\nB. Kendall\n* Rank correlation measure.\n* Dapat digunakan untuk numerical dan ordinal data, namun tidak untuk nominal data.\n* Tidak diperlukan linear relationship antar variable.\n* Digunakan untuk mengukur kemiripan ranked ordering data.\n* Untuk kondisi normal lebih baik menggunakan Kendall dibandingkan Spearman.\n\nC. Spearman\n* Rank correlation measure\n* Dapat digunakan untuk numerical dan ordinal data, namun tidak untuk nominal data.\n* Tidak diperlukan linear relationship antar variable.\n* Monotonic relationship\n\nAda beberapa metoda feature selection yang umum digunakan, yaitu Filter, Embedded dan Wrapper.\n\n**Filter Method**\nUmumnya digunakan pada tahap preprocessing. Pemilihan features tidak tergantung kepada algoritma ML yang akan digunakan . Features dipilih berdasarkan score test statistik kolerasi.\n\n**Embedded Method**\nFeature dipilih saat proses model training. Menggunakan learning algorithm untuk melakukan variable selection dan feature selection and classification secara simultan. Harus memilih algoritma machine learning yang sesuai.\n\n**Wrapper Method**\nMenggunakan subset of features untuk melatih model. Berdasarkan hasil yang dihasilkan dari model sebelumnya, kita tentukan untuk menambah atau membuang features dari subset. Kelemahannya membutuhkan resource besar dalam melakukan komputasi.\n\nAda jenis seleksi fitur lainnya, seperti dalam slide modul 8 ini, diantaranya:\n1. Seleksi Univariat (Univariate Selection)\n2. Pentingnya Fitur (Feature Importance)\n3. Matriks Korelasi (Correlation Matrix) dengan Heatmap\n\nTeknik pemilihan fitur yang perlu kita ketahui, untuk mendapatkan performa terbaik dari model Anda.\n\n1. SelectKBest\n2. Regresi linier\n3. Random Forest\n4. XGBoost\n5. Penghapusan Fitur Rekursif\n6. Boruta",
"_____no_output_____"
],
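[
"A quick illustration (the toy DataFrame below is an assumption for demonstration, not part of the original material) of how the three correlation coefficients just described can be computed with pandas' `Series.corr`:\n\n```python\nimport pandas as pd\n\ndf_toy = pd.DataFrame({'x': [1, 2, 3, 4, 5],\n                       'y': [2, 1, 4, 3, 5]})\nfor method in ('pearson', 'kendall', 'spearman'):\n    print(method, df_toy['x'].corr(df_toy['y'], method=method))\n```",
"_____no_output_____"
],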
[
"### Berikut ini adalah sebagian kecil dari metode/teknik dalam Seleksi Fitur",
"_____no_output_____"
],
[
"#### Sumber dataset: \n---\n\nhttps://www.kaggle.com/iabhishekofficial/mobile-price-classification#train.csv\n",
"_____no_output_____"
],
[
"### 1. Seleksi Unvariate\n---\nMetode paling sederhana dan tercepat didasarkan pada uji statistik univariat. Untuk setiap fitur, ukur seberapa kuat target bergantung pada fitur menggunakan uji statistik seperti χ2 (chi-square) or ANOVA.\n\nUji statistik dapat digunakan untuk memilih fitur-fitur tersebut yang memiliki relasi paling kuat dengan variabel output/target.\nLibrary scikit-learn menyediakan class *SelectKBest* yang digunakan untuk serangkaian uji statistik berbeda untuk memilih angka spesifik dari fitur. Berikut ini adalah uji statistik chi-square utk fitur non-negatif untuk memilih 10 fitur terbaik dari dataset *Mobile Price Range Prediction*.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\n\ndata = pd.read_csv(\"train.csv\")\n\nX = data.iloc[:,0:20] #independent colums\ny = data.iloc[:,-1] # target colum i.e price range\n\n# apply SelectKBest class to extract\n\nbestfeatures = SelectKBest(score_func=chi2, k=10)\nfit = bestfeatures.fit(X,y)\ndfscores = pd.DataFrame(fit.scores_)\ndfcolumns = pd.DataFrame(X.columns)\n\n#concat two dataframes for better visualization \n\nfeatureScores = pd.concat([dfcolumns,dfscores],axis=1)\nfeatureScores.columns = ['Specs','Score'] #naming the dataframe columns\nprint(featureScores.nlargest(10,'Score')) #print 10 best features",
" Specs Score\n13 ram 931267.519053\n11 px_height 17363.569536\n0 battery_power 14129.866576\n12 px_width 9810.586750\n8 mobile_wt 95.972863\n6 int_memory 89.839124\n15 sc_w 16.480319\n16 talk_time 13.236400\n4 fc 10.135166\n14 sc_h 9.614878\n"
]
],
[
[
"### 2. Feature Importance\n---\n*Feature importance* mengacu pada kelas teknik untuk menetapkan skor ke fitur input ke model prediktif yang menunjukkan *importance* relatif dari setiap fitur saat membuat prediksi.\n\nSkor *Feature importance* dapat dihitung untuk masalah yang melibatkan prediksi nilai numerik, yang disebut regresi, dan masalah yang melibatkan prediksi label kelas, yang disebut klasifikasi.\n\nSkor berguna dan dapat digunakan dalam berbagai situasi dalam masalah pemodelan prediktif, seperti:\n\n* Lebih memahami data.\n* Lebih memahami model.\n* Mengurangi jumlah fitur input.\n* memberi skor untuk setiap fitur data, semakin tinggi skor semakin penting atau relevan fitur tersebut terhadap variabel output\n\ninbuilt yang dilengkapi dengan Pengklasifikasi Berbasis Pohon (Tree Based Classifier), kami akan menggunakan Pengklasifikasi Pohon Ekstra untuk mengekstraksi 10 fitur teratas untuk kumpulan data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\ndata = pd.read_csv(\"train.csv\")\nX = data.iloc[:,0:20] #independent columns\ny = data.iloc[:,-1] #target column i.e price range\n\nfrom sklearn.ensemble import ExtraTreesClassifier\nimport matplotlib.pyplot as plt\nmodel = ExtraTreesClassifier()\nmodel.fit(X,y)\n\nprint(model.feature_importances_) #use inbuilt class feature_importances of tree based classifiers\n\n#plot graph of feature importances for better visualization\nfeat_importances = pd.Series(model.feature_importances_, index=X.columns)\nfeat_importances.nlargest(10).plot(kind='barh')\nplt.show()",
"[0.06211811 0.01976522 0.03262347 0.01943255 0.0312432 0.01696415\n 0.03540888 0.0328172 0.03525155 0.03272211 0.03170136 0.04648436\n 0.04934521 0.40406817 0.03209748 0.03192379 0.03393193 0.01437244\n 0.01841761 0.0193112 ]\n"
]
],
[
[
"### 3. Matriks Korelasi dengan Heatmap\n---\n\n* Korelasi menyatakan bagaimana fitur terkait satu sama lain atau variabel target.\n* Korelasi bisa positif (kenaikan satu nilai fitur meningkatkan nilai variabel target) atau negatif (kenaikan satu nilai fitur menurunkan nilai variabel target)\n* Heatmap memudahkan untuk mengidentifikasi fitur mana yang paling terkait dengan variabel target, kami akan memplot peta panas fitur yang berkorelasi menggunakan seaborn library\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\n\ndata = pd.read_csv(\"train.csv\")\n\nX = data.iloc[:,0:20] #independent columns\ny = data.iloc[:,-1] #target column i.e price range\n\n#get correlations of each features in dataset\ncorrmat = data.corr()\ntop_corr_features = corrmat.index\nplt.figure(figsize=(20,20))\n\n#plot heat map\ng=sns.heatmap(data[top_corr_features].corr(),annot=True,cmap=\"RdYlGn\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b304da1a1a4b1505b9c45c821ad9b3e4ae40d5 | 305,608 | ipynb | Jupyter Notebook | Predicting_bike_sharing_data.ipynb | gwillig/bikesharing | 31a2774695df04f2ad491ac8ae4f479d075b4e00 | [
"MIT"
] | null | null | null | Predicting_bike_sharing_data.ipynb | gwillig/bikesharing | 31a2774695df04f2ad491ac8ae4f479d075b4e00 | [
"MIT"
] | null | null | null | Predicting_bike_sharing_data.ipynb | gwillig/bikesharing | 31a2774695df04f2ad491ac8ae4f479d075b4e00 | [
"MIT"
] | null | null | null | 326.504274 | 159,608 | 0.910611 | [
[
[
"# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\n\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"_____no_output_____"
]
],
[
[
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)",
"_____no_output_____"
],
[
"rides.head()",
"_____no_output_____"
]
],
[
[
"## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"_____no_output_____"
]
],
[
[
"rides[:24*10].plot(x='dteday', y='cnt')",
"_____no_output_____"
]
],
[
[
"### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.",
"_____no_output_____"
]
],
[
[
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"_____no_output_____"
]
],
[
[
"### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"_____no_output_____"
]
],
[
[
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"_____no_output_____"
]
],
[
[
"### Splitting the data into training, testing, and validation sets\n\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"_____no_output_____"
]
],
[
[
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"_____no_output_____"
]
],
[
[
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"_____no_output_____"
]
],
[
[
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"_____no_output_____"
]
],
[
[
"## Time to build the network\n\nBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\n<img src=\"assets/neural_network.png\" width=300px>\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. Implement the forward pass in the `run` method.\n ",
"_____no_output_____"
]
],
[
[
"#############\n# In the my_answers.py file, fill out the TODO sections as specified\n#############\n",
"_____no_output_____"
],
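[
"# A minimal, hedged sketch of the pieces that belong in my_answers.py.\n# This is NOT the reference solution; the names and the commented forward\n# pass below are assumptions chosen to mirror the description above.\ndef sigmoid(x):\n    # hidden-layer activation\n    return 1 / (1 + np.exp(-x))\n\n# Forward pass sketch (the output activation is f(x) = x, so f'(x) = 1):\n# hidden_inputs = np.dot(X, weights_input_to_hidden)\n# hidden_outputs = sigmoid(hidden_inputs)\n# final_outputs = np.dot(hidden_outputs, weights_hidden_to_output)",
"_____no_output_____"
],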
[
"def MSE(y, Y):\n return np.mean((y-Y)**2)",
"_____no_output_____"
]
],
[
[
"## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.",
"_____no_output_____"
]
],
[
[
"\nfrom my_answers import NeuralNetwork\nimport unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
".....\n----------------------------------------------------------------------\nRan 5 tests in 0.004s\n\nOK\n"
]
],
[
[
"## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\n\n### Choose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \n\nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.",
"_____no_output_____"
]
],
[
[
"import sys\n\n####################\n### Set the hyperparameters in you myanswers.py file ###\n####################\n\nfrom my_answers import iterations, learning_rate, hidden_nodes, output_nodes\n\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)",
"Progress: 99.9% ... Training loss: 0.110 ... Validation loss: 0.207"
],
[
"plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"_____no_output_____"
]
],
[
[
"## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"_____no_output_____"
]
],
[
[
"## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\n> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\n#### Your answer below",
"_____no_output_____"
],
[
"The prediction of the model is almost on point for the timespan before the 21.12.\nAfter the 21.12 the model predicts the demain of bike too high.\n\nIn my option the reason for that can befound in th\n* that the model doenst take into account that most people are on holiday between 21.12 and New Year (the Witching Week)\n* To less data: The whole data set consist of 1 year. In order for the model to learn that the 21.12-31.12 is a special time of the year it would need a data set which consist of several years.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0b3080b4d5ffd2071f901fd97a1df34d8bfe37d | 40,958 | ipynb | Jupyter Notebook | Week 2 - Intro to Vectors and Numpy/LinAlg_Lab_2.ipynb | bryanpioloEspanol/linearAlgebra2021 | 0044efe815af2e842b02b8fa3364052c23d45d16 | [
"Apache-2.0"
] | null | null | null | Week 2 - Intro to Vectors and Numpy/LinAlg_Lab_2.ipynb | bryanpioloEspanol/linearAlgebra2021 | 0044efe815af2e842b02b8fa3364052c23d45d16 | [
"Apache-2.0"
] | null | null | null | Week 2 - Intro to Vectors and Numpy/LinAlg_Lab_2.ipynb | bryanpioloEspanol/linearAlgebra2021 | 0044efe815af2e842b02b8fa3364052c23d45d16 | [
"Apache-2.0"
] | 9 | 2021-02-10T16:27:49.000Z | 2021-08-10T09:45:26.000Z | 41.793878 | 9,590 | 0.647248 | [
[
[
"<a href=\"https://colab.research.google.com/github/dyjdlopez/linearAlgebra2021/blob/main/Week%202%20-%20Intro%20to%20Vectors%20and%20Numpy/LinAlg_Lab_2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Lab 2 - Plotting Vector using NumPy and MatPlotLib",
"_____no_output_____"
],
[
"In this laboratory we will be disucssin the basics of numerical and scientific programming by working with Vectors using NumPy and MatPlotLib.",
"_____no_output_____"
],
[
"### Objectives\nAt the end of this activity you will be able to:\n1. Be familiar with the libraries in Python for numerical and scientific programming.\n2. Visualize vectors through Python programming.\n3. Perform simple vector operations through code.",
"_____no_output_____"
],
[
"## Discussion",
"_____no_output_____"
],
[
"### *NumPy*\r\n\r\nNumPy or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring computing and representing matrices. Most Python scienitifc programming libraries uses NumPy as the basic code.",
"_____no_output_____"
],
[
"### Defining Vectors, Matrices, and Tensors\r\nVectors, Matrices, and Tensors are the fundamental objects in Linear Algebra programming. We'll be defining each of these objects specifically in the Computer Science/Engineering perspective since it would be much confusing if we consider their Physics and Pure Mathematics definitions.",
"_____no_output_____"
],
[
"#### <i>Scalars</i>\r\nScalars are numerical entities that are represented by a single value. ",
"_____no_output_____"
]
],
[
[
"import numpy as np\r\n\r\nx = np.array(-0.5)\r\nx",
"_____no_output_____"
]
],
[
[
"#### *Vectors*\r\nVectors are array of numerical values or scalars that would represent any feature space. Feature spaces or simply dimensions or the parameters of an equation or a function.",
"_____no_output_____"
],
[
"#### *Representing Vectors*",
"_____no_output_____"
],
[
"Now that you know how to represent vectors using their component and matrix form we can now hard-code them in Python. Let's say that you have the vectors:",
"_____no_output_____"
],
[
"$$ A = 4\\hat{x} + 3\\hat{y} \\\\\nB = 2\\hat{x} - 5\\hat{y}$$",
"_____no_output_____"
],
[
"In which it's matrix equivalent is:",
"_____no_output_____"
],
[
"$$ A = \\begin{bmatrix} 4 \\\\ 3\\end{bmatrix} , B = \\begin{bmatrix} 2 \\\\ -5\\end{bmatrix}\\\\\n A = \\begin{bmatrix} 4 & 3\\end{bmatrix} \\\\\n B = \\begin{bmatrix} 2 & -5\\end{bmatrix} \n$$",
"_____no_output_____"
],
[
"We can then start doing numpy code with this by:",
"_____no_output_____"
]
],
[
[
"A = np.array([4,3])\nB = np.array([2, -5])\n\nprint('Vector A is ', A)\nprint('Vector B is ', B)",
"Vector A is [4 3]\nVector B is [ 2 -5]\n"
]
],
[
[
"#### Describing vectors in NumPy",
"_____no_output_____"
],
[
"Describing vectors is very important if we want to perform basic to advanced operations with them. The fundamental ways in describing vectors are knowing their shape, size and dimensions.",
"_____no_output_____"
]
],
[
[
"### Checking shapes\n### Shapes tells us how many rows and columns are there\nball1 = np.array([1,2,3])\nball2 = np.array([0,1,-1])\npool = np.array([J,K]) ## Matrix\npool.shape",
"_____no_output_____"
],
[
"U = np.array([\n [1, 2],\n [2, 3]\n])\nU.shape",
"_____no_output_____"
],
[
"### Checking size\n### Array/Vector sizes tells us many total number of elements are there in the vector\n\nU.size",
"_____no_output_____"
],
[
"### Checking dimensions\n### The dimensions or rank of a vector tells us how many dimensions are there for the vector.\nA.ndim",
"_____no_output_____"
],
[
"pool.ndim",
"_____no_output_____"
]
],
[
[
"Great! Now let's try to explore in performing operations with these vectors.",
"_____no_output_____"
],
[
"#### Addition",
"_____no_output_____"
],
[
"The addition rule is simple, the we just need to add the elements of the matrices according to their index. So in this case if we add vector $A$ and vector $B$ we will have a resulting vector:",
"_____no_output_____"
],
[
"$$R = 6\\hat{x}-2\\hat{y} \\\\ \\\\or \\\\ \\\\ R = \\begin{bmatrix} 6 \\\\ -2\\end{bmatrix} $$",
"_____no_output_____"
],
[
"So let's try to do that in NumPy in several number of ways:",
"_____no_output_____"
]
],
[
[
"position1 = np.array([0, 0, 0])\nposition2 = np.array([1, 1, 0])\nposition3 = np.array([-1, 2, 0])\nposition4 = np.array([2, 5, 3])\n\nR = position1 + position2 + position3 + position4 #Eager execution\nR",
"_____no_output_____"
],
[
"R1 = np.add(position1,position2) #functional method\r\nR2 = np.add(R1,position3)\r\nR3 = np.add(R2,position4)\r\nR3",
"_____no_output_____"
],
[
"Rm = np.multiply(position3, position4)\nRm",
"_____no_output_____"
],
[
"Rm = position3 * position4\nRm",
"_____no_output_____"
]
],
[
[
"##### Try for yourself!",
"_____no_output_____"
],
[
"Try to implement subtraction and division with vectors $A$ and $B$!",
"_____no_output_____"
]
],
[
[
"### Try out you code here! Don't forget to take a screenshot or a selfie!\n\n",
"_____no_output_____"
]
],
[
[
"$$\nW = \\hat{x} + \\hat{y}\\\\\nT = -2\\hat{x} -3\\hat{y}\\\\\nR3 = W + (T*-W)\n$$",
"_____no_output_____"
]
],
[
[
"W = np.array([1, 1])\nT = np.array([-2, -3])\n# R3 = np.add(W,np.multiply(T,np.multiply(-1,W)))\nR3 = W + (T*(-1*W))\nR3",
"_____no_output_____"
]
],
[
[
"### Scaling",
"_____no_output_____"
],
[
"Scaling or scalar multiplication takes a scalar value and performs multiplication with a vector. Let's take the example below:",
"_____no_output_____"
],
[
"$$S = 5 \\cdot A$$",
"_____no_output_____"
],
[
"We can do this in numpy through:",
"_____no_output_____"
]
],
[
[
"A = np.array([1,5,8,9])\nS = 5*A\nS",
"_____no_output_____"
],
[
"S = np.multiply(5,A)\nS",
"_____no_output_____"
]
],
[
[
"$$R = 3X - Y\\\\X = \\hat{x} + \\hat{y} , Y = 2\\hat{x} - 3\\hat{y}$$",
"_____no_output_____"
]
],
[
[
"X = np.array([1, 1])\nY = np.array([2, -3])\nR = np.subtract(np.multiply(3,X),Y) ## functional method\n# R = 3*X - Y\nR",
"_____no_output_____"
]
],
[
[
"### MatPlotLib",
"_____no_output_____"
],
[
"MatPlotLib or MATLab Plotting library is Python's take on MATLabs plotting feature. MatPlotLib can be used vastly from graping values to visualizing several dimensions of data.",
"_____no_output_____"
],
[
"#### Visualizing Data",
"_____no_output_____"
],
[
"It's not enough just sloving these vectors so might need to visualize them. So we'll use MatPlotLib for that. We'll need to import it first.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt ## use this one if not in jupyterlab/notebook\n# from matplotlib import pyplot as plt\nimport matplotlib",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"A = [2,-1]\r\nB = [5,2]\r\nplt.scatter(A[0],A[1], label='A', c='magenta')\r\nplt.scatter(B[0],B[1], label='B', c='mediumspringgreen')\r\n\r\nplt.grid()\r\nplt.legend()\r\nplt.show()",
"_____no_output_____"
],
[
"A = np.array([-5,0])\nB = np.array([0,5])\n\nplt.title(\"Resultant Vector\\nMagnitude:{:.2f}\".format(R_mag))\n\nplt.xlim(-15, 15)\nplt.ylim(-15, 15)\n# print(B)\nplt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1, color='red') # Red --> A\nplt.quiver(A[0], A[1], B[0], B[1],angles='xy', scale_units='xy',scale=1, color='green')\nR = A+B\nplt.quiver(0, 0, R[0], R[1],angles='xy', scale_units='xy',scale=1, color='orange')\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"$\\sqrt{A^2+B^2+C^2}$",
"_____no_output_____"
]
],
[
[
"R",
"_____no_output_____"
],
[
"R_mag = np.sqrt(np.sum(A**2+B**2)) ##Euclidean Distance / Euclidean Norm\r\nrise = R[1]\r\nrun = R[0]\r\nslope = rise/run\r\nslope\r\n## angle of the vector? arctan(rise/run)",
"_____no_output_____"
]
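,
[
"# A hedged follow-up to the question in the comment above (an addition, not\n# part of the original lab): np.arctan2 returns the vector's angle and\n# handles all four quadrants correctly.\ntheta_deg = np.degrees(np.arctan2(R[1], R[0]))\ntheta_deg",
"_____no_output_____"
]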
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0b335ec12feb9cf00f53d6812b9430247a43dc2 | 34,264 | ipynb | Jupyter Notebook | plots/data.ipynb | kayzhou/election | 3b2659c478272e9171e2bfc81efe93aad00b6b94 | [
"MIT"
] | null | null | null | plots/data.ipynb | kayzhou/election | 3b2659c478272e9171e2bfc81efe93aad00b6b94 | [
"MIT"
] | null | null | null | plots/data.ipynb | kayzhou/election | 3b2659c478272e9171e2bfc81efe93aad00b6b94 | [
"MIT"
] | 1 | 2019-03-18T16:42:30.000Z | 2019-03-18T16:42:30.000Z | 31.036232 | 91 | 0.306502 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv(\"../data/community-BT/non_c0_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())\n\ndf = pd.read_csv(\"../data/community-BT/IRA_c0_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())",
"_____no_output_____"
],
[
"df = pd.read_csv(\"../data/community-BT/non_c1_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())\n\ndf = pd.read_csv(\"../data/community-BT/IRA_c1_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())",
"_____no_output_____"
],
[
"df = pd.read_csv(\"../data/community-BT/non_c3_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())\n\ndf = pd.read_csv(\"../data/community-BT/IRA_c3_BT.csv\")\ndisplay(df)\nprint(df['Pt'].tolist())",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0b354b9facd51adef6cc73871a00466c67c75bc | 41,162 | ipynb | Jupyter Notebook | S06 - DS in the Real World/BLU15 - Model CSI/original_model.ipynb | LDSSA/batch4-students | c0547ee0cf10645a0244336c976b304cff2f2000 | [
"MIT"
] | 19 | 2020-06-10T09:24:18.000Z | 2022-01-25T15:19:29.000Z | S06 - DS in the Real World/BLU15 - Model CSI/original_model.ipynb | LDSSA/sidecar-academy-batch2 | 594665662cbe830c79869f24a6d37a1aa78e0c0a | [
"MIT"
] | 25 | 2020-05-16T14:25:41.000Z | 2022-03-12T00:41:55.000Z | S06 - DS in the Real World/BLU15 - Model CSI/original_model.ipynb | LDSSA/batch4-students | c0547ee0cf10645a0244336c976b304cff2f2000 | [
"MIT"
] | 9 | 2020-08-04T22:08:14.000Z | 2021-12-16T17:24:30.000Z | 44.165236 | 16,228 | 0.646324 | [
[
[
"import json\nimport joblib\nimport pickle\nimport pandas as pd\nfrom lightgbm import LGBMClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import make_pipeline, Pipeline\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import precision_score, recall_score\nimport numpy as np\nfrom sklearn.metrics import precision_recall_curve\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_csv(\"data/train_searched.csv\")\ndf.head()",
"_____no_output_____"
],
[
"# lowercaes departments and location names\ndf['Department Name'] = df['Department Name'].apply(lambda x: str(x).lower())\ndf['InterventionLocationName'] = df['InterventionLocationName'].apply(lambda x: str(x).lower())",
"_____no_output_____"
],
[
"train_features = df.columns.drop(['VehicleSearchedIndicator', 'ContrabandIndicator'])",
"_____no_output_____"
],
[
"categorical_features = train_features.drop(['InterventionDateTime', 'SubjectAge'])\nnumerical_features = ['SubjectAge']",
"_____no_output_____"
],
[
"target = 'ContrabandIndicator'",
"_____no_output_____"
],
[
"# show the most common feature values for all the categorical features\nfor feature in categorical_features:\n display(df[feature].value_counts())",
"_____no_output_____"
],
[
"# I'm going to remove less common features. \n# Let's create a dictionary with the minimum required number of appearences\nmin_frequency = {\n \"Department Name\": 50,\n \"InterventionLocationName\": 50,\n \"ReportingOfficerIdentificationID\": 30,\n \"StatuteReason\": 10\n}",
"_____no_output_____"
],
[
"def filter_values(df: pd.DataFrame, column_name: str, threshold: int):\n value_counts = df[column_name].value_counts()\n to_keep = value_counts[value_counts > threshold].index\n filtered = df[df[column_name].isin(to_keep)]\n return filtered",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"for feature, threshold in min_frequency.items():\n df = filter_values(df, feature, threshold)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"X = df[train_features]\ny = df[target]",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)",
"_____no_output_____"
],
[
"categorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))])\n\npreprocessor = ColumnTransformer(\n transformers=[('cat', categorical_transformer, categorical_features)])\n\npipeline = make_pipeline(\n preprocessor,\n LGBMClassifier(n_jobs=-1, random_state=42),\n)",
"_____no_output_____"
],
[
"pipeline.fit(X_train, y_train)",
"_____no_output_____"
],
[
"preds = pipeline.predict(X_test)",
"_____no_output_____"
],
[
"def verify_success_rate_above(y_true, y_pred, min_success_rate=0.5):\n \"\"\"\n Verifies the success rate on a test set is above a provided minimum\n \n \n \"\"\"\n \n precision = precision_score(y_true, y_pred, pos_label=True)\n is_satisfied = (precision >= min_success_rate)\n \n return is_satisfied, precision\n",
"_____no_output_____"
],
[
"def verify_amount_found(y_true, y_pred):\n \"\"\"\n Verifies the amout of contraband found in the test dataset - a.k.a the recall in our test set\n \"\"\"\n \n recall = recall_score(y_true, y_pred) \n return recall\n",
"_____no_output_____"
],
[
"verify_success_rate_above(y_test, preds)",
"_____no_output_____"
],
[
"verify_amount_found(y_test, preds)",
"_____no_output_____"
]
],
[
[
"Now let's find the best threshold for our requirements.\n\nPrecision needs to be at least 0.5, and recall has to be as max as possible.\n\nIt's usually true that the bigger is precision, the lower is the recall. \n\nSo we need to find the threshold that coresponds to precision = 0.5",
"_____no_output_____"
]
],
[
[
"proba = pipeline.predict_proba(X_test)",
"_____no_output_____"
],
[
"precision, recall, thresholds = precision_recall_curve(y_test, proba[:, 1])",
"_____no_output_____"
],
[
"print(len(precision), len(recall), len(thresholds))",
"6615 6615 6614\n"
],
[
"# according to documentation, precision and recall\n# have 1 and 0 at the end, so we should remove them before plotting.\nprecision = precision[:-1]\nrecall = recall[:-1]",
"_____no_output_____"
],
[
"fig=plt.figure()\nax1 = plt.subplot(211)\nax2 = plt.subplot(212)\nax1.hlines(y=0.5,xmin=0, xmax=1, colors='red')\nax1.plot(thresholds,precision)\nax2.plot(thresholds,recall)\nax1.get_shared_x_axes().join(ax1, ax2)\nax1.set_xticklabels([])\nax1.set_title('Precision')\nax2.set_title('Recall')\nplt.xlabel('Threshold')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Red line shows the point where precision is equal 0.5. \n\nIt looks like the biggest recall for precision >= 0.5 is around 0.2\n\nLet's find the exact value.",
"_____no_output_____"
]
],
[
[
"min_index = [i for i, prec in enumerate(precision) if prec >= 0.5][0]\nprint(min_index)",
"2570\n"
],
[
"precision[min_index]",
"_____no_output_____"
],
[
"recall[min_index]",
"_____no_output_____"
],
[
"thresholds[min_index]",
"_____no_output_____"
],
[
"best_preds = [1 if pred > thresholds[min_index] else 0 for pred in proba[:, 1]]",
"_____no_output_____"
],
[
"verify_success_rate_above(y_test, best_preds)",
"_____no_output_____"
],
[
"verify_amount_found(y_test, best_preds)",
"_____no_output_____"
],
[
"with open('columns.json', 'w') as fh:\n json.dump(X_train.columns.tolist(), fh)\n \nwith open('dtypes.pickle', 'wb') as fh:\n pickle.dump(X_train.dtypes, fh)\n \njoblib.dump(pipeline, 'pipeline.pickle');",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b36dfcb0e9e2b9d66df23458969fc1792f368a | 704,740 | ipynb | Jupyter Notebook | docs/notebooks/Parser.ipynb | cn-fairy/fuzzingbook | 49c7194604caecc2a062cb56dcd39e093f5b103d | [
"MIT"
] | null | null | null | docs/notebooks/Parser.ipynb | cn-fairy/fuzzingbook | 49c7194604caecc2a062cb56dcd39e093f5b103d | [
"MIT"
] | null | null | null | docs/notebooks/Parser.ipynb | cn-fairy/fuzzingbook | 49c7194604caecc2a062cb56dcd39e093f5b103d | [
"MIT"
] | null | null | null | 39.858605 | 1,043 | 0.502333 | [
[
[
"# Parsing Inputs\n\nIn the chapter on [Grammars](Grammars.ipynb), we discussed how grammars can be\nused to represent various languages. We also saw how grammars can be used to\ngenerate strings of the corresponding language. Grammars can also perform the\nreverse. That is, given a string, one can decompose the string into its\nconstituent parts that correspond to the parts of grammar used to generate it\n– the _derivation tree_ of that string. These parts (and parts from other similar\nstrings) can later be recombined using the same grammar to produce new strings.\n\nIn this chapter, we use grammars to parse and decompose a given set of valid seed inputs into their corresponding derivation trees. This structural representation allows us to mutate, crossover, and recombine their parts in order to generate new valid, slightly changed inputs (i.e., fuzz)",
"_____no_output_____"
]
],
[
[
"from bookutils import YouTubeVideo\nYouTubeVideo('2yS9EfBEirE')",
"_____no_output_____"
]
],
[
[
"**Prerequisites**\n\n* You should have read the [chapter on grammars](Grammars.ipynb).\n* An understanding of derivation trees from the [chapter on grammar fuzzer](GrammarFuzzer.ipynb)\n is also required.",
"_____no_output_____"
],
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.Parser import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter introduces `Parser` classes, parsing a string into a _derivation tree_ as introduced in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb). Two important parser classes are provided:\n\n* [Parsing Expression Grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`). These are very efficient, but limited to specific grammar structure. Notably, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed.\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`). These accept any kind of context-free grammars, and explore all parsing alternatives (if any).\n\nUsing any of these is fairly easy, though. First, instantiate them with a grammar:\n\n```python\n>>> from Grammars import US_PHONE_GRAMMAR\n>>> us_phone_parser = EarleyParser(US_PHONE_GRAMMAR)\n```\nThen, use the `parse()` method to retrieve a list of possible derivation trees:\n\n```python\n>>> trees = us_phone_parser.parse(\"(555)987-6543\")\n>>> tree = list(trees)[0]\n>>> display_tree(tree)\n```\n\n\nThese derivation trees can then be used for test generation, notably for mutating and recombining existing inputs.\n\n\n\n",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
],
[
"from typing import Dict, List, Tuple, Collection, Set, Iterable, Generator, cast",
"_____no_output_____"
],
[
"from Fuzzer import Fuzzer # minor dependendcy",
"_____no_output_____"
],
[
"from Grammars import EXPR_GRAMMAR, START_SYMBOL, RE_NONTERMINAL\nfrom Grammars import is_valid_grammar, syntax_diagram, Grammar",
"_____no_output_____"
],
[
"from GrammarFuzzer import GrammarFuzzer, display_tree, tree_to_string, dot_escape\nfrom GrammarFuzzer import DerivationTree",
"_____no_output_____"
],
[
"from ExpectError import ExpectError",
"_____no_output_____"
],
[
"from IPython.display import display",
"_____no_output_____"
],
[
"from Timer import Timer",
"_____no_output_____"
]
],
[
[
"## Why Parsing for Fuzzing?",
"_____no_output_____"
],
[
"Why would one want to parse existing inputs in order to fuzz? Let us illustrate the problem with an example. Here is a simple program that accepts a CSV file of vehicle details and processes this information.",
"_____no_output_____"
]
],
[
[
"def process_inventory(inventory):\n res = []\n for vehicle in inventory.split('\\n'):\n ret = process_vehicle(vehicle)\n res.extend(ret)\n return '\\n'.join(res)",
"_____no_output_____"
]
],
[
[
"The CSV file contains details of one vehicle per line. Each row is processed in `process_vehicle()`.",
"_____no_output_____"
]
],
[
[
"def process_vehicle(vehicle):\n year, kind, company, model, *_ = vehicle.split(',')\n if kind == 'van':\n return process_van(year, company, model)\n\n elif kind == 'car':\n return process_car(year, company, model)\n\n else:\n raise Exception('Invalid entry')",
"_____no_output_____"
]
],
[
[
"Depending on the kind of vehicle, the processing changes.",
"_____no_output_____"
]
],
[
[
"def process_van(year, company, model):\n res = [\"We have a %s %s van from %s vintage.\" % (company, model, year)]\n iyear = int(year)\n if iyear > 2010:\n res.append(\"It is a recent model!\")\n else:\n res.append(\"It is an old but reliable model!\")\n return res",
"_____no_output_____"
],
[
"def process_car(year, company, model):\n res = [\"We have a %s %s car from %s vintage.\" % (company, model, year)]\n iyear = int(year)\n if iyear > 2016:\n res.append(\"It is a recent model!\")\n else:\n res.append(\"It is an old but reliable model!\")\n return res",
"_____no_output_____"
]
],
[
[
"Here is a sample of inputs that the `process_inventory()` accepts.",
"_____no_output_____"
]
],
[
[
"mystring = \"\"\"\\\n1997,van,Ford,E350\n2000,car,Mercury,Cougar\\\n\"\"\"\nprint(process_inventory(mystring))",
"We have a Ford E350 van from 1997 vintage.\nIt is an old but reliable model!\nWe have a Mercury Cougar car from 2000 vintage.\nIt is an old but reliable model!\n"
]
],
[
[
"Let us try to fuzz this program. Given that the `process_inventory()` takes a CSV file, we can write a simple grammar for generating comma separated values, and generate the required CSV rows. For convenience, we fuzz `process_vehicle()` directly.",
"_____no_output_____"
]
],
[
[
"import string",
"_____no_output_____"
],
[
"CSV_GRAMMAR: Grammar = {\n '<start>': ['<csvline>'],\n '<csvline>': ['<items>'],\n '<items>': ['<item>,<items>', '<item>'],\n '<item>': ['<letters>'],\n '<letters>': ['<letter><letters>', '<letter>'],\n '<letter>': list(string.ascii_letters + string.digits + string.punctuation + ' \\t\\n')\n}",
"_____no_output_____"
]
],
[
[
" We need some infrastructure first for viewing the grammar.",
"_____no_output_____"
]
],
[
[
"syntax_diagram(CSV_GRAMMAR)",
"start\n"
]
],
[
[
"We generate `1000` values, and evaluate the `process_vehicle()` with each.",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ntrials = 1000\nvalid: List[str] = []\ntime = 0\nfor i in range(trials):\n with Timer() as t:\n vehicle_info = gf.fuzz()\n try:\n process_vehicle(vehicle_info)\n valid.append(vehicle_info)\n except:\n pass\n time += t.elapsed_time()\nprint(\"%d valid strings, that is GrammarFuzzer generated %f%% valid entries from %d inputs\" %\n (len(valid), len(valid) * 100.0 / trials, trials))\nprint(\"Total time of %f seconds\" % time)",
"0 valid strings, that is GrammarFuzzer generated 0.000000% valid entries from 1000 inputs\nTotal time of 5.692460 seconds\n"
]
],
[
[
"This is obviously not working. But why?",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ntrials = 10\ntime = 0\nfor i in range(trials):\n vehicle_info = gf.fuzz()\n try:\n print(repr(vehicle_info), end=\"\")\n process_vehicle(vehicle_info)\n except Exception as e:\n print(\"\\t\", e)\n else:\n print()",
"'9w9J\\'/,LU<\"l,|,Y,Zv)Amvx,c\\n'\t Invalid entry\n'(n8].H7,qolS'\t not enough values to unpack (expected at least 4, got 2)\n'\\nQoLWQ,jSa'\t not enough values to unpack (expected at least 4, got 2)\n'K1,\\n,RE,fq,%,,sT+aAb'\t Invalid entry\n\"m,d,,8j4'),-yQ,B7\"\t Invalid entry\n'g4,s1\\t[}{.,M,<,\\nzd,.am'\t Invalid entry\n',Z[,z,c,#x1,gc.F'\t Invalid entry\n'pWs,rT`,R'\t not enough values to unpack (expected at least 4, got 3)\n'iN,br%,Q,R'\t Invalid entry\n'ol,\\nH<\\tn,^#,=A'\t Invalid entry\n"
]
],
[
[
"None of the entries will get through unless the fuzzer can produce either `van` or `car`.\nIndeed, the reason is that the grammar itself does not capture the complete information about the format. So here is another idea. We modify the `GrammarFuzzer` to know a bit about our format.",
"_____no_output_____"
]
],
[
[
"import copy",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"class PooledGrammarFuzzer(GrammarFuzzer):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._node_cache = {}\n\n def update_cache(self, key, values):\n self._node_cache[key] = values\n\n def expand_node_randomly(self, node):\n (symbol, children) = node\n assert children is None\n if symbol in self._node_cache:\n if random.randint(0, 1) == 1:\n return super().expand_node_randomly(node)\n return copy.deepcopy(random.choice(self._node_cache[symbol]))\n return super().expand_node_randomly(node)",
"_____no_output_____"
]
],
[
[
"Let us try again!",
"_____no_output_____"
]
],
[
[
"gf = PooledGrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ngf.update_cache('<item>', [\n ('<item>', [('car', [])]),\n ('<item>', [('van', [])]),\n])\ntrials = 10\ntime = 0\nfor i in range(trials):\n vehicle_info = gf.fuzz()\n try:\n print(repr(vehicle_info), end=\"\")\n process_vehicle(vehicle_info)\n except Exception as e:\n print(\"\\t\", e)\n else:\n print()",
"',h,van,|'\t Invalid entry\n'M,w:K,car,car,van'\t Invalid entry\n'J,?Y,van,van,car,J,~D+'\t Invalid entry\n'S4,car,car,o'\t invalid literal for int() with base 10: 'S4'\n'2*-,van'\t not enough values to unpack (expected at least 4, got 2)\n'van,%,5,]'\t Invalid entry\n'van,G3{y,j,h:'\t Invalid entry\n'$0;o,M,car,car'\t Invalid entry\n'2d,f,e'\t not enough values to unpack (expected at least 4, got 3)\n'/~NE,car,car'\t not enough values to unpack (expected at least 4, got 3)\n"
]
],
[
[
"At least we are getting somewhere! It would be really nice if _we could incorporate what we know about the sample data in our fuzzer._ In fact, it would be nice if we could _extract_ the template and valid values from samples, and use them in our fuzzing. How do we do that? The quick answer to this question is: Use a *parser*. ",
"_____no_output_____"
],
[
"## Using a Parser\n\nGenerally speaking, a _parser_ is the part of a a program that processes (structured) input. The parsers we discuss in this chapter transform an input string into a _derivation tree_ (discussed in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb)). From a user's perspective, all it takes to parse an input is two steps: \n\n1. Initialize the parser with a grammar, as in\n```\nparser = Parser(grammar)\n```\n\n2. Using the parser to retrieve a list of derivation trees:\n\n```python\ntrees = parser.parse(input)\n```\n\nOnce we have parsed a tree, we can use it just as the derivation trees produced from grammar fuzzing.\n\nWe discuss a number of such parsers, in particular\n* [parsing expression grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`), which are very efficient, but limited to specific grammar structure; and\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`), which accept any kind of context-free grammars.\n\nIf you just want to _use_ parsers (say, because your main focus is testing), you can just stop here and move on [to the next chapter](LangFuzzer.ipynb), where we learn how to make use of parsed inputs to mutate and recombine them. If you want to _understand_ how parsers work, though, this chapter is right for you.",
"_____no_output_____"
],
[
"## An Ad Hoc Parser\n\nAs we saw in the previous section, programmers often have to extract parts of data that obey certain rules. For example, for *CSV* files, each element in a row is separated by *commas*, and multiple raws are used to store the data.",
"_____no_output_____"
],
[
"To extract the information, we write an ad hoc parser `simple_parse_csv()`.",
"_____no_output_____"
]
],
[
[
"def simple_parse_csv(mystring: str) -> DerivationTree:\n children: List[DerivationTree] = []\n tree = (START_SYMBOL, children)\n for i, line in enumerate(mystring.split('\\n')):\n children.append((\"record %d\" % i, [(cell, [])\n for cell in line.split(',')]))\n return tree",
"_____no_output_____"
]
],
[
[
"We also change the default orientation of the graph to *left to right* rather than *top to bottom* for easier viewing using `lr_graph()`.",
"_____no_output_____"
]
],
[
[
"def lr_graph(dot):\n dot.attr('node', shape='plain')\n dot.graph_attr['rankdir'] = 'LR'",
"_____no_output_____"
]
],
[
[
"The `display_tree()` shows the structure of our CSV file after parsing.",
"_____no_output_____"
]
],
[
[
"tree = simple_parse_csv(mystring)\ndisplay_tree(tree, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"This is of course simple. What if we encounter slightly more complexity? Again, another example from the Wikipedia.",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1997,Ford,E350,\"ac, abs, moon\",3000.00\\\n'''\nprint(mystring)",
"1997,Ford,E350,\"ac, abs, moon\",3000.00\n"
]
],
[
[
"We define a new annotation method `highlight_node()` to mark the nodes that are interesting.",
"_____no_output_____"
]
],
[
[
"def highlight_node(predicate):\n def hl_node(dot, nid, symbol, ann):\n if predicate(dot, nid, symbol, ann):\n dot.node(repr(nid), dot_escape(symbol), fontcolor='red')\n else:\n dot.node(repr(nid), dot_escape(symbol))\n return hl_node",
"_____no_output_____"
]
],
[
[
"Using `highlight_node()` we can highlight particular nodes that we were wrongly parsed.",
"_____no_output_____"
]
],
[
[
"tree = simple_parse_csv(mystring)\nbad_nodes = {5, 6, 7, 12, 13, 20, 22, 23, 24, 25}",
"_____no_output_____"
],
[
"def hl_predicate(_d, nid, _s, _a): return nid in bad_nodes",
"_____no_output_____"
],
[
"highlight_err_node = highlight_node(hl_predicate)\ndisplay_tree(tree, log=False, node_attr=highlight_err_node,\n graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"The marked nodes indicate where our parsing went wrong. We can of course extend our parser to understand quotes. First we define some of the helper functions `parse_quote()`, `find_comma()` and `comma_split()`",
"_____no_output_____"
]
],
[
[
"def parse_quote(string, i):\n v = string[i + 1:].find('\"')\n return v + i + 1 if v >= 0 else -1",
"_____no_output_____"
],
[
"def find_comma(string, i):\n slen = len(string)\n while i < slen:\n if string[i] == '\"':\n i = parse_quote(string, i)\n if i == -1:\n return -1\n if string[i] == ',':\n return i\n i += 1\n return -1",
"_____no_output_____"
],
[
"def comma_split(string):\n slen = len(string)\n i = 0\n while i < slen:\n c = find_comma(string, i)\n if c == -1:\n yield string[i:]\n return\n else:\n yield string[i:c]\n i = c + 1",
"_____no_output_____"
]
],
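[
[
"Before wiring `comma_split()` into the parser, here is a quick usage check on the quoted record from above (the expected result is shown as a comment):",
"_____no_output_____"
]
],
[
[
"list(comma_split('1997,Ford,E350,\"ac, abs, moon\",3000.00'))\n# ['1997', 'Ford', 'E350', '\"ac, abs, moon\"', '3000.00']",
"_____no_output_____"
]
],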
[
[
"We can update our `parse_csv()` procedure to use our advanced quote parser.",
"_____no_output_____"
]
],
[
[
"def parse_csv(mystring):\n children = []\n tree = (START_SYMBOL, children)\n for i, line in enumerate(mystring.split('\\n')):\n children.append((\"record %d\" % i, [(cell, [])\n for cell in comma_split(line)]))\n return tree",
"_____no_output_____"
]
],
[
[
"Our new `parse_csv()` can now handle quotes correctly.",
"_____no_output_____"
]
],
[
[
"tree = parse_csv(mystring)\ndisplay_tree(tree, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"That of course does not survive long:",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1999,Chevy,\"Venture \\\\\"Extended Edition, Very Large\\\\\"\",,5000.00\\\n'''\nprint(mystring)",
"1999,Chevy,\"Venture \\\"Extended Edition, Very Large\\\"\",,5000.00\n"
]
],
[
[
"A few embedded quotes are sufficient to confuse our parser again.",
"_____no_output_____"
]
],
[
[
"tree = parse_csv(mystring)\nbad_nodes = {4, 5}\ndisplay_tree(tree, node_attr=highlight_err_node, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"Here is another record from that CSV file:",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1996,Jeep,Grand Cherokee,\"MUST SELL!\nair, moon roof, loaded\",4799.00\n'''\nprint(mystring)",
"1996,Jeep,Grand Cherokee,\"MUST SELL!\nair, moon roof, loaded\",4799.00\n\n"
],
[
"tree = parse_csv(mystring)\nbad_nodes = {5, 6, 7, 8, 9, 10}\ndisplay_tree(tree, node_attr=highlight_err_node, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"Fixing this would require modifying both inner `parse_quote()` and the outer `parse_csv()` procedures. We note that each of these features actually documented in the CSV [RFC 4180](https://tools.ietf.org/html/rfc4180)",
"_____no_output_____"
],
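[
"One possible fix is to extend `parse_quote()` to skip over backslash-escaped quotes, as used in the example above. Here is a minimal sketch (our own illustration, not a complete [RFC 4180](https://tools.ietf.org/html/rfc4180) implementation; the name `parse_quote_esc()` is hypothetical):\n\n```python\ndef parse_quote_esc(string, i):\n    # Like parse_quote(), but skip over backslash-escaped characters.\n    i += 1\n    while i < len(string):\n        if string[i] == '\\\\':\n            i += 2  # skip the escaped character\n        elif string[i] == '\"':\n            return i  # index of the closing quote\n        else:\n            i += 1\n    return -1\n```\n\nEven with this fix, multi-line fields as in the previous record would still require further changes.",
"_____no_output_____"
],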
[
"Indeed, each additional improvement falls apart even with a little extra complexity. The problem becomes severe when one encounters recursive expressions. For example, JSON is a common alternative to CSV files for saving data. Similarly, one may have to parse data from an HTML table instead of a CSV file if one is getting the data from the web.\n\nOne might be tempted to fix it with a little more ad hoc parsing, with a bit of *regular expressions* thrown in. However, that is the [path to insanity](https://stackoverflow.com/a/1732454).",
"_____no_output_____"
],
[
"It is here that _formal parsers_ shine. The main idea is that, any given set of strings belong to a language, and these languages can be specified by their grammars (as we saw in the [chapter on grammars](Grammars.ipynb)). The great thing about grammars is that they can be _composed_. That is, one can introduce finer and finer details into an internal structure without affecting the external structure, and similarly, one can change the external structure without much impact on the internal structure.",
"_____no_output_____"
],
[
"## Grammars in Parsing\n\nWe briefly describe grammars in the context of parsing.",
"_____no_output_____"
],
[
"### Excursion: Grammars and Derivation Trees",
"_____no_output_____"
],
[
"A grammar, as you have read from the [chapter on grammars](Grammars.ipynb) is a set of _rules_ that explain how the start symbol can be expanded. Each rule has a name, also called a _nonterminal_, and a set of _alternative choices_ in how the nonterminal can be expanded.",
"_____no_output_____"
]
],
[
[
"A1_GRAMMAR: Grammar = {\n \"<start>\": [\"<expr>\"],\n \"<expr>\": [\"<expr>+<expr>\", \"<expr>-<expr>\", \"<integer>\"],\n \"<integer>\": [\"<digit><integer>\", \"<digit>\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\n}",
"_____no_output_____"
],
[
"syntax_diagram(A1_GRAMMAR)",
"start\n"
]
],
[
[
"In the above expression, the rule `<expr> : [<expr>+<expr>,<expr>-<expr>,<integer>]` corresponds to how the nonterminal `<expr>` might be expanded. The expression `<expr>+<expr>` corresponds to one of the alternative choices. We call this an _alternative_ expansion for the nonterminal `<expr>`. Finally, in an expression `<expr>+<expr>`, each of `<expr>`, `+`, and `<expr>` are _symbols_ in that expansion. A symbol could be either a nonterminal or a terminal symbol based on whether its expansion is available in the grammar.",
"_____no_output_____"
],
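[
"These parts are directly accessible in code. For instance, here is a small illustration using the grammar above (expected values are shown as comments):\n\n```python\nA1_GRAMMAR['<expr>']     # all alternatives for <expr>\nA1_GRAMMAR['<expr>'][0]  # the first alternative: '<expr>+<expr>'\n```",
"_____no_output_____"
],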
[
"Here is a string that represents an arithmetic expression that we would like to parse, which is specified by the grammar above:",
"_____no_output_____"
]
],
[
[
"mystring = '1+2'",
"_____no_output_____"
]
],
[
[
"The _derivation tree_ for our expression from this grammar is given by:",
"_____no_output_____"
]
],
[
[
"tree = ('<start>', [('<expr>',\n [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('2',\n [])])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"While a grammar can be used to specify a given language, there could be multiple\ngrammars that correspond to the same language. For example, here is another \ngrammar to describe the same addition expression.",
"_____no_output_____"
]
],
[
[
"A2_GRAMMAR: Grammar = {\n \"<start>\": [\"<expr>\"],\n \"<expr>\": [\"<integer><expr_>\"],\n \"<expr_>\": [\"+<expr>\", \"-<expr>\", \"\"],\n \"<integer>\": [\"<digit><integer_>\"],\n \"<integer_>\": [\"<integer>\", \"\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\n}",
"_____no_output_____"
],
[
"syntax_diagram(A2_GRAMMAR)",
"start\n"
]
],
[
[
"The corresponding derivation tree is given by:",
"_____no_output_____"
]
],
[
[
"tree = ('<start>', [('<expr>', [('<integer>', [('<digit>', [('1', [])]),\n ('<integer_>', [])]),\n ('<expr_>', [('+', []),\n ('<expr>',\n [('<integer>',\n [('<digit>', [('2', [])]),\n ('<integer_>', [])]),\n ('<expr_>', [])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"Indeed, there could be different classes of grammars that\ndescribe the same language. For example, the first grammar `A1_GRAMMAR`\nis a grammar that sports both _right_ and _left_ recursion, while the\nsecond grammar `A2_GRAMMAR` does not have left recursion in the\nnonterminals in any of its productions, but contains _epsilon_ productions.\n(An epsilon production is a production that has empty string in its right\nhand side.)",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"### Excursion: Recursion",
"_____no_output_____"
],
[
"You would have noticed that we reuse the term `<expr>` in its own definition. Using the same nonterminal in its own definition is called *recursion*. There are two specific kinds of recursion one should be aware of in parsing, as we see in the next section.",
"_____no_output_____"
],
[
"#### Recursion\n\nA grammar is _left recursive_ if any of its nonterminals are left recursive,\nand a nonterminal is directly left-recursive if the left-most symbol of\nany of its productions is itself.",
"_____no_output_____"
]
],
[
[
"LR_GRAMMAR: Grammar = {\n '<start>': ['<A>'],\n '<A>': ['<A>a', ''],\n}",
"_____no_output_____"
],
[
"syntax_diagram(LR_GRAMMAR)",
"start\n"
],
[
"mystring = 'aaaaaa'\ndisplay_tree(\n ('<start>', [('<A>', [('<A>', [('<A>', []), ('a', [])]), ('a', [])]),\n ('a', [])]))",
"_____no_output_____"
]
],
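[
[
"Whether a nonterminal is *directly* left-recursive can be checked mechanically. Here is a small helper sketch (our own illustration, not used by the parsers below); it assumes that rules are plain strings, as in `LR_GRAMMAR`:",
"_____no_output_____"
]
],
[
[
"def is_directly_left_recursive(grammar, key):\n    # Check whether any alternative of `key` starts with `key` itself.\n    return any(rule.startswith(key) for rule in grammar[key])\n\nis_directly_left_recursive(LR_GRAMMAR, '<A>')  # True",
"_____no_output_____"
]
],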
[
[
"A grammar is indirectly left-recursive if any\nof the left-most symbols can be expanded using their definitions to\nproduce the nonterminal as the left-most symbol of the expansion. The left\nrecursion is called a _hidden-left-recursion_ if during the series of\nexpansions of a nonterminal, one reaches a rule where the rule contains\nthe same nonterminal after a prefix of other symbols, and these symbols can\nderive the empty string. For example, in `A1_GRAMMAR`, `<integer>` will be\nconsidered hidden-left recursive if `<digit>` could derive an empty string.\n\nRight recursive grammars are defined similarly.\nBelow is the derivation tree for the right recursive grammar that represents the same\nlanguage as that of `LR_GRAMMAR`.",
"_____no_output_____"
]
],
[
[
"RR_GRAMMAR: Grammar = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', ''],\n}",
"_____no_output_____"
],
[
"syntax_diagram(RR_GRAMMAR)",
"start\n"
],
[
"display_tree(('<start>', [('<A>', [\n ('a', []), ('<A>', [('a', []), ('<A>', [('a', []), ('<A>', [])])])])]\n ))",
"_____no_output_____"
]
],
[
[
"#### Ambiguity\n\nTo complicate matters further, there could be\nmultiple derivation trees – also called _parses_ – corresponding to the\nsame string from the same grammar. For example, a string `1+2+3` can be parsed\nin two ways as we see below using the `A1_GRAMMAR`",
"_____no_output_____"
]
],
[
[
"mystring = '1+2+3'\ntree = ('<start>',\n [('<expr>',\n [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>',\n [('<digit>', [('2', [])])])])]), ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"tree = ('<start>',\n [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>',\n [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('3',\n [])])])])])])])\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"There are many ways to resolve ambiguities. One approach taken by *Parsing Expression Grammars* explained in the next section is to specify a particular order of resolution, and choose the first one. Another approach is to simply return all possible derivation trees, which is the approach taken by *Earley parser* we develop later.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"## A Parser Class",
"_____no_output_____"
],
[
"Next, we develop different parsers. To do that, we define a minimal interface for parsing that is obeyed by all parsers. There are two approaches to parsing a string using a grammar.\n\n1. The traditional approach is to use a *lexer* (also called a *tokenizer* or a *scanner*) to first tokenize the incoming string, and feed the grammar one token at a time. The lexer is typically a smaller parser that accepts a *regular language*. The advantage of this approach is that the grammar used by the parser can eschew the details of tokenization. Further, one gets a shallow derivation tree at the end of the parsing which can be directly used for generating the *Abstract Syntax Tree*.\n2. The second approach is to use a tree pruner after the complete parse. With this approach, one uses a grammar that incorporates complete details of the syntax. Next, the nodes corresponding to tokens are pruned and replaced with their corresponding strings as leaf nodes. The utility of this approach is that the parser is more powerful, and further there is no artificial distinction between *lexing* and *parsing*.\n\nIn this chapter, we use the second approach. This approach is implemented in the `prune_tree` method.",
"_____no_output_____"
],
[
"The *Parser* class we define below provides the minimal interface. The main methods that need to be implemented by the classes implementing this interface are `parse_prefix` and `parse`. The `parse_prefix` returns a tuple, which contains the index until which parsing was completed successfully, and the parse forest until that index. The method `parse` returns a list of derivation trees if the parse was successful.",
"_____no_output_____"
]
],
[
[
"class Parser:\n \"\"\"Base class for parsing.\"\"\"\n\n def __init__(self, grammar: Grammar, *,\n start_symbol: str = START_SYMBOL,\n log: bool = False,\n coalesce: bool = True,\n tokens: Set[str] = set()) -> None:\n \"\"\"Constructor.\n `grammar` is the grammar to be used for parsing.\n Keyword arguments:\n `start_symbol` is the start symbol (default: '<start>').\n `log` enables logging (default: False).\n `coalesce` defines if tokens should be coalesced (default: True).\n `tokens`, if set, is a set of tokens to be used.\"\"\"\n self._grammar = grammar\n self._start_symbol = start_symbol\n self.log = log\n self.coalesce_tokens = coalesce\n self.tokens = tokens\n\n def grammar(self) -> Grammar:\n \"\"\"Return the grammar of this parser.\"\"\"\n return self._grammar\n\n def start_symbol(self) -> str:\n \"\"\"Return the start symbol of this parser.\"\"\"\n return self._start_symbol\n\n def parse_prefix(self, text: str) -> Tuple[int, Iterable[DerivationTree]]:\n \"\"\"Return pair (cursor, forest) for longest prefix of text. \n To be defined in subclasses.\"\"\"\n raise NotImplementedError\n\n def parse(self, text: str) -> Iterable[DerivationTree]:\n \"\"\"Parse `text` using the grammar. \n Return an iterable of parse trees.\"\"\"\n cursor, forest = self.parse_prefix(text)\n if cursor < len(text):\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n return [self.prune_tree(tree) for tree in forest]\n\n def parse_on(self, text: str, start_symbol: str) -> Generator:\n old_start = self._start_symbol\n try:\n self._start_symbol = start_symbol\n yield from self.parse(text)\n finally:\n self._start_symbol = old_start\n\n def coalesce(self, children: List[DerivationTree]) -> List[DerivationTree]:\n last = ''\n new_lst: List[DerivationTree] = []\n for cn, cc in children:\n if cn not in self._grammar:\n last += cn\n else:\n if last:\n new_lst.append((last, []))\n last = ''\n new_lst.append((cn, cc))\n if last:\n new_lst.append((last, []))\n return new_lst\n\n def prune_tree(self, tree: DerivationTree) -> DerivationTree:\n name, children = tree\n assert isinstance(children, list)\n\n if self.coalesce_tokens:\n children = self.coalesce(cast(List[DerivationTree], children))\n if name in self.tokens:\n return (name, [(tree_to_string(tree), [])])\n else:\n return (name, [self.prune_tree(c) for c in children])",
"_____no_output_____"
]
],
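[
[
"Although this `Parser` is abstract (`parse_prefix()` is not yet implemented), its helper methods can already be exercised. The small sketch below (our own illustration) shows how `coalesce()` merges adjacent terminal children, and how declaring `<integer>` as a token makes `prune_tree()` collapse the corresponding subtree into a single leaf:",
"_____no_output_____"
]
],
[
[
"p = Parser(EXPR_GRAMMAR, tokens={'<integer>'})\np.coalesce([('1', []), ('2', []), ('<digit>', [('3', [])])])\n# [('12', []), ('<digit>', [('3', [])])]",
"_____no_output_____"
],
[
"p.prune_tree(('<integer>', [('<digit>', [('1', [])]),\n                            ('<integer>', [('<digit>', [('2', [])])])]))\n# ('<integer>', [('12', [])])",
"_____no_output_____"
]
],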
[
[
"### Excursion: Canonical Grammars",
"_____no_output_____"
],
[
"The `EXPR_GRAMMAR` we import from the [chapter on grammars](Grammars.ipynb) is oriented towards generation. In particular, the production rules are stored as strings. We need to massage this representation a little to conform to a _canonical representation_ where each token in a rule is represented separately. The `canonical` format uses separate tokens to represent each symbol in an expansion.",
"_____no_output_____"
]
],
[
[
"CanonicalGrammar = Dict[str, List[List[str]]]",
"_____no_output_____"
],
[
"import re",
"_____no_output_____"
],
[
"def single_char_tokens(grammar: Grammar) -> Dict[str, List[List[Collection[str]]]]:\n g_ = {}\n for key in grammar:\n rules_ = []\n for rule in grammar[key]:\n rule_ = []\n for token in rule:\n if token in grammar:\n rule_.append(token)\n else:\n rule_.extend(token)\n rules_.append(rule_)\n g_[key] = rules_\n return g_",
"_____no_output_____"
],
[
"def canonical(grammar: Grammar) -> CanonicalGrammar:\n def split(expansion):\n if isinstance(expansion, tuple):\n expansion = expansion[0]\n\n return [token for token in re.split(\n RE_NONTERMINAL, expansion) if token]\n\n return {\n k: [split(expression) for expression in alternatives]\n for k, alternatives in grammar.items()\n }",
"_____no_output_____"
],
[
"CE_GRAMMAR: CanonicalGrammar = canonical(EXPR_GRAMMAR)\nCE_GRAMMAR",
"_____no_output_____"
]
],
[
[
"We also provide a convenience method for easier display of canonical grammars.",
"_____no_output_____"
]
],
[
[
"def recurse_grammar(grammar, key, order):\n rules = sorted(grammar[key])\n old_len = len(order)\n for rule in rules:\n for token in rule:\n if token not in grammar: continue\n if token not in order:\n order.append(token)\n new = order[old_len:]\n for ckey in new:\n recurse_grammar(grammar, ckey, order)",
"_____no_output_____"
],
[
"def show_grammar(grammar, start_symbol=START_SYMBOL):\n order = [start_symbol]\n recurse_grammar(grammar, start_symbol, order)\n return {k: sorted(grammar[k]) for k in order}",
"_____no_output_____"
],
[
"show_grammar(CE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"We provide a way to revert a canonical expression.",
"_____no_output_____"
]
],
[
[
"def non_canonical(grammar):\n new_grammar = {}\n for k in grammar:\n rules = grammar[k]\n new_rules = []\n for rule in rules:\n new_rules.append(''.join(rule))\n new_grammar[k] = new_rules\n return new_grammar",
"_____no_output_____"
],
[
"non_canonical(CE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"It is easier to work with the `canonical` representation during parsing. Hence, we update our parser class to store the `canonical` representation also.",
"_____no_output_____"
]
],
[
[
"class Parser(Parser):\n def __init__(self, grammar, **kwargs):\n self._start_symbol = kwargs.get('start_symbol', START_SYMBOL)\n self.log = kwargs.get('log', False)\n self.tokens = kwargs.get('tokens', set())\n self.coalesce_tokens = kwargs.get('coalesce', True)\n canonical_grammar = kwargs.get('canonical', False)\n if canonical_grammar:\n self.cgrammar = single_char_tokens(grammar)\n self._grammar = non_canonical(grammar)\n else:\n self._grammar = dict(grammar)\n self.cgrammar = single_char_tokens(canonical(grammar))\n # we do not require a single rule for the start symbol\n if len(grammar.get(self._start_symbol, [])) != 1:\n self.cgrammar['<>'] = [[self._start_symbol]]",
"_____no_output_____"
]
],
[
[
"We update the `prune_tree()` to account for the phony start symbol if it was insserted.",
"_____no_output_____"
]
],
[
[
"class Parser(Parser):\n def prune_tree(self, tree):\n name, children = tree\n if name == '<>':\n assert len(children) == 1\n return self.prune_tree(children[0])\n if self.coalesce_tokens:\n children = self.coalesce(children)\n if name in self.tokens:\n return (name, [(tree_to_string(tree), [])])\n else:\n return (name, [self.prune_tree(c) for c in children])",
"_____no_output_____"
]
],
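[
[
"To see the phony start symbol in action, here is a small sketch using a hypothetical grammar whose start symbol has two alternatives. Note how `<>` shows up in the canonical grammar, and how `prune_tree()` removes it again:",
"_____no_output_____"
]
],
[
[
"multi_start_grammar: Grammar = {'<start>': ['a', 'b']}  # hypothetical example\np = Parser(multi_start_grammar)\np.cgrammar['<>'], p.prune_tree(('<>', [('<start>', [('a', [])])]))\n# ([['<start>']], ('<start>', [('a', [])]))",
"_____no_output_____"
]
],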
[
[
"### End of Excursion",
"_____no_output_____"
],
[
"## Parsing Expression Grammars\n\nA _[Parsing Expression Grammar](http://bford.info/pub/lang/peg)_ (*PEG*) \\cite{Ford2004} is a type of _recognition based formal grammar_ that specifies the sequence of steps to take to parse a given string.\nA _parsing expression grammar_ is very similar to a _context-free grammar_ (*CFG*) such as the ones we saw in the [chapter on grammars](Grammars.ipynb). As in a CFG, a parsing expression grammar is represented by a set of nonterminals and corresponding alternatives representing how to match each. For example, here is a PEG that matches `a` or `b`.",
"_____no_output_____"
]
],
[
[
"PEG1 = {\n '<start>': ['a', 'b']\n}",
"_____no_output_____"
]
],
[
[
"However, unlike the _CFG_, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed. For example, the below _PEG_ can match `ab` but not `abc` unlike a _CFG_ which will match both. (We call the sequence of ordered choice expressions *choice expressions* rather than alternatives to make the distinction from _CFG_ clear.)",
"_____no_output_____"
]
],
[
[
"PEG2 = {\n '<start>': ['ab', 'abc']\n}",
"_____no_output_____"
]
],
[
[
"Each choice in a _choice expression_ represents a rule on how to satisfy that particular choice. The choice is a sequence of symbols (terminals and nonterminals) that are matched against a given text as in a _CFG_.",
"_____no_output_____"
],
[
"Beyond the syntax of grammar definitions we have seen so far, a _PEG_ can also contain a few additional elements. See the exercises at the end of the chapter for additional information.\n\nThe PEGs model the typical practice in handwritten recursive descent parsers, and hence it may be considered more intuitive to understand.",
"_____no_output_____"
],
[
"### The Packrat Parser for Predicate Expression Grammars\n\nShort of hand rolling a parser, _Packrat_ parsing is one of the simplest parsing techniques, and is one of the techniques for parsing PEGs.\nThe _Packrat_ parser is so named because it tries to cache all results from simpler problems in the hope that these solutions can be used to avoid re-computation later. We develop a minimal _Packrat_ parser next.",
"_____no_output_____"
],
[
"We derive from the `Parser` base class first, and we accept the text to be parsed in the `parse()` method, which in turn calls `unify_key()` with the `start_symbol`.\n\n__Note.__ While our PEG parser can produce only a single unambiguous parse tree, other parsers can produce multiple parses for ambiguous grammars. Hence, we return a list of trees (in this case with a single element).",
"_____no_output_____"
]
],
[
[
"class PEGParser(Parser):\n def parse_prefix(self, text):\n cursor, tree = self.unify_key(self.start_symbol(), text, 0)\n return cursor, [tree]",
"_____no_output_____"
]
],
[
[
"### Excursion: Implementing `PEGParser`",
"_____no_output_____"
],
[
"#### Unify Key\nThe `unify_key()` algorithm is simple. If given a terminal symbol, it tries to match the symbol with the current position in the text. If the symbol and text match, it returns successfully with the new parse index `at`.\n\nIf on the other hand, it was given a nonterminal, it retrieves the choice expression corresponding to the key, and tries to match each choice *in order* using `unify_rule()`. If **any** of the rules succeed in being unified with the given text, the parse is considered a success, and we return with the new parse index returned by `unify_rule()`.",
"_____no_output_____"
]
],
[
[
"class PEGParser(PEGParser):\n \"\"\"Packrat parser for Parsing Expression Grammars (PEGs).\"\"\"\n\n def unify_key(self, key, text, at=0):\n if self.log:\n print(\"unify_key: %s with %s\" % (repr(key), repr(text[at:])))\n if key not in self.cgrammar:\n if text[at:].startswith(key):\n return at + len(key), (key, [])\n else:\n return at, None\n for rule in self.cgrammar[key]:\n to, res = self.unify_rule(rule, text, at)\n if res is not None:\n return (to, (key, res))\n return 0, None",
"_____no_output_____"
],
[
"mystring = \"1\"\npeg = PEGParser(EXPR_GRAMMAR, log=True)\npeg.unify_key('1', mystring)",
"unify_key: '1' with '1'\n"
],
[
"mystring = \"2\"\npeg.unify_key('1', mystring)",
"unify_key: '1' with '2'\n"
]
],
[
[
"#### Unify Rule\n\nThe `unify_rule()` method is similar. It retrieves the tokens corresponding to the rule that it needs to unify with the text, and calls `unify_key()` on them in sequence. If **all** tokens are successfully unified with the text, the parse is a success.",
"_____no_output_____"
]
],
[
[
"class PEGParser(PEGParser):\n def unify_rule(self, rule, text, at):\n if self.log:\n print('unify_rule: %s with %s' % (repr(rule), repr(text[at:])))\n results = []\n for token in rule:\n at, res = self.unify_key(token, text, at)\n if res is None:\n return at, None\n results.append(res)\n return at, results",
"_____no_output_____"
],
[
"mystring = \"0\"\npeg = PEGParser(EXPR_GRAMMAR, log=True)\npeg.unify_rule(peg.cgrammar['<digit>'][0], mystring, 0)",
"unify_rule: ['0'] with '0'\nunify_key: '0' with '0'\n"
],
[
"mystring = \"12\"\npeg.unify_rule(peg.cgrammar['<integer>'][0], mystring, 0)",
"unify_rule: ['<digit>', '<integer>'] with '12'\nunify_key: '<digit>' with '12'\nunify_rule: ['0'] with '12'\nunify_key: '0' with '12'\nunify_rule: ['1'] with '12'\nunify_key: '1' with '12'\nunify_key: '<integer>' with '2'\nunify_rule: ['<digit>', '<integer>'] with '2'\nunify_key: '<digit>' with '2'\nunify_rule: ['0'] with '2'\nunify_key: '0' with '2'\nunify_rule: ['1'] with '2'\nunify_key: '1' with '2'\nunify_rule: ['2'] with '2'\nunify_key: '2' with '2'\nunify_key: '<integer>' with ''\nunify_rule: ['<digit>', '<integer>'] with ''\nunify_key: '<digit>' with ''\nunify_rule: ['0'] with ''\nunify_key: '0' with ''\nunify_rule: ['1'] with ''\nunify_key: '1' with ''\nunify_rule: ['2'] with ''\nunify_key: '2' with ''\nunify_rule: ['3'] with ''\nunify_key: '3' with ''\nunify_rule: ['4'] with ''\nunify_key: '4' with ''\nunify_rule: ['5'] with ''\nunify_key: '5' with ''\nunify_rule: ['6'] with ''\nunify_key: '6' with ''\nunify_rule: ['7'] with ''\nunify_key: '7' with ''\nunify_rule: ['8'] with ''\nunify_key: '8' with ''\nunify_rule: ['9'] with ''\nunify_key: '9' with ''\nunify_rule: ['<digit>'] with ''\nunify_key: '<digit>' with ''\nunify_rule: ['0'] with ''\nunify_key: '0' with ''\nunify_rule: ['1'] with ''\nunify_key: '1' with ''\nunify_rule: ['2'] with ''\nunify_key: '2' with ''\nunify_rule: ['3'] with ''\nunify_key: '3' with ''\nunify_rule: ['4'] with ''\nunify_key: '4' with ''\nunify_rule: ['5'] with ''\nunify_key: '5' with ''\nunify_rule: ['6'] with ''\nunify_key: '6' with ''\nunify_rule: ['7'] with ''\nunify_key: '7' with ''\nunify_rule: ['8'] with ''\nunify_key: '8' with ''\nunify_rule: ['9'] with ''\nunify_key: '9' with ''\nunify_rule: ['<digit>'] with '2'\nunify_key: '<digit>' with '2'\nunify_rule: ['0'] with '2'\nunify_key: '0' with '2'\nunify_rule: ['1'] with '2'\nunify_key: '1' with '2'\nunify_rule: ['2'] with '2'\nunify_key: '2' with '2'\n"
],
[
"mystring = \"1 + 2\"\npeg = PEGParser(EXPR_GRAMMAR, log=False)\npeg.parse(mystring)",
"_____no_output_____"
]
],
[
[
"The two methods are mutually recursive, and given that `unify_key()` tries each alternative until it succeeds, `unify_key` can be called multiple times with the same arguments. Hence, it is important to memoize the results of `unify_key`. Python provides a simple decorator `lru_cache` for memoizing any function call that has hashable arguments. We add that to our implementation so that repeated calls to `unify_key()` with the same argument get cached results.\n\nThis memoization gives the algorithm its name – _Packrat_.",
"_____no_output_____"
]
],
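[
[
"To illustrate the effect of memoization, here is a tiny sketch that is independent of the parser: with `lru_cache`, the naive recursive Fibonacci function below is invoked only once per distinct argument.",
"_____no_output_____"
]
],
[
[
"from functools import lru_cache\n\ncalls = 0\n\n@lru_cache(maxsize=None)\ndef fib(n):\n    global calls\n    calls += 1\n    return n if n < 2 else fib(n - 1) + fib(n - 2)\n\nfib(20), calls  # (6765, 21): one invocation per distinct argument",
"_____no_output_____"
]
],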
[
[
"from functools import lru_cache",
"_____no_output_____"
],
[
"class PEGParser(PEGParser):\n @lru_cache(maxsize=None)\n def unify_key(self, key, text, at=0):\n if key not in self.cgrammar:\n if text[at:].startswith(key):\n return at + len(key), (key, [])\n else:\n return at, None\n for rule in self.cgrammar[key]:\n to, res = self.unify_rule(rule, text, at)\n if res is not None:\n return (to, (key, res))\n return 0, None",
"_____no_output_____"
]
],
[
[
"We wrap initialization and calling of `PEGParser` in a method `parse()` already implemented in the `Parser` base class that accepts the text to be parsed along with the grammar.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"Here are a few examples of our parser in action.",
"_____no_output_____"
]
],
[
[
"mystring = \"1 + (2 * 3)\"\npeg = PEGParser(EXPR_GRAMMAR)\nfor tree in peg.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
],
[
"mystring = \"1 * (2 + 3.35)\"\nfor tree in peg.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
]
],
[
[
"One should be aware that while the grammar looks like a *CFG*, the language described by a *PEG* may be different. Indeed, only *LL(1)* grammars are guaranteed to represent the same language for both PEGs and other parsers. Behavior of PEGs for other classes of grammars could be surprising \\cite{redziejowski2008}. ",
"_____no_output_____"
],
[
"## Parsing Context-Free Grammars",
"_____no_output_____"
],
[
"### Problems with PEG\nWhile _PEGs_ are simple at first sight, their behavior in some cases might be a bit unintuitive. For example, here is an example \\cite{redziejowski2008}:",
"_____no_output_____"
]
],
[
[
"PEG_SURPRISE: Grammar = {\n \"<A>\": [\"a<A>a\", \"aa\"]\n}",
"_____no_output_____"
]
],
[
[
"When interpreted as a *CFG* and used as a string generator, it will produce strings of the form `aa, aaaa, aaaaaa` that is, it produces strings where the number of `a` is $ 2*n $ where $ n > 0 $.",
"_____no_output_____"
]
],
[
[
"strings = []\nfor nn in range(4):\n f = GrammarFuzzer(PEG_SURPRISE, start_symbol='<A>')\n tree = ('<A>', None)\n for _ in range(nn):\n tree = f.expand_tree_once(tree)\n tree = f.expand_tree_with_strategy(tree, f.expand_node_min_cost)\n strings.append(tree_to_string(tree))\n display_tree(tree)\nstrings",
"_____no_output_____"
]
],
[
[
"However, the _PEG_ parser can only recognize strings of the form $2^n$",
"_____no_output_____"
]
],
[
[
"peg = PEGParser(PEG_SURPRISE, start_symbol='<A>')\nfor s in strings:\n with ExpectError():\n for tree in peg.parse(s):\n display_tree(tree)\n print(s)",
"aa\naaaa\naaaaaaaa\n"
]
],
[
[
"This is not the only problem with _Parsing Expression Grammars_. While *PEGs* are expressive and the *packrat* parser for parsing them is simple and intuitive, *PEGs* suffer from a major deficiency for our purposes. *PEGs* are oriented towards language recognition, and it is not clear how to translate an arbitrary *PEG* to a *CFG*. As we mentioned earlier, a naive re-interpretation of a *PEG* as a *CFG* does not work very well. Further, it is not clear what is the exact relation between the class of languages represented by *PEG* and the class of languages represented by *CFG*. Since our primary focus is *fuzzing* – that is _generation_ of strings – , we next look at _parsers that can accept context-free grammars_.",
"_____no_output_____"
],
[
"The general idea of *CFG* parser is the following: Peek at the input text for the allowed number of characters, and use these, and our parser state to determine which rules can be applied to complete parsing. We next look at a typical *CFG* parsing algorithm, the Earley Parser.",
"_____no_output_____"
],
[
"### The Earley Parser",
"_____no_output_____"
],
[
"The Earley parser is a general parser that is able to parse any arbitrary *CFG*. It was invented by Jay Earley \\cite{Earley1970} for use in computational linguistics. While its computational complexity is $O(n^3)$ for parsing strings with arbitrary grammars, it can parse strings with unambiguous grammars in $O(n^2)$ time, and all *[LR(k)](https://en.wikipedia.org/wiki/LR_parser)* grammars in linear time ($O(n)$ \\cite{Leo1991}). Further improvements – notably handling epsilon rules – were invented by Aycock et al. \\cite{Aycock2002}.",
"_____no_output_____"
],
[
"Note that one restriction of our implementation is that the start symbol can have only one alternative in its alternative expressions. This is not a restriction in practice because any grammar with multiple alternatives for its start symbol can be extended with a new start symbol that has the original start symbol as its only choice. That is, given a grammar as below,\n\n```\ngrammar = {\n '<start>': ['<A>', '<B>'],\n ...\n}\n```\none may rewrite it as below to conform to the *single-alternative* rule.\n```\ngrammar = {\n '<start>': ['<start_>'],\n '<start_>': ['<A>', '<B>'],\n ...\n}\n```",
"_____no_output_____"
],
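[
"Such a rewrite can also be done mechanically. Here is a small helper sketch (the name `ensure_single_start()` is our own, not part of the parser):\n\n```python\ndef ensure_single_start(grammar, start_symbol='<start>'):\n    # Wrap a multi-alternative start symbol in a fresh symbol\n    # with a single alternative.\n    if len(grammar[start_symbol]) == 1:\n        return grammar\n    new_grammar = dict(grammar)\n    new_grammar['<start_>'] = grammar[start_symbol]\n    new_grammar[start_symbol] = ['<start_>']\n    return new_grammar\n```",
"_____no_output_____"
],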
[
"Let us implement a class `EarleyParser`, again derived from `Parser` which implements an Earley parser.",
"_____no_output_____"
],
[
"### Excursion: Implementing `EarleyParser`",
"_____no_output_____"
],
[
"We first implement a simpler parser that is a parser for nearly all *CFGs*, but not quite. In particular, our parser does not understand _epsilon rules_ – rules that derive empty string. We show later how the parser can be extended to handle these.",
"_____no_output_____"
],
[
"We use the following grammar in our examples below.",
"_____no_output_____"
]
],
[
[
"SAMPLE_GRAMMAR: Grammar = {\n '<start>': ['<A><B>'],\n '<A>': ['a<B>c', 'a<A>'],\n '<B>': ['b<C>', '<D>'],\n '<C>': ['c'],\n '<D>': ['d']\n}\nC_SAMPLE_GRAMMAR = canonical(SAMPLE_GRAMMAR)",
"_____no_output_____"
],
[
"syntax_diagram(SAMPLE_GRAMMAR)",
"start\n"
]
],
[
[
"The basic idea of Earley parsing is the following:\n\n* Start with the alternative expressions corresponding to the START_SYMBOL. These represent the possible ways to parse the string from a high level. Essentially each expression represents a parsing path. Queue each expression in our set of possible parses of the string. The parsed index of an expression is the part of expression that has already been recognized. In the beginning of parse, the parsed index of all expressions is at the beginning. Further, each letter gets a queue of expressions that recognizes that letter at that point in our parse.\n* Examine our queue of possible parses and check if any of them start with a nonterminal. If it does, then that nonterminal needs to be recognized from the input before the given rule can be parsed. Hence, add the alternative expressions corresponding to the nonterminal to the queue. Do this recursively.\n* At this point, we are ready to advance. Examine the current letter in the input, and select all expressions that have that particular letter at the parsed index. These expressions can now advance one step. Advance these selected expressions by incrementing their parsed index and add them to the queue of expressions in line for recognizing the next input letter.\n* If while doing these things, we find that any of the expressions have finished parsing, we fetch its corresponding nonterminal, and advance all expressions that have that nonterminal at their parsed index.\n* Continue this procedure recursively until all expressions that we have queued for the current letter have been processed. Then start processing the queue for the next letter.\n\nWe explain each step in detail with examples in the coming sections.",
"_____no_output_____"
],
[
"The parser uses dynamic programming to generate a table containing a _forest of possible parses_ at each letter index – the table contains as many columns as there are letters in the input, and each column contains different parsing rules at various stages of the parse.\n\nFor example, given an input `adcd`, the Column 0 would contain the following:\n```\n<start> : ● <A> <B>\n```\nwhich is the starting rule that indicates that we are currently parsing the rule `<start>`, and the parsing state is just before identifying the symbol `<A>`. It would also contain the following which are two alternative paths it could take to complete the parsing.\n\n```\n<A> : ● a <B> c\n<A> : ● a <A>\n```",
"_____no_output_____"
],
[
"Column 1 would contain the following, which represents the possible completion after reading `a`.\n```\n<A> : a ● <B> c\n<A> : a ● <A>\n<B> : ● b <C>\n<B> : ● <D>\n<A> : ● a <B> c\n<A> : ● a <A>\n<D> : ● d\n```",
"_____no_output_____"
],
[
"Column 2 would contain the following after reading `d`\n```\n<D> : d ●\n<B> : <D> ●\n<A> : a <B> ● c\n```",
"_____no_output_____"
],
[
"Similarly, Column 3 would contain the following after reading `c`\n```\n<A> : a <B> c ●\n<start> : <A> ● <B>\n<B> : ● b <C>\n<B> : ● <D>\n<D> : ● d\n```",
"_____no_output_____"
],
[
"Finally, Column 4 would contain the following after reading `d`, with the `●` at the end of the `<start>` rule indicating that the parse was successful.\n```\n<D> : d ●\n<B> : <D> ●\n<start> : <A> <B> ●\n```",
"_____no_output_____"
],
[
"As you can see from above, we are essentially filling a table (a table is also called a **chart**) of entries based on each letter we read, and the grammar rules that can be applied. This chart gives the parser its other name -- Chart parsing.",
"_____no_output_____"
],
[
"#### Columns\n\nWe define the `Column` first. The `Column` is initialized by its own `index` in the input string, and the `letter` at that index. Internally, we also keep track of the states that are added to the column as the parsing progresses.",
"_____no_output_____"
]
],
[
[
"class Column:\n def __init__(self, index, letter):\n self.index, self.letter = index, letter\n self.states, self._unique = [], {}\n\n def __str__(self):\n return \"%s chart[%d]\\n%s\" % (self.letter, self.index, \"\\n\".join(\n str(state) for state in self.states if state.finished()))",
"_____no_output_____"
]
],
[
[
"The `Column` only stores unique `states`. Hence, when a new `state` is `added` to our `Column`, we check whether it is already known.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def add(self, state):\n if state in self._unique:\n return self._unique[state]\n self._unique[state] = state\n self.states.append(state)\n state.e_col = self\n return self._unique[state]",
"_____no_output_____"
]
],
[
[
"#### Items\n\nAn item represents a _parse in progress for a specific rule._ Hence the item contains the name of the nonterminal, and the corresponding alternative expression (`expr`) which together form the rule, and the current position of parsing in this expression -- `dot`.\n\n\n**Note.** If you are familiar with [LR parsing](https://en.wikipedia.org/wiki/LR_parser), you will notice that an item is simply an `LR0` item.",
"_____no_output_____"
]
],
[
[
"class Item:\n def __init__(self, name, expr, dot):\n self.name, self.expr, self.dot = name, expr, dot",
"_____no_output_____"
]
],
[
[
"We also provide a few convenience methods. The method `finished()` checks if the `dot` has moved beyond the last element in `expr`. The method `advance()` produces a new `Item` with the `dot` advanced one token, and represents an advance of the parsing. The method `at_dot()` returns the current symbol being parsed.",
"_____no_output_____"
]
],
[
[
"class Item(Item):\n def finished(self):\n return self.dot >= len(self.expr)\n\n def advance(self):\n return Item(self.name, self.expr, self.dot + 1)\n\n def at_dot(self):\n return self.expr[self.dot] if self.dot < len(self.expr) else None",
"_____no_output_____"
]
],
[
[
"Here is how an item could be used. We first define our item",
"_____no_output_____"
]
],
[
[
"item_name = '<B>'\nitem_expr = C_SAMPLE_GRAMMAR[item_name][1]\nan_item = Item(item_name, tuple(item_expr), 0)",
"_____no_output_____"
]
],
[
[
"To determine where the status of parsing, we use `at_dot()`",
"_____no_output_____"
]
],
[
[
"an_item.at_dot()",
"_____no_output_____"
]
],
[
[
"That is, the next symbol to be parsed is `<D>`",
"_____no_output_____"
],
[
"If we advance the item, we get another item that represents the finished parsing rule `<B>`.",
"_____no_output_____"
]
],
[
[
"another_item = an_item.advance()",
"_____no_output_____"
],
[
"another_item.finished()",
"_____no_output_____"
]
],
[
[
"#### States\n\nFor `Earley` parsing, the state of the parsing is simply one `Item` along with some meta information such as the starting `s_col` and ending column `e_col` for each state. Hence we inherit from `Item` to create a `State`.\nSince we are interested in comparing states, we define `hash()` and `eq()` with the corresponding methods.",
"_____no_output_____"
]
],
[
[
"class State(Item):\n def __init__(self, name, expr, dot, s_col, e_col=None):\n super().__init__(name, expr, dot)\n self.s_col, self.e_col = s_col, e_col\n\n def __str__(self):\n def idx(var):\n return var.index if var else -1\n\n return self.name + ':= ' + ' '.join([\n str(p)\n for p in [*self.expr[:self.dot], '|', *self.expr[self.dot:]]\n ]) + \"(%d,%d)\" % (idx(self.s_col), idx(self.e_col))\n\n def copy(self):\n return State(self.name, self.expr, self.dot, self.s_col, self.e_col)\n\n def _t(self):\n return (self.name, self.expr, self.dot, self.s_col.index)\n\n def __hash__(self):\n return hash(self._t())\n\n def __eq__(self, other):\n return self._t() == other._t()\n\n def advance(self):\n return State(self.name, self.expr, self.dot + 1, self.s_col)",
"_____no_output_____"
]
],
[
[
"The usage of `State` is similar to that of `Item`. The only difference is that it is used along with the `Column` to track the parsing state. For example, we initialize the first column as follows:",
"_____no_output_____"
]
],
[
[
"col_0 = Column(0, None)\nitem_tuple = tuple(*C_SAMPLE_GRAMMAR[START_SYMBOL])\nstart_state = State(START_SYMBOL, item_tuple, 0, col_0)\ncol_0.add(start_state)\nstart_state.at_dot()",
"_____no_output_____"
]
],
[
[
"The first column is then updated by using `add()` method of `Column`",
"_____no_output_____"
]
],
[
[
"sym = start_state.at_dot()\nfor alt in C_SAMPLE_GRAMMAR[sym]:\n col_0.add(State(sym, tuple(alt), 0, col_0))\nfor s in col_0.states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
[
[
"#### The Parsing Algorithm",
"_____no_output_____"
],
[
"The _Earley_ algorithm starts by initializing the chart with columns (as many as there are letters in the input). We also seed the first column with a state representing the expression corresponding to the start symbol. In our case, the state corresponds to the start symbol with the `dot` at `0` is represented as below. The `●` symbol represents the parsing status. In this case, we have not parsed anything.\n```\n<start>: ● <A> <B>\n```\nWe pass this partial chart to a method for filling the rest of the parse chart.",
"_____no_output_____"
],
[
"Before starting to parse, we seed the chart with the state representing the ongoing parse of the start symbol.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(Parser):\n \"\"\"Earley Parser. This parser can parse any context-free grammar.\"\"\"\n\n def __init__(self, grammar: Grammar, **kwargs) -> None:\n super().__init__(grammar, **kwargs)\n self.chart: List = [] # for type checking\n\n def chart_parse(self, words, start):\n alt = tuple(*self.cgrammar[start])\n chart = [Column(i, tok) for i, tok in enumerate([None, *words])]\n chart[0].add(State(start, alt, 0, chart[0]))\n return self.fill_chart(chart)",
"_____no_output_____"
]
],
[
[
"The main parsing loop in `fill_chart()` has three fundamental operations. `predict()`, `scan()`, and `complete()`. We discuss `predict` next.",
"_____no_output_____"
],
[
"#### Predicting States\n\nWe have already seeded `chart[0]` with a state `[<A>,<B>]` with `dot` at `0`. Next, given that `<A>` is a nonterminal, we `predict` the possible parse continuations of this state. That is, it could be either `a <B> c` or `A <A>`.\n\nThe general idea of `predict()` is as follows: Say you have a state with name `<A>` from the above grammar, and expression containing `[a,<B>,c]`. Imagine that you have seen `a` already, which means that the `dot` will be on `<B>`. Below, is a representation of our parse status. The left hand side of ● represents the portion already parsed (`a`), and the right hand side represents the portion yet to be parsed (`<B> c`).\n\n```\n<A>: a ● <B> c\n```",
"_____no_output_____"
],
[
"To recognize `<B>`, we look at the definition of `<B>`, which has different alternative expressions. The `predict()` step adds each of these alternatives to the set of states, with `dot` at `0`.\n\n```\n<A>: a ● <B> c\n<B>: ● b c\n<B>: ● <D>\n```\n\nIn essence, the `predict()` method, when called with the current nonterminal, fetches the alternative expressions corresponding to this nonterminal, and adds these as predicted _child_ states to the _current_ column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def predict(self, col, sym, state):\n for alt in self.cgrammar[sym]:\n col.add(State(sym, tuple(alt), 0, col))",
"_____no_output_____"
]
],
[
[
"To see how to use `predict`, we first construct the 0th column as before, and we assign the constructed column to an instance of the EarleyParser.",
"_____no_output_____"
]
],
[
[
"col_0 = Column(0, None)\ncol_0.add(start_state)\nep = EarleyParser(SAMPLE_GRAMMAR)\nep.chart = [col_0]",
"_____no_output_____"
]
],
[
[
"It should contain a single state -- `<start> at 0`",
"_____no_output_____"
]
],
[
[
"for s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n"
]
],
[
[
"We apply predict to fill out the 0th column, and the column should contain the possible parse paths.",
"_____no_output_____"
]
],
[
[
"ep.predict(col_0, '<A>', s)\nfor s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
[
[
"#### Scanning Tokens\n\nWhat if rather than a nonterminal, the state contained a terminal symbol such as a letter? In that case, we are ready to make some progress. For example, consider the second state:\n```\n<B>: ● b c\n```\nWe `scan` the next column's letter. Say the next token is `b`.\nIf the letter matches what we have, then create a new state by advancing the current state by one letter.\n\n```\n<B>: b ● c\n```\nThis new state is added to the next column (i.e the column that has the matched letter).",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def scan(self, col, state, letter):\n if letter == col.letter:\n col.add(state.advance())",
"_____no_output_____"
]
],
[
[
"As before, we construct the partial parse first, this time adding a new column so that we can observe the effects of `scan()`",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncol_1 = Column(1, 'a')\nep.chart = [col_0, col_1]",
"_____no_output_____"
],
[
"new_state = ep.chart[0].states[1]\nprint(new_state)",
"<A>:= | a <B> c(0,0)\n"
],
[
"ep.scan(col_1, new_state, 'a')\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n"
]
],
[
[
"#### Completing Processing\n\nWhen we advance, what if we actually `complete()` the processing of the current rule? If so, we want to update not just this state, but also all the _parent_ states from which this state was derived.\nFor example, say we have states as below.\n```\n<A>: a ● <B> c\n<B>: b c ● \n```\nThe state `<B>: b c ●` is now complete. So, we need to advance `<A>: a ● <B> c` one step forward.\n\nHow do we determine the parent states? Note from `predict` that we added the predicted child states to the _same_ column as that of the inspected state. Hence, we look at the starting column of the current state, with the same symbol `at_dot` as that of the name of the completed state.\n\nFor each such parent found, we advance that parent (because we have just finished parsing that non terminal for their `at_dot`) and add the new states to the current column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def complete(self, col, state):\n return self.earley_complete(col, state)\n\n def earley_complete(self, col, state):\n parent_states = [\n st for st in state.s_col.states if st.at_dot() == state.name\n ]\n for st in parent_states:\n col.add(st.advance())",
"_____no_output_____"
]
],
[
[
"Here is an example of completed processing. First we complete the Column 0",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncol_1 = Column(1, 'a')\ncol_2 = Column(2, 'd')\nep.chart = [col_0, col_1, col_2]\nep.predict(col_0, '<A>', s)\nfor s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
[
[
"Then we use `scan()` to populate Column 1",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[0].states:\n if state.at_dot() not in SAMPLE_GRAMMAR:\n ep.scan(col_1, state, 'a')\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n<A>:= a | <A>(0,1)\n"
],
[
"for state in ep.chart[1].states:\n if state.at_dot() in SAMPLE_GRAMMAR:\n ep.predict(col_1, state.at_dot(), state)\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n<A>:= a | <A>(0,1)\n<B>:= | b <C>(1,1)\n<B>:= | <D>(1,1)\n<A>:= | a <B> c(1,1)\n<A>:= | a <A>(1,1)\n<D>:= | d(1,1)\n"
]
],
[
[
"Then we use `scan()` again to populate Column 2",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[1].states:\n if state.at_dot() not in SAMPLE_GRAMMAR:\n ep.scan(col_2, state, state.at_dot())\n\nfor s in ep.chart[2].states:\n print(s)",
"<D>:= d |(1,2)\n"
]
],
[
[
"Now, we can use `complete()`:",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[2].states:\n if state.finished():\n ep.complete(col_2, state)\n\nfor s in ep.chart[2].states:\n print(s)",
"<D>:= d |(1,2)\n<B>:= <D> |(1,2)\n<A>:= a <B> | c(0,2)\n"
]
],
[
[
"#### Filling the Chart\n\nThe main driving loop in `fill_chart()` essentially calls these operations in order. We loop over each column in order.\n* For each column, fetch one state in the column at a time, and check if the state is `finished`. \n * If it is, then we `complete()` all the parent states depending on this state. \n* If the state was not finished, we check to see if the state's current symbol `at_dot` is a nonterminal. \n * If it is a nonterminal, we `predict()` possible continuations, and update the current column with these states. \n * If it was not, we `scan()` the next column and advance the current state if it matches the next letter.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def fill_chart(self, chart):\n for i, col in enumerate(chart):\n for state in col.states:\n if state.finished():\n self.complete(col, state)\n else:\n sym = state.at_dot()\n if sym in self.cgrammar:\n self.predict(col, sym, state)\n else:\n if i + 1 >= len(chart):\n continue\n self.scan(chart[i + 1], state, sym)\n if self.log:\n print(col, '\\n')\n return chart",
"_____no_output_____"
]
],
[
[
"We now can recognize a given string as belonging to a language represented by a grammar.",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR, log=True)\ncolumns = ep.chart_parse('adcd', START_SYMBOL)",
"None chart[0]\n \n\na chart[1]\n \n\nd chart[2]\n<D>:= d |(1,2)\n<B>:= <D> |(1,2) \n\nc chart[3]\n<A>:= a <B> c |(0,3) \n\nd chart[4]\n<D>:= d |(3,4)\n<B>:= <D> |(3,4)\n<start>:= <A> <B> |(0,4) \n\n"
]
],
[
[
"The chart we printed above only shows completed entries at each index. The parenthesized expression indicates the column just before the first character was recognized, and the ending column.\n\nNotice how the `<start>` nonterminal shows fully parsed status.",
"_____no_output_____"
]
],
[
[
"last_col = columns[-1]\nfor state in last_col.states:\n if state.name == '<start>':\n print(state)",
"<start>:= <A> <B> |(0,4)\n"
]
],
[
[
"Since `chart_parse()` returns the completed table, we now need to extract the derivation trees.",
"_____no_output_____"
],
[
"#### The Parse Method\n\nFor determining how far we have managed to parse, we simply look for the last index from `chart_parse()` where the `start_symbol` was found.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse_prefix(self, text):\n self.table = self.chart_parse(text, self.start_symbol())\n for col in reversed(self.table):\n states = [\n st for st in col.states if st.name == self.start_symbol()\n ]\n if states:\n return col.index, states\n return -1, []",
"_____no_output_____"
]
],
[
[
"Here is the `parse_prefix()` in action.",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncursor, last_states = ep.parse_prefix('adcd')\nprint(cursor, [str(s) for s in last_states])",
"4 ['<start>:= <A> <B> |(0,4)']\n"
]
],
[
[
"The following is adapted from the excellent reference on Earley parsing by [Loup Vaillant](http://loup-vaillant.fr/tutorials/earley-parsing/).\n",
"_____no_output_____"
],
[
"Our `parse()` method is as follows. It depends on two methods `parse_forest()` and `extract_trees()` that will be defined next.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse(self, text):\n cursor, states = self.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n\n forest = self.parse_forest(self.table, start)\n for tree in self.extract_trees(forest):\n yield self.prune_tree(tree)",
"_____no_output_____"
]
],
[
[
"#### Parsing Paths\n\nThe `parse_paths()` method tries to unify the given expression in `named_expr` with the parsed string. For that, it extracts the last symbol in `named_expr` and checks if it is a terminal symbol. If it is, then it checks the chart at `til` to see if the letter corresponding to the position matches the terminal symbol. If it does, extend our start index by the length of the symbol.\n\nIf the symbol was a nonterminal symbol, then we retrieve the parsed states at the current end column index (`til`) that correspond to the nonterminal symbol, and collect the start index. These are the end column indexes for the remaining expression.\n\nGiven our list of start indexes, we obtain the parse paths from the remaining expression. If we can obtain any, then we return the parse paths. If not, we return an empty list.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse_paths(self, named_expr, chart, frm, til):\n def paths(state, start, k, e):\n if not e:\n return [[(state, k)]] if start == frm else []\n else:\n return [[(state, k)] + r\n for r in self.parse_paths(e, chart, frm, start)]\n\n *expr, var = named_expr\n starts = None\n if var not in self.cgrammar:\n starts = ([(var, til - len(var),\n 't')] if til > 0 and chart[til].letter == var else [])\n else:\n starts = [(s, s.s_col.index, 'n') for s in chart[til].states\n if s.finished() and s.name == var]\n\n return [p for s, start, k in starts for p in paths(s, start, k, expr)]",
"_____no_output_____"
]
],
[
[
"Here is the `parse_paths()` in action",
"_____no_output_____"
]
],
[
[
"print(SAMPLE_GRAMMAR['<start>'])\nep = EarleyParser(SAMPLE_GRAMMAR)\ncompleted_start = last_states[0]\npaths = ep.parse_paths(completed_start.expr, columns, 0, 4)\nfor path in paths:\n print([list(str(s_) for s_ in s) for s in path])",
"['<A><B>']\n[['<B>:= <D> |(3,4)', 'n'], ['<A>:= a <B> c |(0,3)', 'n']]\n"
]
],
[
[
"That is, the parse path for `<start>` given the input `adcd` included recognizing the expression `<A><B>`. This was recognized by the two states: `<A>` from input(0) to input(2) which further involved recognizing the rule `a<B>c`, and the next state `<B>` from input(3) which involved recognizing the rule `<D>`.",
"_____no_output_____"
],
[
"#### Parsing Forests\n\nThe `parse_forest()` method takes the state which represents the completed parse, and determines the possible ways that its expressions corresponded to the parsed expression. For example, say we are parsing `1+2+3`, and the state has `[<expr>,+,<expr>]` in `expr`. It could have been parsed as either `[{<expr>:1+2},+,{<expr>:3}]` or `[{<expr>:1},+,{<expr>:2+3}]`.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def forest(self, s, kind, chart):\n return self.parse_forest(chart, s) if kind == 'n' else (s, [])\n\n def parse_forest(self, chart, state):\n pathexprs = self.parse_paths(state.expr, chart, state.s_col.index,\n state.e_col.index) if state.expr else []\n return state.name, [[(v, k, chart) for v, k in reversed(pathexpr)]\n for pathexpr in pathexprs]",
"_____no_output_____"
],
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\nresult = ep.parse_forest(columns, last_states[0])\nresult",
"_____no_output_____"
]
],
[
[
"#### Extracting Trees",
"_____no_output_____"
],
[
"What we have from `parse_forest()` is a forest of trees. We need to extract a single tree from that forest. That is accomplished as follows.",
"_____no_output_____"
],
[
"(For now, we return the first available derivation tree. To do that, we need to extract the parse forest from the state corresponding to `start`.)",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def extract_a_tree(self, forest_node):\n name, paths = forest_node\n if not paths:\n return (name, [])\n return (name, [self.extract_a_tree(self.forest(*p)) for p in paths[0]])\n\n def extract_trees(self, forest):\n yield self.extract_a_tree(forest)",
"_____no_output_____"
]
],
[
[
"We now verify that our parser can parse a given expression.",
"_____no_output_____"
]
],
[
[
"A3_GRAMMAR: Grammar = {\n \"<start>\": [\"<bexpr>\"],\n \"<bexpr>\": [\n \"<aexpr><gt><aexpr>\", \"<aexpr><lt><aexpr>\", \"<aexpr>=<aexpr>\",\n \"<bexpr>=<bexpr>\", \"<bexpr>&<bexpr>\", \"<bexpr>|<bexpr>\", \"(<bexrp>)\"\n ],\n \"<aexpr>\":\n [\"<aexpr>+<aexpr>\", \"<aexpr>-<aexpr>\", \"(<aexpr>)\", \"<integer>\"],\n \"<integer>\": [\"<digit><integer>\", \"<digit>\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"],\n \"<lt>\": ['<'],\n \"<gt>\": ['>']\n}",
"_____no_output_____"
],
[
"syntax_diagram(A3_GRAMMAR)",
"start\n"
],
[
"mystring = '(1+24)=33'\nparser = EarleyParser(A3_GRAMMAR)\nfor tree in parser.parse(mystring):\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"We now have a complete parser that can parse almost arbitrary *CFG*. There remains a small corner to fix -- the case of epsilon rules as we will see later.",
"_____no_output_____"
],
[
"#### Ambiguous Parsing",
"_____no_output_____"
],
[
"Ambiguous grammars are grammars that can produce multiple derivation trees for some given string. For example, the `A3_GRAMMAR` can parse `1+2+3` in two different ways – `[1+2]+3` and `1+[2+3]`.\n\nExtracting a single tree might be reasonable for unambiguous parses. However, what if the given grammar produces ambiguity when given a string? We need to extract all derivation trees in that case. We enhance our `extract_trees()` method to extract multiple derivation trees.",
"_____no_output_____"
]
],
[
[
"import itertools as I",
"_____no_output_____"
],
[
"class EarleyParser(EarleyParser):\n def extract_trees(self, forest_node):\n name, paths = forest_node\n if not paths:\n yield (name, [])\n\n for path in paths:\n ptrees = [self.extract_trees(self.forest(*p)) for p in path]\n for p in I.product(*ptrees):\n yield (name, p)",
"_____no_output_____"
]
],
[
[
"As before, we verify that everything works.",
"_____no_output_____"
]
],
[
[
"mystring = '1+2'\nparser = EarleyParser(A1_GRAMMAR)\nfor tree in parser.parse(mystring):\n assert mystring == tree_to_string(tree)\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"One can also use a `GrammarFuzzer` to verify that everything works.",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(A1_GRAMMAR)\nfor i in range(5):\n s = gf.fuzz()\n print(i, s)\n for tree in parser.parse(s):\n assert tree_to_string(tree) == s",
"0 045+3+2-9+7-7-5-1-449\n1 0+9+5-2+1-8+4-3+7+2\n2 76413\n3 9339\n4 62\n"
]
],
[
[
"#### The Aycock Epsilon Fix\n\nWhile parsing, one often requires to know whether a given nonterminal can derive an empty string. For example, in the following grammar A can derive an empty string, while B can't. The nonterminals that can derive an empty string are called _nullable_ nonterminals. For example, in the below grammar `E_GRAMMAR_1`, `<A>` is _nullable_, and since `<A>` is one of the alternatives of `<start>`, `<start>` is also _nullable_. But `<B>` is not _nullable_.",
"_____no_output_____"
]
],
[
[
"E_GRAMMAR_1: Grammar = {\n '<start>': ['<A>', '<B>'],\n '<A>': ['a', ''],\n '<B>': ['b']\n}",
"_____no_output_____"
]
],
[
[
"One of the problems with the original Earley implementation is that it does not handle rules that can derive empty strings very well. For example, the given grammar should match `a`",
"_____no_output_____"
]
],
[
[
"EPSILON = ''\nE_GRAMMAR: Grammar = {\n '<start>': ['<S>'],\n '<S>': ['<A><A><A><A>'],\n '<A>': ['a', '<E>'],\n '<E>': [EPSILON]\n}",
"_____no_output_____"
],
[
"syntax_diagram(E_GRAMMAR)",
"start\n"
],
[
"mystring = 'a'\nparser = EarleyParser(E_GRAMMAR)\nwith ExpectError():\n trees = parser.parse(mystring)",
"_____no_output_____"
]
],
[
[
"Aycock et al.\\cite{Aycock2002} suggests a simple fix. Their idea is to pre-compute the `nullable` set and use it to advance the `nullable` states. However, before we do that, we need to compute the `nullable` set. The `nullable` set consists of all nonterminals that can derive an empty string.",
"_____no_output_____"
],
[
"Computing the `nullable` set requires expanding each production rule in the grammar iteratively and inspecting whether a given rule can derive the empty string. Each iteration needs to take into account new terminals that have been found to be `nullable`. The procedure stops when we obtain a stable result. This procedure can be abstracted into a more general method `fixpoint`.",
"_____no_output_____"
],
[
"##### Fixpoint\n\nA `fixpoint` of a function is an element in the function's domain such that it is mapped to itself. For example, 1 is a `fixpoint` of square root because `squareroot(1) == 1`.\n\n(We use `str` rather than `hash` to check for equality in `fixpoint` because the data structure `set`, which we would like to use as an argument has a good string representation but is not hashable).",
"_____no_output_____"
]
],
[
[
"def fixpoint(f):\n def helper(arg):\n while True:\n sarg = str(arg)\n arg_ = f(arg)\n if str(arg_) == sarg:\n return arg\n arg = arg_\n\n return helper",
"_____no_output_____"
]
],
[
[
"Remember `my_sqrt()` from [the first chapter](Intro_Testing.ipynb)? We can define `my_sqrt()` using fixpoint.",
"_____no_output_____"
]
],
[
[
"def my_sqrt(x):\n @fixpoint\n def _my_sqrt(approx):\n return (approx + x / approx) / 2\n\n return _my_sqrt(1)",
"_____no_output_____"
],
[
"my_sqrt(2)",
"_____no_output_____"
]
],
[
[
"##### Nullable\n\nSimilarly, we can define `nullable` using `fixpoint`. We essentially provide the definition of a single intermediate step. That is, assuming that `nullables` contain the current `nullable` nonterminals, we iterate over the grammar looking for productions which are `nullable` -- that is, productions where the entire sequence can yield an empty string on some expansion.",
"_____no_output_____"
],
[
"We need to iterate over the different alternative expressions and their corresponding nonterminals. Hence we define a `rules()` method converts our dictionary representation to this pair format.",
"_____no_output_____"
]
],
[
[
"def rules(grammar):\n return [(key, choice)\n for key, choices in grammar.items()\n for choice in choices]",
"_____no_output_____"
]
],
[
[
"The `terminals()` method extracts all terminal symbols from a `canonical` grammar representation.",
"_____no_output_____"
]
],
[
[
"def terminals(grammar):\n return set(token\n for key, choice in rules(grammar)\n for token in choice if token not in grammar)",
"_____no_output_____"
],
[
"def nullable_expr(expr, nullables):\n return all(token in nullables for token in expr)",
"_____no_output_____"
],
[
"def nullable(grammar):\n productions = rules(grammar)\n\n @fixpoint\n def nullable_(nullables):\n for A, expr in productions:\n if nullable_expr(expr, nullables):\n nullables |= {A}\n return (nullables)\n\n return nullable_({EPSILON})",
"_____no_output_____"
],
[
"for key, grammar in {\n 'E_GRAMMAR': E_GRAMMAR,\n 'E_GRAMMAR_1': E_GRAMMAR_1\n}.items():\n print(key, nullable(canonical(grammar)))",
"E_GRAMMAR {'', '<E>', '<start>', '<A>', '<S>'}\nE_GRAMMAR_1 {'', '<A>', '<start>'}\n"
]
],
[
[
"So, once we have the `nullable` set, all that we need to do is, after we have called `predict` on a state corresponding to a nonterminal, check if it is `nullable` and if it is, advance and add the state to the current column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def __init__(self, grammar, **kwargs):\n super().__init__(grammar, **kwargs)\n self.epsilon = nullable(self.cgrammar)\n\n def predict(self, col, sym, state):\n for alt in self.cgrammar[sym]:\n col.add(State(sym, tuple(alt), 0, col))\n if sym in self.epsilon:\n col.add(state.advance())",
"_____no_output_____"
],
[
"mystring = 'a'\nparser = EarleyParser(E_GRAMMAR)\nfor tree in parser.parse(mystring):\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"To ensure that our parser does parse all kinds of grammars, let us try two more test cases.",
"_____no_output_____"
]
],
[
[
"DIRECTLY_SELF_REFERRING: Grammar = {\n '<start>': ['<query>'],\n '<query>': ['select <expr> from a'],\n \"<expr>\": [\"<expr>\", \"a\"],\n}\nINDIRECTLY_SELF_REFERRING: Grammar = {\n '<start>': ['<query>'],\n '<query>': ['select <expr> from a'],\n \"<expr>\": [\"<aexpr>\", \"a\"],\n \"<aexpr>\": [\"<expr>\"],\n}",
"_____no_output_____"
],
[
"mystring = 'select a from a'\nfor grammar in [DIRECTLY_SELF_REFERRING, INDIRECTLY_SELF_REFERRING]:\n forest = EarleyParser(grammar).parse(mystring)\n print('recognized', mystring)\n try:\n for tree in forest:\n print(tree_to_string(tree))\n except RecursionError as e:\n print(\"Recursion error\", e)",
"recognized select a from a\nRecursion error maximum recursion depth exceeded\nrecognized select a from a\nRecursion error maximum recursion depth exceeded\n"
]
],
[
[
"Why do we get recursion error here? The reason is that, our implementation of `extract_trees()` is eager. That is, it attempts to extract _all_ inner parse trees before it can construct the outer parse tree. When there is a self reference, this results in recursion. Here is a simple extractor that avoids this problem. The idea here is that we randomly and lazily choose a node to expand, which avoids the infinite recursion.",
"_____no_output_____"
],
[
"#### Tree Extractor",
"_____no_output_____"
],
[
"As you saw above, one of the problems with attempting to extract all trees is that the parse forest can consist of an infinite number of trees. So, here, we solve that problem by extracting one tree at a time.",
"_____no_output_____"
]
],
[
[
"class SimpleExtractor:\n def __init__(self, parser, text):\n self.parser = parser\n cursor, states = parser.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(cursor))\n self.my_forest = parser.parse_forest(parser.table, start)\n\n def extract_a_node(self, forest_node):\n name, paths = forest_node\n if not paths:\n return ((name, 0, 1), []), (name, [])\n cur_path, i, length = self.choose_path(paths)\n child_nodes = []\n pos_nodes = []\n for s, kind, chart in cur_path:\n f = self.parser.forest(s, kind, chart)\n postree, ntree = self.extract_a_node(f)\n child_nodes.append(ntree)\n pos_nodes.append(postree)\n\n return ((name, i, length), pos_nodes), (name, child_nodes)\n\n def choose_path(self, arr):\n length = len(arr)\n i = random.randrange(length)\n return arr[i], i, length\n\n def extract_a_tree(self):\n pos_tree, parse_tree = self.extract_a_node(self.my_forest)\n return self.parser.prune_tree(parse_tree)",
"_____no_output_____"
]
],
[
[
"Using it is as folows:",
"_____no_output_____"
]
],
[
[
"de = SimpleExtractor(EarleyParser(DIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"for i in range(5):\n tree = de.extract_a_tree()\n print(tree_to_string(tree))",
"select a from a\nselect a from a\nselect a from a\nselect a from a\nselect a from a\n"
]
],
[
[
"On the indirect reference:",
"_____no_output_____"
]
],
[
[
"ie = SimpleExtractor(EarleyParser(INDIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"for i in range(5):\n tree = ie.extract_a_tree()\n print(tree_to_string(tree))",
"select a from a\nselect a from a\nselect a from a\nselect a from a\nselect a from a\n"
]
],
[
[
"Note that the `SimpleExtractor` gives no guarantee of the uniqueness of the returned trees. This can however be fixed by keeping track of the particular nodes that were expanded from `pos_tree` variable, and hence, avoiding exploration of the same paths.\n\nFor implementing this, we extract the random stream passing into the `SimpleExtractor`, and use it to control which nodes are explored. Different exploration paths can then form a tree of nodes.",
"_____no_output_____"
],
[
"We start with the node definition for a single choice. The `self._chosen` is the current choice made, `self.next` holds the next choice done using `self._chosen`. The `self.total` holds the total number of choices that one can have in this node.",
"_____no_output_____"
]
],
[
[
"class ChoiceNode:\n def __init__(self, parent, total):\n self._p, self._chosen = parent, 0\n self._total, self.next = total, None\n\n def chosen(self):\n assert not self.finished()\n return self._chosen\n\n def __str__(self):\n return '%d(%s/%s %s)' % (self._i, str(self._chosen),\n str(self._total), str(self.next))\n\n def __repr__(self):\n return repr((self._i, self._chosen, self._total))\n\n def increment(self):\n # as soon as we increment, next becomes invalid\n self.next = None\n self._chosen += 1\n if self.finished():\n if self._p is None:\n return None\n return self._p.increment()\n return self\n\n def finished(self):\n return self._chosen >= self._total",
"_____no_output_____"
]
],
[
[
"Now we come to the enhanced `EnhancedExtractor()`.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(SimpleExtractor):\n def __init__(self, parser, text):\n super().__init__(parser, text)\n self.choices = ChoiceNode(None, 1)",
"_____no_output_____"
]
],
[
[
"First we define `choose_path()` that given an array and a choice node, returns the element in array corresponding to the next choice node if it exists, or produces a new choice nodes, and returns that element.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def choose_path(self, arr, choices):\n arr_len = len(arr)\n if choices.next is not None:\n if choices.next.finished():\n return None, None, None, choices.next\n else:\n choices.next = ChoiceNode(choices, arr_len)\n next_choice = choices.next.chosen()\n choices = choices.next\n return arr[next_choice], next_choice, arr_len, choices",
"_____no_output_____"
]
],
[
[
"We define `extract_a_node()` here. While extracting, we have a choice. Should we allow infinite forests, or should we have a finite number of trees with no direct recursion? A direct recursion is when there exists a parent node with the same nonterminal that parsed the same span. We choose here not to extract such trees. They can be added back after parsing.\n\nThis is a recursive procedure that inspects a node, extracts the path required to complete that node. A single path (corresponding to a nonterminal) may again be composed of a sequence of smaller paths. Such paths are again extracted using another call to `extract_a_node()` recursively.\n\nWhat happens when we hit on one of the node recursions we want to avoid? In that case, we return the current choice node, which bubbles up to `extract_a_tree()`. That procedure increments the last choice, which in turn increments up the parents until we reach a choice node that still has options to explore.\n\nWhat if we hit the end of choices for a particular choice node(i.e, we have exhausted paths that can be taken from a node)? In this case also, we return the current choice node, which bubbles up to `extract_a_tree()`.\nThat procedure increments the last choice, which bubbles up to the next choice that has some unexplored paths.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def extract_a_node(self, forest_node, seen, choices):\n name, paths = forest_node\n if not paths:\n return (name, []), choices\n\n cur_path, _i, _l, new_choices = self.choose_path(paths, choices)\n if cur_path is None:\n return None, new_choices\n child_nodes = []\n for s, kind, chart in cur_path:\n if kind == 't':\n child_nodes.append((s, []))\n continue\n nid = (s.name, s.s_col.index, s.e_col.index)\n if nid in seen:\n return None, new_choices\n f = self.parser.forest(s, kind, chart)\n ntree, newer_choices = self.extract_a_node(f, seen | {nid}, new_choices)\n if ntree is None:\n return None, newer_choices\n child_nodes.append(ntree)\n new_choices = newer_choices\n return (name, child_nodes), new_choices",
"_____no_output_____"
]
],
[
[
"The `extract_a_tree()` is a depth first extractor of a single tree. It tries to extract a tree, and if the extraction returns `None`, it means that a particular choice was exhausted, or we hit on a recursion. In that case, we increment the choice, and explore a new path.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def extract_a_tree(self):\n while not self.choices.finished():\n parse_tree, choices = self.extract_a_node(self.my_forest, set(), self.choices)\n choices.increment()\n if parse_tree is not None:\n return self.parser.prune_tree(parse_tree)\n return None",
"_____no_output_____"
]
],
[
[
"Note that the `EnhancedExtractor` only extracts nodes that are not directly recursive. That is, if it finds a node with a nonterminal that covers the same span as that of a parent node with the same nonterminal, it skips the node.",
"_____no_output_____"
]
],
[
[
"ee = EnhancedExtractor(EarleyParser(INDIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"i = 0\nwhile True:\n i += 1\n t = ee.extract_a_tree()\n if t is None: break\n print(i, t)\n s = tree_to_string(t)\n assert s == mystring",
"1 ('<start>', [('<query>', [('select ', []), ('<expr>', [('a', [])]), (' from a', [])])])\n"
],
[
"istring = '1+2+3+4'\nee = EnhancedExtractor(EarleyParser(A1_GRAMMAR), istring)",
"_____no_output_____"
],
[
"i = 0\nwhile True:\n i += 1\n t = ee.extract_a_tree()\n if t is None: break\n print(i, t)\n s = tree_to_string(t)\n assert s == istring",
"1 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('2', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])\n2 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])\n3 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('2', [])])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('3', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])\n4 ('<start>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])\n5 ('<start>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('3', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])])\n"
]
],
[
[
"#### More Earley Parsing\n\nA number of other optimizations exist for Earley parsers. A fast industrial strength Earley parser implementation is the [Marpa parser](https://jeffreykegler.github.io/Marpa-web-site/). Further, Earley parsing need not be restricted to character data. One may also parse streams (audio and video streams) \\cite{qi2018generalized} using a generalized Earley parser.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"Here are a few examples of the Earley parser in action.",
"_____no_output_____"
]
],
[
[
"mystring = \"1 + (2 * 3)\"\nearley = EarleyParser(EXPR_GRAMMAR)\nfor tree in earley.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
],
[
"mystring = \"1 * (2 + 3.35)\"\nfor tree in earley.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
]
],
[
[
"In contrast to the `PEGParser`, above, the `EarleyParser` can handle arbitrary context-free grammars.",
"_____no_output_____"
],
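[
"For instance, a left-recursive grammar such as the `LR_GRAMMAR` used in the exercises below would send a top-down parser like the `PEGParser` into unbounded recursion, while the `EarleyParser` handles it fine. A minimal sketch of such a check (assuming `LR_GRAMMAR` and `ExpectError` as defined elsewhere in this book):\n\n```python\nmystring = 'aaa'\nwith ExpectError():  # PEGParser recurses forever on <A> ::= <A> a\n    PEGParser(LR_GRAMMAR).parse(mystring)\nfor tree in EarleyParser(LR_GRAMMAR).parse(mystring):\n    assert tree_to_string(tree) == mystring\n```",
"_____no_output_____"
],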
[
"### Excursion: Testing the Parsers\n\nWhile we have defined two parser variants, it would be nice to have some confirmation that our parses work well. While it is possible to formally prove that they work, it is much more satisfying to generate random grammars, their corresponding strings, and parse them using the same grammar.",
"_____no_output_____"
]
],
[
[
"def prod_line_grammar(nonterminals, terminals):\n g = {\n '<start>': ['<symbols>'],\n '<symbols>': ['<symbol><symbols>', '<symbol>'],\n '<symbol>': ['<nonterminals>', '<terminals>'],\n '<nonterminals>': ['<lt><alpha><gt>'],\n '<lt>': ['<'],\n '<gt>': ['>'],\n '<alpha>': nonterminals,\n '<terminals>': terminals\n }\n\n if not nonterminals:\n g['<nonterminals>'] = ['']\n del g['<lt>']\n del g['<alpha>']\n del g['<gt>']\n\n return g",
"_____no_output_____"
],
[
"syntax_diagram(prod_line_grammar([\"A\", \"B\", \"C\"], [\"1\", \"2\", \"3\"]))",
"start\n"
],
[
"def make_rule(nonterminals, terminals, num_alts):\n prod_grammar = prod_line_grammar(nonterminals, terminals)\n\n gf = GrammarFuzzer(prod_grammar, min_nonterminals=3, max_nonterminals=5)\n name = \"<%s>\" % ''.join(random.choices(string.ascii_uppercase, k=3))\n\n return (name, [gf.fuzz() for _ in range(num_alts)])",
"_____no_output_____"
],
[
"make_rule([\"A\", \"B\", \"C\"], [\"1\", \"2\", \"3\"], 3)",
"_____no_output_____"
],
[
"from Grammars import unreachable_nonterminals",
"_____no_output_____"
],
[
"def make_grammar(num_symbols=3, num_alts=3):\n terminals = list(string.ascii_lowercase)\n grammar = {}\n name = None\n for _ in range(num_symbols):\n nonterminals = [k[1:-1] for k in grammar.keys()]\n name, expansions = \\\n make_rule(nonterminals, terminals, num_alts)\n grammar[name] = expansions\n\n grammar[START_SYMBOL] = [name]\n\n # Remove unused parts\n for nonterminal in unreachable_nonterminals(grammar):\n del grammar[nonterminal]\n\n assert is_valid_grammar(grammar)\n\n return grammar",
"_____no_output_____"
],
[
"make_grammar()",
"_____no_output_____"
]
],
[
[
"Now we verify if our arbitrary grammars can be used by the Earley parser.",
"_____no_output_____"
]
],
[
[
"for i in range(5):\n my_grammar = make_grammar()\n print(my_grammar)\n parser = EarleyParser(my_grammar)\n mygf = GrammarFuzzer(my_grammar)\n s = mygf.fuzz()\n print(s)\n for tree in parser.parse(s):\n assert tree_to_string(tree) == s\n display_tree(tree)",
"{'<SCS>': ['ts', 'f', 'ng'], '<BQN>': ['wm<SCS>', '<SCS>wi', '<SCS>hw'], '<UZC>': ['gyk<BQN>br', '<SCS>iqp', '<BQN>vb'], '<start>': ['<UZC>']}\nfhwvb\n{'<CRN>': ['meze', 'de', 'cpcv'], '<AIS>': ['<CRN>hb', 'dc<CRN>', 'pa<CRN>x'], '<MAO>': ['<CRN>su', '<CRN>hj', '<CRN><AIS>g'], '<start>': ['<MAO>']}\ndehj\n{'<MFY>': ['y', 'w', ''], '<ZOY>': ['oe<MFY>', 'h<MFY>u', 'lowr'], '<HFT>': ['<ZOY>ro', '<ZOY>w', '<ZOY><ZOY>w'], '<start>': ['<HFT>']}\nlowrro\n{'<CYC>': ['cg', 'enl', 'ovd'], '<TUV>': ['<CYC>hf', '<CYC>nl', 'fhg'], '<MOQ>': ['g<TUV>g', '<CYC>ix', '<CYC><TUV><CYC>'], '<start>': ['<MOQ>']}\ncgix\n{'<WJJ>': ['dszdlh', 'j', 'fd'], '<RQM>': ['<WJJ>wx', 'xs<WJJ><WJJ>', '<WJJ>x'], '<JNY>': ['<WJJ>oa', '<WJJ><WJJ>cx', 'xd<RQM>'], '<start>': ['<JNY>']}\njoa\n"
]
],
[
[
"With this, we have completed both implementation and testing of *arbitrary* CFG, which can now be used along with `LangFuzzer` to generate better fuzzing inputs.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"## Background\n\n\nNumerous parsing techniques exist that can parse a given string using a\ngiven grammar, and produce corresponding derivation tree or trees. However,\nsome of these techniques work only on specific classes of grammars.\nThese classes of grammars are named after the specific kind of parser\nthat can accept grammars of that category. That is, the upper bound for\nthe capabilities of the parser defines the grammar class named after that\nparser.\n\nThe *LL* and *LR* parsing are the main traditions in parsing. Here, *LL* means left-to-right, leftmost derivation, and it represents a top-down approach. On the other hand, and LR (left-to-right, rightmost derivation) represents a bottom-up approach. Another way to look at it is that LL parsers compute the derivation tree incrementally in *pre-order* while LR parsers compute the derivation tree in *post-order* \\cite{pingali2015graphical}).\n\nDifferent classes of grammars differ in the features that are available to\nthe user for writing a grammar of that class. That is, the corresponding\nkind of parser will be unable to parse a grammar that makes use of more\nfeatures than allowed. For example, the `A2_GRAMMAR` is an *LL*\ngrammar because it lacks left recursion, while `A1_GRAMMAR` is not an\n*LL* grammar. This is because an *LL* parser parses\nits input from left to right, and constructs the leftmost derivation of its\ninput by expanding the nonterminals it encounters. If there is a left\nrecursion in one of these rules, an *LL* parser will enter an infinite loop.\n\nSimilarly, a grammar is LL(k) if it can be parsed by an LL parser with k lookahead token, and LR(k) grammar can only be parsed with LR parser with at least k lookahead tokens. These grammars are interesting because both LL(k) and LR(k) grammars have $O(n)$ parsers, and can be used with relatively restricted computational budget compared to other grammars.\n\nThe languages for which one can provide an *LL(k)* grammar is called *LL(k)* languages (where k is the minimum lookahead required). Similarly, *LR(k)* is defined as the set of languages that have an *LR(k)* grammar. In terms of languages, LL(k) $\\subset$ LL(k+1) and LL(k) $\\subset$ LR(k), and *LR(k)* $=$ *LR(1)*. All deterministic *CFLs* have an *LR(1)* grammar. However, there exist *CFLs* that are inherently ambiguous \\cite{ogden1968helpful}, and for these, one can't provide an *LR(1)* grammar.\n\nThe other main parsing algorithms for *CFGs* are GLL \\cite{scott2010gll}, GLR \\cite{tomita1987efficient,tomita2012generalized}, and CYK \\cite{grune2008parsing}.\nThe ALL(\\*) (used by ANTLR) on the other hand is a grammar representation that uses *Regular Expression* like predicates (similar to advanced PEGs – see [Exercise](#Exercise-3:-PEG-Predicates)) rather than a fixed lookahead. Hence, ALL(\\*) can accept a larger class of grammars than CFGs.\n\nIn terms of computational limits of parsing, the main CFG parsers have a complexity of $O(n^3)$ for arbitrary grammars. However, parsing with arbitrary *CFG* is reducible to boolean matrix multiplication \\cite{Valiant1975} (and the reverse \\cite{Lee2002}). This is at present bounded by $O(2^{23728639}$) \\cite{LeGall2014}. Hence, worse case complexity for parsing arbitrary CFG is likely to remain close to cubic.\n\nRegarding PEGs, the actual class of languages that is expressible in *PEG* is currently unknown. In particular, we know that *PEGs* can express certain languages such as $a^n b^n c^n$. 
However, we do not know if there exist *CFLs* that are not expressible with *PEGs*. In Section 2.3, we provided an instance of a counter-intuitive PEG grammar. While important for our purposes (we use grammars for generation of inputs) this is not a criticism of parsing with PEGs. PEG focuses on writing grammars for recognizing a given language, and not necessarily in interpreting what language an arbitrary PEG might yield. Given a Context-Free Language to parse, it is almost always possible to write a grammar for it in PEG, and given that 1) a PEG can parse any string in $O(n)$ time, and 2) at present we know of no CFL that can't be expressed as a PEG, and 3) compared with *LR* grammars, a PEG is often more intuitive because it allows top-down interpretation, when writing a parser for a language, PEGs should be under serious consideration.",
"_____no_output_____"
],
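[
"To make the left-recursion issue concrete, here is a minimal, hypothetical sketch (not part of the parsers in this chapter) of how a naive top-down parser expands nonterminals. With a left-recursive rule such as `<A> ::= <A> a`, the leftmost symbol of the expansion is the rule itself, so the parser recurses on the same problem without ever consuming input:\n\n```python\ndef naive_expand(grammar, symbol):\n    # Sketch: expand the leftmost symbol of the first alternative.\n    first = grammar[symbol][0][0]\n    if first in grammar:  # a nonterminal: expand before consuming any input\n        return naive_expand(grammar, first)\n    return first\n\n# naive_expand({'<A>': [['<A>', 'a']]}, '<A>')  # would recurse forever\n```",
"_____no_output_____"
],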
[
"## Synopsis\n\nThis chapter introduces `Parser` classes, parsing a string into a _derivation tree_ as introduced in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb). Two important parser classes are provided:\n\n* [Parsing Expression Grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`). These are very efficient, but limited to specific grammar structure. Notably, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed.\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`). These accept any kind of context-free grammars, and explore all parsing alternatives (if any).\n\nUsing any of these is fairly easy, though. First, instantiate them with a grammar:",
"_____no_output_____"
]
],
[
[
"from Grammars import US_PHONE_GRAMMAR",
"_____no_output_____"
],
[
"us_phone_parser = EarleyParser(US_PHONE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"Then, use the `parse()` method to retrieve a list of possible derivation trees:",
"_____no_output_____"
]
],
[
[
"trees = us_phone_parser.parse(\"(555)987-6543\")\ntree = list(trees)[0]\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"These derivation trees can then be used for test generation, notably for mutating and recombining existing inputs.",
"_____no_output_____"
]
],
[
[
"# ignore\nfrom ClassDiagram import display_class_hierarchy",
"_____no_output_____"
],
[
"# ignore\ndisplay_class_hierarchy([PEGParser, EarleyParser],\n public_methods=[\n Parser.parse,\n Parser.__init__,\n Parser.grammar,\n Parser.start_symbol\n ],\n types={\n 'DerivationTree': DerivationTree,\n 'Grammar': Grammar\n },\n project='fuzzingbook')",
"_____no_output_____"
]
],
[
[
"## Lessons Learned\n\n* Grammars can be used to generate derivation trees for a given string.\n* Parsing Expression Grammars are intuitive, and easy to implement, but require care to write.\n* Earley Parsers can parse arbitrary Context Free Grammars.\n",
"_____no_output_____"
],
[
"## Next Steps\n\n* Use parsed inputs to [recombine existing inputs](LangFuzzer.ipynb)",
"_____no_output_____"
],
[
"## Exercises",
"_____no_output_____"
],
[
"### Exercise 1: An Alternative Packrat\n\nIn the _Packrat_ parser, we showed how one could implement a simple _PEG_ parser. That parser kept track of the current location in the text using an index. Can you modify the parser so that it simply uses the current substring rather than tracking the index? That is, it should no longer have the `at` parameter.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution:",
"_____no_output_____"
]
],
[
[
"class PackratParser(Parser):\n def parse_prefix(self, text):\n txt, res = self.unify_key(self.start_symbol(), text)\n return len(txt), [res]\n\n def parse(self, text):\n remain, res = self.parse_prefix(text)\n if remain:\n raise SyntaxError(\"at \" + res)\n return res\n\n def unify_rule(self, rule, text):\n results = []\n for token in rule:\n text, res = self.unify_key(token, text)\n if res is None:\n return text, None\n results.append(res)\n return text, results\n\n def unify_key(self, key, text):\n if key not in self.cgrammar:\n if text.startswith(key):\n return text[len(key):], (key, [])\n else:\n return text, None\n for rule in self.cgrammar[key]:\n text_, res = self.unify_rule(rule, text)\n if res:\n return (text_, (key, res))\n return text, None",
"_____no_output_____"
],
[
"mystring = \"1 + (2 * 3)\"\nfor tree in PackratParser(EXPR_GRAMMAR).parse(mystring):\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"### Exercise 2: More PEG Syntax\n\nThe _PEG_ syntax provides a few notational conveniences reminiscent of regular expressions. For example, it supports the following operators (letters `T` and `A` represents tokens that can be either terminal or nonterminal. `ε` is an empty string, and `/` is the ordered choice operator similar to the non-ordered choice operator `|`):\n\n* `T?` represents an optional greedy match of T and `A := T?` is equivalent to `A := T/ε`.\n* `T*` represents zero or more greedy matches of `T` and `A := T*` is equivalent to `A := T A/ε`.\n* `T+` represents one or more greedy matches – equivalent to `TT*`\n\nIf you look at the three notations above, each can be represented in the grammar in terms of basic syntax.\nRemember the exercise from [the chapter on grammars](Grammars.ipynb) that developed `define_ex_grammar()` that can represent grammars as Python code? extend `define_ex_grammar()` to `define_peg()` to support the above notational conveniences. The decorator should rewrite a given grammar that contains these notations to an equivalent grammar in basic syntax.",
"_____no_output_____"
],
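[
"To make the equivalences concrete, here is how the three operators could be expressed in basic grammar syntax. The grammar fragments and nonterminal names (`<a>`, `<t>`, `<t-star>`) below are purely illustrative, not part of the expected solution:\n\n```python\n# A := T?   is equivalent to   A := T / ε\nOPT_FRAGMENT = {'<a>': ['<t>', '']}\n\n# A := T*   is equivalent to   A := T A / ε\nSTAR_FRAGMENT = {'<a>': ['<t><a>', '']}\n\n# A := T+   is equivalent to   A := T T*\nPLUS_FRAGMENT = {'<a>': ['<t><t-star>'],\n                 '<t-star>': ['<t><t-star>', '']}\n```",
"_____no_output_____"
],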
[
"### Exercise 3: PEG Predicates\n\nBeyond these notational conveniences, it also supports two predicates that can provide a powerful lookahead facility that does not consume any input.\n\n* `T&A` represents an _And-predicate_ that matches `T` if `T` is matched, and it is immediately followed by `A`\n* `T!A` represents a _Not-predicate_ that matches `T` if `T` is matched, and it is *not* immediately followed by `A`\n\nImplement these predicates in our _PEG_ parser.",
"_____no_output_____"
],
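[
"As an illustration of how powerful such predicates are, one classical PEG formulation (in PEG notation, not in our Python grammar format) recognizes the non-context-free language $a^n b^n c^n$ using an _And-predicate_:\n\n```\nS <- &(A 'c') 'a'+ B !.\nA <- 'a' A? 'b'\nB <- 'b' B? 'c'\n```\n\nHere, `&(A 'c')` checks -- without consuming input -- that the input begins with $a^n b^n$ followed by a `c`, while `'a'+ B` then checks that the `b`s and `c`s match up, and `!.` asserts the end of input.",
"_____no_output_____"
],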
[
"### Exercise 4: Earley Fill Chart\n\nIn the `Earley Parser`, `Column` class, we keep the states both as a `list` and also as a `dict` even though `dict` is ordered. Can you explain why?\n\n**Hint**: see the `fill_chart` method.",
"_____no_output_____"
],
[
"**Solution.** Python allows us to append to a list in flight, while a dict, eventhough it is ordered does not allow that facility.\n\nThat is, the following will work\n\n```python\nvalues = [1]\nfor v in values:\n values.append(v*2)\n```\n\nHowever, the following will result in an error\n```python\nvalues = {1:1}\nfor v in values:\n values[v*2] = v*2\n```\n\nIn the `fill_chart`, we make use of this facility to modify the set of states we are iterating on, on the fly.",
"_____no_output_____"
],
[
"### Exercise 5: Leo Parser\n\nOne of the problems with the original Earley parser is that while it can parse strings using arbitrary _Context Free Gramamrs_, its performance on right-recursive grammars is quadratic. That is, it takes $O(n^2)$ runtime and space for parsing with right-recursive grammars. For example, consider the parsing of the following string by two different grammars `LR_GRAMMAR` and `RR_GRAMMAR`.",
"_____no_output_____"
]
],
[
[
"mystring = 'aaaaaa'",
"_____no_output_____"
]
],
[
[
"To see the problem, we need to enable logging. Here is the logged version of parsing with the `LR_GRAMMAR`",
"_____no_output_____"
]
],
[
[
"result = EarleyParser(LR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass # consume the generator so that we can see the logs",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= <A> a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= <A> a |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= <A> a |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= <A> a |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= <A> a |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= <A> a |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"Compare that to the parsing of `RR_GRAMMAR` as seen below:",
"_____no_output_____"
]
],
[
[
"result = EarleyParser(RR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(1,2)\n<A>:= a <A> |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(2,3)\n<A>:= a <A> |(1,3)\n<A>:= a <A> |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(3,4)\n<A>:= a <A> |(2,4)\n<A>:= a <A> |(1,4)\n<A>:= a <A> |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(4,5)\n<A>:= a <A> |(3,5)\n<A>:= a <A> |(2,5)\n<A>:= a <A> |(1,5)\n<A>:= a <A> |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= |(6,6)\n<A>:= a <A> |(5,6)\n<A>:= a <A> |(4,6)\n<A>:= a <A> |(3,6)\n<A>:= a <A> |(2,6)\n<A>:= a <A> |(1,6)\n<A>:= a <A> |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"As can be seen from the parsing log for each letter, the number of states with representation `<A>: a <A> ● (i, j)` increases at each stage, and these are simply a left over from the previous letter. They do not contribute anything more to the parse other than to simply complete these entries. However, they take up space, and require resources for inspection, contributing a factor of `n` in analysis.\n\nJoop Leo \\cite{Leo1991} found that this inefficiency can be avoided by detecting right recursion. The idea is that before starting the `completion` step, check whether the current item has a _deterministic reduction path_. If such a path exists, add a copy of the topmost element of the _deteministic reduction path_ to the current column, and return. If not, perform the original `completion` step.\n\n\n**Definition 2.1**: An item is said to be on the deterministic reduction path above $[A \\rightarrow \\gamma., i]$ if it is $[B \\rightarrow \\alpha A ., k]$ with $[B \\rightarrow \\alpha . A, k]$ being the only item in $ I_i $ with the dot in front of A, or if it is on the deterministic reduction path above $[B \\rightarrow \\alpha A ., k]$. An item on such a path is called *topmost* one if there is no item on the deterministic reduction path above it\\cite{Leo1991}.",
"_____no_output_____"
],
[
"Finding a _deterministic reduction path_ is as follows:\n\nGiven a complete state, represented by `<A> : seq_1 ● (s, e)` where `s` is the starting column for this rule, and `e` the current column, there is a _deterministic reduction path_ **above** it if two constraints are satisfied.\n\n1. There exist a *single* item in the form `<B> : seq_2 ● <A> (k, s)` in column `s`.\n2. That should be the *single* item in s with dot in front of `<A>`\n\nThe resulting item is of the form `<B> : seq_2 <A> ● (k, e)`, which is simply item from (1) advanced, and is considered above `<A>:.. (s, e)` in the deterministic reduction path.\nThe `seq_1` and `seq_2` are arbitrary symbol sequences.\n\nThis forms the following chain of links, with `<A>:.. (s_1, e)` being the child of `<B>:.. (s_2, e)` etc.",
"_____no_output_____"
],
[
"Here is one way to visualize the chain:\n```\n<C> : seq_3 <B> ● (s_3, e) \n | constraints satisfied by <C> : seq_3 ● <B> (s_3, s_2)\n <B> : seq_2 <A> ● (s_2, e) \n | constraints satisfied by <B> : seq_2 ● <A> (s_2, s_1)\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"Essentially, what we want to do is to identify potential deterministic right recursion candidates, perform completion on them, and *throw away the result*. We do this until we reach the top. See Grune et al.~\\cite{grune2008parsing} for further information.",
"_____no_output_____"
],
[
"Note that the completions are in the same column (`e`), with each candidates with constraints satisfied \nin further and further earlier columns (as shown below):\n```\n<C> : seq_3 ● <B> (s_3, s_2) --> <C> : seq_3 <B> ● (s_3, e)\n |\n <B> : seq_2 ● <A> (s_2, s_1) --> <B> : seq_2 <A> ● (s_2, e) \n |\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"Following this chain, the topmost item is the item `<C>:.. (s_3, e)` that does not have a parent. The topmost item needs to be saved is called a *transitive* item by Leo, and it is associated with the non-terminal symbol that started the lookup. The transitive item needs to be added to each column we inspect.",
"_____no_output_____"
],
[
"Here is the skeleton for the parser `LeoParser`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(EarleyParser):\n def complete(self, col, state):\n return self.leo_complete(col, state)\n\n def leo_complete(self, col, state):\n detred = self.deterministic_reduction(state)\n if detred:\n col.add(detred.copy())\n else:\n self.earley_complete(col, state)\n\n def deterministic_reduction(self, state):\n raise NotImplementedError",
"_____no_output_____"
]
],
[
[
"Can you implement the `deterministic_reduction()` method to obtain the topmost element?",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution:",
"_____no_output_____"
],
[
"First, we update our `Column` class with the ability to add transitive items. Note that, while Leo asks the transitive to be added to the set $ I_k $ there is no actual requirement for the transitive states to be added to the `states` list. The transitive items are only intended for memoization and not for the `fill_chart()` method. Hence, we track them separately.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def __init__(self, index, letter):\n self.index, self.letter = index, letter\n self.states, self._unique, self.transitives = [], {}, {}\n\n def add_transitive(self, key, state):\n assert key not in self.transitives\n self.transitives[key] = state\n return self.transitives[key]",
"_____no_output_____"
]
],
[
[
"Remember the picture we drew of the deterministic path?\n```\n <C> : seq_3 <B> ● (s_3, e) \n | constraints satisfied by <C> : seq_3 ● <B> (s_3, s_2)\n <B> : seq_2 <A> ● (s_2, e) \n | constraints satisfied by <B> : seq_2 ● <A> (s_2, s_1)\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"We define a function `uniq_postdot()` that given the item `<A> := seq_1 ● (s_1, e)`, returns a `<B> : seq_2 ● <A> (s_2, s_1)` that satisfies the constraints mentioned in the above picture.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def uniq_postdot(self, st_A):\n col_s1 = st_A.s_col\n parent_states = [\n s for s in col_s1.states if s.expr and s.at_dot() == st_A.name\n ]\n if len(parent_states) > 1:\n return None\n matching_st_B = [s for s in parent_states if s.dot == len(s.expr) - 1]\n return matching_st_B[0] if matching_st_B else None",
"_____no_output_____"
],
[
"lp = LeoParser(RR_GRAMMAR)\n[(str(s), str(lp.uniq_postdot(s))) for s in columns[-1].states]",
"_____no_output_____"
]
],
[
[
"We next define the function `get_top()` that is the core of deterministic reduction which gets the topmost state above the current state (`A`).",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def get_top(self, state_A):\n st_B_inc = self.uniq_postdot(state_A)\n if not st_B_inc:\n return None\n \n t_name = st_B_inc.name\n if t_name in st_B_inc.e_col.transitives:\n return st_B_inc.e_col.transitives[t_name]\n\n st_B = st_B_inc.advance()\n\n top = self.get_top(st_B) or st_B\n return st_B_inc.e_col.add_transitive(t_name, top)",
"_____no_output_____"
]
],
[
[
"Once we have the machinery in place, `deterministic_reduction()` itself is simply a wrapper to call `get_top()`",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def deterministic_reduction(self, state):\n return self.get_top(state)",
"_____no_output_____"
],
[
"lp = LeoParser(RR_GRAMMAR)\ncolumns = lp.chart_parse(mystring, lp.start_symbol())\n[(str(s), str(lp.get_top(s))) for s in columns[-1].states]",
"_____no_output_____"
]
],
[
[
"Now, both LR and RR grammars should work within $O(n)$ bounds.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(RR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(2,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(4,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= |(6,6)\n<A>:= a <A> |(5,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"We verify the Leo parser with a few more right recursive grammars.",
"_____no_output_____"
]
],
[
[
"RR_GRAMMAR2 = {\n '<start>': ['<A>'],\n '<A>': ['ab<A>', ''],\n}\nmystring2 = 'ababababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR2, log=True).parse(mystring2)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<A>:= a b <A> |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<A>:= a b <A> |(2,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<A>:= a b <A> |(4,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<A>:= a b <A> |(6,8)\n<start>:= <A> |(0,8) \n\na chart[9]\n \n\nb chart[10]\n<A>:= |(10,10)\n<A>:= a b <A> |(8,10)\n<start>:= <A> |(0,10) \n\n"
],
[
"RR_GRAMMAR3 = {\n '<start>': ['c<A>'],\n '<A>': ['ab<A>', ''],\n}\nmystring3 = 'cababababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR3, log=True).parse(mystring3)\nfor _ in result: pass",
"None chart[0]\n \n\nc chart[1]\n<A>:= |(1,1)\n<start>:= c <A> |(0,1) \n\na chart[2]\n \n\nb chart[3]\n<A>:= |(3,3)\n<A>:= a b <A> |(1,3)\n<start>:= c <A> |(0,3) \n\na chart[4]\n \n\nb chart[5]\n<A>:= |(5,5)\n<A>:= a b <A> |(3,5)\n<start>:= c <A> |(0,5) \n\na chart[6]\n \n\nb chart[7]\n<A>:= |(7,7)\n<A>:= a b <A> |(5,7)\n<start>:= c <A> |(0,7) \n\na chart[8]\n \n\nb chart[9]\n<A>:= |(9,9)\n<A>:= a b <A> |(7,9)\n<start>:= c <A> |(0,9) \n\na chart[10]\n \n\nb chart[11]\n<A>:= |(11,11)\n<A>:= a b <A> |(9,11)\n<start>:= c <A> |(0,11) \n\n"
],
[
"RR_GRAMMAR4 = {\n '<start>': ['<A>c'],\n '<A>': ['ab<A>', ''],\n}\nmystring4 = 'ababababc'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR4, log=True).parse(mystring4)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<A>:= a b <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<A>:= a b <A> |(2,4)\n<A>:= a b <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<A>:= a b <A> |(4,6)\n<A>:= a b <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<A>:= a b <A> |(6,8)\n<A>:= a b <A> |(0,8) \n\nc chart[9]\n<start>:= <A> c |(0,9) \n\n"
],
[
"RR_GRAMMAR5 = {\n '<start>': ['<A>'],\n '<A>': ['ab<B>', ''],\n '<B>': ['<A>'],\n}\nmystring5 = 'abababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR5, log=True).parse(mystring5)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= a b <B> |(0,2)\n<A>:= |(2,2)\n<B>:= <A> |(2,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= a b <B> |(2,4)\n<A>:= |(4,4)\n<B>:= <A> |(4,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= a b <B> |(4,6)\n<A>:= |(6,6)\n<B>:= <A> |(6,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= a b <B> |(6,8)\n<A>:= |(8,8)\n<B>:= <A> |(8,8)\n<start>:= <A> |(0,8) \n\n"
],
[
"RR_GRAMMAR6 = {\n '<start>': ['<A>'],\n '<A>': ['a<B>', ''],\n '<B>': ['b<A>'],\n}\nmystring6 = 'abababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR6, log=True).parse(mystring6)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<B>:= b <A> |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<B>:= b <A> |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<B>:= b <A> |(5,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<B>:= b <A> |(7,8)\n<start>:= <A> |(0,8) \n\n"
],
[
"RR_GRAMMAR7 = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', 'a'],\n}\nmystring7 = 'aaaaaaaa'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR7, log=True).parse(mystring7)\nfor _ in result: pass",
"None chart[0]\n \n\na chart[1]\n<A>:= a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= a |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= a |(2,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= a |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= a |(4,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= a |(5,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n<A>:= a |(6,7)\n<start>:= <A> |(0,7) \n\na chart[8]\n<A>:= a |(7,8)\n<start>:= <A> |(0,8) \n\n"
]
],
[
[
"We verify that our parser works correctly on `LR_GRAMMAR` too.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(LR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= <A> a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= <A> a |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= <A> a |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= <A> a |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= <A> a |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= <A> a |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"__Advanced:__ We have fixed the complexity bounds. However, because we are saving only the topmost item of a right recursion, we need to fix our parser to be aware of our fix while extracting parse trees. Can you fix it?\n\n__Hint:__ Leo suggests simply transforming the Leo item sets to normal Earley sets, with the results from deterministic reduction expanded to their originals. For that, keep in mind the picture of constraint chain we drew earlier.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
],
[
"We first change the definition of `add_transitive()` so that results of deterministic reduction can be identified later.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def add_transitive(self, key, state):\n assert key not in self.transitives\n self.transitives[key] = TState(state.name, state.expr, state.dot,\n state.s_col, state.e_col)\n return self.transitives[key]",
"_____no_output_____"
]
],
[
[
"We also need a `back()` method to create the constraints.",
"_____no_output_____"
]
],
[
[
"class State(State):\n def back(self):\n return TState(self.name, self.expr, self.dot - 1, self.s_col, self.e_col)",
"_____no_output_____"
]
],
[
[
"We update `copy()` to make `TState` items instead.",
"_____no_output_____"
]
],
[
[
"class TState(State):\n def copy(self):\n return TState(self.name, self.expr, self.dot, self.s_col, self.e_col)",
"_____no_output_____"
]
],
[
[
"We now modify the `LeoParser` to keep track of the chain of constrains that we mentioned earlier.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def __init__(self, grammar, **kwargs):\n super().__init__(grammar, **kwargs)\n self._postdots = {}",
"_____no_output_____"
]
],
[
[
"Next, we update the `uniq_postdot()` so that it tracks the chain of links.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def uniq_postdot(self, st_A):\n col_s1 = st_A.s_col\n parent_states = [\n s for s in col_s1.states if s.expr and s.at_dot() == st_A.name\n ]\n if len(parent_states) > 1:\n return None\n matching_st_B = [s for s in parent_states if s.dot == len(s.expr) - 1]\n if matching_st_B:\n self._postdots[matching_st_B[0]._t()] = st_A\n return matching_st_B[0]\n return None\n ",
"_____no_output_____"
]
],
[
[
"We next define a method `expand_tstate()` that, when given a `TState`, generates all the intermediate links that we threw away earlier for a given end column.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def expand_tstate(self, state, e):\n if state._t() not in self._postdots:\n return\n c_C = self._postdots[state._t()]\n e.add(c_C.advance())\n self.expand_tstate(c_C.back(), e)",
"_____no_output_____"
]
],
[
[
"We define a `rearrange()` method to generate a reversed table where each column contains states that start at that column.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def rearrange(self, table):\n f_table = [Column(c.index, c.letter) for c in table]\n for col in table:\n for s in col.states:\n f_table[s.s_col.index].states.append(s)\n return f_table",
"_____no_output_____"
]
],
[
[
"Here is the rearranged table. (Can you explain why the Column 0 has a large number of `<start>` items?)",
"_____no_output_____"
]
],
[
[
"ep = LeoParser(RR_GRAMMAR)\ncolumns = ep.chart_parse(mystring, ep.start_symbol())\nr_table = ep.rearrange(columns)\nfor col in r_table:\n print(col, \"\\n\")",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1)\n<start>:= <A> |(0,2)\n<start>:= <A> |(0,3)\n<start>:= <A> |(0,4)\n<start>:= <A> |(0,5)\n<start>:= <A> |(0,6) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(1,2) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(2,3) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(3,4) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(4,5) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(5,6) \n\na chart[6]\n<A>:= |(6,6) \n\n"
]
],
[
[
"We save the result of rearrange before going into `parse_forest()`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def parse(self, text):\n cursor, states = self.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n\n self.r_table = self.rearrange(self.table)\n forest = self.extract_trees(self.parse_forest(self.table, start))\n for tree in forest:\n yield self.prune_tree(tree)",
"_____no_output_____"
]
],
[
[
"Finally, during `parse_forest()`, we first check to see if it is a transitive state, and if it is, expand it to the original sequence of states using `traverse_constraints()`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def parse_forest(self, chart, state):\n if isinstance(state, TState):\n self.expand_tstate(state.back(), state.e_col)\n \n return super().parse_forest(chart, state)",
"_____no_output_____"
]
],
[
[
"This completes our implementation of `LeoParser`.",
"_____no_output_____"
],
[
"We check whether the previously defined right recursive grammars parse and return the correct parse trees.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(RR_GRAMMAR).parse(mystring)\nfor tree in result:\n assert mystring == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR2).parse(mystring2)\nfor tree in result:\n assert mystring2 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR3).parse(mystring3)\nfor tree in result:\n assert mystring3 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR4).parse(mystring4)\nfor tree in result:\n assert mystring4 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR5).parse(mystring5)\nfor tree in result:\n assert mystring5 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR6).parse(mystring6)\nfor tree in result:\n assert mystring6 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR7).parse(mystring7)\nfor tree in result:\n assert mystring7 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(LR_GRAMMAR).parse(mystring)\nfor tree in result:\n assert mystring == tree_to_string(tree)",
"_____no_output_____"
],
[
"RR_GRAMMAR8 = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', 'a']\n}\nmystring8 = 'aa'",
"_____no_output_____"
],
[
"RR_GRAMMAR9 = {\n '<start>': ['<A>'],\n '<A>': ['<B><A>', '<B>'],\n '<B>': ['b']\n}\nmystring9 = 'bbbbbbb'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR8).parse(mystring8)\nfor tree in result:\n print(repr(tree_to_string(tree)))\n assert mystring8 == tree_to_string(tree)",
"'aa'\n'aa'\n"
],
[
"result = LeoParser(RR_GRAMMAR9).parse(mystring9)\nfor tree in result:\n print(repr(tree_to_string(tree)))\n assert mystring9 == tree_to_string(tree)",
"'bbbbbbb'\n'bbbbbbb'\n"
]
],
[
[
"### Exercise 6: Filtered Earley Parser",
"_____no_output_____"
],
[
"One of the problems with our Earley and Leo Parsers is that it can get stuck in infinite loops when parsing with grammars that contain token repetitions in alternatives. For example, consider the grammar below.",
"_____no_output_____"
]
],
[
[
"RECURSION_GRAMMAR: Grammar = {\n \"<start>\": [\"<A>\"],\n \"<A>\": [\"<A>\", \"<A>aa\", \"AA\", \"<B>\"],\n \"<B>\": [\"<C>\", \"<C>cc\", \"CC\"],\n \"<C>\": [\"<B>\", \"<B>bb\", \"BB\"]\n}",
"_____no_output_____"
]
],
[
[
"With this grammar, one can produce an infinite chain of derivations of `<A>`, (direct recursion) or an infinite chain of derivations of `<B> -> <C> -> <B> ...` (indirect recursion). The problem is that, our implementation can get stuck trying to derive one of these infinite chains. One possibility is to use the `LazyExtractor`. Another, is to simply avoid generating such chains.",
"_____no_output_____"
]
],
[
[
"from ExpectError import ExpectTimeout",
"_____no_output_____"
],
[
"with ExpectTimeout(1, print_traceback=False):\n mystring = 'AA'\n parser = LeoParser(RECURSION_GRAMMAR)\n tree, *_ = parser.parse(mystring)\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"RecursionError: maximum recursion depth exceeded (expected)\n"
]
],
[
[
"Can you implement a solution such that any tree that contains such a chain is discarded?",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
]
],
[
[
"class FilteredLeoParser(LeoParser):\n def forest(self, s, kind, seen, chart):\n return self.parse_forest(chart, s, seen) if kind == 'n' else (s, [])\n\n def parse_forest(self, chart, state, seen=None):\n if isinstance(state, TState):\n self.expand_tstate(state.back(), state.e_col)\n\n def was_seen(chain, s):\n if isinstance(s, str):\n return False\n if len(s.expr) > 1:\n return False\n return s in chain\n\n if len(state.expr) > 1: # things get reset if we have a non loop\n seen = set()\n elif seen is None: # initialization\n seen = {state}\n\n pathexprs = self.parse_paths(state.expr, chart, state.s_col.index,\n state.e_col.index) if state.expr else []\n return state.name, [[(s, k, seen | {s}, chart)\n for s, k in reversed(pathexpr)\n if not was_seen(seen, s)] for pathexpr in pathexprs]",
"_____no_output_____"
]
],
[
[
"With the `FilteredLeoParser`, we should be able to recover minimal parse trees in reasonable time.",
"_____no_output_____"
]
],
[
[
"mystring = 'AA'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'AAaa'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'AAaaaa'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'CC'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BBcc'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BB'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BBccbb'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"As can be seen, we are able to recover minimal parse trees without hitting on infinite chains.",
"_____no_output_____"
],
[
"### Exercise 7: Iterative Earley Parser\n\nRecursive algorithms are quite handy in some cases but sometimes we might want to have iteration instead of recursion due to memory or speed problems. \n\nCan you implement an iterative version of the `EarleyParser`? \n\n__Hint:__ In general, you can use a stack to replace a recursive algorithm with an iterative one. An easy way to do this is pushing the parameters onto a stack instead of passing them to the recursive function.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
],
[
"First, we define `parse_paths()` that extract paths from a parsed expression, which is very similar to the original.",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(EarleyParser):\n def parse_paths(self, named_expr_, chart, frm, til_):\n return_paths = []\n path_build_stack = [(named_expr_, til_, [])]\n\n def iter_paths(path_prefix, path, start, k, e):\n x = path_prefix + [(path, k)]\n if not e:\n return_paths.extend([x] if start == frm else [])\n else:\n path_build_stack.append((e, start, x))\n\n while path_build_stack:\n named_expr, til, path_prefix = path_build_stack.pop()\n *expr, var = named_expr\n\n starts = None\n if var not in self.cgrammar:\n starts = ([(var, til - len(var), 't')]\n if til > 0 and chart[til].letter == var else [])\n else:\n starts = [(s, s.s_col.index, 'n') for s in chart[til].states\n if s.finished() and s.name == var]\n\n for s, start, k in starts:\n iter_paths(path_prefix, s, start, k, expr)\n\n return return_paths",
"_____no_output_____"
]
],
[
[
"Next we used these paths to recover the forest data structure using `parse_forest()`. Since `parse_forest()` does not recurse, we reuse the original definition. Next, we define `extract_a_tree()`",
"_____no_output_____"
],
[
"Now we are ready to extract trees from the forest using `extract_a_tree()`",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(IterativeEarleyParser):\n def choose_a_node_to_explore(self, node_paths, level_count):\n first, *rest = node_paths\n return first\n\n def extract_a_tree(self, forest_node_):\n start_node = (forest_node_[0], [])\n tree_build_stack = [(forest_node_, start_node[-1], 0)]\n\n while tree_build_stack:\n forest_node, tree, level_count = tree_build_stack.pop()\n name, paths = forest_node\n\n if not paths:\n tree.append((name, []))\n else:\n new_tree = []\n current_node = self.choose_a_node_to_explore(paths, level_count)\n for p in reversed(current_node):\n new_forest_node = self.forest(*p)\n tree_build_stack.append((new_forest_node, new_tree, level_count + 1))\n tree.append((name, new_tree))\n\n return start_node",
"_____no_output_____"
]
],
[
[
"For now, we simply extract the first tree found.",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(IterativeEarleyParser):\n def extract_trees(self, forest):\n yield self.extract_a_tree(forest)",
"_____no_output_____"
]
],
[
[
"Let's see if it works with some of the grammars we have seen so far.",
"_____no_output_____"
]
],
[
[
"test_cases: List[Tuple[Grammar, str]] = [\n (A1_GRAMMAR, '1-2-3+4-5'),\n (A2_GRAMMAR, '1+2'),\n (A3_GRAMMAR, '1+2+3-6=6-1-2-3'),\n (LR_GRAMMAR, 'aaaaa'),\n (RR_GRAMMAR, 'aa'),\n (DIRECTLY_SELF_REFERRING, 'select a from a'),\n (INDIRECTLY_SELF_REFERRING, 'select a from a'),\n (RECURSION_GRAMMAR, 'AA'),\n (RECURSION_GRAMMAR, 'AAaaaa'),\n (RECURSION_GRAMMAR, 'BBccbb')\n]\n\nfor i, (grammar, text) in enumerate(test_cases):\n print(i, text)\n tree, *_ = IterativeEarleyParser(grammar).parse(text)\n assert text == tree_to_string(tree)",
"0 1-2-3+4-5\n1 1+2\n2 1+2+3-6=6-1-2-3\n3 aaaaa\n4 aa\n5 select a from a\n6 select a from a\n7 AA\n8 AAaaaa\n9 BBccbb\n"
]
],
[
[
"As can be seen, our `IterativeEarleyParser` is able to handle recursive grammars. However, it can only extract the first tree found. What should one do to get all possible parses? What we can do, is to keep track of options to explore at each `choose_a_node_to_explore()`. Next, capture in the nodes explored in a tree data structure, adding new paths each time a new leaf is expanded. See the `TraceTree` datastructure in the [chapter on Concolic fuzzing](ConcolicFuzzer.ipynb) for an example.",
"_____no_output_____"
],
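[
"As a hedged sketch (not part of the original implementation), one way to make the branch choice explicit is shown below, so that different fixed choices yield different trees; the subclass name and the `choice` attribute are hypothetical:\n```python\nclass ChoosyEarleyParser(IterativeEarleyParser):  # hypothetical name\n    def __init__(self, grammar, **kwargs):\n        super().__init__(grammar, **kwargs)\n        self.choice = 0  # assumed: fixed alternative index used at every branch\n\n    def choose_a_node_to_explore(self, node_paths, level_count):\n        # pick the alternative selected by self.choice instead of always the first\n        return node_paths[self.choice % len(node_paths)]\n```",
"_____no_output_____"
],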
[
"### Exercise 8: First Set of a Nonterminal\n\nWe previously gave a way to extract a the `nullable` (epsilon) set, which is often used for parsing.\nAlong with `nullable`, parsing algorithms often use two other sets [`first` and `follow`](https://en.wikipedia.org/wiki/Canonical_LR_parser#FIRST_and_FOLLOW_sets).\nThe first set of a terminal symbol is itself, and the first set of a nonterminal is composed of terminal symbols that can come at the beginning of any derivation\nof that nonterminal. The first set of any nonterminal that can derive the empty string should contain `EPSILON`. For example, using our `A1_GRAMMAR`, the first set of both `<expr>` and `<start>` is `{0,1,2,3,4,5,6,7,8,9}`. The extraction first set for any self-recursive nonterminal is simple enough. One simply has to recursively compute the first set of the first element of its choice expressions. The computation of `first` set for a self-recursive nonterminal is tricky. One has to recursively compute the first set until one is sure that no more terminals can be added to the first set.\n\nCan you implement the `first` set using our `fixpoint()` decorator?",
"_____no_output_____"
],
[
"**Solution.** The first set of all terminals is the set containing just themselves. So we initialize that first. Then we update the first set with rules that derive empty strings.",
"_____no_output_____"
]
],
[
[
"def firstset(grammar, nullable):\n first = {i: {i} for i in terminals(grammar)}\n for k in grammar:\n first[k] = {EPSILON} if k in nullable else set()\n return firstset_((rules(grammar), first, nullable))[1]",
"_____no_output_____"
]
],
[
[
"Finally, we rely on the `fixpoint` to update the first set with the contents of the current first set until the first set stops changing.",
"_____no_output_____"
]
],
[
[
"def first_expr(expr, first, nullable):\n tokens = set()\n for token in expr:\n tokens |= first[token]\n if token not in nullable:\n break\n return tokens",
"_____no_output_____"
],
[
"@fixpoint\ndef firstset_(arg):\n (rules, first, epsilon) = arg\n for A, expression in rules:\n first[A] |= first_expr(expression, first, epsilon)\n return (rules, first, epsilon)",
"_____no_output_____"
],
[
"firstset(canonical(A1_GRAMMAR), EPSILON)",
"_____no_output_____"
]
],
[
[
"### Exercise 9: Follow Set of a Nonterminal\n\nThe follow set definition is similar to the first set. The follow set of a nonterminal is the set of terminals that can occur just after that nonterminal is used in any derivation. The follow set of the start symbol is `EOF`, and the follow set of any nonterminal is the super set of first sets of all symbols that come after it in any choice expression.\n\nFor example, the follow set of `<expr>` in `A1_GRAMMAR` is the set `{EOF, +, -}`.\n\nAs in the previous exercise, implement the `followset()` using the `fixpoint()` decorator.",
"_____no_output_____"
],
[
"**Solution.** The implementation of `followset()` is similar to `firstset()`. We first initialize the follow set with `EOF`, get the epsilon and first sets, and use the `fixpoint()` decorator to iteratively compute the follow set until nothing changes.",
"_____no_output_____"
]
],
[
[
"EOF = '\\0'",
"_____no_output_____"
],
[
"def followset(grammar, start):\n follow = {i: set() for i in grammar}\n follow[start] = {EOF}\n\n epsilon = nullable(grammar)\n first = firstset(grammar, epsilon)\n return followset_((grammar, epsilon, first, follow))[-1]",
"_____no_output_____"
]
],
[
[
"Given the current follow set, one can update the follow set as follows:",
"_____no_output_____"
]
],
[
[
"@fixpoint\ndef followset_(arg):\n grammar, epsilon, first, follow = arg\n for A, expression in rules(grammar):\n f_B = follow[A]\n for t in reversed(expression):\n if t in grammar:\n follow[t] |= f_B\n f_B = f_B | first[t] if t in epsilon else (first[t] - {EPSILON})\n\n return (grammar, epsilon, first, follow)",
"_____no_output_____"
],
[
"followset(canonical(A1_GRAMMAR), START_SYMBOL)",
"_____no_output_____"
]
],
[
[
"### Exercise 10: A LL(1) Parser\n\nAs we mentioned previously, there exist other kinds of parsers that operate left-to-right with right most derivation (*LR(k)*) or left-to-right with left most derivation (*LL(k)*) with _k_ signifying the amount of lookahead the parser is permitted to use.\n\nWhat should one do with the lookahead? That lookahead can be used to determine which rule to apply. In the case of an *LL(1)* parser, the rule to apply is determined by looking at the _first_ set of the different rules. We previously implemented `first_expr()` that takes a an expression, the set of `nullables`, and computes the first set of that rule.\n\nIf a rule can derive an empty set, then that rule may also be applicable if of sees the `follow()` set of the corresponding nonterminal.",
"_____no_output_____"
],
[
"#### Part 1: A LL(1) Parsing Table\n\nThe first part of this exercise is to implement the _parse table_ that describes what action to take for an *LL(1)* parser on seeing a terminal symbol on lookahead. The table should be in the form of a _dictionary_ such that the keys represent the nonterminal symbol, and the value should contain another dictionary with keys as terminal symbols and the particular rule to continue parsing as the value.\n\nLet us illustrate this table with an example. The `parse_table()` method populates a `self.table` data structure that should conform to the following requirements:",
"_____no_output_____"
]
],
[
[
"class LL1Parser(Parser):\n def parse_table(self):\n self.my_rules = rules(self.cgrammar)\n self.table = ... # fill in here to produce\n\n def rules(self):\n for i, rule in enumerate(self.my_rules):\n print(i, rule)\n\n def show_table(self):\n ts = list(sorted(terminals(self.cgrammar)))\n print('Rule Name\\t| %s' % ' | '.join(t for t in ts))\n for k in self.table:\n pr = self.table[k]\n actions = list(str(pr[t]) if t in pr else ' ' for t in ts)\n print('%s \\t| %s' % (k, ' | '.join(actions)))",
"_____no_output_____"
]
],
[
[
"On invocation of `LL1Parser(A2_GRAMMAR).show_table()`\nIt should result in the following table:",
"_____no_output_____"
]
],
[
[
"for i, r in enumerate(rules(canonical(A2_GRAMMAR))):\n print(\"%d\\t %s := %s\" % (i, r[0], r[1]))",
"0\t <start> := ['<expr>']\n1\t <expr> := ['<integer>', '<expr_>']\n2\t <expr_> := ['+', '<expr>']\n3\t <expr_> := ['-', '<expr>']\n4\t <expr_> := []\n5\t <integer> := ['<digit>', '<integer_>']\n6\t <integer_> := ['<integer>']\n7\t <integer_> := []\n8\t <digit> := ['0']\n9\t <digit> := ['1']\n10\t <digit> := ['2']\n11\t <digit> := ['3']\n12\t <digit> := ['4']\n13\t <digit> := ['5']\n14\t <digit> := ['6']\n15\t <digit> := ['7']\n16\t <digit> := ['8']\n17\t <digit> := ['9']\n"
]
],
[
[
"|Rule Name || + | - | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9|\n|-----------||---|---|---|---|---|---|---|---|---|---|---|--|\n|start \t|| | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0|\n|expr \t|| | | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1|\n|expr_ \t|| 2 | 3 | | | | | | | | | | |\n|integer \t|| | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5|\n|integer_ \t|| 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6|\n|digit \t|| | | 8 | 9 |10 |11 |12 |13 |14 |15 |16 |17|",
"_____no_output_____"
],
[
"**Solution.** We define `predict()` as we explained before. Then we use the predicted rules to populate the parse table.",
"_____no_output_____"
]
],
[
[
"class LL1Parser(LL1Parser):\n def predict(self, rulepair, first, follow, epsilon):\n A, rule = rulepair\n rf = first_expr(rule, first, epsilon)\n if nullable_expr(rule, epsilon):\n rf |= follow[A]\n return rf\n\n def parse_table(self):\n self.my_rules = rules(self.cgrammar)\n epsilon = nullable(self.cgrammar)\n first = firstset(self.cgrammar, epsilon)\n # inefficient, can combine the three.\n follow = followset(self.cgrammar, self.start_symbol())\n\n ptable = [(i, self.predict(rule, first, follow, epsilon))\n for i, rule in enumerate(self.my_rules)]\n\n parse_tbl = {k: {} for k in self.cgrammar}\n\n for i, pvals in ptable:\n (k, expr) = self.my_rules[i]\n parse_tbl[k].update({v: i for v in pvals})\n\n self.table = parse_tbl",
"_____no_output_____"
],
[
"ll1parser = LL1Parser(A2_GRAMMAR)\nll1parser.parse_table()\nll1parser.show_table()",
"Rule Name\t| + | - | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n<start> \t| | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n<expr> \t| | | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1\n<expr_> \t| 2 | 3 | | | | | | | | | | \n<integer> \t| | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5\n<integer_> \t| 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6\n<digit> \t| | | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17\n"
]
],
[
[
"#### Part 2: The Parser\n\nOnce we have the parse table, implementing the parser is as follows: Consider the first item from the sequence of tokens to parse, and seed the stack with the start symbol.\n\nWhile the stack is not empty, extract the first symbol from the stack, and if the symbol is a terminal, verify that the symbol matches the item from the input stream. If the symbol is a nonterminal, use the symbol and input item to lookup the next rule from the parse table. Insert the rule thus found to the top of the stack. Keep track of the expressions being parsed to build up the parse table.\n\nUse the parse table defined previously to implement the complete LL(1) parser.",
"_____no_output_____"
],
[
"**Solution.** Here is the complete parser:",
"_____no_output_____"
]
],
[
[
"class LL1Parser(LL1Parser):\n def parse_helper(self, stack, inplst):\n inp, *inplst = inplst\n exprs = []\n while stack:\n val, *stack = stack\n if isinstance(val, tuple):\n exprs.append(val)\n elif val not in self.cgrammar: # terminal\n assert val == inp\n exprs.append(val)\n inp, *inplst = inplst or [None]\n else:\n if inp is not None:\n i = self.table[val][inp]\n _, rhs = self.my_rules[i]\n stack = rhs + [(val, len(rhs))] + stack\n return self.linear_to_tree(exprs)\n\n def parse(self, inp):\n self.parse_table()\n k, _ = self.my_rules[0]\n stack = [k]\n return self.parse_helper(stack, inp)\n\n def linear_to_tree(self, arr):\n stack = []\n while arr:\n elt = arr.pop(0)\n if not isinstance(elt, tuple):\n stack.append((elt, []))\n else:\n # get the last n\n sym, n = elt\n elts = stack[-n:] if n > 0 else []\n stack = stack[0:len(stack) - n]\n stack.append((sym, elts))\n assert len(stack) == 1\n return stack[0]",
"_____no_output_____"
],
[
"ll1parser = LL1Parser(A2_GRAMMAR)\ntree = ll1parser.parse('1+2')\ndisplay_tree(tree)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0b370d675edeae805eca49211299345b93484d0 | 42,106 | ipynb | Jupyter Notebook | 100_Numpy_exercises.ipynb | ViniciusgCaetano/numpy-100 | 27530eb6b171e304915db1d461a46c627016a9ed | [
"MIT"
] | null | null | null | 100_Numpy_exercises.ipynb | ViniciusgCaetano/numpy-100 | 27530eb6b171e304915db1d461a46c627016a9ed | [
"MIT"
] | null | null | null | 100_Numpy_exercises.ipynb | ViniciusgCaetano/numpy-100 | 27530eb6b171e304915db1d461a46c627016a9ed | [
"MIT"
] | null | null | null | 22.504543 | 297 | 0.485703 | [
[
[
"# 100 numpy exercises\n\nThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow\nand in the numpy documentation. The goal of this collection is to offer a quick reference for both old\nand new users but also to provide a set of exercises for those who teach.\n\n\nIf you find an error or think you've a better way to solve some of them, feel\nfree to open an issue at <https://github.com/rougier/numpy-100>.",
"_____no_output_____"
],
[
"File automatically generated. See the documentation to update questions/answers/hints programmatically.",
"_____no_output_____"
],
[
"Run the `initialize.py` module, then for each question you can query the\nanswer or an hint with `hint(n)` or `answer(n)` for `n` question number.",
"_____no_output_____"
]
],
[
[
"import initialise as ini",
"_____no_output_____"
]
],
[
[
"#### 1. Import the numpy package under the name `np` (★☆☆)",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"#### 2. Print the numpy version and the configuration (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.__version__",
"_____no_output_____"
]
],
[
[
"#### 3. Create a null vector of size 10 (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.zeros(10)",
"_____no_output_____"
]
],
[
[
"#### 4. How to find the memory size of any array (★☆☆)",
"_____no_output_____"
]
],
[
[
"random_array = np.array(45)\nrandom_array.nbytes",
"_____no_output_____"
]
],
[
[
"#### 5. How to get the documentation of the numpy add function from the command line? (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.info('add')",
" *** Found in numpy ***\nadd(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])\n\nAdd arguments element-wise.\n\nParameters\n----------\nx1, x2 : array_like\n The arrays to be added.\n If ``x1.shape != x2.shape``, they must be broadcastable to a common\n shape (which becomes the shape of the output).\nout : ndarray, None, or tuple of ndarray and None, optional\n A location into which the result is stored. If provided, it must have\n a shape that the inputs broadcast to. If not provided or None,\n a freshly-allocated array is returned. A tuple (possible only as a\n keyword argument) must have length equal to the number of outputs.\nwhere : array_like, optional\n This condition is broadcast over the input. At locations where the\n condition is True, the `out` array will be set to the ufunc result.\n Elsewhere, the `out` array will retain its original value.\n Note that if an uninitialized `out` array is created via the default\n ``out=None``, locations within it where the condition is False will\n remain uninitialized.\n**kwargs\n For other keyword-only arguments, see the\n :ref:`ufunc docs <ufuncs.kwargs>`.\n\nReturns\n-------\nadd : ndarray or scalar\n The sum of `x1` and `x2`, element-wise.\n This is a scalar if both `x1` and `x2` are scalars.\n\nNotes\n-----\nEquivalent to `x1` + `x2` in terms of array broadcasting.\n\nExamples\n--------\n>>> np.add(1.0, 4.0)\n5.0\n>>> x1 = np.arange(9.0).reshape((3, 3))\n>>> x2 = np.arange(3.0)\n>>> np.add(x1, x2)\narray([[ 0., 2., 4.],\n [ 3., 5., 7.],\n [ 6., 8., 10.]])\n----------------------------------------------------------------------------\n\n *** Repeat reference found in numpy.core *** \n *** Found in numpy.core.defchararray ***\n add(*args, **kwargs)\n\nReturn element-wise string concatenation for two arrays of str or unicode.\n\nArrays `x1` and `x2` must have the same shape.\n\nParameters\n----------\nx1 : array_like of str or unicode\n Input array.\nx2 : array_like of str or unicode\n Input array.\n\nReturns\n-------\nadd : ndarray\n Output array of `string_` or `unicode_`, depending on input types\n of the same shape as `x1` and `x2`.\n----------------------------------------------------------------------------\n *** Found in numpy.ma ***\nadd(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])\n\nAdd arguments element-wise.\n\nParameters\n----------\nx1, x2 : array_like\n The arrays to be added.\n If ``x1.shape != x2.shape``, they must be broadcastable to a common\n shape (which becomes the shape of the output).\nout : ndarray, None, or tuple of ndarray and None, optional\n A location into which the result is stored. If provided, it must have\n a shape that the inputs broadcast to. If not provided or None,\n a freshly-allocated array is returned. A tuple (possible only as a\n keyword argument) must have length equal to the number of outputs.\nwhere : array_like, optional\n This condition is broadcast over the input. 
At locations where the\n condition is True, the `out` array will be set to the ufunc result.\n Elsewhere, the `out` array will retain its original value.\n Note that if an uninitialized `out` array is created via the default\n ``out=None``, locations within it where the condition is False will\n remain uninitialized.\n**kwargs\n For other keyword-only arguments, see the\n :ref:`ufunc docs <ufuncs.kwargs>`.\n\nReturns\n-------\nadd : ndarray or scalar\n The sum of `x1` and `x2`, element-wise.\n This is a scalar if both `x1` and `x2` are scalars.\n\nNotes\n-----\nEquivalent to `x1` + `x2` in terms of array broadcasting.\n\nExamples\n--------\n>>> np.add(1.0, 4.0)\n5.0\n>>> x1 = np.arange(9.0).reshape((3, 3))\n>>> x2 = np.arange(3.0)\n>>> np.add(x1, x2)\narray([[ 0., 2., 4.],\n [ 3., 5., 7.],\n [ 6., 8., 10.]])\n----------------------------------------------------------------------------\n\n *** Repeat reference found in numpy.core._multiarray_umath *** \n\n *** Repeat reference found in numpy.core.multiarray *** \n\n *** Repeat reference found in numpy.core.umath *** \n\n *** Repeat reference found in numpy.core.numeric *** \n\n *** Repeat reference found in numpy.linalg.linalg *** \n\n *** Repeat reference found in numpy.lib.function_base *** \n\n *** Repeat reference found in numpy.ma.core *** \n\n *** Repeat reference found in numpy.ma.extras *** \n *** Found in operator ***\nSame as a + b.\n----------------------------------------------------------------------------\n\n *** Total of 13 references found. ***\n"
]
],
[
[
"#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)",
"_____no_output_____"
]
],
[
[
"x = np.zeros(10)\nx[4] = 1",
"_____no_output_____"
]
],
[
[
"#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)",
"_____no_output_____"
]
],
[
[
"a = np.array(range(10,50))",
"_____no_output_____"
]
],
[
[
"#### 8. Reverse a vector (first element becomes last) (★☆☆)",
"_____no_output_____"
]
],
[
[
"a[::-1]",
"_____no_output_____"
]
],
[
[
"#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.reshape(np.array(range(0,9)), (3, 3))",
"_____no_output_____"
]
],
[
[
"#### 10. Find indices of non-zero elements from [1,2,0,0,4,0] (★☆☆)",
"_____no_output_____"
]
],
[
[
"a = [1,2,0,0,4,0]\nnp.nonzero(a)",
"_____no_output_____"
]
],
[
[
"#### 11. Create a 3x3 identity matrix (★☆☆)",
"_____no_output_____"
]
],
[
[
"np.eye(3)",
"_____no_output_____"
]
],
[
[
"#### 12. Create a 3x3x3 array with random values (★☆☆)",
"_____no_output_____"
]
],
[
[
"from numpy import random\nrandom.randint(10, size=(3,3))",
"_____no_output_____"
]
],
[
[
"#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)",
"_____no_output_____"
]
],
[
[
"from numpy import random\nx = random.randint(100, size=(10,10))\n\nx, np.amax(x), np.amin(x)",
"_____no_output_____"
]
],
[
[
"#### 14. Create a random vector of size 30 and find the mean value (★☆☆)",
"_____no_output_____"
]
],
[
[
"from numpy import random\nnp.mean(random.randint(10, size=(30)))",
"_____no_output_____"
]
],
[
[
"#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)",
"_____no_output_____"
]
],
[
[
"a = np.ones((4,4))\na[1: -1, 1:-1] = 0\na",
"_____no_output_____"
]
],
[
[
"#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)",
"_____no_output_____"
]
],
[
[
"a = np.ones((3,3))\nnp.pad(a, 1)",
"_____no_output_____"
]
],
[
[
"#### 17. What is the result of the following expression? (★☆☆)\n```python\n0 * np.nan\nnp.nan == np.nan\nnp.inf > np.nan\nnp.nan - np.nan\nnp.nan in set([np.nan])\n0.3 == 3 * 0.1\n```",
"_____no_output_____"
],
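[
"A quick check (a sketch; the results follow from IEEE-754 NaN semantics and floating-point rounding):\n```python\nprint(0 * np.nan)               # nan\nprint(np.nan == np.nan)         # False\nprint(np.inf > np.nan)          # False\nprint(np.nan - np.nan)          # nan\nprint(np.nan in set([np.nan]))  # True: set membership checks identity first\nprint(0.3 == 3 * 0.1)           # False: 3 * 0.1 is 0.30000000000000004\n```",
"_____no_output_____"
],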
[
"#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)",
"_____no_output_____"
]
],
[
[
"x = random.randint(5, size=(5, 5))\nnp.diag(x) = np.array([1, 2, 3, 4, 5])\n",
"_____no_output_____"
]
],
[
[
"#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)",
"_____no_output_____"
],
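[
"One possible construction (a sketch) using strided slicing:\n```python\nZ = np.zeros((8, 8), dtype=int)\nZ[1::2, ::2] = 1  # odd rows, even columns\nZ[::2, 1::2] = 1  # even rows, odd columns\nZ\n```",
"_____no_output_____"
],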
[
"#### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?",
"_____no_output_____"
],
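[
"`np.unravel_index` converts a flat index into multi-dimensional coordinates; with 0-based indexing the 100th element has flat index 99 (a sketch):\n```python\nnp.unravel_index(99, (6, 7, 8))  # (1, 5, 3)\n```",
"_____no_output_____"
],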
[
"#### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)",
"_____no_output_____"
],
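[
"A sketch using `np.tile` to repeat a 2x2 pattern four times in each direction:\n```python\nnp.tile(np.array([[0, 1], [1, 0]]), (4, 4))\n```",
"_____no_output_____"
],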
[
"#### 22. Normalize a 5x5 random matrix (★☆☆)",
"_____no_output_____"
],
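[
"One common reading of 'normalize' is zero mean and unit variance; a sketch under that assumption:\n```python\nZ = np.random.random((5, 5))\nZ = (Z - Z.mean()) / Z.std()\nZ\n```",
"_____no_output_____"
],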
[
"#### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)",
"_____no_output_____"
],
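[
"A sketch of a structured dtype with four unsigned bytes:\n```python\ncolor = np.dtype([('r', np.ubyte),\n                  ('g', np.ubyte),\n                  ('b', np.ubyte),\n                  ('a', np.ubyte)])\n```",
"_____no_output_____"
],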
[
"#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)",
"_____no_output_____"
],
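[
"A sketch of the real matrix product using the `@` operator (equivalently `np.dot`):\n```python\nnp.ones((5, 3)) @ np.ones((3, 2))  # shape (5, 2)\n```",
"_____no_output_____"
],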
[
"#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)",
"_____no_output_____"
],
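[
"A sketch using a boolean mask for the in-place negation (taking 3 and 8 as exclusive bounds):\n```python\nZ = np.arange(11)\nZ[(3 < Z) & (Z < 8)] *= -1\nZ\n```",
"_____no_output_____"
],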
[
"#### 26. What is the output of the following script? (★☆☆)\n```python\n# Author: Jake VanderPlas\n\nprint(sum(range(5),-1))\nfrom numpy import *\nprint(sum(range(5),-1))\n```",
"_____no_output_____"
],
[
"#### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)\n```python\nZ**Z\n2 << Z >> 2\nZ <- Z\n1j*Z\nZ/1/1\nZ<Z>Z\n```",
"_____no_output_____"
],
[
"#### 28. What are the result of the following expressions?\n```python\nnp.array(0) / np.array(0)\nnp.array(0) // np.array(0)\nnp.array([np.nan]).astype(int).astype(float)\n```",
"_____no_output_____"
],
[
"#### 29. How to round away from zero a float array ? (★☆☆)",
"_____no_output_____"
],
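[
"A sketch combining `np.ceil` on the magnitudes with `np.copysign` to restore the signs:\n```python\nZ = np.random.uniform(-10, 10, 10)\nnp.copysign(np.ceil(np.abs(Z)), Z)\n```",
"_____no_output_____"
],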
[
"#### 30. How to find common values between two arrays? (★☆☆)",
"_____no_output_____"
],
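[
"`np.intersect1d` returns the sorted unique common values; a sketch:\n```python\nA = np.random.randint(0, 10, 10)\nB = np.random.randint(0, 10, 10)\nnp.intersect1d(A, B)\n```",
"_____no_output_____"
],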
[
"#### 31. How to ignore all numpy warnings (not recommended)? (★☆☆)",
"_____no_output_____"
],
[
"#### 32. Is the following expressions true? (★☆☆)\n```python\nnp.sqrt(-1) == np.emath.sqrt(-1)\n```",
"_____no_output_____"
],
[
"#### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆)",
"_____no_output_____"
],
[
"#### 34. How to get all the dates corresponding to the month of July 2016? (★★☆)",
"_____no_output_____"
],
[
"#### 35. How to compute ((A+B)*(-A/2)) in place (without copy)? (★★☆)",
"_____no_output_____"
],
[
"#### 36. Extract the integer part of a random array of positive numbers using 4 different methods (★★☆)",
"_____no_output_____"
],
[
"#### 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)",
"_____no_output_____"
],
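[
"A sketch relying on broadcasting to add the row range to every row:\n```python\nnp.zeros((5, 5)) + np.arange(5)\n```",
"_____no_output_____"
],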
[
"#### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)",
"_____no_output_____"
],
[
"#### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)",
"_____no_output_____"
],
[
"#### 40. Create a random vector of size 10 and sort it (★★☆)",
"_____no_output_____"
],
[
"#### 41. How to sum a small array faster than np.sum? (★★☆)",
"_____no_output_____"
],
[
"#### 42. Consider two random array A and B, check if they are equal (★★☆)",
"_____no_output_____"
],
[
"#### 43. Make an array immutable (read-only) (★★☆)",
"_____no_output_____"
],
[
"#### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)",
"_____no_output_____"
],
[
"#### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)",
"_____no_output_____"
],
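[
"A sketch using `argmax` to locate the maximum and zero it out (only the first maximum if there are ties):\n```python\nZ = np.random.random(10)\nZ[Z.argmax()] = 0\nZ\n```",
"_____no_output_____"
],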
[
"#### 46. Create a structured array with `x` and `y` coordinates covering the [0,1]x[0,1] area (★★☆)",
"_____no_output_____"
],
[
"#### 47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))",
"_____no_output_____"
],
[
"#### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)",
"_____no_output_____"
],
[
"#### 49. How to print all the values of an array? (★★☆)",
"_____no_output_____"
],
[
"#### 50. How to find the closest value (to a given scalar) in a vector? (★★☆)",
"_____no_output_____"
],
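[
"A sketch using the index of the smallest absolute difference:\n```python\nZ = np.arange(100)\nv = np.random.uniform(0, 100)\nZ[np.abs(Z - v).argmin()]\n```",
"_____no_output_____"
],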
[
"#### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)",
"_____no_output_____"
],
[
"#### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)",
"_____no_output_____"
],
[
"#### 53. How to convert a float (32 bits) array into an integer (32 bits) in place?",
"_____no_output_____"
],
[
"#### 54. How to read the following file? (★★☆)\n```\n1, 2, 3, 4, 5\n6, , , 7, 8\n , , 9,10,11\n```",
"_____no_output_____"
],
[
"#### 55. What is the equivalent of enumerate for numpy arrays? (★★☆)",
"_____no_output_____"
],
[
"#### 56. Generate a generic 2D Gaussian-like array (★★☆)",
"_____no_output_____"
],
[
"#### 57. How to randomly place p elements in a 2D array? (★★☆)",
"_____no_output_____"
],
[
"#### 58. Subtract the mean of each row of a matrix (★★☆)",
"_____no_output_____"
],
[
"#### 59. How to sort an array by the nth column? (★★☆)",
"_____no_output_____"
],
[
"#### 60. How to tell if a given 2D array has null columns? (★★☆)",
"_____no_output_____"
],
[
"#### 61. Find the nearest value from a given value in an array (★★☆)",
"_____no_output_____"
],
[
"#### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)",
"_____no_output_____"
],
[
"#### 63. Create an array class that has a name attribute (★★☆)",
"_____no_output_____"
],
[
"#### 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)",
"_____no_output_____"
],
[
"#### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)",
"_____no_output_____"
],
[
"#### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★☆)",
"_____no_output_____"
],
[
"#### 67. Considering a four dimensions array, how to get sum over the last two axis at once? (★★★)",
"_____no_output_____"
],
[
"#### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)",
"_____no_output_____"
],
[
"#### 69. How to get the diagonal of a dot product? (★★★)",
"_____no_output_____"
],
[
"#### 70. Consider the vector [1, 2, 3, 4, 5], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)",
"_____no_output_____"
],
[
"#### 71. Consider an array of dimension (5,5,3), how to mulitply it by an array with dimensions (5,5)? (★★★)",
"_____no_output_____"
],
[
"#### 72. How to swap two rows of an array? (★★★)",
"_____no_output_____"
],
[
"#### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)",
"_____no_output_____"
],
[
"#### 74. Given a sorted array C that corresponds to a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)",
"_____no_output_____"
],
[
"#### 75. How to compute averages using a sliding window over an array? (★★★)",
"_____no_output_____"
],
[
"#### 76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z[0],Z[1],Z[2]) and each subsequent row is shifted by 1 (last row should be (Z[-3],Z[-2],Z[-1]) (★★★)",
"_____no_output_____"
],
[
"#### 77. How to negate a boolean, or to change the sign of a float inplace? (★★★)",
"_____no_output_____"
],
[
"#### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0[i],P1[i])? (★★★)",
"_____no_output_____"
],
[
"#### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P[j]) to each line i (P0[i],P1[i])? (★★★)",
"_____no_output_____"
],
[
"#### 80. Consider an arbitrary array, write a function that extract a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★)",
"_____no_output_____"
],
[
"#### 81. Consider an array Z = [1,2,3,4,5,6,7,8,9,10,11,12,13,14], how to generate an array R = [[1,2,3,4], [2,3,4,5], [3,4,5,6], ..., [11,12,13,14]]? (★★★)",
"_____no_output_____"
],
[
"#### 82. Compute a matrix rank (★★★)",
"_____no_output_____"
],
[
"#### 83. How to find the most frequent value in an array?",
"_____no_output_____"
],
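[
"For arrays of non-negative integers, `np.bincount` gives a direct sketch:\n```python\nZ = np.random.randint(0, 10, 50)\nnp.bincount(Z).argmax()\n```",
"_____no_output_____"
],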
[
"#### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)",
"_____no_output_____"
],
[
"#### 85. Create a 2D array subclass such that Z[i,j] == Z[j,i] (★★★)",
"_____no_output_____"
],
[
"#### 86. Consider a set of p matrices wich shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of of the p matrix products at once? (result has shape (n,1)) (★★★)",
"_____no_output_____"
],
[
"#### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)",
"_____no_output_____"
],
[
"#### 88. How to implement the Game of Life using numpy arrays? (★★★)",
"_____no_output_____"
],
[
"#### 89. How to get the n largest values of an array (★★★)",
"_____no_output_____"
],
[
"#### 90. Given an arbitrary number of vectors, build the cartesian product (every combinations of every item) (★★★)",
"_____no_output_____"
],
[
"#### 91. How to create a record array from a regular array? (★★★)",
"_____no_output_____"
],
[
"#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)",
"_____no_output_____"
],
[
"#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)",
"_____no_output_____"
],
[
"#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. [2,2,3]) (★★★)",
"_____no_output_____"
],
[
"#### 95. Convert a vector of ints into a matrix binary representation (★★★)",
"_____no_output_____"
],
[
"#### 96. Given a two dimensional array, how to extract unique rows? (★★★)",
"_____no_output_____"
],
[
"#### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)",
"_____no_output_____"
],
[
"#### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?",
"_____no_output_____"
],
[
"#### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)",
"_____no_output_____"
],
[
"#### 100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0b37a35efdec346f5cd5ba2d10352686ed7a62b | 15,325 | ipynb | Jupyter Notebook | notebooks/extract data from website using requests.ipynb | vcprakash217/titanic-disaster-prediction-model | c3f938a921f819fdd09d6186c24e05e9cfc6ed78 | [
"MIT"
] | null | null | null | notebooks/extract data from website using requests.ipynb | vcprakash217/titanic-disaster-prediction-model | c3f938a921f819fdd09d6186c24e05e9cfc6ed78 | [
"MIT"
] | null | null | null | notebooks/extract data from website using requests.ipynb | vcprakash217/titanic-disaster-prediction-model | c3f938a921f819fdd09d6186c24e05e9cfc6ed78 | [
"MIT"
] | null | null | null | 39.094388 | 1,012 | 0.497618 | [
[
[
"from dotenv import load_dotenv, find_dotenv",
"_____no_output_____"
],
[
"dotenv_path = find_dotenv()\nload_dotenv(dotenv_path)",
"_____no_output_____"
],
[
"import os\nKAGGLE_USERNAME = os.environ.get('KAGGLE_USERNAME')\nKAGGLE_PWD = os.environ.get('KAGGLE_PWD')",
"_____no_output_____"
],
[
"import requests\nfrom requests import session",
"_____no_output_____"
],
[
"payload = {\n 'action': 'login',\n 'username': KAGGLE_USERNAME,\n 'password': KAGGLE_PWD\n}\n\nurl = 'https://www.kaggle.com/c/titanic/download/train.csv'\n\nwith session() as c:\n c.post('https://www.kaggle.com/account/login', data=payload)\n response = c.get(url)\n print(response.text)",
"vcprakash217\n<!DOCTYPE html>\n<html>\n<head>\n <title>Kaggle: Your Home for Data Science</title>\n <meta charset=\"utf-8\" />\n <meta name=\"robots\" content=\"index, follow\" />\n <meta name=\"theme-color\" content=\"#008ABC\" />\n <link rel=\"dns-prefetch\" href=\"https://www.google-analytics.com\" /><link rel=\"dns-prefetch\" href=\"https://stats.g.doubleclick.net\" /><link rel=\"dns-prefetch\" href=\"https://js.intercomcdn.com\" /><link rel=\"dns-prefetch\" href=\"https://kaggle2.blob.core.windows.net\" />\n <link href=\"/static/images/favicon.ico\" rel=\"shortcut icon\" type=\"image/x-icon\" />\n <link rel=\"manifest\" href=\"/static/json/manifest.json\">\n <link href=\"//fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,700italic\" rel='stylesheet' type='text/css'>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"/static/assets/vendor.css?v=e5d30288f769\" />\n <link rel=\"stylesheet\" type=\"text/css\" href=\"/static/assets/app.css?v=118b90a01c14\" />\n \n \n \n \n \n <script>\n window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments) }; ga.l = +new Date;\n ga('create', 'UA-12629138-1', 'auto');\n ga('set', 'displayFeaturesTask', null);\n ga('send', 'pageview');\n </script>\n <script async src=\"https://www.google-analytics.com/analytics.js\"></script>\n\n<script async src=\"https://www.googletagmanager.com/gtag/js?id=AW-955737689\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag() { dataLayer.push(arguments); }\n gtag('js', new Date());\n gtag('config', 'AW-955737689');\n</script>\n\n \n<script>\n !function(f,b,e,v,n,t,s)\n {if(f.fbq)return;n=f.fbq=function(){n.callMethod?\n n.callMethod.apply(n,arguments):n.queue.push(arguments)};\n if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';\n n.queue=[];t=b.createElement(e);t.async=!0;\n t.src=v;s=b.getElementsByTagName(e)[0];\n s.parentNode.insertBefore(t,s)}(window,document,'script',\n 'https://connect.facebook.net/en_US/fbevents.js');\n fbq(\"set\", \"autoConfig\", \"false\", \"136809193586742\");\n fbq('init', '136809193586742'); \n fbq('track', 'PageView');\n</script>\n<noscript>\n <img height=\"1\" width=\"1\" src=\"https://www.facebook.com/tr?id=136809193586742&ev=PageView&noscript=1\"/>\n</noscript>\n\n<script>window.intercomSettings = {\"app_id\":\"koj6gxx6\"};</script> <script>(function () { var w = window; var ic = w.Intercom; if (typeof ic === \"function\") { ic('reattach_activator'); ic('update', intercomSettings); } else { var d = document; var i = function () { i.c(arguments) }; i.q = []; i.c = function (args) { i.q.push(args) }; w.Intercom = i; function l() { var s = d.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = 'https://widget.intercom.io/widget/koj6gxx6'; var x = d.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); } if (w.attachEvent) { w.attachEvent('onload', l); } else { w.addEventListener('load', l, false); } } })()</script>\n \n \n\n \n \n\n \n \n\n\n \n <script>let useKaggleAnalytics = true;</script>\n\n <script src=\"/static/assets/manifest.js?v=93edf14d10b2\"></script>\n<script src=\"/static/assets/vendor.js?v=78b99511a8df\"></script>\n</head>\n<body>\n \n\n\n\n\n\n\n\n\n<div class=\"site-layout\">\n\n <div class=\"site-layout__main-content\">\n \n\n\n\n<div data-component-name=\"LoginPage\" style=\"display: flex; flex-direction: column; flex: 1 0 auto;\"></div><script>var 
Kaggle=window.Kaggle||{};Kaggle.State=Kaggle.State||[];Kaggle.State.push({\"errors\":[],\"showCaptcha\":false});performance && performance.mark && performance.mark(\"LoginPage.componentCouldBootstrap\");</script>\n\n </div>\n\n</div>\n\n\n\n<script type=\"text/javascript\">\n var Kaggle = Kaggle || {};\n\n Kaggle.Current = {\n antiForgeryToken: 'hK7dBb2Q413aF9OA0CqE2e5fHImINoH5t91Hl_R9gArQq2ktCxlDX-sMrArmZmIJ5VOgCzVaOpxUDa8EtPESFys_ho81',\n isAnonymous: true,\n isFullScreen: false,\n analyticsToken: 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NDAwOTM4OTksIlVzZXJJZCI6MH0.4qV4wWNP3AIJTMRsuNNCFyGXtsGqeNCbrdur3y9bzWI',\n analyticsTokenExpiry: 15,\n internetKernelsEnabled: false,\n \n \n \n \n \n \n \n \n \n \n \n }\n Kaggle.Current.log = function(){};\n Kaggle.Current.warn = function(){};\n\n var decodeUserDisplayName = function () {\n var escapedUserDisplayName = Kaggle.Current.userDisplayNameEscaped || \"\";\n try {\n var textVersion = new DOMParser().parseFromString(escapedUserDisplayName, \"text/html\").documentElement.textContent;\n if (textVersion) {\n return textVersion;\n }\n } catch(ex) {}\n \n return escapedUserDisplayName;\n }\n Kaggle.Current.userDisplayName = decodeUserDisplayName();\n</script>\n\n\n\n\n\n<script type=\"text/javascript\">\n var Kaggle = Kaggle || {};\n Kaggle.PageMessages = [];\n</script>\n\n\n\n\n\n\n\n\n\n\n\n\n <script src=\"/static/assets/app.js?v=b43997f8eb07\"></script>\n \n <script>\n (function() {\n if ('serviceWorker' in navigator) {\n \n navigator.serviceWorker.register(\"/static/assets/service-worker.js\").then(function(reg) {\n \n reg.onupdatefound = function() {\n \n var installingWorker = reg.installing;\n installingWorker.onstatechange = function() {\n switch (installingWorker.state) {\n case 'installed':\n if (navigator.serviceWorker.controller) {\n \n console.log('New or updated content is available.');\n } else {\n \n console.log('Content is now available offline!');\n }\n break;\n case 'redundant':\n console.error('The installing service worker became redundant.');\n break;\n }\n };\n };\n }).catch(function(e) {\n console.error('Error during service worker registration:', e);\n });\n }\n })();\n </script>\n</body>\n</html>\n\n"
],
[
"payload = {\n 'action': 'login',\n 'username': KAGGLE_USERNAME,\n 'password': KAGGLE_PWD\n}\n\ndef extract_data(url, file_path):\n with session() as c:\n c.post('https://www.kaggle.com/account/login', data=payload)\n with open(file_path, 'w') as handle:\n response = c.get(url, stream=True)\n for block in response.iter_content(1024):\n handle.write(block)",
"_____no_output_____"
],
[
"train_url = 'https://www.kaggle.com/c/titanic/download/train.csv'\ntest_url = 'https://www.kaggle.com/c/titanic/download/test.csv'\n\nraw_data_path = os.path.join(os.path.pardir, 'data', 'raw')\ntrain_data_path = os.path.join(raw_data_path, 'train.csv')\ntest_data_path = os.path.join(raw_data_path, 'test.csv')\n\nextract_data(train_url, train_data_path)\nextract_data(test_url, test_data_path)",
"_____no_output_____"
],
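[
"# A quick hedged sanity check (an addition, not part of the original flow): when the Kaggle login\n# fails, the downloaded 'csv' actually starts with HTML, so peeking at the first bytes catches that.\nwith open(train_data_path, 'rb') as f:\n    print(f.read(80))",
"_____no_output_____"
],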
[
"! ls",
"Extract data from relational databases.ipynb\r\nclassroomDB.db\r\nextract-titanic-data.ipynb\r\n"
],
[
"! ls -l ../data/raw",
"total 0\r\n-rw-r--r-- 1 pv032381 1745752250 0 Oct 20 22:32 train.csv\r\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b381738267fd95ba14110b6123101d4a294207 | 7,758 | ipynb | Jupyter Notebook | 2Category_BarChart.ipynb | sgmcdonnell/oil-gas_functions | 6185c2159084d6508fca55d9ea389359c405c506 | [
"MIT"
] | 1 | 2020-09-06T05:31:02.000Z | 2020-09-06T05:31:02.000Z | 2Category_BarChart.ipynb | sgmcdonnell/oil-gas_functions | 6185c2159084d6508fca55d9ea389359c405c506 | [
"MIT"
] | null | null | null | 2Category_BarChart.ipynb | sgmcdonnell/oil-gas_functions | 6185c2159084d6508fca55d9ea389359c405c506 | [
"MIT"
] | null | null | null | 41.935135 | 246 | 0.494844 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b3904add1e8ce1897598d6870230980c993450 | 61,309 | ipynb | Jupyter Notebook | Tutorials/Boston Housing - XGBoost (Batch Transform) - Low Level.ipynb | jobquiroz/sagemaker-deployment | 758358472c564407c450bac081419768aa6afadb | [
"MIT"
] | null | null | null | Tutorials/Boston Housing - XGBoost (Batch Transform) - Low Level.ipynb | jobquiroz/sagemaker-deployment | 758358472c564407c450bac081419768aa6afadb | [
"MIT"
] | null | null | null | Tutorials/Boston Housing - XGBoost (Batch Transform) - Low Level.ipynb | jobquiroz/sagemaker-deployment | 758358472c564407c450bac081419768aa6afadb | [
"MIT"
] | null | null | null | 71.041715 | 15,072 | 0.72681 | [
[
[
"# Predicting Boston Housing Prices\n\n## Using XGBoost in SageMaker (Batch Transform)\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nAs an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.\n\nThe documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/)\n\n## General Outline\n\nTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nIn this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.",
"_____no_output_____"
]
],
[
[
"# Make sure that we use SageMaker 1.x\n!pip install sagemaker==1.72.0",
"Collecting sagemaker==1.72.0\n Downloading sagemaker-1.72.0.tar.gz (297 kB)\n\u001b[K |████████████████████████████████| 297 kB 41.4 MB/s eta 0:00:01\n\u001b[?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.19)\nRequirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.18.1)\nRequirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)\nRequirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)\nRequirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)\nCollecting smdebug-rulesconfig==0.1.4\n Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB)\nRequirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (2.0.0)\nRequirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.1)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)\nRequirement already satisfied: botocore<1.20.0,>=1.19.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.19)\nRequirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.14.0)\nRequirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (45.2.0.post20200210)\nRequirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (2.2.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.6)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)\nRequirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.19->boto3>=1.14.12->sagemaker==1.72.0) (1.25.10)\nBuilding wheels for collected packages: sagemaker\n Building wheel for sagemaker (setup.py) ... 
\u001b[?25ldone\n\u001b[?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=a54fee13acf38a3078ba71a95b0fcce300019dad296f9efd0342d37c1b8351d6\n Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7\nSuccessfully built sagemaker\nInstalling collected packages: smdebug-rulesconfig, sagemaker\n Attempting uninstall: smdebug-rulesconfig\n Found existing installation: smdebug-rulesconfig 0.1.6\n Uninstalling smdebug-rulesconfig-0.1.6:\n Successfully uninstalled smdebug-rulesconfig-0.1.6\n Attempting uninstall: sagemaker\n Found existing installation: sagemaker 2.16.4.dev0\n Uninstalling sagemaker-2.16.4.dev0:\n Successfully uninstalled sagemaker-2.16.4.dev0\nSuccessfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4\n\u001b[33mWARNING: You are using pip version 20.0.2; however, version 20.3.3 is available.\nYou should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command.\u001b[0m\n"
]
],
[
[
"## Step 0: Setting up the notebook\n\nWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport os\n\nimport time\nfrom time import gmtime, strftime\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_boston\nimport sklearn.model_selection",
"_____no_output_____"
]
],
[
[
"In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker import get_execution_role\nfrom sagemaker.amazon.amazon_estimator import get_image_uri\n\n# This is an object that represents the SageMaker session that we are currently operating in. This\n# object contains some useful information that we will need to access later such as our region.\nsession = sagemaker.Session()\n\n# This is an object that represents the IAM role that we are currently assigned. When we construct\n# and launch the training job later we will need to tell it what IAM role it should have. Since our\n# use case is relatively simple we will simply assign the training job the role we currently have.\nrole = get_execution_role()",
"_____no_output_____"
]
],
[
[
"## Step 1: Downloading the data\n\nFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.",
"_____no_output_____"
]
],
[
[
"boston = load_boston()",
"_____no_output_____"
]
],
[
[
"## Step 2: Preparing and splitting the data\n\nGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.",
"_____no_output_____"
]
],
[
[
"# First we package up the input data and the target variable (the median value) as pandas dataframes. This\n# will make saving the data to a file a little easier later on.\n\nX_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)\nY_bos_pd = pd.DataFrame(boston.target)\n\n# We split the dataset into 2/3 training and 1/3 testing sets.\nX_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)\n\n# Then we split the training set further into 2/3 training and 1/3 validation sets.\nX_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)",
"_____no_output_____"
]
],
[
[
"## Step 3: Uploading the data files to S3\n\nWhen a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.\n\n### Save the data locally\n\nFirst we need to create the test, train and validation csv files which we will then upload to S3.",
"_____no_output_____"
]
],
[
[
"# This is our local data directory. We need to make sure that it exists.\ndata_dir = '../data/boston'\nif not os.path.exists(data_dir):\n os.makedirs(data_dir)",
"_____no_output_____"
],
[
"# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header\n# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and\n# validation data, it is assumed that the first entry in each row is the target variable.\n\nX_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)\n\npd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)\npd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)",
"_____no_output_____"
]
],
[
[
"### Upload to S3\n\nSince we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.",
"_____no_output_____"
]
],
[
[
"prefix = 'boston-xgboost-LL'\n\ntest_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)\nval_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)\ntrain_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)",
"_____no_output_____"
]
],
[
[
"## Step 4: Train and construct the XGBoost model\n\nNow that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself.\n\n### Set up the training job\n\nFirst, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.",
"_____no_output_____"
]
],
[
[
"# We will need to know the name of the container that we want to use for training. SageMaker provides\n# a nice utility method to construct this for us.\ncontainer = get_image_uri(session.boto_region_name, 'xgboost')\n\n# We now specify the parameters we wish to use for our training job\ntraining_params = {}\n\n# We need to specify the permissions that this training job will have. For our purposes we can use\n# the same permissions that our current SageMaker session has.\ntraining_params['RoleArn'] = role\n\n# Here we describe the algorithm we wish to use. The most important part is the container which\n# contains the training code.\ntraining_params['AlgorithmSpecification'] = {\n \"TrainingImage\": container,\n \"TrainingInputMode\": \"File\"\n}\n\n# We also need to say where we would like the resulting model artifacts stored.\ntraining_params['OutputDataConfig'] = {\n \"S3OutputPath\": \"s3://\" + session.default_bucket() + \"/\" + prefix + \"/output\"\n}\n\n# We also need to set some parameters for the training job itself. Namely we need to describe what sort of\n# compute instance we wish to use along with a stopping condition to handle the case that there is\n# some sort of error and the training script doesn't terminate.\ntraining_params['ResourceConfig'] = {\n \"InstanceCount\": 1,\n \"InstanceType\": \"ml.m4.xlarge\",\n \"VolumeSizeInGB\": 5\n}\n \ntraining_params['StoppingCondition'] = {\n \"MaxRuntimeInSeconds\": 86400\n}\n\n# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect\n# there is on the resulting model.\ntraining_params['HyperParameters'] = {\n \"max_depth\": \"5\",\n \"eta\": \"0.2\",\n \"gamma\": \"4\",\n \"min_child_weight\": \"6\",\n \"subsample\": \"0.8\",\n \"objective\": \"reg:linear\",\n \"early_stopping_rounds\": \"10\",\n \"num_round\": \"200\"\n}\n\n# Now we need to tell SageMaker where the data should be retrieved from.\ntraining_params['InputDataConfig'] = [\n {\n \"ChannelName\": \"train\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": train_location,\n \"S3DataDistributionType\": \"FullyReplicated\"\n }\n },\n \"ContentType\": \"csv\",\n \"CompressionType\": \"None\"\n },\n {\n \"ChannelName\": \"validation\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": val_location,\n \"S3DataDistributionType\": \"FullyReplicated\"\n }\n },\n \"ContentType\": \"csv\",\n \"CompressionType\": \"None\"\n }\n]",
"'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\nThere is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:\n\tget_image_uri(region, 'xgboost', '1.0-1').\n"
]
],
[
[
"### Execute the training job\n\nNow that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.",
"_____no_output_____"
]
],
[
[
"# First we need to choose a training job name. This is useful for if we want to recall information about our\n# training job at a later date. Note that SageMaker requires a training job name and that the name needs to\n# be unique, which we accomplish by appending the current timestamp.\ntraining_job_name = \"boston-xgboost-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\ntraining_params['TrainingJobName'] = training_job_name\n\n# And now we ask SageMaker to create (and execute) the training job\ntraining_job = session.sagemaker_client.create_training_job(**training_params)",
"_____no_output_____"
]
],
[
[
"The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.",
"_____no_output_____"
]
],
[
[
"session.logs_for_job(training_job_name, wait=True)",
"2020-12-17 03:42:31 Starting - Starting the training job...\n2020-12-17 03:42:33 Starting - Launching requested ML instances......\n2020-12-17 03:43:48 Starting - Preparing the instances for training......\n2020-12-17 03:44:42 Downloading - Downloading input data...\n2020-12-17 03:45:17 Training - Downloading the training image..\u001b[34mArguments: train\u001b[0m\n\u001b[34m[2020-12-17:03:45:38:INFO] Running standalone xgboost training.\u001b[0m\n\u001b[34m[2020-12-17:03:45:38:INFO] File size need to be processed in the node: 0.02mb. Available memory size in the node: 8446.92mb\u001b[0m\n\u001b[34m[2020-12-17:03:45:38:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[34m[03:45:38] S3DistributionType set as FullyReplicated\u001b[0m\n\u001b[34m[03:45:38] 227x13 matrix with 2951 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,\u001b[0m\n\u001b[34m[2020-12-17:03:45:38:INFO] Determined delimiter of CSV input is ','\u001b[0m\n\u001b[34m[03:45:38] S3DistributionType set as FullyReplicated\u001b[0m\n\u001b[34m[03:45:38] 112x13 matrix with 1456 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[0]#011train-rmse:19.7065#011validation-rmse:18.5842\u001b[0m\n\u001b[34mMultiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.\n\u001b[0m\n\u001b[34mWill train until validation-rmse hasn't improved in 10 rounds.\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[1]#011train-rmse:16.1671#011validation-rmse:15.2685\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[2]#011train-rmse:13.2144#011validation-rmse:12.5412\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[3]#011train-rmse:10.8785#011validation-rmse:10.3291\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[4]#011train-rmse:9.08219#011validation-rmse:8.6974\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[5]#011train-rmse:7.59479#011validation-rmse:7.33901\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[6]#011train-rmse:6.39213#011validation-rmse:6.31128\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[7]#011train-rmse:5.48037#011validation-rmse:5.61484\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[8]#011train-rmse:4.70741#011validation-rmse:5.02152\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[9]#011train-rmse:4.15645#011validation-rmse:4.65854\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, 
max_depth=5\u001b[0m\n\u001b[34m[10]#011train-rmse:3.69245#011validation-rmse:4.33171\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[11]#011train-rmse:3.34931#011validation-rmse:4.07579\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[12]#011train-rmse:3.04768#011validation-rmse:3.8819\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[13]#011train-rmse:2.78618#011validation-rmse:3.72065\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[14]#011train-rmse:2.63808#011validation-rmse:3.70509\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[15]#011train-rmse:2.50284#011validation-rmse:3.6421\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[16]#011train-rmse:2.37132#011validation-rmse:3.57954\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[17]#011train-rmse:2.29844#011validation-rmse:3.56863\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[18]#011train-rmse:2.19925#011validation-rmse:3.49482\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[19]#011train-rmse:2.13312#011validation-rmse:3.4643\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[20]#011train-rmse:2.07915#011validation-rmse:3.45007\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[21]#011train-rmse:2.01566#011validation-rmse:3.42334\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[22]#011train-rmse:1.96578#011validation-rmse:3.37898\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[23]#011train-rmse:1.9326#011validation-rmse:3.37188\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[24]#011train-rmse:1.89971#011validation-rmse:3.34891\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[25]#011train-rmse:1.79644#011validation-rmse:3.36891\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[26]#011train-rmse:1.75075#011validation-rmse:3.32175\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, 
max_depth=5\u001b[0m\n\u001b[34m[27]#011train-rmse:1.65705#011validation-rmse:3.2801\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[28]#011train-rmse:1.61417#011validation-rmse:3.28417\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[29]#011train-rmse:1.58175#011validation-rmse:3.30193\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[30]#011train-rmse:1.5509#011validation-rmse:3.32142\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[31]#011train-rmse:1.51497#011validation-rmse:3.3001\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[32]#011train-rmse:1.45906#011validation-rmse:3.27012\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[33]#011train-rmse:1.4385#011validation-rmse:3.24937\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[34]#011train-rmse:1.40865#011validation-rmse:3.22099\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[35]#011train-rmse:1.39844#011validation-rmse:3.23563\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[36]#011train-rmse:1.38464#011validation-rmse:3.22714\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[37]#011train-rmse:1.36937#011validation-rmse:3.22625\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[38]#011train-rmse:1.36435#011validation-rmse:3.23085\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[39]#011train-rmse:1.34749#011validation-rmse:3.21578\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[40]#011train-rmse:1.29011#011validation-rmse:3.19104\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[41]#011train-rmse:1.26507#011validation-rmse:3.17801\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 6 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[42]#011train-rmse:1.25664#011validation-rmse:3.17489\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[43]#011train-rmse:1.23078#011validation-rmse:3.15502\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, 
max_depth=5\u001b[0m\n\u001b[34m[44]#011train-rmse:1.20449#011validation-rmse:3.14924\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 16 pruned nodes, max_depth=2\u001b[0m\n\u001b[34m[45]#011train-rmse:1.20568#011validation-rmse:3.15654\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[46]#011train-rmse:1.18716#011validation-rmse:3.15701\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 10 pruned nodes, max_depth=1\u001b[0m\n\u001b[34m[47]#011train-rmse:1.1816#011validation-rmse:3.14747\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[48]#011train-rmse:1.1653#011validation-rmse:3.13868\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[49]#011train-rmse:1.14584#011validation-rmse:3.13437\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 8 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[50]#011train-rmse:1.13731#011validation-rmse:3.12423\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[51]#011train-rmse:1.11298#011validation-rmse:3.11808\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[52]#011train-rmse:1.09732#011validation-rmse:3.10818\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[53]#011train-rmse:1.06984#011validation-rmse:3.12214\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[54]#011train-rmse:1.04907#011validation-rmse:3.12844\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=4\u001b[0m\n\u001b[34m[55]#011train-rmse:1.03942#011validation-rmse:3.11977\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[56]#011train-rmse:1.02553#011validation-rmse:3.11341\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5\u001b[0m\n\u001b[34m[57]#011train-rmse:1.00393#011validation-rmse:3.11076\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 18 pruned nodes, max_depth=1\u001b[0m\n\u001b[34m[58]#011train-rmse:1.00433#011validation-rmse:3.1192\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 20 pruned nodes, max_depth=2\u001b[0m\n\u001b[34m[59]#011train-rmse:0.996355#011validation-rmse:3.11243\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 22 pruned nodes, max_depth=3\u001b[0m\n\u001b[34m[60]#011train-rmse:0.97763#011validation-rmse:3.11046\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 26 pruned nodes, 
max_depth=1\u001b[0m\n\u001b[34m[61]#011train-rmse:0.977801#011validation-rmse:3.11732\u001b[0m\n\u001b[34m[03:45:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 22 pruned nodes, max_depth=0\u001b[0m\n\u001b[34m[62]#011train-rmse:0.977827#011validation-rmse:3.11705\u001b[0m\n\u001b[34mStopping. Best iteration:\u001b[0m\n\u001b[34m[52]#011train-rmse:1.09732#011validation-rmse:3.10818\n\u001b[0m\n"
]
],
[
[
"### Build the model\n\nNow that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.",
"_____no_output_____"
]
],
[
[
"# We begin by asking SageMaker to describe for us the results of the training job. The data structure\n# returned contains a lot more information than we currently need, try checking it out yourself in\n# more detail.\ntraining_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)\n\nmodel_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']",
"_____no_output_____"
],
[
"# Just like when we created a training job, the model name must be unique\nmodel_name = training_job_name + \"-model\"\n\n# We also need to tell SageMaker which container should be used for inference and where it should\n# retrieve the model artifacts from. In our case, the xgboost container that we used for training\n# can also be used for inference.\nprimary_container = {\n \"Image\": container,\n \"ModelDataUrl\": model_artifacts\n}\n\n# And lastly we construct the SageMaker model\nmodel_info = session.sagemaker_client.create_model(\n ModelName = model_name,\n ExecutionRoleArn = role,\n PrimaryContainer = primary_container)",
"_____no_output_____"
]
],
[
[
"## Step 5: Testing the model\n\nNow that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier.\n\n### Set up the batch transform job\n\nJust like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.\n\nWe will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).",
"_____no_output_____"
]
],
[
[
"# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.\ntransform_job_name = 'boston-xgboost-batch-transform-' + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# Now we construct the data structure which will describe the batch transform job.\ntransform_request = \\\n{\n \"TransformJobName\": transform_job_name,\n \n # This is the name of the model that we created earlier.\n \"ModelName\": model_name,\n \n # This describes how many compute instances should be used at once. If you happen to be doing a very large\n # batch transform job it may be worth running multiple compute instances at once.\n \"MaxConcurrentTransforms\": 1,\n \n # This says how big each individual request sent to the model should be, at most. One of the things that\n # SageMaker does in the background is to split our data up into chunks so that each chunks stays under\n # this size limit.\n \"MaxPayloadInMB\": 6,\n \n # Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of\n # the chunks that we send should contain multiple samples of our input data.\n \"BatchStrategy\": \"MultiRecord\",\n \n # This next object describes where the output data should be stored. Some of the more advanced options which\n # we don't cover here also describe how SageMaker should collect output from various batches.\n \"TransformOutput\": {\n \"S3OutputPath\": \"s3://{}/{}/batch-bransform/\".format(session.default_bucket(),prefix)\n },\n \n # Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in\n # addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to\n # split our data up into chunks, it needs to know how the individual samples in our data file appear. In our\n # case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what\n # type of data is being sent, in this case csv, so that it can properly serialize the data.\n \"TransformInput\": {\n \"ContentType\": \"text/csv\",\n \"SplitType\": \"Line\",\n \"DataSource\": {\n \"S3DataSource\": {\n \"S3DataType\": \"S3Prefix\",\n \"S3Uri\": test_location,\n }\n }\n },\n \n # And lastly we tell SageMaker what sort of compute instance we would like it to use.\n \"TransformResources\": {\n \"InstanceType\": \"ml.m4.xlarge\",\n \"InstanceCount\": 1\n }\n}",
"_____no_output_____"
]
],
[
[
"### Execute the batch transform job\n\nNow that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background so that if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait of the transform job to complete.",
"_____no_output_____"
]
],
[
[
"transform_response = session.sagemaker_client.create_transform_job(**transform_request)",
"_____no_output_____"
],
[
"transform_desc = session.wait_for_transform_job(transform_job_name)",
"..........................................................!\n"
]
],
[
[
"### Analyze the results\n\nNow that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.",
"_____no_output_____"
]
],
[
[
"transform_output = \"s3://{}/{}/batch-bransform/\".format(session.default_bucket(),prefix)",
"_____no_output_____"
],
[
"!aws s3 cp --recursive $transform_output $data_dir",
"Completed 2.3 KiB/2.3 KiB (28.7 KiB/s) with 1 file(s) remaining\rdownload: s3://sagemaker-us-east-1-302590777472/boston-xgboost-LL/batch-bransform/test.csv.out to ../data/boston/test.csv.out\r\n"
]
],
[
[
"To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.",
"_____no_output_____"
]
],
[
[
"Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)",
"_____no_output_____"
],
[
"plt.scatter(Y_test, Y_pred)\nplt.xlabel(\"Median Price\")\nplt.ylabel(\"Predicted Price\")\nplt.title(\"Median Price vs Predicted Price\")",
"_____no_output_____"
]
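,
[
"# A numeric complement to the scatter plot (an added sketch, not part of the original notebook):\n# root-mean-squared error of the batch predictions against the held-out targets.\nimport numpy as np\nrmse = np.sqrt(np.mean((np.asarray(Y_test).squeeze() - np.asarray(Y_pred).squeeze()) ** 2))\nprint('Test RMSE: {:.3f}'.format(rmse))",
"_____no_output_____"
]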
],
[
[
"## Optional: Clean up\n\nThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.",
"_____no_output_____"
]
],
[
[
"# First we will remove all of the files contained in the data_dir directory\n!rm $data_dir/*\n\n# And then we delete the directory itself\n!rmdir $data_dir",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b3a8bcae94c7a5f77ee1aa452fc3727720fd73 | 668,336 | ipynb | Jupyter Notebook | Web-Scraping-Beautiful-Soup.ipynb | FelipeOshiro/E-commerce-market-analysis | ee2499d070fa0fdb58354ec3d5437f93f5f03985 | [
"MIT"
] | null | null | null | Web-Scraping-Beautiful-Soup.ipynb | FelipeOshiro/E-commerce-market-analysis | ee2499d070fa0fdb58354ec3d5437f93f5f03985 | [
"MIT"
] | null | null | null | Web-Scraping-Beautiful-Soup.ipynb | FelipeOshiro/E-commerce-market-analysis | ee2499d070fa0fdb58354ec3d5437f93f5f03985 | [
"MIT"
] | null | null | null | 73.500055 | 22,363 | 0.604533 | [
[
[
"# Problema do negócio \n\nA empresa deseja uma análise dos preços dos produtos das lojas concorrentes para precificar melhor o próprio produto no mercado, neste caso, calças. ",
"_____no_output_____"
],
[
"### Saída (Produto final)\n1. Descobrir a reposta para a pergunta calculando a mediana dos preços dos concorrentes\n2. Formato da entrega: tabela ou gráfico\n3. Aplicação de entrega: Streamlit\n\n### Processo\n1. Realizar o calculo da mediana sobre o produto, tipo e cor\n2. Gerar gráfico de barraas com a mediana dos preços dos produtos, por tipo e cr dos últimos 10 dias\n3. Tabela com: id | product_name | product_type | product_color | product_price\n4. Definiçãao do schema: coluna e seu tipo\n5. Infraestrutura de armazenamento (SQLite)\n6. Design do ETL\n7. Planejamento de agendamento dos scripts (dependencias entre os scripts) \n8. Fazer as visualizações\n9. Entrega do produto final\n\n### Entrada (Fonte de dados)\n1. Fonte de dados: site da H&M e Macy's (lojas de e-commerce)\n2. Ferramentas: Python 3.8.0, Biblioteca de Webscraping (BeautifulSoup e Selenium), Jupyter Notebook, Vs Code, Scheduler e Streamlit",
"_____no_output_____"
],
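[
"# A minimal sketch of the storage step from the plan above. The table and column names here are\n# my assumption, mirroring the step-3 layout; the real schema may end up different.\nimport sqlite3\n\nquery_create = '''\n    CREATE TABLE IF NOT EXISTS showroom (\n        product_id       TEXT,\n        product_name     TEXT,\n        product_type     TEXT,\n        product_color    TEXT,\n        product_price    REAL,\n        scrapy_datetime  TEXT\n    )\n'''\nconn = sqlite3.connect('hm_db.sqlite')\nconn.execute(query_create)\nconn.commit()\nconn.close()",
"_____no_output_____"
],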
[
"# Métricas de um E-commerce",
"_____no_output_____"
],
[
"## Crescimento \n1. Porcentagem do Market Share\n1. Quantidade de clientes novos\n\n## Faturamento\n1. Quantidade de vendas\n2. Ticket médio *\n3. LTV\n4. Recência média\n5. Basket Size\n6. Markup Médio *\n\n## Custo\n1. CAC (Aquisição de clientes)\n2. Desconto médio\n3. Custo de produção *\n4. Taxa de devolução\n5. Custos fixos (folha de pagamento, escritório, ferramentas)\n6. Impostos",
"_____no_output_____"
],
[
"# Extração dos dados em HTML",
"_____no_output_____"
],
[
"## Beautiful Soup",
"_____no_output_____"
]
],
[
[
"import requests\nimport pandas as pd\nimport numpy as np\nfrom bs4 import BeautifulSoup\nfrom datetime import datetime",
"_____no_output_____"
],
[
"html_doc = \"\"\"\n<html><head><title>The Dormouse's story</title></head>\n<body>\n<p class=\"title\"><b>The Dormouse's story</b></p>\n\n<p class=\"story\">Once upon a time there were three little sisters; and their names were\n<a href=\"http://example.com/elsie\" class=\"sister\" id=\"link1\">Elsie</a>,\n<a href=\"http://example.com/lacie\" class=\"sister\" id=\"link2\">Lacie</a> and\n<a href=\"http://example.com/tillie\" class=\"sister\" id=\"link3\">Tillie</a>;\nand they lived at the bottom of a well.</p>\n\n<p class=\"story\">...</p>\n\"\"\"",
"_____no_output_____"
],
[
"soup = BeautifulSoup(html_doc, 'html.parser')",
"_____no_output_____"
],
[
"soup",
"_____no_output_____"
],
[
"print(soup.body.p)\n",
"<p class=\"title\"><b>The Dormouse's story</b></p>\n"
],
[
"soup.find_all('p')[1] #vai buscar todos os paragrafos cuja a classe é title ",
"_____no_output_____"
],
[
"#busco dentre todos os \"sisters\", apenas o que tem id=link1 = Elsie\nsoup.find_all('a', id='link1') ",
"_____no_output_____"
],
[
"#busco dentre todos os \"sisters\", apenas o que tem id=link1 = Elsie e EXTRAIR APENAS O NOME\nsoup.find_all('a', id='link1')[0].get_text()\n# ou [0].string",
"_____no_output_____"
],
[
"url= 'https://www2.hm.com/en_us/men/products/jeans.html'\n\nheaders={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}\npage = requests.get(url, headers=headers) #requisição provinda de um navegador para o site\n",
"_____no_output_____"
],
[
"soup = BeautifulSoup(page.text, 'html.parser') \nsoup #cria o objeto que contem todo o html da pagina \n",
"_____no_output_____"
],
[
"#busca todos os objetos da lista não ordenanada 'ul' product listing small = vitrine de calças\nproducts = soup.find('ul', class_='products-listing small')\nproducts",
"_____no_output_____"
],
[
"#busca apenas um produto/item da vitrine\nproducts_list = soup.find_all('article', class_=\"hm-product-item\")\n\nproducts_list",
"_____no_output_____"
],
[
"#Extraindo com um loop, todos os codigos dos items da vitrine\nproduct_id = [p.get('data-articlecode') for p in products_list] \nproduct_id",
"_____no_output_____"
],
[
"#extraindo as categorias de cada produto \nproduct_category = [p.get('data-category') for p in products_list] \nproduct_category",
"_____no_output_____"
],
[
"product_list = products.find_all('a', class_='link')\nproduct_name=[p.get_text() for p in product_list]\nproduct_name",
"_____no_output_____"
],
[
"product_list = products.find_all('span', class_='price regular')\nproduct_price=[p.get_text() for p in product_list]\nproduct_price",
"_____no_output_____"
],
[
"data = pd.DataFrame( [product_id, product_category, product_name, product_price] ).T\ndata.columns=['product_id', 'product_category','product_name', 'product_price']\n\ndata['scrapy_datetime'] = datetime.now().strftime( '%Y-%m-%d %H:%M:%S' )\n\ndata.head()",
"_____no_output_____"
],
[
"#Buscar os produtos para as proximas paginas. Paginação serve para melhorar a navegação do usuário no site\n\nurl= 'https://www2.hm.com/en_us/men/products/jeans.html'\nheaders={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}\npage = requests.get(url, headers=headers) #requisição provinda de um navegador para o site",
"_____no_output_____"
],
[
"soup = BeautifulSoup(page.text, 'html.parser')\n#extraindo todos os itens de todas as páginas , no total são 94 itens \ntotal_item = soup.find_all('h2', class_='load-more-heading')[0].get('data-total')\ntotal_item\n",
"_____no_output_____"
],
[
"#3 paginas de 36 items\npage_number = int(total_item)/36\npage_number",
"_____no_output_____"
],
[
"url02 = url + '?page-size='+str(int(page_number*36))\nurl02 #variavel com o link de toda a vitrine de produtos do tipo calça",
"_____no_output_____"
]
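,
[
"# A hedged follow-up (an addition for clarity): url02 is built above but never requested, so this\n# sketch shows how the full single-page showcase could be fetched with the same parsing logic.\npage = requests.get(url02, headers=headers)\nsoup = BeautifulSoup(page.text, 'html.parser')\nproducts_list = soup.find_all('article', class_='hm-product-item')\nlen(products_list)  # should now be close to total_item",
"_____no_output_____"
]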
],
[
[
"### Buscar os detalhes de cada produto em suas paginas individuais: para extrair cor, tipo do tecido e codigo do produto",
"_____no_output_____"
]
],
[
[
"#Realizando a request da API \nurl= 'https://www2.hm.com/en_us/productpage.0985159001.html'\nheaders={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}\npage = requests.get(url, headers=headers) #requisição provinda de um navegador para o site \n\nsoup = BeautifulSoup(page.text, 'html.parser')",
"_____no_output_____"
],
[
"#Extraindo as cores disponiveis para apenas uma calça da vitrine\nproduct_list = soup.find_all('a', role = 'radio') #tem que descobrir a classe em comum\n\ncolor_name = [p.get('data-color') for p in product_list]\ncolor_name\n\nproduct_id = [p.get('data-articlecode') for p in product_list]\nproduct_id\n\ndf_color = pd.DataFrame([product_id, color_name]).T\ndf_color.columns = ['product_id', 'color_name']\n\n#gerar style_id + color_id\n\ndf_color['style_id'] = df_color['product_id'].apply(lambda x: x[:-3])\ndf_color['color_id'] = df_color['product_id'].apply(lambda x: x[-3:])\n\ndf_color\n",
"_____no_output_____"
],
[
"#Extraindo o tipo de tecido das calças\n\nproduct_composition_list = soup.find_all('div', class_='pdp-description-list-item')\nproduct_composition = [list(filter(None, p.get_text().split('\\n'))) for p in product_composition_list]\nproduct_composition",
"_____no_output_____"
],
[
"pd.DataFrame(product_composition).T\n\n",
"_____no_output_____"
],
[
"#É necessário promover a primeira linha para o titulo das colunas e substituir os valores 'None'\n\n#renomeando o dataframe\ndf_composition = pd.DataFrame(product_composition).T\ndf_composition.columns = df_composition.iloc[0]\n\n#deletando a primeira linha\ndf_composition = df_composition.iloc[1:].fillna(method='ffill') # preenchendo os valores nulos com os valores da linha acima\n\n\n#gerar as colunas style id + color id para fazer o merge dos dois dataframes \ndf_composition['style_id'] = df_composition['Art. No.'].apply(lambda x: x[:-3]) #cria a coluna style_id e pega o codigo até os ultimos 3 numeros\ndf_composition['color_id'] = df_composition['Art. No.'].apply(lambda x: x[-3:])\ndel df_composition['Size']\ndf_composition\n\n\n#explicado In[105]\ndata_sku = pd.merge(df_color, df_composition[['style_id','Fit','Composition']], how='left', on='style_id')\n\n",
"_____no_output_____"
],
[
"df_color\n\n",
"_____no_output_____"
],
[
"#Unidos os dois dataframes, com left join, considerando style_id como parametro de união\n\n#left join = todos os dados que estao na df_composition sao adicionados na df_color\n\npd.merge(df_color, df_composition[['style_id','Fit','Composition']], how='left', on='style_id')\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"### Codigo único para extração dos detalhes de um produto ",
"_____no_output_____"
]
],
[
[
"#Realizando a request da API \nurl= 'https://www2.hm.com/en_us/productpage.0985159001.html'\nheaders={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}\npage = requests.get(url, headers=headers) #requisição provinda de um navegador para o site \n\nsoup = BeautifulSoup(page.text, 'html.parser')\n\n\n#==================== color_name ===========================\n\n#Extraindo as cores disponiveis para apenas uma calça da vitrine\nproduct_list = soup.find_all('a', role = 'radio') #tem que descobrir a classe em comum\n\ncolor_name = [p.get('data-color') for p in product_list]\ncolor_name\n\nproduct_id = [p.get('data-articlecode') for p in product_list]\nproduct_id\n\ndf_color = pd.DataFrame([product_id, color_name]).T\ndf_color.columns = ['product_id', 'color_name']\n\n#gerar style_id + color_id\n\ndf_color['style_id'] = df_color['product_id'].apply(lambda x: x[:-3])\ndf_color['color_id'] = df_color['product_id'].apply(lambda x: x[-3:])\n\ndf_color\n\n#==================== composition ===========================\n\n#Extraindo o tipo de tecido das calças\n\nproduct_composition_list = soup.find_all('div', class_='pdp-description-list-item')\nproduct_composition = [list(filter(None, p.get_text().split('\\n'))) for p in product_composition_list]\nproduct_composition\n\npd.DataFrame(product_composition).T\n\n#É necessário promover a primeira linha para o titulo das colunas e substituir os valores 'None'\n\n#renomeando o dataframe\ndf_composition = pd.DataFrame(product_composition).T\ndf_composition.columns = df_composition.iloc[0]\n\n#deletando a primeira linha\ndf_composition = df_composition.iloc[1:].fillna(method='ffill') # preenchendo os valores nulos com os valores da linha acima\n\n\n#gerar as colunas style id + color id para fazer o merge dos dois dataframes \ndf_composition['style_id'] = df_composition['Art. No.'].apply(lambda x: x[:-3]) #cria a coluna style_id e pega o codigo até os ultimos 3 numeros\ndf_composition['color_id'] = df_composition['Art. No.'].apply(lambda x: x[-3:])\ndel df_composition['Size']\ndf_composition\n\n\n#explicado In[105]\ndata_sku = pd.merge(df_color, df_composition[['style_id','Fit','Composition']], how='left', on='style_id')\n\ndata_sku",
"_____no_output_____"
]
],
[
[
"### Para vários produtos\n",
"_____no_output_____"
]
],
[
[
"\nheaders={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}\n\n#criando um dataframe vazio\ndf_details = pd.DataFrame()\n\naux=[]\n\ncols=['Art. No.','Composition','Fit','More sustainable materials','color_id','style_id','Size']\n\ndf_pattern = pd.DataFrame(columns=cols)\n\nfor i in range(len(data)):\n#Realizando a request da API para percorrer todas as paginas de detalhes dos produtos da vitrine \n url = 'https://www2.hm.com/en_us/productpage.' +data.loc[i, 'product_id'] + '.html'\n page = requests.get(url, headers=headers) #requisição provinda de um navegador para o site \n\n soup = BeautifulSoup(page.text, 'html.parser')\n\n\n#==================== color_name ===========================\n\n#Extraindo as cores disponiveis para apenas uma calça da vitrine\n product_list = soup.find_all('a', role = 'radio') #tem que descobrir a classe em comum\n\n color_name = [p.get('data-color') for p in product_list]\n color_name\n\n product_id = [p.get('data-articlecode') for p in product_list]\n product_id\n\n df_color = pd.DataFrame([product_id, color_name]).T\n df_color.columns = ['product_id', 'color_name']\n\n#gerar style_id + color_id\n\n df_color['style_id'] = df_color['product_id'].apply(lambda x: x[:-3])\n df_color['color_id'] = df_color['product_id'].apply(lambda x: x[-3:])\n\n df_color\n\n#==================== composition ===========================\n\n#Extraindo o tipo de tecido das calças\n\n product_composition_list = soup.find_all('div', class_='pdp-description-list-item')\n product_composition = [list(filter(None, p.get_text().split('\\n'))) for p in product_composition_list]\n product_composition\n\n pd.DataFrame(product_composition).T\n\n#É necessário promover a primeira linha para o titulo das colunas e substituir os valores 'None'\n\n#renomeando o dataframe\n\ndf_composition = pd.DataFrame(product_composition).T\ndf_composition.columns = df_composition.iloc[0]\n\n#deletando a primeira linha\ndf_composition = df_composition.iloc[1:].fillna(method='ffill') # preenchendo os valores nulos com os valores da linha acima\n\n#garantir a mesma quantidade de colunas\ndf_composition = pd.concat([df_pattern, df_composition], axis=0)\n\n#gerar as colunas style id + color id para fazer o merge dos dois dataframes \ndf_composition['style_id'] = df_composition['Art. No.'].apply(lambda x: x[:-3]) #cria a coluna style_id e pega o codigo até os ultimos 3 numeros\ndf_composition['color_id'] = df_composition['Art. No.'].apply(lambda x: x[-3:])\n\n#buscar todos os detalhes de todos os items (pagina de detalhes) da vitrine\naux = aux + df_composition.columns.tolist()\n\n#explicado In[105]\ndata_sku = pd.merge(df_color, df_composition[['style_id','Fit','Composition','More sustainable materials','Size']], how='left', on='style_id')\n\ndf_details = pd.concat([df_details, data_sku], axis=0)\n\n\n#União dos produtos da vitrine + detalhes de cada item\ndata['style_id'] = data['product_id'].apply(lambda x: x[:-3])\ndata['color_id'] = data['product_id'].apply(lambda x: x[-3:])\n\ndata_raw = pd.merge(data, df_details[['style_id','color_name','Fit', 'Composition', 'Size', 'More sustainable materials']], \n how='left', on='style_id')\n ",
"_____no_output_____"
],
[
"data_raw.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0b3b8caf73c56646e18ab6b7ac50d0f02b91c18 | 189 | ipynb | Jupyter Notebook | setup/Untitled0.ipynb | ICOSBigDataCamp/2016-summer-camp | f75a59bf0e86fc566cca2c3b4aa8687d0a08ba2c | [
"CC-BY-3.0"
] | null | null | null | setup/Untitled0.ipynb | ICOSBigDataCamp/2016-summer-camp | f75a59bf0e86fc566cca2c3b4aa8687d0a08ba2c | [
"CC-BY-3.0"
] | null | null | null | setup/Untitled0.ipynb | ICOSBigDataCamp/2016-summer-camp | f75a59bf0e86fc566cca2c3b4aa8687d0a08ba2c | [
"CC-BY-3.0"
] | 1 | 2016-05-02T18:15:51.000Z | 2016-05-02T18:15:51.000Z | 21 | 89 | 0.661376 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0b3b98b6bdf707a2ad03effa3288c19decb155d | 443,311 | ipynb | Jupyter Notebook | Investigate_Children_Out_of_School_20180108.ipynb | Shilin-Li/Investigate_Children_Out_of_School | af56fbd2cc0231fdf4fab2ef9a940c812880218d | [
"MIT"
] | 2 | 2018-09-10T00:01:21.000Z | 2018-09-10T05:40:48.000Z | Investigate_Children_Out_of_School_20180108.ipynb | Shilin-Li/Investigate_Children_Out_of_School | af56fbd2cc0231fdf4fab2ef9a940c812880218d | [
"MIT"
] | null | null | null | Investigate_Children_Out_of_School_20180108.ipynb | Shilin-Li/Investigate_Children_Out_of_School | af56fbd2cc0231fdf4fab2ef9a940c812880218d | [
"MIT"
] | null | null | null | 236.684997 | 126,724 | 0.875839 | [
[
[
"\n\n# Project: Investigate Children Out of School\n\n## Table of Contents\n<ul>\n<li><a href=\"#intro\">Introduction</a></li>\n<li><a href=\"#wrangling\">Data Wrangling</a></li>\n<li><a href=\"#eda\">Exploratory Data Analysis</a></li>\n<li><a href=\"#conclusions\">Conclusions</a></li>\n</ul>",
"_____no_output_____"
],
[
"<a id='intro'></a>\n## Introduction\n\n> **Key notes**: \"Gapminder has collected a lot of information about how people live their lives in different countries, tracked across the years, and on a number of different indicators. \n\n> **Questions to explore**: \n><ul>\n><li><a href=\"#q1\"> 1. Research Question 1: What is the total numbers of children out of primary school over years, indicate the male and female numbers as well?</a></li>\n><li><a href=\"#q2\"> 2. Research Question 2: What is distribution of female children who was out of primary school from 1980 to 1995?</a></li>\n><li><a href=\"#q3\"> 3. Research Question 3: What are numbers of children out of school in total, by male and female in China, 1985?</a></li> \n><li><a href=\"#q4\"> 4. What are relationship of children out of school of female in China in russian and usa over time? Which has a better trend?</a></li>\n><li><a href=\"#q5\"> 5. Research Question 5: What is the overall trend for children out of primary school over the years?</a></li>\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# Set up import statements for all of the packages that are planed to use;\n# Include a 'magic word' so that visualizations are plotted;\n# call on dataframe to display the first 5 rows.\n\nimport pandas as pd\nimport numpy as np\nimport datetime\nfrom statistics import mode\n% matplotlib inline\nimport matplotlib.pyplot as plt\n%config InlineBackend.figure_format = 'retina'\nimport seaborn as sns\nsns.set_style('darkgrid')",
"_____no_output_____"
],
[
"# Reading an Excel file in python using pandas\n# call on dataframe to display the first 5 rows\n\nxl = pd.ExcelFile('Child out of school primary.xlsx')\n \nxl.sheet_names\n[u'Data']\n\ndf_tot = xl.parse(\"Data\")\ndf_tot.head()\n",
"_____no_output_____"
],
[
"x2 = pd.ExcelFile('Child out of school primiary female.xlsx')\n \nx2.sheet_names\n[u'Data']\n\ndf_f = x2.parse(\"Data\")\ndf_f.head()",
"_____no_output_____"
],
[
"x3 = pd.ExcelFile('Child out of school primiary male.xlsx')\n \nx3.sheet_names\n[u'Data']\n\ndf_m = x3.parse(\"Data\")\ndf_m.head()",
"_____no_output_____"
],
[
"# Check if the three dataframe have the same shape\n\ndf_tot.shape, df_m.shape, df_f.shape",
"_____no_output_____"
],
[
"# Check if the first columns from the 3 dataframe are exactly the same\nassert (df_tot['Children out of school, primary'].tolist() == df_m['Children out of school, primary, male'].tolist()\\\n == df_f['Children out of school, primary, female'].tolist())",
"_____no_output_____"
],
[
"# Merge the 3 dataframe\n\ndf1 = df_tot.merge(df_f, how='outer', left_index = True, right_index = True)\n\ndf1 = df1.merge(df_m, how='outer', left_index = True, right_index = True)\n\n# Confirm changes\n\ndf1.shape",
"_____no_output_____"
]
],
[
[
"<a id='wrangling'></a>\n## Data Wrangling\n\n> **Key notes**: In this section of the report, the following work will be done: load the data; check for cleanliness; trim and clean dataset for analysis.\n\n### General Properties",
"_____no_output_____"
]
],
[
[
"# return the datatypes of the columns.\n\ndf1.dtypes",
"_____no_output_____"
],
[
"# check for duplicates in the data.\n\nsum(df1.duplicated())",
"_____no_output_____"
],
[
"# check if any value is NaN in DataFrame and in how many columns\n\ndf1.isnull().any().any(), sum(df1.isnull().any())",
"_____no_output_____"
],
[
"# Generates descriptive statistics, excluding NaN values.\n\ndf1.describe()",
"_____no_output_____"
]
],
[
[
"### Data Cleaning ",
"_____no_output_____"
]
],
[
[
"# Locate the columns whose NaN values needs to be treated\n\ncol = df1.drop(['Children out of school, primary', 'Children out of school, primary, female'\\\n , 'Children out of school, primary, male'], axis=1)\n\n# Replace NaN with mean\n\nfor c in col:\n c_mean = df1[c].mean()\n df1[c].fillna(c_mean, inplace = True)\n \n# Confirm changes\n\ndf1.isnull().any().any()",
"_____no_output_____"
],
[
"# Rename column for simplification\n\ndf1.rename(columns = {'Children out of school, primary':'country'}, inplace = True)\n\n# check the new dataframe\n\ndf1.head()",
"_____no_output_____"
]
],
[
[
"<a id='eda'></a>\n## Exploratory Data Analysis\n<a id='q1'></a>\n### Research Question 1: What is the total numbers of children out of primary school over years, indicate the male and female numbers as well?",
"_____no_output_____"
]
],
[
[
"# Get the sum for each group\n\nsum_tot = df1.iloc[:, 1:43]\nm_tot = df1.iloc[:, 44:86]\nf_tot = df1.iloc[:, 87:]\n\n\ntot = []\nfor t in sum_tot.columns:\n tot.append(sum_tot[t].sum())\n\n\nm = []\nfor ma in m_tot.columns:\n m.append(m_tot[ma].sum())\n\n\nf = []\nfor fa in f_tot.columns:\n f.append(f_tot[fa].sum())\n\n# Plot\n\nx = ['total number', 'male number', 'female number']\ny = [sum(tot), sum(m), sum(f)]\n\nplt.subplots(figsize=(10,6))\nsns.barplot(x,y, alpha = 0.8);\n\n",
"_____no_output_____"
]
],
[
[
"<a id='q2'></a>\n### Research Question 2: What is distribution of female children who was out of primary school from 1980 to 1995?",
"_____no_output_____"
]
],
[
[
"# Target the year and plot\n\nsum_tot1 = sum_tot.iloc[:, 10:26]\nnew_col = []\nfor ele in sum_tot1.columns:\n new_col.append(ele.split('_x')[0])\n \nsum_tot1.columns = new_col \n\nplt.figure(figsize=(20,15))\nsns.boxplot(data = sum_tot1);",
"_____no_output_____"
]
],
[
[
"<a id='q3'></a>\n### Research Question 3: What are numbers of children out of school in total, by male and female in China, 1985?",
"_____no_output_____"
]
],
[
[
"china = df1.copy()\nchina = china.set_index('country')\ntot_chi = china.loc['China', '1985_x']\nf_chi = china.loc['China', '1985_y']\nm_chi = china.loc['China', '1985']\n\nprint('The numbers of children out of school in total, by male and female in China were \\\n{0:.0f}, {1:.0f} and {2:.0f} in 1985, respectively.'.format(tot_chi, f_chi, m_chi))",
"The numbers of children out of school in total, by male and female in China were 284172, 177367 and 122195 in 1985, respectively.\n"
]
],
[
[
"<a id='q4'></a>\n### Research Question 4: What are relationship of children out of school of female in China in russian and usa over time? Which has a better trend?",
"_____no_output_____"
]
],
[
[
"rus_us = df1.iloc[:, 0:42].copy()\n\n\nnew_col1 = []\nfor ele in rus_us:\n new_col1.append(ele.split('_x')[0])\n \nrus_us.columns = new_col1\n\nrus_us = rus_us.set_index('country')\nrus_us_df = pd.DataFrame(columns=['USA','Russia'])\nrus_us_df['USA'] = rus_us.loc['United States'].values\nrus_us_df['Russia'] = rus_us.loc['Russia'].values\n\n\nsns.lmplot(x = 'USA', y = 'Russia', data = rus_us_df);\n\n",
"_____no_output_____"
],
[
"sns.boxplot(data=rus_us_df);",
"_____no_output_____"
],
[
"rus_us_df['year'] = rus_us.columns\nrus_us_df.index = rus_us_df.year\nrus_us_df.plot();\nplt.ylabel('Numers')\nplt.xlabel('Country')\nplt.title('Numbers of children out of primary school from 1970 to 2011');\n",
"_____no_output_____"
]
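,
[
"# Not part of the original write-up: a quick numeric check of the positive correlation claimed below.\n# Pearson's r is computed on the same rus_us_df built above; no new data or columns are assumed.\nrus_us_df[['USA', 'Russia']].corr(method='pearson')",
"_____no_output_____"
]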
],
[
[
"> There is a positive correlation between children droped out of primary school in Russia and USA. The estimated linear regression is shown as the blue line, the estimates varies in the light blue shade with 95% confident level. The trend of children out of school in USA is much higher than that of Russia over that past 40 years.",
"_____no_output_____"
],
[
"<a id='q5'></a>\n### Research Question 5: What is the overall trend for children out of primary school over the years?",
"_____no_output_____"
]
],
[
[
"overall_df = pd.DataFrame(columns=['year','numbers'])\noverall_df['year'] = rus_us.columns \nn_list =[]\n\nfor n in rus_us.columns:\n n_list.append(rus_us[n].mean())\n\n\noverall_df['numbers'] = np.array(n_list)\noverall_df.index = overall_df.year\n\noverall_df.plot();\nplt.ylabel('Numers')\nplt.xlabel('Year')\nplt.title('Numbers of children out of primary school from 1970 to 2011');",
"_____no_output_____"
]
],
[
[
"> From the analysis we can conclude that the overall trend of children out of primary school had been descreasing starting between 1970 and 1975 at which point of time the numbers fell down dramatically",
"_____no_output_____"
],
[
"<a id='conclusions'></a>\n## Conclusions\n> In current study, a good amount of profound analysis has been carried out. Prior to each step, deailed instructions was given and interpretions was also provided afterwards. The dataset across 41 years from 1970 to 2011. \n\n> The limitations of current study was that the structure is only 275*42 in shape, thus the analysis would not be much reliable due to small scale samples.\n\n> In addition, the parameters in the dataset is very simple, it only focus on the number of children out of school.\n\n",
"_____no_output_____"
]
],
[
[
"from subprocess import call\ncall(['python', '-m', 'nbconvert', 'Investigate_Children_Out_of_School_20180108.ipynb'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0b3d02fad9b20bfd01a8954c88321b8f3c18739 | 51,433 | ipynb | Jupyter Notebook | python-jupyter-apache-kafka-ksql-tensorflow-keras.ipynb | sundayayandele/python-jupyter-apache-kafka-ksql-tensorflow-keras | fbd5798c87faa5aeb43d16111efd929ea5b6f847 | [
"Apache-2.0"
] | 85 | 2018-12-07T01:56:35.000Z | 2022-01-12T01:16:07.000Z | python-jupyter-apache-kafka-ksql-tensorflow-keras.ipynb | priyasi345/python-jupyter-apache-kafka-ksql-tensorflow-keras | fbd5798c87faa5aeb43d16111efd929ea5b6f847 | [
"Apache-2.0"
] | 2 | 2019-05-31T04:57:16.000Z | 2019-07-11T08:19:51.000Z | python-jupyter-apache-kafka-ksql-tensorflow-keras.ipynb | priyasi345/python-jupyter-apache-kafka-ksql-tensorflow-keras | fbd5798c87faa5aeb43d16111efd929ea5b6f847 | [
"Apache-2.0"
] | 54 | 2019-01-08T16:59:57.000Z | 2022-03-10T00:51:34.000Z | 35.300618 | 593 | 0.489627 | [
[
[
"# Apache Kafka Integration + Preprocessing / Interactive Analysis with KSQL",
"_____no_output_____"
],
[
"This notebook uses the combination of Python, Apache Kafka, KSQL for Machine Learning infrastructures. \n\nIt includes code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like Numpy, pandas, TensorFlow and Keras. \n\nThe use case is fraud detection for credit card payments. We use a test data set from Kaggle as foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. Focus of this example is not just model training, but the whole Machine Learning infrastructure including data ingestion, data preprocessing, model training, model deployment and monitoring. All of this needs to be scalable, reliable and performant.\n\nIf you want to learn more about the relation between the Apache Kafka open source ecosystem and Machine Learning, please check out these two blog posts:\n\n- [How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka](https://www.confluent.io/blog/build-deploy-scalable-machine-learning-production-apache-kafka/)\n- [Using Apache Kafka to Drive Cutting-Edge Machine Learning](https://www.confluent.io/blog/using-apache-kafka-drive-cutting-edge-machine-learning)\n\n##### This notebook is not meant to be perfect using all coding and ML best practices, but just a simple guide how to build your own notebooks where you can combine Python APIs with Kafka and KSQL",
"_____no_output_____"
],
[
"### Start Backend Services (Zookeeper, Kafka, KSQL)\n\nThe only server requirement is a local KSQL server running (with Kafka broker ZK node). If you don't have it running, just use Confluent CLI:",
"_____no_output_____"
]
],
[
[
"# Shows correct startup but does not work 100% yet. Better run this command from outside Jupyter if you have any problems (e.g. from Terminal)!\n! confluent start ksql-server",
"This CLI is intended for development only, not for production\nhttps://docs.confluent.io/current/cli/index.html\n\nUsing CONFLUENT_CURRENT: /var/folders/0s/0xdkb9n12yqdb3fs71926z3c0000gp/T/confluent.KQJxmlHC\nStarting zookeeper\nzookeeper is [\u001b[0;32mUP\u001b[0m]\nStarting kafka\nkafka is [\u001b[0;32mUP\u001b[0m]\nStarting schema-registry\nschema-registry is [\u001b[0;32mUP\u001b[0m]\nStarting ksql-server\nksql-server is [\u001b[0;32mUP\u001b[0m]\n"
]
],
[
[
"## Data Integration and Preprocessing with Python and KSQL",
"_____no_output_____"
],
[
"First of all, create the Kafka Topic 'creditcardfraud_source' if it does not exist already:",
"_____no_output_____"
]
],
[
[
"! kafka-topics --zookeeper localhost:2181 --create --topic creditcardfraud_source --partitions 3 --replication-factor 1",
"WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.\nCreated topic \"creditcardfraud_source\".\n"
]
],
[
[
"Then load KSQL library and initiate connection to KSQL server:",
"_____no_output_____"
]
],
[
[
"from ksql import KSQLAPI\nclient = KSQLAPI('http://localhost:8088')",
"_____no_output_____"
]
],
[
[
"Consume source data from Kafka Topic \"creditcardfraud_source\":",
"_____no_output_____"
]
],
[
[
"client.create_stream(table_name='creditcardfraud_source',\n columns_type=['Id bigint', 'Timestamp varchar', 'User varchar', 'Time int', 'V1 double', 'V2 double', 'V3 double', 'V4 double', 'V5 double', 'V6 double', 'V7 double', 'V8 double', 'V9 double', 'V10 double', 'V11 double', 'V12 double', 'V13 double', 'V14 double', 'V15 double', 'V16 double', 'V17 double', 'V18 double', 'V19 double', 'V20 double', 'V21 double', 'V22 double', 'V23 double', 'V24 double', 'V25 double', 'V26 double', 'V27 double', 'V28 double', 'Amount double', 'Class string'],\n topic='creditcardfraud_source',\n value_format='DELIMITED')",
"_____no_output_____"
]
],
[
[
"Preprocessing: \n\n- Filter columns which are not needed \n- Filter messages where column 'class' is empty\n- Change data format to Avro for more convenient further processing\n",
"_____no_output_____"
]
],
[
[
"client.create_stream_as(table_name='creditcardfraud_preprocessed_avro',\n select_columns=['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount', 'Class'],\n src_table='creditcardfraud_source',\n conditions='Class IS NOT NULL',\n kafka_topic='creditcardfraud_preprocessed_avro',\n value_format='AVRO')",
"_____no_output_____"
]
],
[
[
"Take a look at the creates KSQL Streams:",
"_____no_output_____"
]
],
[
[
"client.ksql('show streams')",
"_____no_output_____"
]
],
[
[
"Take a look at the metadata of the KSQL Stream:",
"_____no_output_____"
]
],
[
[
"client.ksql('describe CREDITCARDFRAUD_PREPROCESSED_AVRO')",
"_____no_output_____"
]
],
[
[
"Interactive query statement:",
"_____no_output_____"
]
],
[
[
"query = client.query('SELECT * FROM CREDITCARDFRAUD_PREPROCESSED_AVRO LIMIT 1')\n\nfor item in query: \n print(item)",
"\n\n\n\n{\"row\":{\"columns\":[1547818491376,null,0,-1.3598071336738,-0.0727811733098497,2.53634673796914,1.37815522427443,-0.338320769942518,0\n.462387777762292,0.239598554061257,0.0986979012610507,0.363786969611213,0.0907941719789316,-0.551599533260813,-0.617800855762348,-0.991389847235408,-0.311169353699879,1.46817697209427,-0.470400525259478,0.207971241929242,0.0257905801985591,0.403992960255733,0.25141209823970\n5,-0.018306777944153,0.277837575558899,-0.110473910188767,0.0669280749146731,0.128539358273528,-0.189114843888824,0.133558376740387,-0.0210530534538215,149.62,\"0\"]},\"errorMessage\":null,\"finalMessage\":null}\n{\"row\":null,\"errorMessage\":null,\"finalMessage\":\"Limit Reached\"}\n\n"
]
],
[
[
"Produce single test data manually (if you did not connect to a real data stream which produces data continuously), e.g. from terminal:\n\n confluent produce creditcardfraud_source\n\n 1,\"2018-12- 18T12:00:00Z\",\"Hans\",0,-1.3598071336738,-0.0727811733098497,2.53634673796914,1.37815522427443,-0.338320769942518,0.462387777762292,0.239598554061257,0.0986979012610507,0.363786969611213,0.0907941719789316,-0.551599533260813,-0.617800855762348,-0.991389847235408,-0.311169353699879,1.46817697209427,-0.470400525259478,0.207971241929242,0.0257905801985591,0.403992960255733,0.251412098239705,-0.018306777944153,0.277837575558899,-0.110473910188767,0.0669280749146731,0.128539358273528,-0.189114843888824,0.133558376740387,-0.0210530534538215,149.62,\"0\"\n \n*BE AWARE: The KSQL Python API does a REST call. This only waits a few seconds by default and then throws a timeout exception. You need to get data into the query before the timeout (e.g. by using above command).*",
"_____no_output_____"
]
],
[
[
"# TODO How to embed ' ' in Python ???\n# See https://github.com/bryanyang0528/ksql-python/issues/54\n# client.ksql('SET \\'auto.offset.reset\\'=\\'earliest\\'');",
"_____no_output_____"
]
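,
[
"# Optional sketch (not part of the original demo): produce the test record from Python instead of the CLI,\n# so that data arrives before the REST query above times out. This assumes the confluent-kafka client is\n# installed (pip install confluent-kafka); the 28 dummy '0.0' values stand in for V1..V28 and are placeholders.\nfrom confluent_kafka import Producer\n\nproducer = Producer({'bootstrap.servers': 'localhost:9092'})\nrecord = '1,\"2018-12-18T12:00:00Z\",\"Hans\",0,' + ','.join(['0.0'] * 28) + ',149.62,\"0\"'\nproducer.produce('creditcardfraud_source', value=record)\nproducer.flush()",
"_____no_output_____"
]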
],
[
[
"### Additional (optional) analysis and preprocessing examples\n\nSome more examples for possible data wrangling and preprocessing with KSQL:\n\n- Anonymization\n- Augmentation\n- Merge / Join data frames",
"_____no_output_____"
]
],
[
[
"query = client.query('SELECT Id, MASK_LEFT(User, 2) FROM creditcardfraud_source LIMIT 1')\n\nfor item in query: \n print(item)",
"\n\n\n{\"row\":{\"columns\":[1,\"Xxns\"]},\"errorMessage\":null,\"finalMessage\":null}\n{\"row\":null,\"errorMessage\":null,\"finalMessage\":\"Limit Reached\"}\n\n"
],
[
"query = client.query('SELECT Id, IFNULL(Class, \\'-1\\') FROM creditcardfraud_source LIMIT 1')\n\nfor item in query: \n print(item)",
"\n\n{\"row\":{\"columns\":[1,\"-1\"]},\"errorMessage\":null,\"finalMessage\":null}\n{\"row\":null,\"errorMessage\":null,\"finalMessage\":\"Limit Reached\"}\n\n"
]
],
[
[
"#### Stream-Table-Join\n\nFor the STREAM-TABLE-JOIN, you first need to create a Kafka Topic 'Users' (for the corresponding KSQL TABLE 'Users):",
"_____no_output_____"
]
],
[
[
"! kafka-topics --zookeeper localhost:2181 --create --topic users --partitions 3 --replication-factor 1 ",
"Created topic \"users\".\r\n"
]
],
[
[
"Then create the KSQL Table:",
"_____no_output_____"
]
],
[
[
"client.create_table(table_name='users',\n columns_type=['userid varchar', 'gender varchar', 'regionid varchar'],\n topic='users',\n key='userid',\n value_format='AVRO')",
"_____no_output_____"
],
[
"client.ksql(\"CREATE STREAM creditcardfraud_per_user WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_per_user') AS SELECT Time, Amount, Class FROM creditcardfraud_source c INNER JOIN USERS u on c.user = u.userid WHERE u.USERID = 1\")",
"_____no_output_____"
]
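,
[
"# Not in the original walkthrough: once at least one matching user record exists in the 'users' topic,\n# the joined stream can be queried like any other KSQL stream. This will hit the REST timeout if no\n# joined events have arrived yet.\nquery = client.query('SELECT * FROM creditcardfraud_per_user LIMIT 1')\n\nfor item in query: \n    print(item)",
"_____no_output_____"
]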
],
[
[
"# Mapping from KSQL to NumPy / pandas for Machine Learning tasks",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport json",
"_____no_output_____"
]
],
[
[
"The query below command returns a Python generator. It can be printed e.g. by reading its values via next(query) or a for loop.\n\nDue to a current [bug in ksql-python library](https://github.com/bryanyang0528/ksql-python/issues/57), we need to to an additional line of Python code to strip out unnecessary info and change to 2D array ",
"_____no_output_____"
]
],
[
[
"query = client.query('SELECT * FROM CREDITCARDFRAUD_PREPROCESSED_AVRO LIMIT 8') # Returns a Python generator object\n\n#items = [item for item in query][:-1] # -1 to remove last record that is a dummy msg for \"Limit Reached\" \n#one_record = json.loads(''.join(items)) # Join two records as one as ksql-python is splitting it into two? \n#data = [one_record['row']['columns'][2:-1]] # Strip out unnecessary info and change to 2D array \n#df = pd.DataFrame(data=data) \n\nrecords = [json.loads(r) for r in ''.join(query).strip().replace('\\n\\n\\n\\n', '').split('\\n')]\ndata = [r['row']['columns'][2:] for r in records[:-1]]\n#data = r['row']['columns'][2] for r in records\ndf = pd.DataFrame(data=data, columns=['Time', 'V1' , 'V2' , 'V3' , 'V4' , 'V5' , 'V6' , 'V7' , 'V8' , 'V9' , 'V10' , 'V11' , 'V12' , 'V13' , 'V14' , 'V15' , 'V16' , 'V17' , 'V18' , 'V19' , 'V20' , 'V21' , 'V22' , 'V23' , 'V24' , 'V25' , 'V26' , 'V27' , 'V28' , 'Amount' , 'Class'])\ndf",
"_____no_output_____"
]
],
[
[
"### Generate some test data \n\nAs discussed in the step-by-step guide, you have various options. Here we - ironically - read messages from a CSV file. This is for simple demo purposes so that you don't have to set up a real continuous Kafka stream. \n\nIn real world or more advanced examples, you should connect to a real Kafka data stream (for instance using the Kafka data generator or Kafka Connect).\n\nHere we just consume a few messages for demo purposes so that they get mapped into a pandas dataframe:\n\n cat /Users/kai.waehner/git-projects/python-jupyter-apache-kafka-ksql-tensorflow-keras/data/creditcard_extended.csv | kafka-console-producer --broker-list localhost:9092 --topic creditcardfraud_source\n \nYou need to do this from command line because Jupyter cannot execute this in parallel to above KSQL query.",
"_____no_output_____"
],
[
"# Preprocessing with Pandas + Model Training with TensorFlow / Keras",
"_____no_output_____"
],
[
"#### BE AWARE: You need enough messages in the pandas data frame to train the model in the below cells (if you just play around with ksql-python and just add a few Kafka events, it is not a sufficient number of rows to continue. You can simply change to df = pd.read_csv(\"data/creditcard.csv\") as shown below in this case to get a bigger data set...\n\n\nThis part only includes the steps required for model training of the Autoencoder with Keras and TensorFlow. \n\nIf you want to get a better understanding of the model, take a look at the other notebook [Python Tensorflow Keras Fraud Detection Autoencoder.ipynb](http://localhost:8888/notebooks/Python%20Tensorflow%20Keras%20Fraud%20Detection%20Autoencoder.ipynb) which includes many more details, plots and explanations.\n\n[Kudos to David Ellison](https://www.datascience.com/blog/fraud-detection-with-tensorflow).\n\n[The credit card fraud data set is available at Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data).",
"_____no_output_____"
]
],
[
[
"# import packages\n# matplotlib inline\n#import pandas as pd\n#import numpy as np\nfrom scipy import stats\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pickle\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, precision_recall_curve\nfrom sklearn.metrics import recall_score, classification_report, auc, roc_curve\nfrom sklearn.metrics import precision_recall_fscore_support, f1_score\nfrom sklearn.preprocessing import StandardScaler\nfrom pylab import rcParams\nfrom keras.models import Model, load_model\nfrom keras.layers import Input, Dense\nfrom keras.callbacks import ModelCheckpoint, TensorBoard\nfrom keras import regularizers",
"Using TensorFlow backend.\n"
],
[
"# Use the dataframe from above (imported and preprocessed with KSQL)\n\n# As alternative directly import from a CSV file (\"the normal approach without Kafka and streaming data\")\n\n# \"data/creditcard_small.csv\" is a very small data set (just for quick demo purpose to get a model binary)\n# => replace with \"data/creditcard.csv\" to use a real data set to train a model with good accuracy\n#df = pd.read_csv(\"data/creditcard.csv\") \n\n\ndf.head(n=5) #just to check you imported the dataset properly",
"_____no_output_____"
],
[
"#set random seed and percentage of test data\nRANDOM_SEED = 314 #used to help randomly select the data points\nTEST_PCT = 0.2 # 20% of the data\n\n#set up graphic style in this case I am using the color scheme from xkcd.com\nrcParams['figure.figsize'] = 14, 8.7 # Golden Mean\nLABELS = [\"Normal\",\"Fraud\"]\n#col_list = [\"cerulean\",\"scarlet\"]# https://xkcd.com/color/rgb/\n#sns.set(style='white', font_scale=1.75, palette=sns.xkcd_palette(col_list))",
"_____no_output_____"
],
[
"normal_df = [df.Class == 0] #save normal_df observations into a separate df\nfraud_df = [df.Class == 1] #do the same for frauds",
"_____no_output_____"
],
[
"#data = df.drop(['Time'], axis=1) #if you think the var is unimportant\ndf_norm = df\ndf_norm['Time'] = StandardScaler().fit_transform(df_norm['Time'].values.reshape(-1, 1))\ndf_norm['Amount'] = StandardScaler().fit_transform(df_norm['Amount'].values.reshape(-1, 1))",
"_____no_output_____"
],
[
"train_x, test_x = train_test_split(df_norm, test_size=TEST_PCT, random_state=RANDOM_SEED)\ntrain_x = train_x[train_x.Class == 0] #where normal transactions\ntrain_x = train_x.drop(['Class'], axis=1) #drop the class column\n\ntest_y = test_x['Class'] #save the class column for the test set\ntest_x = test_x.drop(['Class'], axis=1) #drop the class column\n\ntrain_x = train_x.values #transform to ndarray\ntest_x = test_x.values",
"_____no_output_____"
]
],
[
[
"### My Jupyter Notebook crashed sometimes in the next step 'model training' (probably memory issues):",
"_____no_output_____"
]
],
[
[
"# Reduce number of epochs and batch_size if your Jupyter crashes (due to memory issues)\n# nb_epoch = 100\n# batch_size = 128\nnb_epoch = 5\nbatch_size = 32\n\ninput_dim = train_x.shape[1] #num of columns, 30\nencoding_dim = 14\nhidden_dim = int(encoding_dim / 2) #i.e. 7\nlearning_rate = 1e-7\n\ninput_layer = Input(shape=(input_dim, ))\nencoder = Dense(encoding_dim, activation=\"tanh\", activity_regularizer=regularizers.l1(learning_rate))(input_layer)\nencoder = Dense(hidden_dim, activation=\"relu\")(encoder)\ndecoder = Dense(hidden_dim, activation='tanh')(encoder)\ndecoder = Dense(input_dim, activation='relu')(decoder)\nautoencoder = Model(inputs=input_layer, outputs=decoder)",
"_____no_output_____"
],
[
"autoencoder.compile(metrics=['accuracy'],\n loss='mean_squared_error',\n optimizer='adam')\n\ncp = ModelCheckpoint(filepath=\"models/autoencoder_fraud.h5\",\n save_best_only=True,\n verbose=0)\n\ntb = TensorBoard(log_dir='./logs',\n histogram_freq=0,\n write_graph=True,\n write_images=True)\n\nhistory = autoencoder.fit(train_x, train_x,\n epochs=nb_epoch,\n batch_size=batch_size,\n shuffle=True,\n validation_data=(test_x, test_x),\n verbose=1,\n callbacks=[cp, tb]).history",
"Train on 227468 samples, validate on 56962 samples\nEpoch 1/5\n"
],
[
"autoencoder = load_model('models/autoencoder_fraud.h5')\n",
"_____no_output_____"
],
[
"test_x_predictions = autoencoder.predict(test_x)\nmse = np.mean(np.power(test_x - test_x_predictions, 2), axis=1)\nerror_df = pd.DataFrame({'Reconstruction_error': mse,\n 'True_class': test_y})\nerror_df.describe()",
"_____no_output_____"
]
],
[
[
"The binary 'models/autoencoder_fraud.h5' is the trained model which can then be deployed anywhere to do prediction on new incoming events in real time. ",
"_____no_output_____"
],
[
"# Model Deployment\n\nThis demo focuses on the combination of Python and KSQL for data preprocessing and model training. If you want to understand the relation between Apache Kafka, KSQL and Python-related Machine Learning tools like TensorFlow for model deployment and monitoring, please check out my other Github projects:\n\nSome examples of model deployment in Kafka environments:\n\n- [Analytic models (TensorFlow, Keras, H2O and Deeplearning4j) embedded in Kafka Streams microservices](https://github.com/kaiwaehner/kafka-streams-machine-learning-examples)\n- [Anomaly detection of IoT sensor data with a model embedded into a KSQL UDF](https://github.com/kaiwaehner/ksql-udf-deep-learning-mqtt-iot)\n- [RPC communication between Kafka Streams application and model server (TensorFlow Serving)](https://github.com/kaiwaehner/tensorflow-serving-java-grpc-kafka-streams)",
"_____no_output_____"
],
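[
"# A minimal scoring sketch, not the full deployment shown in the linked projects: load the saved\n# autoencoder and flag a new payment whose reconstruction error exceeds a threshold.\n# The threshold below is an assumed example value; in practice it is tuned on validation data (see error_df above).\nfrom keras.models import load_model\nimport numpy as np\n\nRECONSTRUCTION_THRESHOLD = 5.0  # assumed example value, tune on validation data\n\nscoring_model = load_model('models/autoencoder_fraud.h5')\n\ndef is_fraud(event):\n    # event: 1D array with the 30 preprocessed features (Time, V1..V28, Amount)\n    event = np.asarray(event).reshape(1, -1)\n    reconstruction = scoring_model.predict(event)\n    mse = np.mean(np.power(event - reconstruction, 2))\n    return mse > RECONSTRUCTION_THRESHOLD\n\n# example: score the first row of the test set\nprint(is_fraud(test_x[0]))",
"_____no_output_____"
],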
[
"# Appendix: Pandas analysis with above Fraud Detection Data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"data/creditcard.csv\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.index",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.values",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df['Amount']",
"_____no_output_____"
],
[
"df[0:3]",
"_____no_output_____"
],
[
"df.iloc[1,1]",
"_____no_output_____"
],
[
"# Takes a minute or two (big CSV file)...\n#df.plot()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0b3e7d663c517269e1b3158f0aa67109e780a07 | 178,506 | ipynb | Jupyter Notebook | convolutional-neural-networks/mnist-mlp/mnist_mlp_solution_with_validation.ipynb | ahmed-gharib89/ud_deep_learning_v2_pytorch | bb5f66fc8e674daba134f1feba4877f784430486 | [
"MIT"
] | 1 | 2020-07-21T19:06:10.000Z | 2020-07-21T19:06:10.000Z | convolutional-neural-networks/mnist-mlp/mnist_mlp_solution_with_validation.ipynb | ahmed-gharib89/ud_deep_learning_v2_pytorch | bb5f66fc8e674daba134f1feba4877f784430486 | [
"MIT"
] | 2 | 2020-06-24T22:51:09.000Z | 2020-09-26T07:28:35.000Z | convolutional-neural-networks/mnist-mlp/mnist_mlp_solution_with_validation.ipynb | ahmed-gharib89/ud_deep_learning_v2_pytorch | bb5f66fc8e674daba134f1feba4877f784430486 | [
"MIT"
] | null | null | null | 286.987138 | 102,848 | 0.907734 | [
[
[
"# Multi-Layer Perceptron, MNIST\n---\nIn this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.\n\nThe process will be broken down into the following steps:\n>1. Load and visualize the data\n2. Define a neural network\n3. Train the model\n4. Evaluate the performance of our trained model on a test dataset!\n\nBefore we begin, we have to import the necessary libraries for working with data and PyTorch.",
"_____no_output_____"
]
],
[
[
"# import libraries\nimport torch\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"---\n## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)\n\nDownloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.\n\nThis cell will create DataLoaders for each of our datasets.",
"_____no_output_____"
]
],
[
[
"from torchvision import datasets\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import SubsetRandomSampler\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n# percentage of training set to use as validation\nvalid_size = 0.2\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)\n\n# obtain training indices that will be used for validation\nnum_train = len(train_data)\nindices = list(range(num_train))\nnp.random.shuffle(indices)\nsplit = int(np.floor(valid_size * num_train))\ntrain_idx, valid_idx = indices[split:], indices[:split]\n\n# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n sampler=train_sampler, num_workers=num_workers)\nvalid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, \n sampler=valid_sampler, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)",
"_____no_output_____"
]
],
[
[
"### Visualize a Batch of Training Data\n\nThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\n\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n # print out the correct label for each image\n # .item() gets the value contained in a Tensor\n ax.set_title(str(labels[idx].item()))",
"_____no_output_____"
]
],
[
[
"### View an Image in More Detail",
"_____no_output_____"
]
],
[
[
"img = np.squeeze(images[1])\n\nfig = plt.figure(figsize = (12,12)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')\nwidth, height = img.shape\nthresh = img.max()/2.5\nfor x in range(width):\n for y in range(height):\n val = round(img[x][y],2) if img[x][y] !=0 else 0\n ax.annotate(str(val), xy=(y,x),\n horizontalalignment='center',\n verticalalignment='center',\n color='white' if img[x][y]<thresh else 'black')",
"_____no_output_____"
]
],
[
[
"---\n## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)\n\nThe architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # number of hidden nodes in each layer (512)\n hidden_1 = 512\n hidden_2 = 512\n # linear layer (784 -> hidden_1)\n self.fc1 = nn.Linear(28 * 28, hidden_1)\n # linear layer (n_hidden -> hidden_2)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n # linear layer (n_hidden -> 10)\n self.fc3 = nn.Linear(hidden_2, 10)\n # dropout layer (p=0.2)\n # dropout prevents overfitting of data\n self.dropout = nn.Dropout(0.2)\n\n def forward(self, x):\n # flatten image input\n x = x.view(-1, 28 * 28)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc1(x))\n # add dropout layer\n x = self.dropout(x)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc2(x))\n # add dropout layer\n x = self.dropout(x)\n # add output layer\n x = self.fc3(x)\n return x\n\n# initialize the NN\nmodel = Net()\nprint(model)",
"Net(\n (fc1): Linear(in_features=784, out_features=512, bias=True)\n (fc2): Linear(in_features=512, out_features=512, bias=True)\n (fc3): Linear(in_features=512, out_features=10, bias=True)\n (dropout): Dropout(p=0.2, inplace=False)\n)\n"
]
],
[
[
"### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)\n\nIt's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer *and* then calculates the log loss.",
"_____no_output_____"
]
],
[
[
"# specify loss function (categorical cross-entropy)\ncriterion = nn.CrossEntropyLoss()\n\n# specify optimizer (stochastic gradient descent) and learning rate = 0.01\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)",
"_____no_output_____"
]
],
[
[
"---\n## Train the Network\n\nThe steps for training/learning from a batch of data are described in the comments below:\n1. Clear the gradients of all optimized variables\n2. Forward pass: compute predicted outputs by passing inputs to the model\n3. Calculate the loss\n4. Backward pass: compute gradient of the loss with respect to model parameters\n5. Perform a single optimization step (parameter update)\n6. Update average training loss\n\nThe following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.",
"_____no_output_____"
]
],
[
[
"model.to('cuda')\n# number of epochs to train the model\nn_epochs = 50\n\n# initialize tracker for minimum validation loss\nvalid_loss_min = np.Inf # set initial \"min\" to infinity\n\nfor epoch in range(n_epochs):\n # monitor training loss\n train_loss = 0.0\n valid_loss = 0.0\n \n ###################\n # train the model #\n ###################\n model.train() # prep model for training\n for data, target in train_loader:\n data, target = data.to('cuda'), target.to('cuda')\n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*data.size(0)\n \n ###################### \n # validate the model #\n ######################\n model.eval() # prep model for evaluation\n for data, target in valid_loader:\n data, target = data.to('cuda'), target.to('cuda')\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update running validation loss \n valid_loss += loss.item()*data.size(0)\n \n # print training/validation statistics \n # calculate average loss over an epoch\n train_loss = train_loss/len(train_loader.sampler)\n valid_loss = valid_loss/len(valid_loader.sampler)\n \n print('Epoch: {} \\tTraining Loss: {:.6f} \\tValidation Loss: {:.6f}'.format(\n epoch+1, \n train_loss,\n valid_loss\n ))\n \n # save model if validation loss has decreased\n if valid_loss <= valid_loss_min:\n print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(\n valid_loss_min,\n valid_loss))\n torch.save(model.state_dict(), 'model.pt')\n valid_loss_min = valid_loss",
"Epoch: 1 \tTraining Loss: 0.361179 \tValidation Loss: 0.291315\nValidation loss decreased (inf --> 0.291315). Saving model ...\nEpoch: 2 \tTraining Loss: 0.285200 \tValidation Loss: 0.239275\nValidation loss decreased (0.291315 --> 0.239275). Saving model ...\nEpoch: 3 \tTraining Loss: 0.236005 \tValidation Loss: 0.203692\nValidation loss decreased (0.239275 --> 0.203692). Saving model ...\nEpoch: 4 \tTraining Loss: 0.199481 \tValidation Loss: 0.175669\nValidation loss decreased (0.203692 --> 0.175669). Saving model ...\nEpoch: 5 \tTraining Loss: 0.173909 \tValidation Loss: 0.160357\nValidation loss decreased (0.175669 --> 0.160357). Saving model ...\nEpoch: 6 \tTraining Loss: 0.153004 \tValidation Loss: 0.144246\nValidation loss decreased (0.160357 --> 0.144246). Saving model ...\nEpoch: 7 \tTraining Loss: 0.137062 \tValidation Loss: 0.133014\nValidation loss decreased (0.144246 --> 0.133014). Saving model ...\nEpoch: 8 \tTraining Loss: 0.122044 \tValidation Loss: 0.125957\nValidation loss decreased (0.133014 --> 0.125957). Saving model ...\nEpoch: 9 \tTraining Loss: 0.110446 \tValidation Loss: 0.117490\nValidation loss decreased (0.125957 --> 0.117490). Saving model ...\nEpoch: 10 \tTraining Loss: 0.102860 \tValidation Loss: 0.109840\nValidation loss decreased (0.117490 --> 0.109840). Saving model ...\nEpoch: 11 \tTraining Loss: 0.093429 \tValidation Loss: 0.103411\nValidation loss decreased (0.109840 --> 0.103411). Saving model ...\nEpoch: 12 \tTraining Loss: 0.085458 \tValidation Loss: 0.097992\nValidation loss decreased (0.103411 --> 0.097992). Saving model ...\nEpoch: 13 \tTraining Loss: 0.079678 \tValidation Loss: 0.099140\nEpoch: 14 \tTraining Loss: 0.072495 \tValidation Loss: 0.091042\nValidation loss decreased (0.097992 --> 0.091042). Saving model ...\nEpoch: 15 \tTraining Loss: 0.069303 \tValidation Loss: 0.088470\nValidation loss decreased (0.091042 --> 0.088470). Saving model ...\nEpoch: 16 \tTraining Loss: 0.065515 \tValidation Loss: 0.087642\nValidation loss decreased (0.088470 --> 0.087642). Saving model ...\nEpoch: 17 \tTraining Loss: 0.060732 \tValidation Loss: 0.083811\nValidation loss decreased (0.087642 --> 0.083811). Saving model ...\nEpoch: 18 \tTraining Loss: 0.057880 \tValidation Loss: 0.082875\nValidation loss decreased (0.083811 --> 0.082875). Saving model ...\nEpoch: 19 \tTraining Loss: 0.052881 \tValidation Loss: 0.083557\nEpoch: 20 \tTraining Loss: 0.049798 \tValidation Loss: 0.082006\nValidation loss decreased (0.082875 --> 0.082006). Saving model ...\nEpoch: 21 \tTraining Loss: 0.047927 \tValidation Loss: 0.078411\nValidation loss decreased (0.082006 --> 0.078411). Saving model ...\nEpoch: 22 \tTraining Loss: 0.045486 \tValidation Loss: 0.077525\nValidation loss decreased (0.078411 --> 0.077525). Saving model ...\nEpoch: 23 \tTraining Loss: 0.041816 \tValidation Loss: 0.074999\nValidation loss decreased (0.077525 --> 0.074999). Saving model ...\nEpoch: 24 \tTraining Loss: 0.040343 \tValidation Loss: 0.074555\nValidation loss decreased (0.074999 --> 0.074555). Saving model ...\nEpoch: 25 \tTraining Loss: 0.038133 \tValidation Loss: 0.076803\nEpoch: 26 \tTraining Loss: 0.036810 \tValidation Loss: 0.073787\nValidation loss decreased (0.074555 --> 0.073787). Saving model ...\nEpoch: 27 \tTraining Loss: 0.034755 \tValidation Loss: 0.073893\nEpoch: 28 \tTraining Loss: 0.032561 \tValidation Loss: 0.072337\nValidation loss decreased (0.073787 --> 0.072337). 
Saving model ...\nEpoch: 29 \tTraining Loss: 0.030766 \tValidation Loss: 0.071891\nValidation loss decreased (0.072337 --> 0.071891). Saving model ...\nEpoch: 30 \tTraining Loss: 0.030052 \tValidation Loss: 0.070751\nValidation loss decreased (0.071891 --> 0.070751). Saving model ...\nEpoch: 31 \tTraining Loss: 0.027714 \tValidation Loss: 0.073124\nEpoch: 32 \tTraining Loss: 0.026635 \tValidation Loss: 0.070788\nEpoch: 33 \tTraining Loss: 0.025980 \tValidation Loss: 0.072915\nEpoch: 34 \tTraining Loss: 0.025190 \tValidation Loss: 0.072318\nEpoch: 35 \tTraining Loss: 0.023234 \tValidation Loss: 0.070464\nValidation loss decreased (0.070751 --> 0.070464). Saving model ...\nEpoch: 36 \tTraining Loss: 0.023196 \tValidation Loss: 0.068323\nValidation loss decreased (0.070464 --> 0.068323). Saving model ...\nEpoch: 37 \tTraining Loss: 0.021589 \tValidation Loss: 0.070177\nEpoch: 38 \tTraining Loss: 0.020518 \tValidation Loss: 0.068799\nEpoch: 39 \tTraining Loss: 0.019935 \tValidation Loss: 0.068493\nEpoch: 40 \tTraining Loss: 0.019375 \tValidation Loss: 0.068532\nEpoch: 41 \tTraining Loss: 0.018920 \tValidation Loss: 0.068881\nEpoch: 42 \tTraining Loss: 0.017016 \tValidation Loss: 0.069970\nEpoch: 43 \tTraining Loss: 0.017391 \tValidation Loss: 0.068416\nEpoch: 44 \tTraining Loss: 0.015818 \tValidation Loss: 0.070202\nEpoch: 45 \tTraining Loss: 0.015345 \tValidation Loss: 0.071929\nEpoch: 46 \tTraining Loss: 0.016532 \tValidation Loss: 0.068365\nEpoch: 47 \tTraining Loss: 0.014171 \tValidation Loss: 0.070992\nEpoch: 48 \tTraining Loss: 0.014257 \tValidation Loss: 0.070559\nEpoch: 49 \tTraining Loss: 0.013542 \tValidation Loss: 0.071192\nEpoch: 50 \tTraining Loss: 0.013093 \tValidation Loss: 0.069902\n"
]
],
[
[
"### Load the Model with the Lowest Validation Loss",
"_____no_output_____"
]
],
[
[
"model.load_state_dict(torch.load('model.pt'))",
"_____no_output_____"
]
],
[
[
"---\n## Test the Trained Network\n\nFinally, we test our best model on previously unseen **test data** and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.",
"_____no_output_____"
]
],
[
[
"model.to('cpu')\n# initialize lists to monitor test loss and accuracy\ntest_loss = 0.0\nclass_correct = list(0. for i in range(10))\nclass_total = list(0. for i in range(10))\n\nmodel.eval() # prep model for evaluation\n\nfor data, target in test_loader:\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update test loss \n test_loss += loss.item()*data.size(0)\n # convert output probabilities to predicted class\n _, pred = torch.max(output, 1)\n # compare predictions to true label\n correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n # calculate test accuracy for each object class\n for i in range(len(target)):\n label = target.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n# calculate and print avg test loss\ntest_loss = test_loss/len(test_loader.sampler)\nprint('Test Loss: {:.6f}\\n'.format(test_loss))\n\nfor i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n str(i), 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\nprint('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. * np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))",
"Test Loss: 0.074413\n\nTest Accuracy of 0: 99% (971/980)\nTest Accuracy of 1: 98% (1123/1135)\nTest Accuracy of 2: 96% (1001/1032)\nTest Accuracy of 3: 97% (984/1010)\nTest Accuracy of 4: 97% (960/982)\nTest Accuracy of 5: 97% (870/892)\nTest Accuracy of 6: 97% (935/958)\nTest Accuracy of 7: 96% (993/1028)\nTest Accuracy of 8: 97% (946/974)\nTest Accuracy of 9: 97% (981/1009)\n\nTest Accuracy (Overall): 97% (9764/10000)\n"
]
],
[
[
"### Visualize Sample Test Results\n\nThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.",
"_____no_output_____"
]
],
[
[
"# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\n\n# get sample outputs\noutput = model(images)\n# convert output probabilities to predicted class\n_, preds = torch.max(output, 1)\n# prep images for display\nimages = images.numpy()\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(\"{} ({})\".format(str(preds[idx].item()), str(labels[idx].item())),\n color=(\"green\" if preds[idx]==labels[idx] else \"red\"))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0b3f1670a526aff76658f4a82fb4f825c53d489 | 991,013 | ipynb | Jupyter Notebook | notebooks/initial_training.ipynb | smolendawid/ubaar-competition | 28de972d6beb13343c537fc030101be672a852a3 | [
"Apache-2.0"
] | null | null | null | notebooks/initial_training.ipynb | smolendawid/ubaar-competition | 28de972d6beb13343c537fc030101be672a852a3 | [
"Apache-2.0"
] | null | null | null | notebooks/initial_training.ipynb | smolendawid/ubaar-competition | 28de972d6beb13343c537fc030101be672a852a3 | [
"Apache-2.0"
] | null | null | null | 715.532852 | 39,648 | 0.943174 | [
[
[
"These notebook is used for initial training. Only necessary preprocessing is done, mainly categorical features encoding and Nans replacement. \nIt should show the main problems with observations, show main model difficulties, and feaures importances. It should also guide the way of validation Therefore we have:\n- data preparation\n- cross-validation and modeling\n- features and error analysis",
"_____no_output_____"
]
],
[
[
"import os\nimport pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import Ridge\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import mean_absolute_percentage_error\nfrom sklearn.model_selection import KFold\n\nimport mlflow\n\nimport IPython.display as ipd\nimport seaborn as sns\nimport matplotlib.pylab as plt",
"_____no_output_____"
],
[
"data = pd.read_csv(os.path.join('..', 'data', 'raw', 'ubaar-competition', 'train.csv'), encoding=\"utf-8\", index_col=\"ID\")",
"_____no_output_____"
],
[
"data['distanceKM'].fillna(data['distanceKM'].median(), inplace=True)\ndata['taxiDurationMin'].fillna(data['taxiDurationMin'].median(), inplace=True)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.columns",
"_____no_output_____"
],
[
"columns_countinous = ['date', 'sourceLatitude', 'sourceLongitude', 'destinationLatitude', 'destinationLongitude', 'distanceKM', 'taxiDurationMin', 'weight']\ncolumns_cat = ['vehicleType', 'vehicleOption']",
"_____no_output_____"
],
[
"data_oh = pd.get_dummies(data, columns=columns_cat, drop_first=True)\ndata_oh = data_oh.drop(columns=['SourceState', 'destinationState'])",
"_____no_output_____"
],
[
"data_oh.head()",
"_____no_output_____"
],
[
"features_columns = data_oh.columns[data_oh.columns != 'price'].values",
"_____no_output_____"
],
[
"remote_server_uri = \"http://18.185.244.61:5050\"\nmlflow.set_tracking_uri(remote_server_uri)\nmlflow.set_experiment(\"UbaarCVinitial\")\nmlflow.start_run(run_name='')\nmlflow.log_param('features', features_columns)\n\n\ny_full = data_oh['price'].values\nx_full = data_oh[features_columns].values\n\nkfold = KFold(n_splits=5, shuffle=True, random_state=42)\n\ntrain_mapes = []\ndev_mapes = []\ndev_preds = []\ndev_refs = []\ndev_inds = []\n\nfor train_ind, dev_ind in kfold.split(x_full):\n \n model = RandomForestRegressor(n_estimators=20, max_depth=None, min_samples_leaf=8, random_state=42)\n# model = Ridge(alpha=10, solver='auto')\n mlflow.log_param('features', features_columns)\n mlflow.log_param('model', model.__dict__)\n\n x_train = x_full[train_ind]\n y_train = y_full[train_ind]\n x_dev = x_full[dev_ind]\n y_dev = y_full[dev_ind]\n \n# scaler = StandardScaler()\n# scaler.fit(x_train)\n# x_train = scaler.transform(x_train)\n# x_dev = scaler.transform(x_dev)\n\n model.fit(x_train, y_train)\n\n preds_train = model.predict(x_train)\n preds_dev = model.predict(x_dev)\n\n train_mape = mean_absolute_percentage_error(y_train, preds_train)\n dev_mape = mean_absolute_percentage_error(y_dev, preds_dev)\n \n train_mapes.append(train_mape)\n dev_mapes.append(dev_mape)\n \n dev_preds.extend(list(preds_dev))\n dev_refs.extend(list(y_dev))\n dev_inds.extend(list(dev_ind))\n \n print(f\"Train MAPE: {train_mape}\")\n print(f\"Dev MAPE: {dev_mape}\")\n\nprint(\"================\")\nprint(f\"Mean MAPE: {np.mean(dev_mapes)}\")\nprint(f\"Std MAPE: {np.std(dev_mapes)}\")\n\nmlflow.log_metric(\"Mean dev MAPE\", np.mean(dev_mapes))\nmlflow.log_metric(\"Std dev MAPE\", np.std(dev_mapes))\n \nmlflow.end_run()",
"Train MAPE: 0.15811852032243692\nDev MAPE: 0.2076681168682707\nTrain MAPE: 0.157782689063902\nDev MAPE: 0.20706672162305118\nTrain MAPE: 0.1586713347314613\nDev MAPE: 0.20592405684018758\nTrain MAPE: 0.15819431667628758\nDev MAPE: 0.2060751622414274\nTrain MAPE: 0.15605939646205402\nDev MAPE: 0.20927568114740672\n================\nMean MAPE: 0.2072019477440687\nStd MAPE: 0.0012197229780133896\n"
],
[
"results = pd.DataFrame(list(zip(dev_refs, dev_preds, dev_inds)), columns=['refs', 'preds', 'inds'])\nresults = results.sort_values('inds')\nresults.head()",
"_____no_output_____"
],
[
"sorted_idx = model.feature_importances_.argsort()\nplt.figure(figsize=(10,20))\nplt.barh(features_columns[sorted_idx], model.feature_importances_[sorted_idx])\nplt.xlabel(\"Random Forest Feature Importance\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nsns.histplot(data=results, x='refs', y='preds')\nplt.plot([0, 50000000], [0, 50000000], linewidth=1, c='r')\nplt.xlim([0, 50000000])\nplt.ylim([0, 50000000])",
"_____no_output_____"
]
],
[
[
"# Error analysis\n\nError analysis is a crucial step in working on a model. We can check the performance of the model according to specific features \nto find the weakest aspect of the model.",
"_____no_output_____"
]
],
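A tabular complement to the per-group plots below is a groupby aggregation; a hedged sketch (it assumes the `data` frame already carries the `refs` and `preds` columns added in the next cell):

```python
def mape_by_group(df, column):
    # Per-category MAPE, worst categories first
    scores = df.groupby(column).apply(
        lambda d: (d['refs'] - d['preds']).abs().div(d['refs']).mean())
    return scores.sort_values(ascending=False)

# e.g. mape_by_group(data, 'vehicleOption')
```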
[
[
"data['refs'] = results['refs'].values\ndata['preds'] = results['preds'].values",
"_____no_output_____"
],
[
"column = 'vehicleOption'\n\nfor vehicle_type in data[column].unique():\n tmp_data = data[data[column] == vehicle_type]\n title = f\"{vehicle_type}, MPAE:{mean_absolute_percentage_error(tmp_data['refs'], tmp_data['preds']):.2f} #:{len(tmp_data)}\"\n plt.title(title)\n sns.scatterplot(data=tmp_data, x='refs', y='preds')\n plt.xlim([0, 50000000])\n plt.ylim([0, 50000000])\n plt.show()",
"_____no_output_____"
],
[
"column = 'SourceState'\n\nfor vehicle_type in data[column].unique():\n tmp_data = data[data[column] == vehicle_type]\n title = f\"{vehicle_type}, MPAE:{mean_absolute_percentage_error(tmp_data['refs'], tmp_data['preds']):.2f} #:{len(tmp_data)}\"\n plt.title(title)\n sns.scatterplot(data=tmp_data, x='refs', y='preds')\n plt.xlim([0, 50000000])\n plt.ylim([0, 50000000])\n plt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0b3f69e9a5ddaf17a35b460eaefa7a70f37f1a7 | 67,224 | ipynb | Jupyter Notebook | POO/Lista_Exercicios_POO.ipynb | augustojardim/Degree_DS_Tecnicas_Programacao_I | 6b724882767534010c4f287144ab791e6f9947e2 | [
"MIT"
] | null | null | null | POO/Lista_Exercicios_POO.ipynb | augustojardim/Degree_DS_Tecnicas_Programacao_I | 6b724882767534010c4f287144ab791e6f9947e2 | [
"MIT"
] | null | null | null | POO/Lista_Exercicios_POO.ipynb | augustojardim/Degree_DS_Tecnicas_Programacao_I | 6b724882767534010c4f287144ab791e6f9947e2 | [
"MIT"
] | null | null | null | 32.856305 | 337 | 0.482 | [
[
[
"1. Crie uma classe Bola cujos atributos são cor e raio. Crie um método que imprime a cor da bola. Crie um método para calcular a área dessa bola. Crie um método para calcular o volume da bola. Crie um objeto dessa classe e calcule a área e o volume, imprimindo ambos em seguida. \n Obs.:\n\n Área da esfera = 4 * 3.14 * r * r;\n\n Volume da esfera = 4 * 3.14 * r* r * r/3",
"_____no_output_____"
]
],
[
[
"class Bola:\n '''\n Cria a representação de uma bola\n '''\n def __init__(self, cor, raio):\n '''\n Construtor\n Parâmetros\n ----------\n cor : str\n cor associada a bola\n raio : float\n raio da bola\n '''\n self.cor = cor\n self.raio = raio\n self.area = None\n self.volume = None\n\n \n def calcula_area(self):\n self.area = 4*3.14*(self.raio)**2\n return self.area\n \n def calcula_volume(self):\n self.volume = 4/3*3.14*(self.raio)**3\n return self.volume\n \n def __repr__(self):\n if self.area == None and self.volume == None:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}'\n elif self.volume == None:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}, A area da bola é {self.area}'\n else:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}, A area da bola é {self.area}, O volume da bola é {self.volume}'\n ",
"_____no_output_____"
],
[
"Bola_teste = Bola(cor = 'azul',\n raio = 4)\nAre = Bola_teste.calcula_area()\nVol = Bola_teste.calcula_volume()",
"_____no_output_____"
],
[
"repr(Bola_teste)",
"_____no_output_____"
],
[
"Are = Bola_teste.calcula_area()\nVol = Bola_teste.calcula_volume()\nprint(Are)\nprint(Vol)",
"_____no_output_____"
],
[
"Bola_123 = Bola(cor = 'verde',\n raio = 10)",
"_____no_output_____"
],
[
"Bola_123.calcula_area()\nBola_123.area",
"_____no_output_____"
]
],
[
[
"2. Crie uma classe Retângulo cujos atributos são lado_a e lado_b. Crie um método para calcular a área desse retângulo. Crie um objeto dessa classe e calcule a área e a imprima em seguida.",
"_____no_output_____"
]
],
[
[
"class Retângulo:\n '''\n Cria a representação de um retângulo\n '''\n def __init__(self, lado_a, lado_b):\n '''\n Construtor\n Parâmetros\n ----------\n lado_a : float\n medida do primeiro lado do retângulo\n lado_b : float\n medida do segundo lado do retângulo\n '''\n self.lado_a = lado_a\n self.lado_b = lado_b\n self.area = None\n \n def calcula_area(self):\n self.area = self.lado_a*self.lado_b\n return self.area\n \n def __repr__(self):\n if self.area == None:\n return f'O lado a vale {self.lado_a} e o lado b vale {self.lado_b}'\n else:\n return f'O lado a vale {self.lado_a} e o lado b vale {self.lado_b} e a área do retangulo vale {self.area}'",
"_____no_output_____"
],
[
"Teste = Retângulo (lado_a = 10,\n lado_b = 2)\nTeste.calcula_area()",
"_____no_output_____"
],
[
"repr(Teste)",
"_____no_output_____"
]
],
[
[
"3. Crie uma classe Funcionario cujos atributos são nome e e-mail. Guarde as horas trabalhadas em um dicionário cujas chaves são o mês em questão e, em outro dicionário, guarde o salário por hora relativo ao mês em questão. Crie um método que retorna o salário mensal do funcionário.",
"_____no_output_____"
]
],
[
[
"class Funcionario:\n '''\n Cria uma representação do funcionário\n '''\n def __init__(self,nome,email):\n '''\n Construtor\n Parâmetros\n ----------\n nome : str\n nome do funcionário\n email : str\n email do funcionário\n '''\n self.nome = nome\n self.email = email\n \n self.horas_mes = {}\n self.salario_hora = {}\n \n def define_horas_mes(self, mes, horas):\n '''\n Define a quantidade de horas trabalhadas em determinado mês\n \n Parâmetros\n ---------\n mes : str\n mes no formato: 'nov/2021'\n quantidade_horas : int\n quantidade de horas trabalhadas no mês\n '''\n self.horas_mes[mes] = horas\n \n def define_valor_hora(self, mes, salario_hora):\n '''\n Define o valor a ser recebido por hora naquele mês\n \n Parâmetros\n ----------\n mes : str\n mes no formato: 'nov/2021'\n valor_salario : float\n valor da hora no mês\n '''\n self.salario_hora[mes] = salario_hora\n \n def calcula_salario_mes(self, mes):\n '''\n Calcula o salário a ser recebido no mês pelo funcionário\n \n Parâmetros\n ---------\n mes : str\n mes no formato: 'nov/2021'\n '''\n if mes in self.horas_mes and mes in self.salario_hora:\n self.salario = self.salario_hora[mes]*self.horas_mes[mes]\n return self.salario\n else:\n print('O mês desejado não possui horas ou valor por hora cadastrado!')\n \n def __repr__(self):\n return f'Nome: {self.nome}\\nEmail: {self.email}'\n \n ",
"_____no_output_____"
],
[
"Augusto = Funcionario(nome = 'Augusto', email = '[email protected]')\n",
"_____no_output_____"
],
[
"Augusto.define_valor_hora('nov/2021',20)\nAugusto.define_valor_hora('dez/2021',30)\n",
"_____no_output_____"
],
[
"Augusto",
"_____no_output_____"
]
],
[
[
"4. Crie uma classe Televisor cujos atributos são:\n \n a. fabricante;\n \n b. modelo;\n \n c. canal atual;\n \n d. lista de canais; e\n \n e. volume.\n <p style='text-align: justify;'> Faça métodos para aumentar/diminuir volume, trocar o canal e sintonizar um novo canal, que adiciona um novo canal à lista de canais (somente se esse canal não estiver nessa lista). No atributo lista de canais, devem estar armazenados todos os canais já sintonizados dessa TV.\n\n Obs.: O volume não pode ser menor que zero e maior que cem; só se pode trocar para um canal que já esteja na lista de canais. </p>\n ",
"_____no_output_____"
]
],
[
[
"class Televisor:\n '''\n Cria uma representação de Televisor\n '''\n def __init__(self, fabricante, modelo):\n '''\n Construtor\n \n Parâmetros\n ---------\n fabricante : str\n nome do fabricante do televisor\n modelo : str\n modelo do televisor\n '''\n self.fabricante = fabricante\n self.modelo = modelo\n self.lista_canais = [2,5,7,10,13] # Canais sintonizados de fábrica\n self.volume = 20 # Valor padrão\n self.canal_atual = 2 # Canal atual padrão\n \n def aumentar_volume(self, quantidade):\n '''\n Aumenta o volume do televisor\n \n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se deseja aumentar o volume do televisor\n '''\n volume = self.volume + quantidade\n while volume > 100:\n print('Erro! O volume não pode superar 100!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume + quantidade\n self.volume = volume\n# return True\n \n def diminuir_volume(self, quantidade):\n '''\n Diminui o volume do televisor\n\n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se deseja diminuir o volume do televisor\n '''\n volume = self.volume - quantidade\n while volume < 0:\n print('Erro! O volume não pode ser menor que 0!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume - quantidade\n self.volume = volume\n# return True\n \n def sintonizar_canal(self, canal):\n '''\n Adicional um novo canal a lista de canais padrão do televisor\n \n Parâmetros\n --------------\n canal : int\n canal que deseja incluir na lista de canais\n '''\n self.lista_canais.append(canal)\n print(f'Canal {canal} sintonizado com sucesso.')\n \n def trocar_canal(self, canal):\n '''\n Troca de canal\n \n Parâmetros\n ----------\n canal : int\n canal para o qual deseja mudar\n '''\n if canal in self.lista_canais:\n self.canal_atual = canal\n else:\n print('Canal não alterado, pois não está na lista de canais.') \n return print(f'O canal atual é: {self.canal_atual}')\n \n def __repr__(self):\n return f'Fabricante: {self.fabricante}\\nModelo: {self.modelo}\\nLista de Canais: {self.lista_canais}\\nVolume: {self.volume}\\nCanal atual: {self.canal_atual}'\n\n ",
"_____no_output_____"
],
[
"Tv_sala = Televisor('TCL','ABC123')\nTv_sala.sintonizar_canal(35)",
"Canal 35 sintonizado com sucesso.\n"
],
[
"Tv_sala",
"_____no_output_____"
],
[
"Tv_sala.trocar_canal(35)",
"_____no_output_____"
],
[
"Tv_sala.aumentar_volume(85)",
"_____no_output_____"
],
[
"Tv_sala.diminuir_volume(90)",
"_____no_output_____"
],
[
"Tv_sala.lista_canais",
"_____no_output_____"
]
],
[
[
"\n<p style='text-align: justify;'> 5. Crie uma classe ControleRemoto cujo atributo é televisão (isso é, recebe um objeto da classe do exercício 4). Crie métodos para aumentar/diminuir volume, trocar o canal e sintonizar um novo canal, que adiciona um novo canal à lista de canais (somente se esse canal não estiver nessa lista). </p>",
"_____no_output_____"
]
],
[
[
"class ControleRemoto:\n '''\n Cria uma representação do controle remoto\n '''\n def __init__(self, Televisor):\n '''\n Construtor\n \n Parâmetros\n ----------\n Televisor : objeto da classe Televisor\n '''\n self.fabricante = Televisor.fabricante\n self.modelo = Televisor.modelo\n self.lista_canais = Televisor.lista_canais\n self.volume = Televisor.volume\n self.canal_atual = Televisor.canal_atual\n\n def aumentar_volume(self, quantidade, Televisor):\n '''\n Aumenta o volume do televisor\n\n Parâmetros\n ----------\n Televisor : objeto televisor\n televisor do qual se deseja realizar a ação\n quantidade : int\n quantidade da qual se deseja aumentar o volume do televisor\n '''\n volume = self.volume + quantidade\n while volume > 100:\n print('Erro! O volume não pode superar 100!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume + quantidade\n self.volume = volume\n Televisor.volume = self.volume\n \n def diminuir_volume(self, quantidade, Televisor):\n '''\n Diminui o volume do televisor\n\n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se deseja diminuir o volume do televisor\n '''\n volume = self.volume - quantidade\n while volume < 0:\n print('Erro! O volume não pode ser menor que 0!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume - quantidade\n self.volume = volume\n Televisor.volume = self.volume\n# return True\n\n \n def sintonizar_canal(self, canal, Televisor):\n '''\n Adicional um novo canal a lista de canais padrão do televisor\n \n Parâmetros\n --------------\n canal : int\n canal que deseja incluir na lista de canais\n '''\n self.lista_canais.append(canal)\n Televisor.lista_canais.append(canal)\n print(f'Canal {canal} sintonizado com sucesso.')\n \n def trocar_canal(self, canal, Televisor):\n '''\n Troca de canal\n \n Parâmetros\n ----------\n canal : int\n canal para o qual deseja mudar\n '''\n if canal in self.lista_canais:\n self.canal_atual = canal\n Televisor.canal_atual = canal\n else:\n print('Canal não alterado, pois não está na lista de canais.') \n return print(f'O canal atual é: {self.canal_atual}')\n\n\nclass Televisor:\n '''\n Cria uma representação de Televisor\n '''\n def __init__(self, fabricante, modelo):\n '''\n Construtor\n \n Parâmetros\n ----------\n fabricante : str\n nome do fabricante do televisor\n modelo : str\n modelo do televisor\n '''\n self.fabricante = fabricante\n self.modelo = modelo\n self.lista_canais = [2,5,7,10,13] # Canais sintonizados de fábrica\n self.volume = 20 # Valor padrão\n self.canal_atual = 2 # Canal atual padrão\n \n def aumentar_volume(self, quantidade):\n '''\n Aumenta o volume do televisor\n \n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se deseja aumentar o volume do televisor\n '''\n volume = self.volume + quantidade\n while volume > 100:\n print('Erro! O volume não pode superar 100!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume + quantidade\n self.volume = volume\n# return True\n \n def diminuir_volume(self, quantidade):\n '''\n Diminui o volume do televisor\n\n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se deseja diminuir o volume do televisor\n '''\n volume = self.volume - quantidade\n while volume < 0:\n print('Erro! 
O volume não pode ser menor que 0!\\n')\n quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n volume = self.volume - quantidade\n self.volume = volume\n# return True\n \n def sintonizar_canal(self, canal):\n '''\n Adicional um novo canal a lista de canais padrão do televisor\n \n Parâmetros\n --------------\n canal : int\n canal que deseja incluir na lista de canais\n '''\n self.lista_canais.append(canal)\n print(f'Canal {canal} sintonizado com sucesso.')\n \n def trocar_canal(self, canal):\n '''\n Troca de canal\n \n Parâmetros\n ----------\n canal : int\n canal para o qual deseja mudar\n '''\n if canal in self.lista_canais:\n self.canal_atual = canal\n else:\n print('Canal não alterado, pois não está na lista de canais.') \n return print(f'O canal atual é: {self.canal_atual}')\n ",
"_____no_output_____"
],
[
"Tv1 = Televisor('TLC','123')\nC1 = ControleRemoto(Tv1)\nC1.diminuir_volume(5,Tv1)\nC1.sintonizar_canal(99,Tv1)\n",
"_____no_output_____"
],
[
"C1.trocar_canal(99,Tv1)",
"_____no_output_____"
],
[
"Tv1.volume",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 6. O módulo time possui a função time.sleep(x), que faz seu programa “dormir” por x segundos. Utilizando essa função, crie uma classe Cronômetro e faça um programa que cronometre o tempo.\n</p>",
"_____no_output_____"
]
],
[
[
"from time import sleep\nclass cronometro:\n \n def __init__(self):\n self.hora = 0\n self.minuto = 0\n self.segundo = 0\n \n def contagem_progressiva(self):\n flag = True\n while flag == True:\n if self.segundo < 60:\n self.segundo = self.segundo + 1\n print(f'{self.hora}:{self.minuto}:{self.segundo}')\n sleep(1)\n if self.segundo == 60:\n self.segundo = 0\n self.minuto = self.minuto + 1\n if self.minuto == 60:\n self.minuto = 0\n self.hora = self.hora + 1\n \n def timer(self, hora, minuto, segundo):\n \n self.hora = hora\n self.minuto = minuto\n self.segundo = segundo\n flag = True\n print(f'{self.hora}:{self.minuto}:{self.segundo}')\n while flag == True: \n if self.segundo <= 60 and self.segundo != 0:\n self.segundo = self.segundo - 1\n print(f'{self.hora}:{self.minuto}:{self.segundo}')\n sleep(1)\n if self.segundo == 0 and self.minuto ==0 and self.hora == 0:\n flag = False\n \n if self.segundo == 0 and self.hora == 0:\n self.segundo = 60\n self.minuto = self.minuto - 1\n \n if self.minuto == 0:\n self.minuto = 60\n self.hora = self.hora - 1",
"_____no_output_____"
],
[
"cron1 = cronometro()",
"_____no_output_____"
],
[
"cron1.timer(1,0,0)",
"_____no_output_____"
],
[
"cron1.contagem_progressiva()",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 7. Crie uma modelagem de classes para uma agenda capaz de armazenar contatos. Através dessa agenda é possível incluir, remover, buscar e listar contatos já cadastrados.\n</p>",
"_____no_output_____"
]
],
[
[
"class Contato:\n def __init__(self, nome, email, telefone, endereco):\n '''\n Construtor\n \n Parâmetros\n ----------\n nome : str\n nome do contato\n email : str\n email do contato\n telefone : str\n telefone do contato\n endereço : str\n endereço do contato\n '''\n self.nome = nome\n self.email = email\n self.telefone = telefone\n self.endereco = endereco\n \n def ver_contato(self):\n print(f'Nome: {self.nome}')\n print(f'Email: {self.email}')\n print(f'Telefone: {self.telefone}')\n print(f'Endereço: {self.endereco}')\n \nclass Agenda:\n def __init__(self):\n self.contatos = {}\n \n def adicionar_cadastro(self):\n '''\n Adiciona novos contatos na agenda\n '''\n nome = input('Insira o nome do contato: ')\n nome = nome.title()\n email = input('Insira o email do contato: ')\n telefone = int(input('Insira o telefone do contato: '))\n endereco = input('Insira o endereço do contato: ')\n novo_contato = Contato(nome, email, telefone, endereco)\n self.contatos[nome] = novo_contato\n \n def visualizar_cadastros(self):\n '''\n Mostra a lista de cadastrados\n '''\n if len(self.contatos) == 0:\n print('Lista Vazia!')\n else:\n for nome in self.contatos:\n self.contatos[nome].ver_contato()\n print('_________________________')\n \n def remover_cadastro(self):\n '''\n Remove cadastro do usuário\n '''\n nome = input('Digite o nome do contato que deseja excluir: ')\n self.contatos.pop(nome)\n ",
"_____no_output_____"
],
[
"agenda1 = Agenda()\nagenda1.adicionar_cadastro()",
"_____no_output_____"
],
[
"agenda1.visualizar_cadastros()",
"_____no_output_____"
],
[
"agenda1.buscar_cadastro('Augusto')",
"_____no_output_____"
],
[
"agenda1.remover_cadastro()",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 8. Crie uma classe Cliente cujos atributos são nome, idade e e-mail. Construa um método que imprima as informações tal como abaixo: \n\nNome: Fulano de Tal\n \nIdade: 40\n \nE-mail: [email protected]\n \n</p>\n\n\n",
"_____no_output_____"
]
],
[
[
"class Cliente:\n '''\n Cria a representação de um cliente\n '''\n def __init__(self, nome, idade, email):\n '''\n Construtor\n \n Parâmetros\n ----------\n nome : str\n nome do cliente\n idade : int\n idade do cliente\n email : str\n email do cliente\n '''\n nome = nome.title()\n self.nome = nome\n self.idade = idade\n self.email = email\n \n def imprimir_info(self):\n print(f'Nome: {self.nome}')\n print(f'Idade: {self.idade}')\n print(f'Email: {self.email}')",
"_____no_output_____"
],
[
"Augusto = Cliente('augusto', 29, '[email protected]' )\nAugusto.imprimir_info()",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 9. Com base no exercício anterior, crie um sistema de cadastro e a classe Cliente. Seu programa deve perguntar se o usuário quer cadastrar um novo cliente, alterar um cadastro ou sair.\n\nDica: Você pode fazer esse exercício criando uma classe Sistema, que irá controlar o sistema de cadastros. Essa classe deve ter o atributo cadastro e os métodos para imprimir os cadastrados, cadastrar um novo cliente, alterar um cadastro ou sair. \n</p>\n",
"_____no_output_____"
]
],
[
[
"class Cliente:\n '''\n Cria a representação de um cliente\n '''\n def __init__(self, nome, idade, email):\n '''\n Construtor\n \n Parâmetros\n ----------\n nome : str\n nome do cliente\n idade : int\n idade do cliente\n email : str\n email do cliente\n '''\n nome = nome.title()\n self.nome = nome\n self.idade = idade\n self.email = email\n \n def ver_cliente(self):\n print(f'Nome: {self.nome}')\n print(f'Idade: {self.idade}')\n print(f'Email: {self.email}')\n \n \nclass Sistema_Cadastro:\n '''\n Cria a representação do sistema de cadastro\n '''\n \n def __init__(self):\n '''\n Construtor\n\n '''\n self.cadastrados = {}\n \n def adicionar_cadastrados(self):\n '''\n Adiciona cadastros ao sistema por meio de inputs pedidos ao usuário\n '''\n nome = input('Insira o nome do cadastrado: ')\n nome = nome.title()\n email = input('Insira o email do cadastrado: ')\n idade = input('Insira a idade do cadastrado: ')\n while not idade.isdigit():\n print('Idade inválida!')\n idade = input('Insira uma idade válida: ')\n if email in self.cadastrados:\n print('Cliente já cadastrado!\\n')\n else: \n novo_cadastrado = Cliente(nome, idade, email)\n self.cadastrados[email] = novo_cadastrado\n print('Cadastro realizado com sucesso!\\n')\n \n def ver_cadastrados(self):\n '''\n Visualiza a lista de cadastrados no sistema\n '''\n for email in self.cadastrados:\n self.cadastrados[email].ver_cliente()\n print('________________________________')\n \n def alterar_cadastro(self, email):\n '''\n Altera o cadastro de alguém que já está no sistema\n \n Parâmetros\n ----------\n email : str\n email de quem se deseja alterar o cadastro\n '''\n alterar = input('Insira 1 para alterar nome, 2 para alterar email e 3 para alterar idade')\n while (alterar != '1') and (alterar != '2') and (alterar != '3'):\n print('Erro! Opção não reconhecida. Tente novamente.')\n alterar = input('Insira 1 para alterar nome, 2 para alterar email e 3 para alterar idade')\n if alterar == '1':\n novo_nome = input('Digite o novo nome: ')\n novo_nome = novo_nome.title()\n self.cadastrados[email].nome = novo_nome\n print('Nome alterado com sucesso!\\n')\n elif alterar == '2':\n novo_email = input('Digite o novo email: ')\n nome = self.cadastrados[email].nome\n idade = self.cadastrados[email].idade\n novo_cadastrado = Cliente(nome, idade, novo_email)\n self.cadastrados[novo_email] = novo_cadastrado\n self.cadastrados.pop(email)\n print('Email alterado com sucesso!\\n')\n elif alterar == '3':\n nova_idade = int(input('Digite a nova idade: '))\n while not nova_idade.isdigit():\n print('Idade inválida!')\n nova_idade = input('Insira uma idade válida: ')\n self.cadastrados[email].idade = nova_idade\n print('Idade alterada com sucesso!\\n')\n\n def rodar(self):\n '''\n Roda o sistema de cadastro\n '''\n flag = True\n print('Olá! 
Bem vindo ao seu sistema de Cadastro!')\n while flag == True: \n print('O que deseja fazer?')\n print('Digite 1 para adicionar um cliente ao seu sistema de cadastro.')\n print('Digite 2 para ver a lista de clientes do seu sistema de cadastro.')\n print('Digite 3 para alterar os dados de um cliente do seu sistema de cadastro.')\n print('Digite 0 para sair do seus sistema de Cadastro.')\n opcao = input('Digite a opção desejada: ')\n if opcao == '1':\n self.adicionar_cadastrados()\n elif opcao == '2':\n if len(self.cadastrados) == 0:\n print('Cadastro vazio!\\n')\n else:\n self.ver_cadastrados()\n elif opcao == '3':\n email = input('Digite o email do cliente que deseja alterar alguma informação: ')\n if email in self.cadastrados:\n self.alterar_cadastro(email)\n else:\n print('Cliente não cadastrado!\\n')\n elif opcao == '0':\n flag = False\n else:\n print('Erro! Opção não reconhecida!\\nEscolha uma opção válida.')\n print('Saida do sistema realizada com sucesso.') ",
"_____no_output_____"
],
[
"sistema_1 = Sistema_Cadastro()",
"_____no_output_____"
],
[
"sistema_1.rodar()",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 10. Crie uma classe ContaCorrente com os atributos cliente (que deve ser um objeto da classe Cliente) e saldo. Crie métodos para depósito, saque e transferência. Os métodos de saque e transferência devem verificar se é possível realizar a transação.\n</p>",
"_____no_output_____"
]
],
[
[
"class Cliente:\n '''\n Cria representação de um cliente\n '''\n def __init__(self, nome, cpf):\n '''\n Construtor\n \n Parâmetro\n ---------\n nome : str\n nome do cliente\n cpf : str\n cpf do cliente\n '''\n self.nome = nome.capitalize()\n self.cpf = cpf\n \nclass ContaCorrente:\n '''\n Cria a representação da conta corrente\n '''\n def __init__(self, Cliente):\n '''\n Construtor\n \n Parâmetro\n ---------\n Cliente : objeto da classe Cliente\n Objeto criado a partir de cliente\n '''\n self.cliente = Cliente.nome\n self.saldo = 0\n \n def depositar(self, deposito):\n self.saldo = self.saldo + deposito\n \n def sacar(self, saque):\n if saque > self.saldo:\n print('Saldo insuficiente!')\n else:\n self.saldo = self.saldo - saque\n \n def transferir(self, other, transferencia):\n if transferencia > self.saldo:\n print('Saldo insuficiente para realizar operação!')\n else:\n self.saldo = self.saldo - transferencia\n other.saldo = other.saldo + transferencia",
"_____no_output_____"
],
[
"# Exemplo de uso",
"_____no_output_____"
],
[
"augusto = Cliente('augusto', '12345678900')\njoeise = Cliente('joeise','98765432100')\ncontaAugusto = ContaCorrente(augusto)\ncontaJoeise = ContaCorrente(joeise)",
"_____no_output_____"
],
[
"contaAugusto.depositar(1000)",
"_____no_output_____"
],
[
"contaAugusto.transferir(contaJoeise,600)",
"_____no_output_____"
],
[
"contaAugusto.saldo",
"_____no_output_____"
],
[
"contaJoeise.saldo",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 11. Crie uma classe Fração cujos atributos são numerador (número de cima) e denominador (número de baixo). <br/><br/>\nImplemente os métodos de adição, subtração, multiplicação, divisão que retornam objetos do tipo Fração.<br/><br/>\nImplemente também o método _ repr _.<br/><br/>\nImplemente métodos para comparação: igualdade (==) e desigualdades (!=, <=, >=, < e >). \n</p>\n\n",
"_____no_output_____"
]
],
[
[
"class Fracao:\n '''\n Cria a representação de uma fração\n '''\n def __init__(self, numerador, denominador):\n '''\n Construtor\n \n Parâmetros\n ---------\n numerador : int\n numerador da fração\n denominador : int\n denominador da fração\n '''\n if (denominador == 0):\n raise ValueError('Denominador deve ser diferente de zero.')\n else:\n self.numerador = numerador\n self.denominador = denominador\n \n def __repr__(self):\n return f'{self.numerador}/{self.denominador}'\n \n def __add__(self, other):\n# numerador = self.numerador*other.denominador + self.denominador*other.numerador\n# denominador = self.denominador*other.denominador\n numerador_1 = self.numerador\n numerador_2 = other.numerador\n denominador_1 = self.denominador\n denominador_2 = other.denominador\n #mmc\n if denominador_1 > denominador_2:\n maior = denominador_1\n else:\n maior = denominador_2\n\n for i in range(maior):\n aux = denominador_1 * i\n if (aux % denominador_2) == 0:\n mmc = aux\n #Calculo\n numerador_resultante = ((mmc/denominador_1)*numerador_1 + (mmc/denominador_2)*numerador_2)\n denominador_resultante = mmc\n return Fracao(numerador_resultante,denominador_resultante)\n\n def __sub__(self,other):\n# numerador = self.numerador*other.denominador - self.denominador*other.numerador\n# denominador = self.denominador*other.denominador\n numerador_1 = self.numerador\n numerador_2 = other.numerador\n denominador_1 = self.denominador\n denominador_2 = other.denominador\n #mmc\n if denominador_1 > denominador_2:\n maior = denominador_1\n else:\n maior = denominador_2\n\n for i in range(maior):\n aux = denominador_1 * i\n if (aux % denominador_2) == 0:\n mmc = aux\n #Calculo\n numerador_resultante = ((mmc/denominador_1)*numerador_1 - (mmc/denominador_2)*numerador_2)\n denominador_resultante = mmc\n return Fracao(numerador_resultante,denominador_resultante)\n \n def __mul__(self,other):\n numerador_1 = self.numerador\n numerador_2 = other.numerador\n denominador_1 = self.denominador\n denominador_2 = other.denominador\n numerador_resultante = numerador_1*numerador_2\n denominador_resultante = denominador_1*denominador_2\n return Fracao(numerador_resultante,denominador_resultante)\n \n def __truediv__(self, other):\n numerador_1 = self.numerador\n numerador_2 = other.denominador\n denominador_1 = self.denominador\n denominador_2 = other.numerador\n numerador_resultante = numerador_1*numerador_2\n denominador_resultante = denominador_1*denominador_2\n return Fracao(numerador_resultante,denominador_resultante)\n \n def __eq__(self, other):\n return self.numerador/self.denominador == other.numerador/other.denominador\n \n def __lt__(self, other):\n return self.numerador/self.denominador < other.numerador/other.denominador\n \n def __le__(self, other):\n return self.numerador/self.denominador <= other.numerador/other.denominador\n \n def __gt__(self, other):\n return self.numerador/self.denominador > other.numerador/other.denominador \n \n def __ge__(self, other):\n return self.numerador/self.denominador >= other.numerador/other.denominador\n ",
"_____no_output_____"
],
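As a cross-check (not part of the exercise solution), Python's standard library already ships an exact rational type with the same semantics:

```python
from fractions import Fraction

a, b = Fraction(6, 8), Fraction(7, 6)
print(a + b)  # 23/12, automatically reduced
print(a < b)  # True
```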
[
"fr1 = Fracao(6,8)\nfr2 = Fracao(7,6)\n# repr(fr1)",
"_____no_output_____"
],
[
"fr1/fr2",
"_____no_output_____"
],
[
"fr1<fr2",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 12. Crie uma classe Data cujos atributos são dia, mês e ano. Implemente métodos _ repr _ e para comparação: igualdade (==) e desigualdades (!=, <=, >=, < e >).\n</p>",
"_____no_output_____"
]
],
[
[
"class Data:\n def __init__(self, dia, mes, ano):\n self.dia = dia\n self.mes = mes\n self.ano = ano\n \n def __repr__(self):\n return f'{self.dia}/{self.mes}/{self.ano}'\n \n def __eq__(self, other):\n if (self.dia == other.dia) and (self.mes == other.mes) and (self.ano == other.ano):\n return True\n else:\n return False\n \n def __lt__(self, other):\n if (self.ano < other.ano):\n return True\n elif (self.ano > other.ano):\n return False\n elif (self.mes < other.mes):\n return True\n elif (self.mes > other.mes):\n return False\n elif (self.dia < other.dia):\n return True\n elif (self.dia > other.dia):\n return False\n \n def __gt__(self, other):\n if (self.ano > other.ano):\n return True\n elif (self.ano < other.ano):\n return False\n elif (self.mes > other.mes):\n return True\n elif (self.mes < other.mes):\n return False\n elif (self.dia > other.dia):\n return True\n elif (self.dia < other.dia):\n return False\n \n def __ne__(self, other):\n if (self.dia != other.dia) or (self.mes != other.mes) or (self.ano != other.ano):\n return True\n else:\n return False\n \n",
"_____no_output_____"
],
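A design note: `functools.total_ordering` can derive the remaining comparison operators from `__eq__` and `__lt__`, which keeps date-like classes short. A minimal sketch (the class name here is hypothetical):

```python
from functools import total_ordering

@total_ordering
class DataCompacta:
    def __init__(self, dia, mes, ano):
        self.dia, self.mes, self.ano = dia, mes, ano

    def __eq__(self, other):
        return (self.ano, self.mes, self.dia) == (other.ano, other.mes, other.dia)

    def __lt__(self, other):
        # Tuple comparison: year, then month, then day
        return (self.ano, self.mes, self.dia) < (other.ano, other.mes, other.dia)
    # <=, > and >= are derived automatically; != follows from __eq__
```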
[
"data1 = Data(24,9,2023)\ndata2 = Data(30,1,2022)",
"_____no_output_____"
],
[
"data1 == data2",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 13. Nos exercícios 1, 2, 3, 4 e 6, implemente o método _ repr _ para exibir as informações desejadas de cada uma das classes.\n</p>",
"_____no_output_____"
]
],
[
[
"# Questão 1\nclass Bola:\n '''\n Cria a representação de uma bola\n '''\n def __init__(self, cor, raio):\n '''\n Construtor\n Parâmetros\n ----------\n cor : str\n cor associada a bola\n raio : float\n raio da bola\n '''\n self.cor = cor\n self.raio = raio\n self.area = None\n self.volume = None\n\n \n def calcula_area(self):\n self.area = 4*3.14*(self.raio)**2\n return self.area\n \n def calcula_volume(self):\n self.volume = 4/3*3.14*(self.raio)**3\n return self.volume\n \n def __repr__(self):\n if self.area == None and self.volume == None:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}'\n elif self.volume == None:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}, A area da bola é {self.area}'\n else:\n return f'A cor da bola é: {self.cor}, O raio da bola é: {self.raio}, A area da bola é {self.area}, O volume da bola é {self.volume}'\n#Questão 2\nclass Retângulo:\n '''\n Cria a representação de um retângulo\n '''\n def __init__(self, lado_a, lado_b):\n '''\n Construtor\n Parâmetros\n ----------\n lado_a : float\n medida do primeiro lado do retângulo\n lado_b : float\n medida do segundo lado do retângulo\n '''\n self.lado_a = lado_a\n self.lado_b = lado_b\n self.area = None\n \n def calcula_area(self):\n self.area = self.lado_a*self.lado_b\n return self.area\n \n def __repr__(self):\n if self.area == None:\n return f'O lado a vale {self.lado_a} e o lado b vale {self.lado_b}'\n else:\n return f'O lado a vale {self.lado_a} e o lado b vale {self.lado_b} e a área do retangulo vale {self.area}'\n \n#Questão 3\nclass Funcionario:\n '''\n Cria uma representação do funcionário\n '''\n def __init__(self,nome,email):\n '''\n Construtor\n Parâmetros\n ----------\n nome : str\n nome do funcionário\n email : str\n email do funcionário\n '''\n self.nome = nome\n self.email = email\n \n self.horas_mes = {}\n self.salario_hora = {}\n \n def define_horas_mes(self, mes, horas):\n '''\n Define a quantidade de horas trabalhadas em determinado mês\n \n Parâmetros\n ---------\n mes : str\n mes no formato: 'nov/2021'\n quantidade_horas : int\n quantidade de horas trabalhadas no mês\n '''\n self.horas_mes[mes] = horas\n \n def define_valor_hora(self, mes, salario_hora):\n '''\n Define o valor a ser recebido por hora naquele mês\n \n Parâmetros\n ----------\n mes : str\n mes no formato: 'nov/2021'\n valor_salario : float\n valor da hora no mês\n '''\n self.salario_hora[mes] = salario_hora\n \n def calcula_salario_mes(self, mes):\n '''\n Calcula o salário a ser recebido no mês pelo funcionário\n \n Parâmetros\n ---------\n mes : str\n mes no formato: 'nov/2021'\n '''\n if mes in self.horas_mes and mes in self.salario_hora:\n self.salario = self.salario_hora[mes]*self.horas_mes[mes]\n return self.salario\n else:\n print('O mês desejado não possui horas ou valor por hora cadastrado!')\n \n def __repr__(self):\n return f'Nome: {self.nome}\\nEmail: {self.email}'\n\n#Questão 4\nclass Televisor:\n '''\n Cria uma representação de Televisor\n '''\n def __init__(self, fabricante, modelo):\n '''\n Construtor\n \n Parâmetros\n ---------\n fabricante : str\n nome do fabricante do televisor\n modelo : str\n modelo do televisor\n '''\n self.fabricante = fabricante\n self.modelo = modelo\n self.lista_canais = [2,5,7,10,13] # Canais sintonizados de fábrica\n self.volume = 20 # Valor padrão\n self.canal_atual = 2 # Canal atual padrão\n \n def aumentar_volume(self, quantidade):\n '''\n Aumenta o volume do televisor\n \n Parâmetros\n ----------\n quantidade : int\n quantidade da qual se 
deseja aumentar o volume do televisor\n        '''\n        volume = self.volume + quantidade\n        while volume > 100:\n            print('Erro! O volume não pode superar 100!\\n')\n            quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n            volume = self.volume + quantidade\n        self.volume = volume\n#         return True\n    \n    def diminuir_volume(self, quantidade):\n        '''\n        Diminui o volume do televisor\n\n        Parâmetros\n        ----------\n        quantidade : int\n            quantidade da qual se deseja diminuir o volume do televisor\n        '''\n        volume = self.volume - quantidade\n        while volume < 0:\n            print('Erro! O volume não pode ser menor que 0!\\n')\n            quantidade = int(input('Escolha outra fator para aumentar o volume: '))\n            volume = self.volume - quantidade\n        self.volume = volume\n#         return True\n    \n    def sintonizar_canal(self, canal):\n        '''\n        Adicional um novo canal a lista de canais padrão do televisor\n        \n        Parâmetros\n        --------------\n        canal : int\n            canal que deseja incluir na lista de canais\n        '''\n        self.lista_canais.append(canal)\n        print(f'Canal {canal} sintonizado com sucesso.')\n    \n    def trocar_canal(self, canal):\n        '''\n        Troca de canal\n        \n        Parâmetros\n        ----------\n        canal : int\n            canal para o qual deseja mudar\n        '''\n        if canal in self.lista_canais:\n            self.canal_atual = canal\n        else:\n            print('Canal não alterado, pois não está na lista de canais.') \n        return print(f'O canal atual é: {self.canal_atual}')\n    \n    def __repr__(self):\n        return f'Fabricante: {self.fabricante}\\nModelo: {self.modelo}\\nLista de Canais: {self.lista_canais}\\nVolume: {self.volume}\\nCanal atual: {self.canal_atual}'\n\n    \n#Questão 6\n# Não coloquei __repr__ pois não consegui enxergar onde poderia colocá-lo no cronômetro que montei.",
"_____no_output_____"
]
],
[
[
"<p style='text-align: justify;'> 14. Faça uma classe ContaVip que difere da ContaCorrente por ter cheque especial (novo atributo) e é filha da classe ContaCorrente. Você precisa implementar os métodos para saque, transferência ou depósito?\n</p>",
"_____no_output_____"
]
],
[
[
"class Cliente:\n '''\n Cria representação de um cliente\n '''\n def __init__(self, nome, cpf):\n '''\n Construtor\n \n Parâmetro\n ---------\n nome : str\n nome do cliente\n cpf : str\n cpf do cliente\n '''\n self.nome = nome.capitalize()\n self.cpf = cpf\n \nclass ContaCorrente:\n '''\n Cria a representação da conta corrente\n '''\n def __init__(self, cliente, saldo=0):\n '''\n Construtor\n \n Parâmetro\n ---------\n Cliente : str\n Nome do cliente\n '''\n self.cliente = cliente\n self.saldo = saldo\n \n def depositar(self, deposito):\n self.saldo = self.saldo + deposito\n \n def sacar(self, saque):\n if saque > self.saldo:\n print('Saldo insuficiente!')\n else:\n self.saldo = self.saldo - saque\n \n def transferir(self, other, transferencia):\n if transferencia > self.saldo:\n print('Saldo insuficiente para realizar operação!')\n else:\n self.saldo = self.saldo - transferencia\n other.saldo = other.saldo + transferencia\n \nclass ContaVip(ContaCorrente):\n '''\n Cria a representação de uma conta vip que tem o atributo cheque especial como diferencial\n '''\n def __init__(self, cliente, saldo = 0, cheque_especial=0):\n super().__init__(cliente, saldo)\n self.cheque_especial = cheque_especial\n \n \n def sacar(self, saque):\n if (self.saldo + self.cheque_especial) - saque < 0:\n print('Saldo insuficiente!')\n else:\n self.depositar(-saque)\n \n def depositar (self, deposito):\n self.saldo = self.saldo + deposito\n \n def transferir(self, other, transferencia):\n if transferencia > self.saldo + self.cheque_especial:\n print('Saldo insuficiente para realizar operação!')\n else:\n self.saldo = self.saldo - transferencia\n other.saldo = other.saldo + transferencia",
"_____no_output_____"
],
[
"augusto = Cliente('augusto', '12345678900')\njoeise = Cliente('joeise','98765432100')\ncontaAugusto = ContaCorrente('augusto')\ncontaJoeise = ContaCorrente('joeise')\ncontaAugusto.depositar(1000)\ncontaAugusto.saldo",
"_____no_output_____"
],
[
"ContaVipAugusto = ContaVip(contaAugusto, contaAugusto.saldo, cheque_especial = 200)",
"_____no_output_____"
],
[
"ContaVipAugusto.sacar(1100)",
"_____no_output_____"
]
],
[
[
"15. Crie uma classe Quadrado, filha da classe Retângulo do exercício 2.",
"_____no_output_____"
]
],
[
[
"class Retângulo:\n '''\n Cria a representação de um retângulo\n '''\n def __init__(self, lado_a, lado_b):\n '''\n Construtor\n Parâmetros\n ----------\n lado_a : float\n medida do primeiro lado do retângulo\n lado_b : float\n medida do segundo lado do retângulo\n '''\n self.lado_a = lado_a\n self.lado_b = lado_b\n def calcula_area(self):\n self.area = self.lado_a*self.lado_b\n return self.area\n \nclass Quadrado(Retângulo):\n '''\n Cria a representação de um quadrado\n '''\n def __init__(self, lado):\n super().__init__(lado, lado)",
"_____no_output_____"
],
[
"quad1 = Quadrado(5)\nquad1.calcula_area()",
"_____no_output_____"
],
[
"import time\n\ndef countdown(num_of_secs):\n while num_of_secs:\n m, s = divmod(num_of_secs, 60)\n min_sec_format = '{:02d}:{:02d}'.format(m, s)\n print(min_sec_format, end='\\n')\n time.sleep(1)\n num_of_secs -= 1\n \n print('Countdown finished.')\n\ninp = int(input('Input number of seconds to countdown: '))\ncountdown(inp)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0b3fa8545691845fa8d2bf02343d2ffc326e636 | 1,131 | ipynb | Jupyter Notebook | tests/data/notebooks/cell_with_source_string.ipynb | s-weigand/flake8-nb | 39c6cf6158cc231c420ff783a550b09ee5f7e4c7 | [
"Apache-2.0"
] | 23 | 2019-12-05T06:02:43.000Z | 2022-03-11T18:17:19.000Z | tests/data/notebooks/cell_with_source_string.ipynb | s-weigand/flake8-nb | 39c6cf6158cc231c420ff783a550b09ee5f7e4c7 | [
"Apache-2.0"
] | 191 | 2019-10-04T06:22:14.000Z | 2022-03-29T04:02:28.000Z | tests/data/notebooks/cell_with_source_string.ipynb | s-weigand/flake8-nb | 39c6cf6158cc231c420ff783a550b09ee5f7e4c7 | [
"Apache-2.0"
] | 6 | 2020-06-13T13:35:15.000Z | 2021-11-28T19:50:12.000Z | 21.339623 | 99 | 0.418214 | [
[
[
"from ipywidgets import interact\n\ndef f(x):\n return x\n\n\ninteract(f, x=10)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0b3fb9e36d8ff420fd556318036026e61ac5273 | 50,929 | ipynb | Jupyter Notebook | Data/Code/business_cycle_data.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
] | 2 | 2020-12-12T16:28:44.000Z | 2021-02-24T12:11:04.000Z | Data/Code/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
] | 1 | 2019-04-29T08:50:41.000Z | 2019-04-29T08:51:05.000Z | Data/Code/business_cycle_data.ipynb | letsgoexploring/econ126 | 05f50d2392dd1c7c38b14950cb8d7eff7ff775ee | [
"MIT"
] | 19 | 2019-03-08T18:49:19.000Z | 2022-03-07T23:27:16.000Z | 118.164733 | 17,684 | 0.844607 | [
[
[
"# US Production Data for RBC Modeling",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport fredpy as fp\nimport matplotlib.pyplot as plt\nplt.style.use('classic')\n%matplotlib inline\npd.plotting.register_matplotlib_converters()",
"_____no_output_____"
],
[
"# Load API key\nfp.api_key = fp.load_api_key('fred_api_key.txt')\n\n# Download nominal GDP, nominal personal consumption expenditures, nominal\n# gross private domestic investment, the GDP deflator, and an index of hours \n# worked in the nonfarm business sector produced by the BLS. All data are\n# from FRED and are quarterly.\ngdp = fp.series('GDP')\ncons = fp.series('PCEC')\ninvest = fp.series('GPDI')\nhours = fp.series('HOANBS')\ndefl = fp.series('GDPDEF')\npcec = fp.series('PCEC')\nm2 = fp.series('M2SL')\ntb3mo = fp.series('TB3MS')\nunemp = fp.series('UNRATE')\n\n# Convert monthly M2, 3-mo T-Bill, and unemployment to quarterly\nm2 = m2.as_frequency('Q')\ntb3mo = tb3mo.as_frequency('Q')\nunemp = unemp.as_frequency('Q')\n\n# Convert unemployment and t-bill data to decimals instead of percents\nunemp.data = unemp.data/100\ntb3mo.data = tb3mo.data/100\n\n# pcec inflation as pecent change over past year\npcec = pcec.apc()\npcec.data = pcec.data/100\n\n# Make sure that all of the downloaded series have the same data ranges\ngdp,cons,invest,hours,defl,pcec,m2,tb3mo,unemp = fp.window_equalize([gdp,cons,invest,hours,defl,pcec,m2,tb3mo,unemp])\n\n# Compute real GDP, real consumption, real investment\ngdp.data = gdp.data/defl.data*100\ncons.data = cons.data/defl.data*100\ninvest.data = invest.data/defl.data*100\nm2.data = m2.data/defl.data*100\n\n\n# Print units\nprint('Hours units: ',hours.units)\nprint('Deflator units:',defl.units)",
"Hours units: Index 2012=100\nDeflator units: Index 2012=100\n"
]
],
[
[
"Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:\n\n\\begin{align}\nY_t & = A_tK_t^{\\alpha}L_t^{1-\\alpha} \\tag{1}\\\\\nC_t & = (1-s)Y_t \\tag{2}\\\\\nY_t & = C_t + I_t \\tag{3}\\\\\nK_{t+1} & = I_t + (1-\\delta)K_t \\tag{4}\\\\\nA_{t+1} & = (1+g)A_t \\tag{5}\\\\\nL_{t+1} & = (1+n)L_t \\tag{6}.\n\\end{align}\n\nHere the model is assumed to be quarterly so $n$ is the *quarterly* growth rate of labor hours, $g$ is the *quarterly* growth rate of TFP, and $\\delta$ is the *quarterly* rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\\delta$ so we'll have to *calibrate* these values using statistics computed from the data that we've already obtained.\n\nLet lowercase letters denote a variable that's been divided by $A_t^{1/(1-\\alpha)}L_t$. E.g.,\n\n\\begin{align}\ny_t = \\frac{Y_t}{A_t^{1/(1-\\alpha)}L_t}\\tag{7}\n\\end{align}\n\nThen (after substituting consumption from the model), the scaled version of the model can be written as: \n\n\\begin{align}\ny_t & = k_t^{\\alpha} \\tag{8}\\\\\ni_t & = sy_t \\tag{9}\\\\\nk_{t+1} & = i_t + (1-\\delta-n-g')k_t,\\tag{10}\n\\end{align}\n\nwhere $g' = g/(1-\\alpha)$ is the growth rate of $A_t^{1/(1-\\alpha)}$. In the steady state:\n\n\\begin{align}\nk & = \\left(\\frac{s}{\\delta+n+g'}\\right)^{\\frac{1}{1-\\alpha}} \\tag{11}\n\\end{align}\n\nwhich means that the ratio of capital to output is constant:\n\n\\begin{align}\n\\frac{k}{y} & = \\frac{s}{\\delta+n+g'} \\tag{12}\n\\end{align}\n\nand therefore the steady state ratio of depreciation to output is:\n\n\\begin{align}\n\\overline{\\delta K/ Y} & = \\frac{\\delta s}{\\delta + n + g'} \\tag{13}\n\\end{align}\n\nwhere $\\overline{\\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\\delta$ given $\\overline{\\delta K/ Y}$, $s$, $n$, and $g'$.\n\nFurthermore, in the steady state, the growth rate of output is constant:\n\n\\begin{align}\n\\frac{\\Delta Y}{Y} & = n + g' \\tag{14}\n\\end{align} \n\n\n1. Assume $\\alpha = 0.35$.\n2. Calibrate $s$ as the average of ratio of investment to GDP.\n3. Calibrate $n$ as the average quarterly growth rate of labor hours.\n4. Calibrate $g'$ as the average quarterly growth rate of real GDP minus n.\n5. Calculate the average ratio of depreciation to GDP $\\overline{\\delta K/ Y}$ and use the result to calibrate $\\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\\delta$ from the following steady state relationship:\n\\begin{align}\n\\delta & = \\frac{\\left( \\overline{\\delta K/ Y} \\right)\\left(n + g' \\right)}{s - \\left( \\overline{\\delta K/ Y} \\right)} \\tag{15}\n\\end{align}\n6. Calibrate $K_0$ by asusming that the capital stock is initially equal to its steady state value:\n\\begin{align}\nK_0 & = \\left(\\frac{s}{\\delta + n + g'}\\right) Y_0 \\tag{16}\n\\end{align}\n\nThen, armed with calibrated values for $K_0$ and $\\delta$, compute $K_1, K_2, \\ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method:\n\nhttp://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf\n\n",
"_____no_output_____"
]
],
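To make the calibration in Equations (15) and (16) concrete before running it on FRED data, here is a small numeric sketch with made-up quarterly values (the actual values are computed in the next cell):

```python
# Hypothetical inputs, for illustration only
s = 0.17           # average investment/GDP ratio
n = 0.003          # quarterly growth rate of labor hours
g_prime = 0.0045   # quarterly growth rate of A**(1/(1-alpha))
dep_to_gdp = 0.14  # average depreciation/GDP ratio

delta = dep_to_gdp * (n + g_prime) / (s - dep_to_gdp)  # Equation (15)
Y0 = 2.0                                               # initial quarterly real GDP
K0 = s / (delta + n + g_prime) * Y0                    # Equation (16)
print(delta, K0)  # 0.035 and 8.0 with these inputs
```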
[
[
"# Set the capital share of income\nalpha = 0.35\n\n# Average saving rate\ns = np.mean(invest.data/gdp.data)\n\n# Average quarterly labor hours growth rate\nn = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1\n\n# Average quarterly real GDP growth rate\ng = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n\n\n# Compute annual depreciation rate\ndepA = fp.series('M1TTOTL1ES000')\ngdpA = fp.series('gdpa')\n\ngdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])\ngdpA,depA = fp.window_equalize([gdpA,depA])\n\ndeltaKY = np.mean(depA.data/gdpA.data)\ndelta = (n+g)*deltaKY/(s-deltaKY)\n\n# print calibrated values:\nprint('Avg saving rate: ',round(s,5))\nprint('Avg annual labor growth:',round(4*n,5))\nprint('Avg annual gdp growth: ',round(4*g,5))\nprint('Avg annual dep rate: ',round(4*delta,5))\n\n# Construct the capital series. Note that the GPD and investment data are reported on an annualized basis\n# so divide by 4 to get quarterly data.\ncapital = np.zeros(len(gdp.data))\ncapital[0] = gdp.data[0]/4*s/(n+g+delta)\n\nfor t in range(len(gdp.data)-1):\n capital[t+1] = invest.data[t]/4 + (1-delta)*capital[t]\n\n# Save in a fredpy series\ncapital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')\n# plot the computed capital series\nplt.plot(capital.data.index,capital.data,'-',lw=3,alpha = 0.7)\nplt.ylabel(capital.units)\nplt.title(capital.title)\nplt.grid()",
"Avg saving rate: 0.17475\nAvg annual labor growth: 0.01152\nAvg annual gdp growth: 0.0176\nAvg annual dep rate: 0.13672\n"
],
[
"# Compute TFP\ntfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)\ntfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')\n\n# Plot the computed capital series\nplt.plot(tfp.data.index,tfp.data,'-',lw=3,alpha = 0.7)\nplt.ylabel(tfp.units)\nplt.title(tfp.title)\nplt.grid()",
"_____no_output_____"
],
[
"# Convert each series into per capita using civilian pop 16 and over\ngdp = gdp.per_capita(civ_pop=True)\ncons = cons.per_capita(civ_pop=True)\ninvest = invest.per_capita(civ_pop=True)\nhours = hours.per_capita(civ_pop=True)\ncapital = capital.per_capita(civ_pop=True)\nm2 = m2.per_capita(civ_pop=True)\n\n# Put GDP, consumption, investment, and M2 in units of thousands of dollars per person\ngdp.data = gdp.data*1000\ncons.data = cons.data*1000\ninvest.data = invest.data*1000\ncapital.data = capital.data*1000\nm2.data = m2.data/1000\n\n# Scale hours per person to equal 100 on October (Quarter III) of 2012 \nhours.data = hours.data/hours.data.loc['2012-10-01']*100\n\n# Compute and plot log real GDP, log consumption, log investment, log hours\ngdp_log = gdp.log()\ncons_log = cons.log()\ninvest_log = invest.log()\nhours_log = hours.log()\ncapital_log = capital.log()\ntfp_log = tfp.log()\nm2_log = m2.log()\nm2_log = m2.log()",
"_____no_output_____"
],
[
"# HP filter to isolate trend and cyclical components\ngdp_log_cycle,gdp_log_trend = gdp_log.hp_filter()\ncons_log_cycle,cons_log_trend = cons_log.hp_filter()\ninvest_log_cycle,invest_log_trend = invest_log.hp_filter()\nhours_log_cycle,hours_log_trend = hours_log.hp_filter()\ncapital_log_cycle,capital_log_trend = capital_log.hp_filter()\ntfp_log_cycle,tfp_log_trend = tfp_log.hp_filter()\nm2_log_cycle,m2_log_trend = m2_log.hp_filter()\ntb3mo_cycle,tb3mo_trend = tb3mo.hp_filter()\nunemp_cycle,unemp_trend = unemp.hp_filter()\npcec_cycle,pcec_trend = pcec.hp_filter()",
"_____no_output_____"
],
[
"# Create a DataFrame with actual and trend data\ndata = pd.DataFrame({\n 'gdp':gdp.data,\n 'gdp_trend':np.exp(gdp_log_trend.data),\n 'gdp_cycle':gdp_log_cycle.data,\n 'consumption':cons.data,\n 'consumption_trend':np.exp(cons_log_trend.data),\n 'consumption_cycle':cons_log_cycle.data,\n 'investment':invest.data,\n 'investment_trend':np.exp(invest_log_trend.data),\n 'investment_cycle':invest_log_cycle.data,\n 'hours':hours.data,\n 'hours_trend':np.exp(hours_log_trend.data),\n 'hours_cycle':hours_log_cycle.data,\n 'capital':capital.data,\n 'capital_trend':np.exp(capital_log_trend.data),\n 'capital_cycle':capital_log_cycle.data,\n 'tfp':tfp.data,\n 'tfp_trend':np.exp(tfp_log_trend.data),\n 'tfp_cycle':tfp_log_cycle.data,\n 'real_m2':m2.data,\n 'real_m2_trend':np.exp(m2_log_trend.data),\n 'real_m2_cycle':m2_log_cycle.data,\n 't_bill_3mo':tb3mo.data,\n 't_bill_3mo_trend':tb3mo_trend.data,\n 't_bill_3mo_cycle':tb3mo_cycle.data,\n 'pce_inflation':pcec.data,\n 'pce_inflation_trend':pcec_trend.data,\n 'pce_inflation_cycle':pcec_cycle.data,\n 'unemployment':unemp.data,\n 'unemployment_trend':unemp_trend.data,\n 'unemployment_cycle':unemp_cycle.data,\n },index = gdp.data.index)\n\n\n# # RBC Data\n# columns_ordered =[]\n# names = ['gdp','consumption','investment','hours','capital','tfp']\n# for name in names:\n# columns_ordered.append(name)\n# columns_ordered.append(name+'_trend')\n\n# data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend.csv')\n\n\n# # Create a DataFrame with actual, trend, and cycle data\n# columns_ordered =[]\n# names = ['gdp','consumption','investment','hours','capital','tfp']\n# for name in names:\n# columns_ordered.append(name)\n# columns_ordered.append(name+'_trend')\n# columns_ordered.append(name+'_cycle')\n \n# data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend_cycle.csv')\n\n\n# Business Cycle Data\ncolumns_ordered =[]\nnames = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']\nfor name in names:\n columns_ordered.append(name)\n columns_ordered.append(name+'_trend')\n\ndata[columns_ordered].to_csv('../Csv/business_cycle_data_actual_trend.csv')\n\n\n# Create a DataFrame with actual, trend, and cycle data\ncolumns_ordered =[]\nnames = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']\nfor name in names:\n columns_ordered.append(name)\n columns_ordered.append(name+'_trend')\n columns_ordered.append(name+'_cycle')\n \ndata[columns_ordered].to_csv('../Csv/business_cycle_data_actual_trend_cycle.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.