d0889763edf64586fcfee5e7c18b5ce43dad1b4b | 181,416 | ipynb | Jupyter Notebook | MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb | LuGeorgiev/Python-SoftUni | 545daa4684b7a333f78dd958f8e9d13263575ddf | [
"MIT"
] | null | null | null | MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb | LuGeorgiev/Python-SoftUni | 545daa4684b7a333f78dd958f8e9d13263575ddf | [
"MIT"
] | null | null | null | MathConceptsForDevelopers/01.InitialExcercise/High-School Maths Exercise.ipynb | LuGeorgiev/Python-SoftUni | 545daa4684b7a333f78dd958f8e9d13263575ddf | [
"MIT"
] | null | null | null | 184.553408 | 33,472 | 0.881278 | [
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"# Write your imports here\nimport sympy as sp\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# High-School Maths Exercise\n## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow",
"_____no_output_____"
],
[
"### Problem 1. Markdown\nJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.\n\nFirst, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press <kbd>Ctrl</kbd> + <kbd>Enter</kbd>.\n\nSecond, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).\n\nLet me give you a...\n#### Quick Introduction to Markdown\n##### Text and Paragraphs\nThere are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:\n```\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n```\n**Result:**\n\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n\n##### Headings\nThere are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). 
Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). In order to \"escape\" a symbol, prefix it with a backslash (\\). You can also strike through your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press <kbd>Tab</kbd> once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n \nTo create an unordered list, type an asterisk, plus or minus at the beginning:\n```\n* This is\n* An\n + Unordered\n - list\n```\n\n**Result:**\n* This is\n* An\n + Unordered\n - list\n \n##### Links\nThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. 
See how it works:\n```\nThis is [a link](http://google.com) to Google.\n```\n\n**Result:**\n\nThis is [a link](http://google.com) to Google.\n\n##### Images\nThey are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):\n```\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n```\n\n**Result:**\n\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n\nIf you want to resize images or do some more advanced stuff, just use HTML. \n\nDid I mention these cells support HTML, CSS and JavaScript? Now I did.\n\n##### Tables\nThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.\n```\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n```\n\n**Result:**\n\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n\n##### Code\nJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.\n<pre>\n```python\ndef square(x):\n return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n</pre>\n\n**Result:**\n```python\ndef square(x):\n return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.",
"_____no_output_____"
],
[
"**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).",
"_____no_output_____"
],
[
"<p style=\"color: #d9534f\">Write some Markdown here.</p>\n\n# This is my highlight with an _italic_ word\n\n<p style=\"text-align:right\">by LuGe</p>\n\n### A bit of Python code\n\n```python\n\ndef multiply_by(x, y):\n    return x * y\n\n```\nThe Python method above will `multiply` any two numbers; in other words: $result = x * y$\n",
"_____no_output_____"
]
],
[
[
"def multiply_by(x, y):\n return x * y\n ",
"_____no_output_____"
],
[
"res = multiply_by(4, 7.21324)\nprint(res)",
"28.85296\n"
]
],
[
[
"### Problem 2. Formulas and LaTeX\nWriting math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.\n\nThere are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.\n\nMost commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \\frac{a}{b} $$`: $$ \\frac{a}{b} $$.\n\n[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.\n\nYou're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.\n\nNote that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.\n\n",
"_____no_output_____"
],
[
"<p style=\"color: #d9534f\">Write your formulas here.</p>\nEquation of a line: $$y = ax+b$$\nRoots of the quadratic equation $ax^2 + bx + c = 0$: $$x_{1,2}=\\frac{-b\\pm\\sqrt{b^2-4ac}}{2a}$$\nTaylor series expansion: $$f(x)\\arrowvert_{x=a}=f(a)+f'(a)(x-a)+\\frac{f''(a)}{2!}(x-a)^2+\\dots+\\frac{f^{(n)}(a)}{n!}(x-a)^n+\\dots$$\nBinomial theorem: $$ (x+y)^n=\\left(\\begin{array}{c}n \\\\0 \\end{array}\\right)x^ny^0+\\left(\\begin{array}{c}n \\\\1 \\end{array}\\right)x^{n-1}y^1+\\dots+ \\left(\\begin{array}{c}n \\\\n \\end{array}\\right)x^0y^n=\\sum^n_{k=0}\\left(\\begin{array}{c}n \\\\k \\end{array}\\right)x^{n-k}y^k$$\nAn integral (this one is a lot of fun to solve :D): $$\\int_{-\\infty}^{+\\infty}e^{-x^{2}}dx=\\sqrt\\pi$$\nA short matrix: $$\\left(\\begin{array}{ccc}2&1&3 \\\\2&6&8\\\\6&8&18 \\end{array}\\right)$$\nA long matrix: $$A=\\left(\\begin{array}{cccc}a_{11}&a_{12}&\\dots&a_{1n} \\\\a_{21}&a_{22}&\\dots&a_{2n}\\\\\\vdots&\\vdots&\\ddots&\\vdots \\\\a_{m1}&a_{m2}&\\dots&a_{mn}\\end{array}\\right)$$",
"_____no_output_____"
],
[
"### Problem 3. Solving with Python\nLet's first do some symbolic computation. We need to import `sympy` first. \n\n**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**\n\nLet's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): \n```python \nimport sympy \n```\n\nNext, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:\n```python \nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\n```\n\nNow solve:\n```python \nsympy.solve(a * x**2 + b * x + c)\n```\n\nHmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:\n```python \nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nFinally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.",
"_____no_output_____"
]
],
[
[
"sp.init_printing()\nx = sp.symbols('x')\na,b,c = sp.symbols('a b c')",
"_____no_output_____"
],
[
"sp.solve(a*x**2 + b*x + c, x)",
"_____no_output_____"
]
],
[
[
"How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?\n\nRemember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.\n\nIf $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$\n\nIf $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$\n\nIf $b^2 - 4ac < 0$, the equation has zero real roots\n\nWrite a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.",
"_____no_output_____"
]
],
[
[
"def solve_quadratic_equation(a, b, c):\n    d = b**2 - 4*a*c\n    # Linear case (a == 0): fall back to solving bx + c = 0\n    if a == 0 and b != 0:\n        return [-c / b]\n    elif a == 0:\n        return []\n    elif d < 0:\n        return []\n    elif d == 0:\n        # Note the parentheses: -b/2*a would mean (-b/2)*a due to operator precedence\n        return [-b / (2 * a)]\n    else:\n        d = math.sqrt(d)\n        return [(-b - d) / (2 * a), (-b + d) / (2 * a)]",
"_____no_output_____"
],
[
"# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests\nprint(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0]\nprint(solve_quadratic_equation(1, -8, 16)) # [4.0]\nprint(solve_quadratic_equation(1, 1, 1)) # []\nprint(solve_quadratic_equation(0, 1, 1)) # [-1.0]\nprint(solve_quadratic_equation(0, 0, 1)) # []",
"[-1.0, 2.0]\n[4.0]\n[]\n[-1.0]\n[]\n"
]
],
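As a quick sanity check (a sketch, not part of the exercise), the same roots can be recovered numerically with `np.roots`, which returns all (possibly complex) roots of the polynomial; we keep only the numerically real ones:

```python
import numpy as np

def real_roots(a, b, c):
    # np.roots takes the coefficients [a, b, c] of a*x^2 + b*x + c
    # and returns all roots as (possibly complex) numbers.
    roots = np.roots([a, b, c])
    # Keep only roots whose imaginary part is numerically zero.
    return sorted(float(r.real) for r in roots if abs(r.imag) < 1e-6)

print(real_roots(1, -1, -2))  # [-1.0, 2.0]
print(real_roots(1, 1, 1))    # []
```

Unlike the function above, `np.roots` reports a double root twice, so `real_roots(1, -8, 16)` yields two (nearly) equal values rather than a single one.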
[
[
"**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).",
"_____no_output_____"
],
[
"### Problem 4. Equation of a Line\nLet's go back to our linear equations and systems. There are many ways to define what \"linear\" means, but they all boil down to the same thing.\n\nThe equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).\n\nThe function produces a straight line and we can see it.\n\nHow do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.\n\nNow, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:\n* All elements in it must be of the same type\n* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.\n\nThere's one more thing: it's blazingly fast because all computations are done in C, instead of Python.\n\nFirst let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:\n```python\nimport numpy as np\n```\n\nImport that at the top cell and don't forget to re-run it.\n\nNext, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. 
You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).\n```python\nx = np.linspace(-3, 5, 1000)\n```\nNow, let's generate the function values:\n```python\ny = 2 * x + 3\n```\n\nWe can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.\n```python\nimport matplotlib.pyplot as plt\n```\n\nNow, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a \"magic string\": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.\n```python\nplt.plot(x, y)\nplt.show()\n```",
"_____no_output_____"
]
],
[
[
"k = np.arange(1, 7, 1)\nprint(k)\n\nx = np.linspace(-3, 5, 1000)\n\n# Broadcasting evaluates the function at all 1000 points in one expression\ny = 2 * x + 3\n\nplt.plot(x,y)\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n\nxticks = ax.xaxis.get_major_ticks() \nxticks[4].label1.set_visible(False)\nyticks = ax.yaxis.get_major_ticks()\nyticks[2].label1.set_visible(False)\nax.text(-0.3,-1, '0', fontsize = 12)\n\nplt.show()",
"[1 2 3 4 5 6]\n"
]
],
[
[
"It doesn't look too bad, but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the \"spines\" of the plot (i.e. the borders).\n\nAll `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for \"axis\".\nLet's save it in a variable (in order to prevent multiple calculations and to make the code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right ones.\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n```\n\n**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.\n\nThis should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).",
"_____no_output_____"
],
[
"### * Problem 5. Linearizing Functions\nWhy is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. \n\nA commonly used method for linearizing functions is through algebraic transformations. Try to linearize \n$$ y = ae^{bx} $$\n\nHint: The inverse operation of $e^{x}$ is $\\ln(x)$. Start by taking $\\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).",
"_____no_output_____"
]
],
[
[
"x = np.linspace(-5,5,5000)\ny = 0.5 * np.exp(0.5 * x)",
"_____no_output_____"
],
[
"plt.plot(x, y)\nplt.title('exponent')\nplt.show()",
"_____no_output_____"
]
],
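A quick way to verify the linearization hinted at in the problem (a sketch; `np.polyfit` is just one convenient fitting tool, not prescribed by the exercise): taking $\ln$ of both sides of $y = ae^{bx}$ gives $\ln y = \ln a + bx$, which is linear in $x$ with slope $b$ and intercept $\ln a$. Fitting a straight line to $(x, \ln y)$ should therefore recover $a = 0.5$ and $b = 0.5$ from the data plotted above:

```python
import numpy as np

# Same exponential as in the cell above: y = a * e^(b*x) with a = b = 0.5
x = np.linspace(-5, 5, 5000)
y = 0.5 * np.exp(0.5 * x)

# ln(y) = ln(a) + b*x is linear in x, so a degree-1 fit recovers b and ln(a)
b_fit, log_a_fit = np.polyfit(x, np.log(y), 1)
print(b_fit, np.exp(log_a_fit))  # ≈ 0.5 0.5
```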
[
[
"### * Problem 6. Generalizing the Plotting Function\nLet's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.\n\nNote: We can also pass *lambda expressions* (anonymous functions) like this: \n```python\nlambda x: x + 2```\nThis is a shorter way to write\n```python\ndef some_anonymous_function(x):\n return x + 2\n```\n\nWe'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.\n\nWrite a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.\n\n**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):\n```python\nf_vectorized = np.vectorize(f)\ny = f_vectorized(x)\n```",
"_____no_output_____"
]
],
[
[
"def plot_math_function(f, min_x, max_x, num_points):\n x = np.linspace(min_x, max_x, num_points)\n f_vectorized = np.vectorize(f)\n y = f_vectorized(x)\n plt.plot(x,y)\n plt.show()\n ",
"_____no_output_____"
],
[
"plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)\nplot_math_function(lambda x: -x + 8, -1, 10, 1000)\nplot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)\nplot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)\nplot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)",
"_____no_output_____"
]
],
[
[
"### * Problem 7. Solving Equations Graphically\nNow that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the \"=\" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.\n\nTo do this, we'll need to improve our plotting function once more. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.\n\n```python\nvectorized_fs = [np.vectorize(f) for f in functions]\nys = [vectorized_f(x) for vectorized_f in vectorized_fs]\n```",
"_____no_output_____"
]
],
[
[
"def plot_math_functions(functions, min_x, max_x, num_points):\n    x = np.linspace(min_x, max_x, num_points)\n    # Vectorize each function so that non-numpy callables also work on arrays\n    vectorized_fs = [np.vectorize(f) for f in functions]\n    ys = [vectorized_f(x) for vectorized_f in vectorized_fs]\n    for y in ys:\n        plt.plot(x, y)\n    plt.show()",
"_____no_output_____"
],
[
"plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)\nplot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)",
"_____no_output_____"
]
],
[
[
"This is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.",
"_____no_output_____"
]
],
[
[
"plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)",
"_____no_output_____"
]
],
[
[
"### Problem 8. Trigonometric Functions\nWe already saw the graph of the function $y = \\sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.\n\n<img src=\"angle-in-right-triangle.png\" style=\"max-height: 200px\" alt=\"Right triangle\" />\n\nThe two basic trigonometric functions are defined as the ratio of two sides:\n$$ \\sin(x) = \\frac{\\text{opposite}}{\\text{hypotenuse}} $$\n$$ \\cos(x) = \\frac{\\text{adjacent}}{\\text{hypotenuse}} $$\n\nAnd also:\n$$ \\tan(x) = \\frac{\\text{opposite}}{\\text{adjacent}} = \\frac{\\sin(x)}{\\cos(x)} $$\n$$ \\cot(x) = \\frac{\\text{adjacent}}{\\text{opposite}} = \\frac{\\cos(x)}{\\sin(x)} $$\n\nThis is fine, but using this, \"right-triangle\" definition, we're able to calculate the trigonometric functions of angles up to $90^\\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a \"unit circle\".\n\n<img src=\"triangle-unit-circle.png\" style=\"max-height: 300px\" alt=\"Trigonometric unit circle\" />\n\nWe can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\\cos(\\alpha)$ and the $y$-coordinate - to $\\sin(\\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\\circ$. After that, the same values repeat: these functions are **periodic**: \n$$ \\sin(k.360^\\circ + \\alpha) = \\sin(\\alpha), k = 0, 1, 2, \\dots $$\n$$ \\cos(k.360^\\circ + \\alpha) = \\cos(\\alpha), k = 0, 1, 2, \\dots $$\n\nWe can, of course, use this picture to derive other identities, such as:\n$$ \\sin(90^\\circ + \\alpha) = \\cos(\\alpha) $$\n\nA very important property of the sine and cosine is that they accept values in the range $(-\\infty; \\infty)$ and produce values in the range $[-1; 1]$. 
The two other functions take values in the range $(-\\infty; \\infty)$ **except when their denominators are zero** and produce values in $(-\\infty; \\infty)$ as well. \n\n#### Radians\nA degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\\text{rad}$ or without any designation, so $\\sin(2)$ means \"sine of two radians\".\n\n\nIt's defined as *the central angle of an arc with length equal to the circle's radius* and $1\\text{rad} \\approx 57.296^\\circ$.\n\nWe know that the circle circumference is $C = 2\\pi r$, therefore we can fit exactly $2\\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\\circ$ or $2\\pi\\ \\text{rad}$. Also, $\\pi\\ \\text{rad} = 180^\\circ$.\n\n(Some people prefer using $\\tau = 2\\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)\n\n**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\\text{[deg]} = 180/\\pi.\\text{[rad]}, \\text{[rad]} = \\pi/180.\\text{[deg]}$. This can be done using `np.rad2deg()` and `np.deg2rad()` respectively.\n\n#### Inverse trigonometric functions\nAll trigonometric functions have their inverses. If you plug in, say $\\pi/4$ in the $\\sin(x)$ function, you get $\\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example:\n$$ \\arcsin(y) = x \\iff \\sin(x) = y $$\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} $$\n\nPlease note that this is NOT entirely correct. 
From the relations we found:\n$$\\sin(x) = \\sin(2k\\pi + x), k = 0, 1, 2, \\dots $$\n\nit follows that $\\arcsin(x)$ has infinitely many values, separated by $2\\pi$ radians each:\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} + 2k\\pi, k = 0, 1, 2, \\dots $$\n\nIn most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.\n\nNote 1: There are inverse functions for all four basic trigonometric functions: $\\arcsin$, $\\arccos$, $\\arctan$, $\\text{arccot}$. These are sometimes written as $\\sin^{-1}(x)$, $\\cos^{-1}(x)$, etc. These definitions are completely equivalent. \n\nJust notice the difference between $\\sin^{-1}(x) := \\arcsin(x)$ and $\\sin(x^{-1}) = \\sin(1/x)$.",
"_____no_output_____"
],
[
"#### Exercise\nUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).",
"_____no_output_____"
]
],
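A few quick numeric checks of the claims above (a sketch using only `numpy`): the degree/radian conversions, the principal value of arcsine, and the $2\pi$ periodicity of sine.

```python
import numpy as np

# Degree <-> radian conversions
print(np.isclose(np.deg2rad(180), np.pi))                # True
print(np.isclose(np.rad2deg(np.pi / 2), 90.0))           # True

# The principal value of arcsin(sqrt(2)/2) is pi/4
print(np.isclose(np.arcsin(np.sqrt(2) / 2), np.pi / 4))  # True

# sin is periodic with period 2*pi
print(np.isclose(np.sin(0.5), np.sin(0.5 + 2 * np.pi)))  # True
```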
[
[
"x = np.linspace(-10,10)\n\nplt.plot(x, np.arctan(x))\nplt.plot(x, np.sin(x))\nplt.plot(x, np.cos(x))\n\nplt.show()",
"_____no_output_____"
],
[
"x = np.linspace(-10, 10)\nplt.plot(x, np.arccosh(x))\nplt.show()",
"C:\\Users\\lgeorgiev\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in arccosh\n \n"
]
],
[
[
"### ** Problem 9. Perlin Noise\nThis algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).\n#### Noise\nNoise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.\nWe can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.\n\n$$ \\text{noise}(x, y) = N, N \\in [n_{min}, n_{max}] $$\n\nThis function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a \"scalar field\").\n\nRandom variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. In the most basic case, we can have \"uniform noise\" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. 
Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. It can show you both how to organize your notebook (which is important) and how to implement the algorithm.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0889c09167ed4df0cf2245adc950e37c9d1dd18 | 35,910 | ipynb | Jupyter Notebook | Python_Modules/STT.ipynb | Soyeon-ErinLee/KPMG_Ideation | 463f83f920b747a011e19e71d5951027ef05a826 | [
"MIT"
] | 4 | 2021-02-03T16:27:40.000Z | 2021-02-11T20:46:39.000Z | Python_Modules/STT.ipynb | Soyeon-ErinLee/KPMG_Ideation | 463f83f920b747a011e19e71d5951027ef05a826 | [
"MIT"
] | 4 | 2021-02-12T20:54:48.000Z | 2021-02-21T11:37:48.000Z | Python_Modules/STT.ipynb | Soyeon-ErinLee/KPMG_Ideation | 463f83f920b747a011e19e71d5951027ef05a826 | [
"MIT"
] | 3 | 2021-02-06T14:23:39.000Z | 2021-02-21T15:44:56.000Z | 54.992343 | 1,928 | 0.549346 | [
[
[
"Initial implementation 1/14 by 소연 \r\n\r\nWhen editing or testing, please use a copy rather than this original file.",
"_____no_output_____"
]
],
[
[
"import os, sys\r\nfrom google.colab import drive\r\ndrive.mount('/content/drive')\r\n%cd /content/drive/Shareddrives/KPMG_Ideation\r\nimport warnings\r\nwarnings.filterwarnings('ignore')\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom pprint import pprint\r\nfrom krwordrank.word import KRWordRank\r\nfrom copy import deepcopy\r\nimport kss\r\nimport itertools\r\nimport unicodedata\r\nimport requests\r\nfrom functools import reduce\r\nfrom bs4 import BeautifulSoup\r\nimport string\r\nimport torch\r\nfrom textrankr import TextRank\r\nfrom lexrankr import LexRank\r\nfrom nltk.corpus import stopwords \r\nfrom nltk.tokenize import word_tokenize, sent_tokenize \r\nfrom pydub import AudioSegment\r\nfrom konlpy.tag import Okt\r\nimport re\r\nimport nltk\r\n# nltk.download('punkt')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n/content/drive/Shareddrives/KPMG_Ideation\n"
],
[
"# import pre-trained model -- frameBERT (pytorch GPU 환경 필요)\r\n%cd /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\r\n!pip install transformers \r\nimport frame_parser\r\npath=\"/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\"\r\nparser = frame_parser.FrameParser(model_path=path, language='ko')",
"/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\nRequirement already satisfied: transformers in /usr/local/lib/python3.6/dist-packages (4.2.1)\nRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8)\nRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.8)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.6/dist-packages (from transformers) (0.9.4)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers) (0.0.43)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from transformers) (3.3.0)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.0.0)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < 
\"3.8\"->transformers) (3.4.0)\nRequirement already satisfied: typing-extensions>=3.6.4; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.7.4.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.12.5)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)\nsrl model: framenet\nlanguage: ko\nversion: 1.2\nusing viterbi: False\nusing masking: True\npretrained BERT: bert-base-multilingual-cased\nusing TGT special token: True\nused dictionary:\n\t /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT/src/../koreanframenet/resource/info/kfn1.2_lu2idx.json\n\t /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT/src/../koreanframenet/resource/info/kfn1.2_lufrmap.json\n\t /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT/src/../koreanframenet/resource/info/mul_bio_frargmap.json\n...loaded model path: /content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\n/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\n...model is loaded\n"
],
[
"##### below are permanently installed packages #####\r\n# nb_path = '/content/notebooks'\r\n# os.symlink('/content/drive/Shareddrives/KPMG_Ideation', nb_path)\r\n# sys.path.insert(0, nb_path)\r\n# !pip install --target=$nb_path pydub\r\n# !pip install --target=$nb_path kss\r\n# %cd /content/drive/Shareddrives/KPMG_Ideation/hanspell\r\n# !python setup.py install\r\n# !pip install --target=$nb_path transformers\r\n# !apt-get update\r\n# !apt-get g++ openjdk-8-jdk \r\n# !pip3 install --target=$nb_path konlpy\r\n# !pip install --target=$nb_path soykeyword\r\n# !pip install --target=$nb_path krwordrank\r\n# !pip install --target=$nb_path bert\r\n# !pip install --target=$nb_path textrankr\r\n# !pip install --target=$nb_path lexrankr",
"_____no_output_____"
],
[
"# Due to google api credentials, SpeechRecognition needs to be installed everytime\r\n!pip install SpeechRecognition\r\nimport speech_recognition as sr\r\n# !pip install --upgrade google-cloud-speech",
"Requirement already satisfied: SpeechRecognition in /usr/local/lib/python3.6/dist-packages (3.8.1)\n"
],
[
"def to_wav(audio_file_name):\r\n if audio_file_name.split('.')[1] == 'mp3': \r\n sound = AudioSegment.from_mp3(audio_file_name)\r\n audio_file_name = audio_file_name.split('.')[0] + '.wav'\r\n sound.export(audio_file_name, format=\"wav\")\r\n if audio_file_name.split('.')[1] == 'm4a':\r\n sound = AudioSegment.from_file(file_name,'m4a')\r\n audio_file_name = audio_file_name.replace('m4a','wav')\r\n sound.export(audio_file_name, format=\"wav\")",
"_____no_output_____"
],
[
"#!/usr/bin/env python3\r\nfiles_path = ''\r\nfile_name = ''\r\n\r\nstartMin = 0\r\nstartSec = 0\r\n\r\nendMin = 4\r\nendSec = 30\r\n\r\n# Time to miliseconds\r\nstartTime = startMin*60*1000+startSec*1000\r\nendTime = endMin*60*1000+endSec*1000\r\n\r\n%cd /content/drive/Shareddrives/KPMG_Ideation/data\r\nfile_name='audio_only_1.m4a'\r\ntrack = AudioSegment.from_file(file_name,'m4a')\r\nwav_filename = file_name.replace('m4a', 'wav')\r\nfile_handle = track.export(wav_filename, format='wav')\r\nsong = AudioSegment.from_wav('audio_only_1.wav')\r\nextract = song[startTime:endTime]\r\n\r\n# Saving as wav\r\nextract.export('result.wav', format=\"wav\")\r\n\r\nAUDIO_FILE = os.path.join(os.path.dirname(os.path.abspath('data')), \"result.wav\")\r\n\r\n# use the audio file as the audio source\r\nr = sr.Recognizer()\r\nwith sr.AudioFile(AUDIO_FILE) as source:\r\n audio = r.record(source) # read the entire audio file\r\n\r\n# recognize speech using Google Speech Recognition\r\ntry:\r\n # for testing purposes, we're just using the default API key\r\n # to use another API key, use `r.recognize_google(audio, key=\"GOOGLE_SPEECH_RECOGNITION_API_KEY\")`\r\n # instead of `r.recognize_google(audio)`\r\n txt = r.recognize_google(audio, language='ko')\r\n print(\"Google Speech Recognition:\" + txt)\r\nexcept sr.UnknownValueError:\r\n print(\"Google Speech Recognition could not understand audio\")\r\nexcept sr.RequestError as e:\r\n print(\"Could not request results from Google Speech Recognition service; {0}\".format(e))\r\n ",
"/content/drive/Shareddrives/KPMG_Ideation/data\nGoogle Speech Recognition:문제가 있었다 그 알고리즘으로 막상 해 놨길래 했는데 거기서 그 의료인들이 거의 다 빠지고 1순위에서 의료인들이 제일 필요한데 개를 다 빠지고 다른데 접종을 하는 것 때문에 막 난리가 났다고 그래가지고 알고리즘을 다시 만나기로 했어 나 뭐 이런 얘기 하다 그래서 거기서 어떻게 그 어떤 자료들을 사용을 한다면은 아까 말씀했듯이 그 바이러스 노출도 직업에따른 바이러스 노출도 그러나이 그걸 가진 크게 두 개로 받고 멜론에서 나이가 65세 이상 해결하는 거 아닌가 25세 이하 라던가 하면은 그 조금 더 가산점을 주는 거라던가 아니면은 뭐 직업특성상 뭐 그 의도가 이런데 종사하면 가산점을 주는 거고 근데 반면에 이제 조금 문제가 어떻게 될 수 있냐면 직업특성 만 가지고 하면 내 막상 재택근무자 아들이라도 산업군이 막 그런 그쪽에서 켜면은 태권도는 아무런 위험에 노출되어 있지 않은데 그런 식으로 바다쪽으로 받아가지고 오히려 더 그런 그런 문제들이 아직 해결 할 때 많이 남았다고 찾아왔어요 굉장히 좋은 지적이십니다 그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업특성 뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이 라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다 목적에 따라 준다 그리고 또 다른 이슈가 될 만한게 또 배탈 개 중에 하나가 박신 백신을 맞고 나서 이제 어느 정도로 확진자 수가 누적되서 증가하는 야 그걸 또 어느 도착할 것으로 보는 경우도 많잖아요 그러려면 그렇죠 그런데 그럴 때는 차라리 야 그런 그의 만약 그 수치를 낮추고 싶으면 감옥에 있는 사람들아 사람들이 맞추는게 제일 좋지 않냐라는 의견들이 있어서 좀 논란이 됐다고 하더라고요 그래서 타겠어요 줘 동해 보는게 좋을 거 같고 왜냐면 사실대로 감옥에서 가장 감염이 많이 일어나긴 했으니까요 저게 나라 타기 시작한다 좀 계속 와 가지고 없는데이 바이러스 바이러스 백신 저희가 여러 개가 있잖아요 제가 아는 것만 정도 되는데 그 모든 백진희 백신에 대해서 최적화알고리즘 나오는 건가요 저희는 어떤 특정 몇 개의 백신의 포커스링 아 그렇네요 따라서 죄송합니다 아네네네 처음에 생각하기로는 그렇게 백신의 여러 개까지 아니고요 일단은 택시 하나에 대해서 그 그 대신에 그 예방 줄 알고 있다 하고 그걸 적용하고 그렇게 생각했습니다 말씀하세요 적용해 볼 수 있을 것 같네요 내일 각각 내가 문제를 적용할 수 있을 거니까 그 저도 얘기를 한번 해 보자면 그거 그렇게 이제 백신의 각각의 따라서 어떻게 그 최적화모델 만드는 거는 조금 일을 키우는 거 같긴 한데 많은 얘기를 키우고 잘 맞는 것도 이제 따라서 달라질 수밖에 없는게 콜드체인 방이 달라지기 때문에 어디에 어디에는 이거를 보내는 것이 더 비효율적일 수도 있고 가장 가까운 데에다가 나는 그냥 빠져 공급망을 사용하는 거는 문제가 없지만 막 멀리 있는 데까지 굳이 그런 걸 보내기에는 시간이나 이런 뭐 경제적인 제약이 있을 수 그런 그런 것까지 고려를 하는 거네 좀 후순위 담긴 거 같고 그 민찬이 말씀했듯이 우선 가장 기본적인 모델은 재생산지수 같은 거를 이렇게 생각을 해 가지고 그 각 지역별로 그게 다르잖아요 그 각 지역별로 인구밀도 하던가 아니면은 그 사람들이 활동량이 다르기 때문에 어떤 곳은 그 전체 인구의 10% 만 백신이나 그 뭐라고 하지 학원이 있어\n"
],
[
"%cd /content/drive/Shareddrives/KPMG_Ideation/hanspell\r\nfrom hanspell import spell_checker\r\nchked=\"\"\r\nline = kss.split_sentences(txt)\r\nfor i in range(len(line)):\r\n line[i] = spell_checker.check(line[i])[2]\r\n print(\"Checked spelling \",line[i])\r\n chked += \"\".join(line[i])\r\n chked += \". \"",
"/content/drive/Shareddrives/KPMG_Ideation/hanspell\nChecked spelling 문제가 있었다\nChecked spelling 그 알고리즘으로 막상 해 놨길래 했는데 거기서 그 의료인들이 거의 다 빠지고 1순위에서 의료인들이 제일 필요한데 개를 다 빠지고 다른데 접종을 하는 것 때문에 막 난리가 났다고 그래가지고 알고리즘을 다시 만나기로 했어 나 뭐 이런 얘기 하다\nChecked spelling 그래서 거기서 어떻게 그 어떤 자료들을 사용을 한다면은 아까 말씀했듯이 그 바이러스 노출도 직업에 따른 바이러스 노출도 그러나 이 그걸 가진 크게 두 개로 받고 멜론에서 나이가 65세 이상 해결하는 거 아닌가 25세 이하라던가 하면은 그 조금 더 가산점을 주는 거라던가 아니면은 뭐 직업 특성상 뭐 그 의도가 이런데 종사하면 가산점을 주는 거고 근데 반면에 이제 조금 문제가 어떻게 될 수 있냐면 직업 특성 만 가지고 하면 내 막상 재택근무자 아들이라도 산업 군이 막 그런 그쪽에서 켜면은 태권도는 아무런 위험에 노출되어 있지 않은데 그런 식으로 바다 쪽으로 받아가지고 오히려 더 그런 그런 문제들이 아직 해결할 때 많이 남았다고 찾아왔어요\nChecked spelling 굉장히 좋은 지적이십니다\nChecked spelling 그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업 특성뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴 먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면 네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다\nChecked spelling 목적에 따라 준다\nChecked spelling 그리고 또 다른 이슈가 될 만한 게 또 배탈 개 중에 하나가 박신 백신을 맞고 나서 이제 어느 정도로 확진자 수가 누적돼서 증가하는 야 그걸 또 어느 도착할 것으로 보는 경우도 많잖아요\nChecked spelling 그러려면 그렇죠 그런데 그럴 때는 차라리 야 그런 그의 만약 그 수치를 낮추고 싶으면 감옥에 있는 사람들아 사람들이 맞추는 게 제일 좋지 않냐라는 의견들이 있어서 좀 논란이 됐다고 하더라고요\nChecked spelling 그래서 타겠어요\nChecked spelling 줘 동해 보는 게 좋을 거 같고 왜냐면 사실대로 감옥에서 가장 감염이 많이 일어나긴 했으니까요\nChecked spelling 저게 나라 타기 시작한다\nChecked spelling 좀 계속 와 가지고 없는데 이 바이러스 바이러스 백신 저희가 여러 개가 있잖아요\nChecked spelling 제가 아는 것만 정도 되는데 그 모든 백진희 백신에 대해서 최적화 알고리즘 나오는 건가요\nChecked spelling 저희는 어떤 특정 몇 개의 백신의 포커스링 아 그렇네요\nChecked spelling 따라서 죄송합니다\nChecked spelling 아 네네네 처음에 생각하기로는 그렇게 백신의 여러 개까지 아니고요\nChecked spelling 일단은 택시 하나에 대해서 그 그 대신에 그 예방 줄 알고 있다 하고 그걸 적용하고 그렇게 생각했습니다\nChecked spelling 말씀하세요\nChecked spelling 적용해 볼 수 있을 것 같네요\nChecked spelling 내일 각각 내가 문제를 적용할 수 있을 거니까 그 저도 얘기를 한번 해 보자면 그거 그렇게 이제 백신의 각각의 따라서 어떻게 그 최적화 모델 만드는 거는 조금 일을 키우는 거 같긴 한데 많은 얘기를 키우고 잘 맞는 것도 이제 따라서 달라질 수밖에 없는 게 콜드체인 방이 달라지기 때문에 어디에 어디에는 이거를 
보내는 것이 더 비효율적일 수도 있고 가장 가까운 데에다가 나는 그냥 빠져 공급망을 사용하는 거는 문제가 없지만 막 멀리 있는 데까지 굳이 그런 걸 보내기에는 시간이나 이런 뭐 경제적인 제약이 있을 수 그런 그런 것까지 고려를 하는 거네 좀 후 순위 담긴 거 같고 그 민찬이 말씀했듯이 우선 가장 기본적인 모델은 재생산지수 같은 거를 이렇게 생각을 해 가지고 그 각 지역별로 그게 다르잖아요\nChecked spelling 그 각 지역별로 인구밀도 하더가 아니면은 그 사람들이 활동량이 다르기 때문에 어떤 곳은 그 전체 인구의 10%만 백신이나 그 뭐라고 하지 학원이 있어\n"
],
[
"chked",
"_____no_output_____"
],
[
"okt = Okt()\r\nclass Text():\r\n def __init__(self, text):\r\n text = re.sub(\"'\", ' ', text)\r\n paragraphs = text.split('\\n')\r\n self.text = text\r\n self.paragraphs = [i for i in paragraphs if i]\r\n self.counts = len(self.paragraphs)\r\n self.docs = [kss.split_sentences(paragraph) for paragraph in paragraphs if kss.split_sentences(paragraph)]\r\n self.newtext = deepcopy(self.text)\r\n print(\"TEXT\")\r\n\r\n def findall(self, p, s):\r\n i = s.find(p)\r\n while i != -1:\r\n yield i\r\n i = s.find(p, i + 1)\r\n \r\n def countMatcher(self, sentences, paragraph_no):\r\n paragraph = self.docs[paragraph_no]\r\n total_no = len(paragraph)\r\n vec = [0] * total_no\r\n \r\n for idx, candidate in enumerate(paragraph):\r\n for sentence in sentences:\r\n if sentence[:4] in candidate:\r\n vec[idx] += 1\r\n return vec\r\n\r\n\r\nclass Highlight(Text):\r\n def __init__(self, text):\r\n super().__init__(text)\r\n print(\"Highlight\")\r\n wordrank_extractor = KRWordRank(min_count=3, max_length=10)\r\n self.keywords, rank, graph = wordrank_extractor.extract(self.paragraphs)\r\n self.path = \"/content/drive/Shareddrives/KPMG_Ideation/OpenInformationExtraction/frameBERT\"\r\n p = []\r\n kw = []\r\n for k, v in self.keywords.items():\r\n p.append(okt.pos(k))\r\n kw.append(k)\r\n words = self.text.split(' ')\r\n s = set()\r\n keylist = [word for i in kw for word in words if i in word]\r\n keylist = [i for i in keylist if len(i)>2]\r\n for i in keylist:\r\n if len(i)>2:\r\n s.add(i)\r\n # print(\"KEYLIST: \",keylist)\r\n\r\n p = [okt.pos(word) for word in s]\r\n self.s = set()\r\n for idx in range(len(p)):\r\n ls = p[idx]\r\n for tags in ls:\r\n word,tag = tags\r\n if tag == \"Noun\":\r\n if len(word)>=2:\r\n self.s.add(word)\r\n self.keys = []\r\n for temp in self.s:\r\n self.keys.append(\" \" + str(temp))\r\n print(\"KEYWORDS: \", self.keys)\r\n\r\n def add_tags_conj(self, txt):\r\n conj = '그리고, 그런데, 그러나, 그래도, 그래서, 또는, 및, 즉, 게다가, 따라서, 때문에, 아니면, 왜냐하면, 단, 오히려, 비록, 예를 들어, 반면에, 하지만, 
그렇다면, 바로, 이에 대해'\r\n conj = conj.replace(\"'\", \"\")\r\n self.candidates = conj.split(\",\")\r\n self.newtext = deepcopy(txt)\r\n self.idx = [(i, i + len(candidate)) for candidate in self.candidates for i in\r\n self.findall(candidate, txt)]\r\n for i in range(len(self.idx)):\r\n try:\r\n self.idx = [(start, start + len(candidate)) for candidate in self.candidates for start in\r\n self.findall(candidate, self.newtext)]\r\n word = self.newtext[self.idx[i][0]:self.idx[i][1]]\r\n self.newtext = word.join([self.newtext[:self.idx[i][0]], self.newtext[self.idx[i][1]:]])\r\n except:\r\n pass\r\n return self.newtext",
"_____no_output_____"
],
[
"class Summarize(Highlight):\r\n def __init__(self, text, paragraph_no):\r\n super().__init__(text)\r\n print(\"length of paragraphs \",len(self.paragraphs))\r\n self.txt = self.paragraphs[paragraph_no]\r\n self.paragraph_no = paragraph_no\r\n\r\n def summarize(self):\r\n url = \"https://api.smrzr.io/v1/summarize?num_sentences=5&algorithm=kmeans\"\r\n headers = {\r\n 'content-type': 'raw/text',\r\n 'origin': 'https://smrzr.io',\r\n 'referer': 'https://smrzr.io/',\r\n 'sec-fetch-dest': 'empty',\r\n 'sec-fetch-mode': 'cors',\r\n 'sec-fetch-site': 'same-site',\r\n \"user-agent\": \"Mozilla/5.0\"\r\n }\r\n resp = requests.post(url, headers=headers, data= self.txt.encode('utf-8'))\r\n assert resp.status_code == 200\r\n summary = resp.json()['summary']\r\n temp = summary.split('\\n')\r\n print(\"BERT: \", temp)\r\n return temp\r\n\r\n\r\n def summarizeTextRank(self):\r\n tr = TextRank(sent_tokenize)\r\n summary = tr.summarize(self.txt, num_sentences=5).split('\\n')\r\n print(\"Textrank: \",summary)\r\n return summary\r\n\r\n\r\n def summarizeLexRank(self):\r\n lr = LexRank()\r\n lr.summarize(self.txt)\r\n summaries = lr.probe()\r\n print(\"Lexrank: \",summaries)\r\n return summaries\r\n\r\n def ensembleSummarize(self):\r\n a = np.array(self.countMatcher(self.summarize(), self.paragraph_no))\r\n \r\n try:\r\n b = np.array(self.countMatcher(self.summarizeLexRank(), self.paragraph_no))\r\n except:\r\n b = np.zeros_like(a)\r\n c = np.array(self.countMatcher(self.summarizeTextRank(),self.paragraph_no))\r\n result= a+b+c\r\n i, = np.where(result == max(result))\r\n txt, index = self.docs[self.paragraph_no][i[0]], i[0]\r\n return txt, index",
"_____no_output_____"
],
[
"result = chked\r\nhigh = Highlight(result)",
"TEXT\nHighlight\nKEYLIST: ['그런데', '바이러스', '바이러스', '바이러스', '바이러스', '바이러스', '따라서', '따라서', '따라서', '따라서', '직업에', '이런데', '노출도', '노출도', '노출도', '백신을', '백신에', '백신의', '백신의', '백신의', '백신이나', '맞춰야', '맞춰야지', '맞춰야', '맞춰야', '그렇게', '그렇게', '그렇게', '그래서', '그래서', '그래서', '문제가', '문제가', '문제들이', '문제를', '문제가', '목적에', '목적에', '목적에', '특성상', '특성상', '특성뿐만이', '때문에', '때문에', '때문에', '그래가지고', '가지고', '받아가지고', '있어가지고', '가지고', '가지고', '어떻게', '어떻게', '어떻게', '알고리즘으로', '알고리즘을', '알고리즘', '얘기를', '얘기를', '하면은', '종사하면', '하면은', '다른데', '다른게', '있어가지고', '있어서', '있어.', '아니면은', '아니라', '아니고요.', '아니면은', '사람들아', '사람들이', '사람들이', '적용하고', '적용해', '적용할', '활동하는', '야외활동이라던가', '활동이', '활동량이', '생각하기로는', '생각했습니다.', '생각을', '인구가', '인구밀도', '인구의', '달라진다.', '달라질', '달라지기']\nKEYWORDS: [' 때문', ' 백신', ' 문제', ' 사람', ' 따라서', ' 가지', ' 특성', ' 알고리즘', ' 노출', ' 활동량', ' 목적', ' 다른', ' 종사', ' 적용', ' 직업', ' 얘기', ' 활동', ' 인구', ' 밀도', ' 바이러스', ' 야외', ' 생각']\n"
],
[
"summarizer = Summarize(chked, 0)\r\nsum, id = summarizer.ensembleSummarize()\r\nprint(\"summarized \",sum)",
"TEXT\nHighlight\nKEYLIST: ['그런데', '바이러스', '바이러스', '바이러스', '바이러스', '바이러스', '따라서', '따라서', '따라서', '따라서', '직업에', '이런데', '노출도', '노출도', '노출도', '백신을', '백신에', '백신의', '백신의', '백신의', '백신이나', '맞춰야', '맞춰야지', '맞춰야', '맞춰야', '그렇게', '그렇게', '그렇게', '그래서', '그래서', '그래서', '문제가', '문제가', '문제들이', '문제를', '문제가', '목적에', '목적에', '목적에', '특성상', '특성상', '특성뿐만이', '때문에', '때문에', '때문에', '그래가지고', '가지고', '받아가지고', '있어가지고', '가지고', '가지고', '어떻게', '어떻게', '어떻게', '알고리즘으로', '알고리즘을', '알고리즘', '얘기를', '얘기를', '하면은', '종사하면', '하면은', '다른데', '다른게', '있어가지고', '있어서', '있어.', '아니면은', '아니라', '아니고요.', '아니면은', '사람들아', '사람들이', '사람들이', '적용하고', '적용해', '적용할', '활동하는', '야외활동이라던가', '활동이', '활동량이', '생각하기로는', '생각했습니다.', '생각을', '인구가', '인구밀도', '인구의', '달라진다.', '달라질', '달라지기']\nKEYWORDS: [' 때문', ' 백신', ' 문제', ' 사람', ' 따라서', ' 가지', ' 특성', ' 알고리즘', ' 노출', ' 활동량', ' 목적', ' 다른', ' 종사', ' 적용', ' 직업', ' 얘기', ' 활동', ' 인구', ' 밀도', ' 바이러스', ' 야외', ' 생각']\nlength of paragraphs 1\nBERT: ['그 알고리즘으로 막상 해 놨길래 했는데 거기서 그 의료인들이 거의 다 빠지고 1순위에서 의료인들이 제일 필요한데 개를 다 빠지고 다른데 접종을 하는 것 때문에 막 난리가 났다고 그래가지고 알고리즘을 다시 만나기로 했어 나 뭐 이런 얘기 하다. 그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업 특성뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴 먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면 네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다. 좀 계속 와 가지고 없는데 이 바이러스 바이러스 백신 저희가 여러 개가 있잖아요. 제가 아는 것만 정도 되는데 그 모든 백진희 백신에 대해서 최적화 알고리즘 나오는 건가요. 
내일 각각 내가 문제를 적용할 수 있을 거니까 그 저도 얘기를 한번 해 보자면 그거 그렇게 이제 백신의 각각의 따라서 어떻게 그 최적화 모델 만드는 거는 조금 일을 키우는 거 같긴 한데 많은 얘기를 키우고 잘 맞는 것도 이제 따라서 달라질 수밖에 없는 게 콜드체인 방이 달라지기 때문에 어디에 어디에는 이거를 보내는 것이 더 비효율적일 수도 있고 가장 가까운 데에다가 나는 그냥 빠져 공급망을 사용하는 거는 문제가 없지만 막 멀리 있는 데까지 굳이 그런 걸 보내기에는 시간이나 이런 뭐 경제적인 제약이 있을 수 그런 그런 것까지 고려를 하는 거네 좀 후 순위 담긴 거 같고 그 민찬이 말씀했듯이 우선 가장 기본적인 모델은 재생산지수 같은 거를 이렇게 생각을 해 가지고 그 각 지역별로 그게 다르잖아요.']\nLexrank: ['그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업 특성뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴 먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면 네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다', '내일 각각 내가 문제를 적용할 수 있을 거니까 그 저도 얘기를 한번 해 보자면 그거 그렇게 이제 백신의 각각의 따라서 어떻게 그 최적화 모델 만드는 거는 조금 일을 키우는 거 같긴 한데 많은 얘기를 키우고 잘 맞는 것도 이제 따라서 달라질 수밖에 없는 게 콜드체인 방이 달라지기 때문에 어디에 어디에는 이거를 보내는 것이 더 비효율적일 수도 있고 가장 가까운 데에다가 나는 그냥 빠져 공급망을 사용하는 거는 문제가 없지만 막 멀리 있는 데까지 굳이 그런 걸 보내기에는 시간이나 이런 뭐 경제적인 제약이 있을 수 그런 그런 것까지 고려를 하는 거네 좀 후 순위 담긴 거 같고 그 민찬이 말씀했듯이 우선 가장 기본적인 모델은 재생산지수 같은 거를 이렇게 생각을 해 가지고 그 각 지역별로 그게 다르잖아요']\nTextrank: ['문제가 있었다', '그 알고리즘으로 막상 해 놨길래 했는데 거기서 그 의료인들이 거의 다 빠지고 1순위에서 의료인들이 제일 필요한데 개를 다 빠지고 다른데 접종을 하는 것 때문에 막 난리가 났다고 그래가지고 알고리즘을 다시 만나기로 했어 나 뭐 이런 얘기 하다', '그래서 거기서 어떻게 그 어떤 자료들을 사용을 한다면은 아까 말씀했듯이 그 바이러스 노출도 직업에 따른 바이러스 노출도 그러나 이 그걸 가진 크게 두 개로 받고 멜론에서 나이가 65세 이상 해결하는 거 아닌가 25세 이하라던가 하면은 그 조금 더 가산점을 주는 거라던가 아니면은 뭐 직업 특성상 뭐 그 의도가 이런데 종사하면 가산점을 주는 거고 근데 반면에 이제 조금 문제가 어떻게 될 수 있냐면 직업 특성 만 가지고 하면 내 막상 재택근무자 아들이라도 산업 군이 막 그런 그쪽에서 켜면은 태권도는 아무런 위험에 노출되어 있지 않은데 그런 식으로 바다 쪽으로 받아가지고 오히려 더 그런 그런 문제들이 아직 해결할 때 많이 남았다고 찾아왔어요', '굉장히 좋은 지적이십니다', '그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업 특성뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴 먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면 네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 
맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다']\nsummarized 그렇다면 직업 특성상 재택근무가 아예 불가능한 집중된 섞여 있을 거 같은데 직업 특성뿐만이 아니라 그냥 바이러스 노출도 제일 중요한 거 같기도 하고 그 그게 이제 목적에 따라서 누굴 먼저 맞춰야 되니까 조금 다른게 그 감염률을 나 죽고 싶으면 네 사실 뭐랄까 제일 활발하게 활동하는 새끼들을 먼저 맞춰야지 그래야지 지금 뭐 그 퍼트리고 다니는 애들이 안 걸려 있으니까 그래서 그런 경우 조금 야외활동이라던가 이런 것들이 활동이 많은 젊은 층들이 맞춰야 되는 반면에 치사율을 낮추고자를 목표로 하면은 우선은 노인 인구가 가장 큰 높으니까 애들 먼저 맞춰야 된다 이런 일이 있어가지고 달달 말씀이시죠 목적에 따라 달라진다.\n"
],
[
"sum",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"- Speaker identification would be a nice addition -- resolved if clova NOTE is used\r\n\r\n> Free APIs seem to offer supervised recognition only\r\n\r\n> Google Speech API does not support speaker diarization for Korean\r\n\r\n- Add a loop to handle audio being cut into time-based chunks\r\n\r\n- Build a basic web framework\r\n\r\n- What shape should the output take?\r\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0889c56674bc0adf2bb6749c63a7d30fec3896f | 123,416 | ipynb | Jupyter Notebook | build/html/python_scripts1.ipynb | krishna-harsha/Tolerance-to-modern-slavery | 63f012976f666e8dc0571d37c19234adcb7547d2 | [
"MIT"
] | null | null | null | build/html/python_scripts1.ipynb | krishna-harsha/Tolerance-to-modern-slavery | 63f012976f666e8dc0571d37c19234adcb7547d2 | [
"MIT"
] | null | null | null | build/html/python_scripts1.ipynb | krishna-harsha/Tolerance-to-modern-slavery | 63f012976f666e8dc0571d37c19234adcb7547d2 | [
"MIT"
] | null | null | null | 51.466222 | 572 | 0.446401 | [
[
[
"## Networks and Simulation",
"_____no_output_____"
],
[
"### Packages",
"_____no_output_____"
]
],
[
[
"%%writefile magic_functions.py\nfrom tqdm import tqdm\nfrom multiprocess import Pool\nimport scipy\nimport networkx as nx\nimport random\nimport pandas as pd\nimport numpy as np\nimport rpy2.robjects as robjects\nfrom rpy2.robjects import pandas2ri\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom tqdm.notebook import tqdm\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport pickle\nfrom scipy import stats",
"_____no_output_____"
],
[
"### read percentage of organizations in each region and market cap range \np_reg = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'reg',index_col=0)\np_med = pd.read_excel('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/prob_mats.xlsx', 'med',index_col=0)",
"_____no_output_____"
]
],
[
[
"### Generating network with desired characteristics",
"_____no_output_____"
]
],
[
[
"def create_network(N,nr,er,asa,bs_n,m_size):\n \n ### Graph generation\n ## Total organizations\n N=N\n ## region specific N\n n_regions_list=[int(0.46*N),int( 0.16*N),int( 0.38*N)]\n if (len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])!=N):\n if (len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])-N)>0:\n \n n_regions_list[0] = n_regions_list[0]+len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])-N\n else:\n n_regions_list[0] = n_regions_list[0]-len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])+N\n\n \n\n g = nx.random_partition_graph(n_regions_list, p_in= 0.60, p_out=0.15, seed=123, directed=True)\n\n edge_list_df=pd.DataFrame(list(g.edges(data=True)))\n edge_list_df.columns=['source','target','weight']\n \n \n ###\n #calculate n of b,bs,s\n nr_n=[int(nr[0]*n_regions_list[0]),int(nr[1]*n_regions_list[0]),int(nr[2]*n_regions_list[0])]\n er_n=[int(er[0]*n_regions_list[1]),int(er[1]*n_regions_list[1]),int(er[2]*n_regions_list[1])]\n asa_n=[int(asa[0]*n_regions_list[2]),int(asa[1]*n_regions_list[2]),int(asa[2]*n_regions_list[2])]\n\n if (np.sum(nr_n)<n_regions_list[0]):\n nr_n[0]=nr_n[0]+(n_regions_list[0]-np.sum(nr_n))\n if (np.sum(er_n)<n_regions_list[1]):\n er_n[0]=er_n[0]+(n_regions_list[1]-np.sum(er_n))\n if (np.sum(asa_n)<n_regions_list[2]):\n asa_n[0]=asa_n[0]+(n_regions_list[2]-np.sum(asa_n))\n ## if bs n controlled \n k_diff=nr_n[2]-int((nr_n[0]+nr_n[2])/((nr_n[0]/nr_n[2])+1+bs_n))\n nr_n[2]=nr_n[2]-k_diff\n nr_n[0]=nr_n[0]+k_diff\n\n k_diff=er_n[2]-int((er_n[0]+er_n[2])/((er_n[0]/er_n[2])+1+bs_n))\n er_n[2]=er_n[2]-k_diff\n er_n[0]=er_n[0]+k_diff\n\n k_diff=asa_n[2]-int((asa_n[0]+asa_n[2])/((asa_n[0]/asa_n[2])+1+bs_n))\n asa_n[2]=asa_n[2]-k_diff\n asa_n[0]=asa_n[0]+k_diff \n\n # choose b , s , bs\n #nr\n list1=range(0,n_regions_list[0])\n random.seed(10)\n list1_0=random.sample(list1, nr_n[0])\n 
random.seed(10)\n list1_1=random.sample(pd.DataFrame(set(list1)-set(list1_0)).iloc[:,0].tolist(),nr_n[1])\n random.seed(10)\n list1_2=random.sample(pd.DataFrame(set(list1)-(set(list1_1).union(set(list1_0)))).iloc[:,0].tolist(),nr_n[2])\n\n #eur\n list2=range(0+n_regions_list[0],n_regions_list[1]+n_regions_list[0])\n random.seed(10)\n list2_0=random.sample(list2, er_n[0])\n random.seed(10)\n list2_1=random.sample(pd.DataFrame(set(list2)-set(list2_0)).iloc[:,0].tolist(),er_n[1])\n random.seed(10)\n list2_2=random.sample(pd.DataFrame(set(list2)-(set(list2_1).union(set(list2_0)))).iloc[:,0].tolist(),er_n[2])\n\n #asi\n list3=range(0+n_regions_list[0]+n_regions_list[1],n_regions_list[2]+n_regions_list[0]+n_regions_list[1])\n random.seed(10)\n list3_0=random.sample(list3, asa_n[0])\n random.seed(10)\n list3_1=random.sample(pd.DataFrame(set(list3)-set(list3_0)).iloc[:,0].tolist(),asa_n[1])\n random.seed(10)\n list3_2=random.sample(pd.DataFrame(set(list3)-(set(list3_1).union(set(list3_0)))).iloc[:,0].tolist(),asa_n[2])\n\n #\n nodes_frame=pd.DataFrame(range(N),columns=['nodes'])\n \n nodes_frame['partition']=n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia']\n\n nodes_frame['category']=\"\"\n\n nodes_frame['category'][list1_0]=\"buyer\"\n nodes_frame['category'][list2_0]=\"buyer\"\n nodes_frame['category'][list3_0]=\"buyer\"\n\n nodes_frame['category'][list1_1]=\"both\"\n nodes_frame['category'][list2_1]=\"both\"\n nodes_frame['category'][list3_1]=\"both\"\n\n nodes_frame['category'][list1_2]=\"sup\"\n nodes_frame['category'][list2_2]=\"sup\"\n nodes_frame['category'][list3_2]=\"sup\"\n \n #\n params_sn=pd.read_csv('skew_norm_params_reg_tier_mark_size.csv',index_col=0)\n nodes_frame['ms']=\"\"\n ########### draw a market size based on region and tier \n for i in nodes_frame['nodes']:\n ps = params_sn.loc[(params_sn['tier']==nodes_frame['category'][i])&((params_sn['reg']==nodes_frame['partition'][i]))]\n #print(ps)\n 
np.random.seed(seed=123)\n nodes_frame['ms'][i] = stats.skewnorm(ps['ae'], ps['loce'], ps['scalee']).rvs(1)[0]\n\n nqn1=np.quantile(nodes_frame['ms'],0.05)\n nqn3=np.quantile(nodes_frame['ms'],0.5)\n \n nodes_frame['ms']=nodes_frame['ms']+ m_size*nodes_frame['ms']\n \n dummy=pd.DataFrame(columns=['ms'])\n dummy['ms']=range(0,N)\n \n \n for i in range(0,N):\n if nodes_frame.iloc[i,3]<=nqn1:\n dummy['ms'][i]=\"low\"\n elif nodes_frame.iloc[i,3]<=nqn3: \n dummy['ms'][i]=\"med\"\n else:\n dummy['ms'][i]=\"high\"\n \n nodes_frame['ms2']=dummy['ms']\n\n\n buy_list=list1_0+list2_0+list3_0\n sup_list=list1_2+list2_2+list3_2\n\n edge_list_df_new=edge_list_df.drop([i for i, e in enumerate(list(edge_list_df['source'])) if e in set(sup_list)],axis=0)\n new_index=range(edge_list_df_new.shape[0])\n edge_list_df_new.index=new_index\n\n edge_list_df_new=edge_list_df_new.drop([i for i, e in enumerate(list(edge_list_df_new['target'])) if e in set(buy_list)],axis=0)\n new_index=range(edge_list_df_new.shape[0])\n edge_list_df_new.index=new_index\n \n g = nx.DiGraph( )\n # Add edges and edge attributes\n for i, elrow in edge_list_df_new.iterrows():\n g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2])\n \n return [edge_list_df_new,nodes_frame,g]",
"_____no_output_____"
]
],
[
[
"### Generate initial attributes ",
"_____no_output_____"
],
[
"#### Python wrapper ",
"_____no_output_____"
]
],
[
[
"def sample_lab_attr_all_init(N):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('sampling_for_attributes_normal.R')\n # Loading the function we have defined in R.\n sampling_for_attributes_r2 = robjects.globalenv['sampling_for_attributes_normal']\n #Invoking the R function and getting the result\n df_result_r = sampling_for_attributes_r2(N)\n #Converting it back to a pandas dataframe.\n df_result = pandas2ri.rpy2py(df_result_r)\n \n return(df_result)",
"_____no_output_____"
]
],
[
[
"#### R function for beta distributed tolerance ",
"_____no_output_____"
]
],
[
[
"library(bnlearn)\nlibrary(stats)\nsampling_for_attributes_normal <- function(N){\n #' Preprocessing df to filter country\n #'\n #'\n data_orgs<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n library(bnlearn)\n library(stats)\n my_model <- readRDS(\"C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit.rds\")\n x_s <- seq(0, 1, length.out = N)\n y1<-dbeta(x_s, 1.1, 0.5)*100\n x<-y1\n #N_reg=N\n for (i in 1:length(x)){\n ## S3 method for class 'bn.fit'\n sampled_data<-rbn(my_model, n = 500)\n sampled_data[,c(1:7)] <- lapply(sampled_data[,c(1:7)], as.numeric)\n sampled_data[sampled_data <=0] <- NA\n sampled_data[sampled_data >=100] <- NA\n r_ind<-rowMeans(sampled_data, na.rm=FALSE) \n sampled_data<-sampled_data[!is.na(r_ind),]\n #head(sampled_data)\n sampled_data$score= as.numeric(rowMeans(sampled_data))#as.matrix(rowMeans(sampled_data))\n sc_diffs=abs(x[i]-sampled_data$score)\n if(i==1){\n sampled_data_f<-sampled_data[sc_diffs==min(sc_diffs),]\n }else{\n sampled_data_f<-rbind(sampled_data_f,sampled_data[sc_diffs==min(sc_diffs),])\n }\n }\n \n return(sampled_data_f)\n}\n\n",
"_____no_output_____"
]
],
[
[
"### Generate new attributes",
"_____no_output_____"
],
[
"#### Python wrapper ",
"_____no_output_____"
]
],
[
[
"def sample_lab_attr_new_B(N,reg,s_av1,s_av2):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('sampling_for_attributes.R')\n # Loading the function we have defined in R.\n sampling_for_attributes_r = robjects.globalenv['sampling_for_attributes']\n #Invoking the R function and getting the result\n df_result_r = sampling_for_attributes_r(N,reg)\n #print(df_result_r.head())\n #Converting it back to a pandas dataframe.\n df_result = pandas2ri.rpy2py(df_result_r)\n \"\"\"if (s_av2-s_av1)<2:\n s_av2=np.min([s_av2+2,64.28571])\n if (s_av2-s_av1)>0:\n s_av1=np.max([s_av1-2,0])\n else:\n s_av1=np.max([s_av2-2,0])\n \n if s_av2>100:\n s_av2=100\"\"\"\n \n sampled_data=df_result.loc[((df_result['score']>=(s_av1)) & (df_result['score']<=(s_av2)))]\n if sampled_data.shape[0]==0:\n s_av=np.mean([s_av1,s_av2])\n s_th=s_av*0.05\n sampled_data=df_result.loc[((df_result['score']>=(s_av-s_th)) | (df_result['score']<=(s_av+s_th)))]\n\n tmp_vector=np.abs(sampled_data['score']-s_av)\n #tmp_vector2=np.abs(df_result['score']-s_av)\n sampled_data=sampled_data.loc[tmp_vector==np.min(tmp_vector)]\n\n \n return(sampled_data.sample())",
"_____no_output_____"
]
],
[
[
"#### R function to sample new attributes",
"_____no_output_____"
]
],
[
[
"\n\nsampling_for_attributes <- function(N, reg){\n  #' Sample N attribute vectors from the Bayesian network fitted to the given\n  #' region (1 = North America, 2 = Europe, 3 = Asia).\n  library(bnlearn)\n\n  if (reg == 1){\n    my_model <- readRDS(\"C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_1.rds\")\n  } else if (reg == 2){\n    my_model <- readRDS(\"C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_2.rds\")\n  } else {\n    my_model <- readRDS(\"C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/model_fit_3.rds\")\n  }\n  # Oversample so that enough rows survive the range filter below\n  N_reg <- N\n  N <- N + 500\n  ## S3 method for class 'bn.fit'\n  sampled_data <- rbn(my_model, n = N)\n  sampled_data[,c(1:7)] <- lapply(sampled_data[,c(1:7)], as.numeric)\n  # Discard rows with any out-of-range attribute value\n  sampled_data[sampled_data <= 0] <- NA\n  sampled_data[sampled_data >= 100] <- NA\n  r_ind <- rowMeans(sampled_data, na.rm = FALSE)\n  sampled_data <- sampled_data[!is.na(r_ind),]\n  # Keep exactly N_reg rows and score each as the mean of its attributes\n  sampled_data <- sampled_data[sample(nrow(sampled_data), N_reg), ]\n  rownames(sampled_data) <- seq(length = nrow(sampled_data))\n  sampled_data$score <- as.numeric(rowMeans(sampled_data))\n\n  return(sampled_data)\n}\n\n",
"_____no_output_____"
]
],
[
[
"### Bayesian Network fit to attributes of an organization",
"_____no_output_____"
]
],
[
[
"library(bnlearn)\n\n\ndata<-read.csv('C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\nhead(data)\ndata1<-data[data$Region=='North America',]\ndata2<-data[data$Region=='Europe',]\ndata3<-data[data$Region=='Asia',]\n\ndata<-data[,c(2:8)]\ndata1<-data1[,c(2:8)]\ndata2<-data2[,c(2:8)]\ndata3<-data3[,c(2:8)]\n\ndim(data)\n\nsummary(data)\n\ndata[,c(1:7)] <- lapply(data[,c(1:7)], as.numeric)\ndata1[,c(1:7)] <- lapply(data1[,c(1:7)], as.numeric)\ndata2[,c(1:7)] <- lapply(data2[,c(1:7)], as.numeric)\ndata3[,c(1:7)] <- lapply(data3[,c(1:7)], as.numeric)\n\n# Learn one structure overall and one per region via hill climbing\nbn.scores <- hc(data)\nbn.scores1 <- hc(data1)\nbn.scores2 <- hc(data2)\nbn.scores3 <- hc(data3)\n\nplot(bn.scores)\nplot(bn.scores1)\nplot(bn.scores2)\nplot(bn.scores3)\n\nbn.scores\n\n\nfit = bn.fit(bn.scores, data)\nfit1 = bn.fit(bn.scores1, data1)\nfit2 = bn.fit(bn.scores2, data2)\nfit3 = bn.fit(bn.scores3, data3)\n\n\n\nfit\n\n\nbn.fit.qqplot(fit)\nbn.fit.xyplot(fit)\nbn.fit.histogram(fit)\nbn.fit.histogram(fit1)\nbn.fit.histogram(fit2)\nbn.fit.histogram(fit3)\n\n# Save each regional fit under its own file (fit1/fit2/fit3, not fit1 three times)\nsaveRDS(fit1, file = \"model_fit_1.rds\")\nsaveRDS(fit2, file = \"model_fit_2.rds\")\nsaveRDS(fit3, file = \"model_fit_3.rds\")\n\n\n## Draw 1000 samples from the fitted network (S3 method for class 'bn.fit')\nsampled_data<-rbn(fit, n = 1000)\n\nhead(sampled_data)\n\nwrite.csv(sampled_data,file = 'C:/Users/ADMIN/OneDrive/Documents/IIM_R1/proj2/sampled_data.csv')\n",
"_____no_output_____"
]
],
[
[
"### R scripts for cumulative and probability density of a tolerance score ",
"_____no_output_____"
]
],
[
[
"prob_cdf<-function(cur_sc,reg){\n data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n #head(data)\n reg_lt=unique(data$Region)\n \n data<-data[data$Region==reg_lt[reg],]\n #hist(data$Tolerance,probability=TRUE)\n #lines(density(data$Tolerance),col=\"red\")\n \n ecdff<-ecdf(data$Tolerance)\n p=1-ecdff(cur_sc)\n return(p)\n}\n\nprob_cdf_m<-function(cur_sc,msh){\n data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n #head(data)\n m_lt=c(unique(data$market_cap)[1],unique(data$market_cap)[3],unique(data$market_cap)[2])\n \n data<-data[data$market_cap==m_lt[msh],]\n #hist(data$Tolerance,probability=TRUE)\n #lines(density(data$Tolerance),col=\"red\")\n \n ecdff<-ecdf(data$Tolerance)\n p=1-ecdff(cur_sc)\n return(p)\n}\n\nprob_pdf<-function(cur_sc,reg){\n #library(MEPDF)\n data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n #head(data)\n reg_lt=unique(data$Region)\n \n data<-data[data$Region==reg_lt[reg],]\n #hist(data$Tolerance,probability=TRUE)\n #lines(density(data$Tolerance),col=\"red\")\n \n #ecdff<-epdf(data$Tolerance)\n \n kd=density(data$Tolerance)\n p= kd$y[which(abs(kd$x-cur_sc)==min(abs(kd$x-cur_sc)))]\n return(p)\n}\n\nprob_pdf_m<-function(cur_sc,msh){\n #library(MEPDF)\n \n data<-read.csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n #head(data)\n m_lt=c(unique(data$market_cap)[1],unique(data$market_cap)[3],unique(data$market_cap)[2])\n \n data<-data[data$market_cap==m_lt[msh],]\n #hist(data$Tolerance,probability=TRUE)\n #lines(density(data$Tolerance),col=\"red\")\n \n kd=density(data$Tolerance)\n p= kd$y[which(abs(kd$x-cur_sc)==min(abs(kd$x-cur_sc)))]\n return(p)\n}\n",
"_____no_output_____"
]
],
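[
[
"# Toy check (assumption: illustrative data, not the project CSV). ecdf() returns\n# a step function F with F(x) = P(score <= x), so 1 - F(x) is the upper-tail\n# probability computed by prob_cdf above.\nscores <- c(10, 20, 30, 40, 50)\necdff <- ecdf(scores)\n1 - ecdff(30)  # 0.4: two of five scores exceed 30",
"_____no_output_____"
]
],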
[
[
"### Python scripts for running R scripts ",
"_____no_output_____"
]
],
[
[
"def prob_cdf(cur_sc,reg):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('prob_cdf.R')\n # Loading the function we have defined in R.\n prob_cdf_r = robjects.globalenv['prob_cdf']\n #Invoking the R function and getting the result\n df_result_p= prob_cdf_r(cur_sc,reg)\n #Converting it back to a pandas dataframe.\n #df_result = pandas2ri.rpy2py(df_result_r)\n \n return(df_result_p)\n\ndef prob_cdf_m(cur_sc,msh):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('prob_cdf.R')\n # Loading the function we have defined in R.\n prob_cdf_m = robjects.globalenv['prob_cdf_m']\n #Invoking the R function and getting the result\n df_result_p= prob_cdf_m(cur_sc,msh)\n #Converting it back to a pandas dataframe.\n #df_result = pandas2ri.rpy2py(df_result_r)\n \n return(df_result_p)\n\ndef prob_pdf(cur_sc,reg):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('prob_cdf.R')\n # Loading the function we have defined in R.\n prob_pdf_r = robjects.globalenv['prob_pdf']\n #Invoking the R function and getting the result\n df_result_p= prob_pdf_r(cur_sc,reg)\n #Converting it back to a pandas dataframe.\n #df_result = pandas2ri.rpy2py(df_result_r)\n \n return(df_result_p)\ndef prob_pdf_m(cur_sc,msh):\n # Defining the R script and loading the instance in Python\n r = robjects.r\n r['source']('prob_cdf.R')\n # Loading the function we have defined in R.\n prob_pdf_m = robjects.globalenv['prob_pdf_m']\n #Invoking the R function and getting the result\n df_result_p= prob_pdf_m(cur_sc,msh)\n #Converting it back to a pandas dataframe.\n #df_result = pandas2ri.rpy2py(df_result_r)\n \n return(df_result_p)",
"_____no_output_____"
]
],
[
[
"### Simulation",
"_____no_output_____"
]
],
[
[
"def simulation_continous (node_attr,edge_list_df,num_sim,W,bs1,bs2,N,r_on,m_on,p_reg,p_med,probs_mat,probs_mat2,run_iter,alpha1,alpha2,alpha3,Tmp,rgn,mcp):\n #nodes and edges\n N=N\n node_attr = node_attr\n edge_list_df = edge_list_df\n #P's \n blanck_data_tot=np.empty([N,32,num_sim],dtype='object')\n #blanck_data_tot2=np.empty([N,4,num_sim],dtype='object')\n\n for i in tqdm (range (num_sim), desc=\"Running i ...\"):\n blanck_data=np.empty([N,32],dtype='object')\n #blanck_data2=np.empty([N,4],dtype='object')\n\n\n # node attr to edge attr\n df_3=cosine_similarity(node_attr.iloc[:,:8])\n df_4=pd.DataFrame(df_3)\n df_4.values[[np.arange(len(df_4))]*2] = np.nan\n #mat_data.head()\n edge_list_2=df_4.stack().reset_index()\n edge_list_2.columns=['source','target','weight']\n #edge_list_2.head()\n edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target'])\n #edge_list_f.head()\n edge_list_f.drop('weight_x',axis=1,inplace=True)\n edge_list_f.columns=['source','target','weight']\n #edge_list_f.head()\n st = [\"high\",\"low\"]\n \n ###########################################################\n for j in tqdm (range(0,N), desc=\"Running j...\"):\n #N=np.float(N)\n if len(list(np.where(edge_list_f.iloc[:,1]==j)[0]))>=1:\n #################################################################################################### \n ########################################## MIMETIC##################################################\n ####################################################################################################\n st=[\"high\",\"low\"]\n st=pd.DataFrame(st)\n st.columns=['state']\n\n #Index in node attributes df['partitions'] == jth row partition column \n p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j,10]])]\n t_node_attr = node_attr.iloc[p_tier_ind,:]\n #t_node_attr=t_node_attr.reset_index().iloc[:,1:]\n #t_node_attr.head()\n\n\n\n 
t_node_attr_score=t_node_attr['score'].copy()\n t_node_attr_score=t_node_attr_score.reset_index().iloc[:,1:]\n #t_node_attr_score\n\n #t_node_attr.index[tnr]\n\n for tnr in range(0,t_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<t_node_attr_score['score'][tnr]:\n t_node_attr['state'][t_node_attr.index[tnr]]='high'\n else:\n t_node_attr['state'][t_node_attr.index[tnr]]='low'\n\n tier_p=pd.DataFrame(t_node_attr['state'].value_counts()/np.sum(t_node_attr['state'].value_counts()))\n tier_p=tier_p.reset_index()\n tier_p.columns=['state','t_p']\n #tier_p\n\n t_tier_p=pd.merge(st,tier_p,how=\"left\",left_on=['state'],right_on='state')\n t_tier_p=t_tier_p.fillna(0.01)\n tier_p=t_tier_p\n #tier_p\n ############################################################################### \n #d_tier.index\n\n #pd.DataFrame(node_attr.iloc[p_tier_ind,-2-2-1])\n #df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j].reset_index().iloc[:,-1]\n\n #states and distances \n #d_tier=pd.concat([node_attr.iloc[p_tier_ind,-2-2-1],\n # df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1)\n d_tier=pd.concat([t_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1)\n\n #print(Ld)\n #d_tier=d_tier.drop([j])\n #d_tier=d_tier.reset_index()\n\n d_tier=d_tier.fillna(1)\n\n #and average disances per state\n d_tier_avg=d_tier.groupby(['state']).mean(str(j))\n #d_tier_avg\n\n\n\n s_tier_avg=pd.DataFrame(t_node_attr.groupby(['state']).mean()['score'])\n s_tier_avg=pd.merge(st,s_tier_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n #s_tier_avg\n\n ## state local prob and avg distance\n mimetic_p=pd.merge(tier_p,d_tier_avg, how='left', left_on=['state'], right_on = ['state'])\n mimetic_p=pd.merge(mimetic_p,s_tier_avg, how='left', left_on=['state'], right_on = ['state'])\n\n #mimetic_p\n\n mimetic_p.columns=['state','tier_p','cur_node','score_m']\n mimetic_p['tier_p'] = 
mimetic_p['tier_p']/np.sum(mimetic_p['tier_p'])\n #mimetic_p\n\n #round(mimetic_p['score_m'][0])\n\n ################################################ \n region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j,9]])]\n ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j,12]])]\n\n h_reg=prob_pdf(round(round(mimetic_p['score_m'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(mimetic_p['score_m'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n pbreg.columns=['state','pbreg']\n #pbreg\n\n h_reg=prob_pdf_m(round(round(mimetic_p['score_m'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n l_reg=prob_pdf_m(round(round(mimetic_p['score_m'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n mimetic_p['pbreg_m']=pbreg['pbreg']\n mimetic_p['pbm_m']=pbm['pbm']\n #mimetic_p\n #################################################################################################### \n ########################################## Local & Global / inform reg & normative #################\n ####################################################################################################\n #Index in node attributes df for rows with target column == j\n prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,1]==j].iloc[:,0])] \n #Index in node attributes df for rows with target column == j\n prnt_ind2 = [i for i, e in 
enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,0]==j].iloc[:,1])] \n\n l_node_attr = node_attr.iloc[prnt_ind,:]\n l_node_attr_score=l_node_attr['score'].copy()\n l_node_attr_score=l_node_attr_score.reset_index().iloc[:,1:]\n\n\n #len(l_node_attr.iloc[:,-2-2-1])\n\n #l_node_attr.loc[j]\n\n for tnr in range(0,l_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<l_node_attr_score['score'][tnr]:\n l_node_attr['state'][l_node_attr.index[tnr]]='high'\n else:\n l_node_attr['state'][l_node_attr.index[tnr]]='low'\n\n l2_node_attr = node_attr.iloc[prnt_ind2,:]\n l2_node_attr_score=l2_node_attr['score'].copy()\n l2_node_attr_score=l2_node_attr_score.reset_index().iloc[:,1:]\n for tnr in range(0,l2_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<l2_node_attr_score['score'][tnr]:\n l2_node_attr['state'][l2_node_attr.index[tnr]]='high'\n else:\n l2_node_attr['state'][l2_node_attr.index[tnr]]='low'\n\n\n\n #Lp1\n\n if len(prnt_ind2)>0:\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp1 = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp1 = Lp1.reset_index()\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp2 = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp2 = Lp2.reset_index()\n Lp1=pd.merge(st,Lp1,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp2=pd.merge(st,Lp2,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp=pd.merge(Lp1,Lp2,how=\"left\",left_on=['state_x'],right_on='state_x')\n #print(Lp.head())\n Lp['state']=bs1*Lp['state_y_x']+bs2*Lp['state_y_y']\n Lp=Lp.iloc[:,[0,5]]\n Lp.columns=['index','state']\n #print(Lp1.head())\n #print(Lp2.head())\n\n\n else:\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp = 
pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp = Lp.reset_index()\n #print(Lp)\n Lp=pd.merge(st,Lp,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp=Lp.iloc[:,[0,2]]\n #print(Lp)\n Lp.columns=['index','state'] \n\n #Lp.head()\n\n if len(prnt_ind2)>0:\n\n #states and distances \n Ld1=pd.concat([l_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1)\n\n Lad1=Ld1.groupby(['state']).mean()\n\n #states and distances \n Ld2=pd.concat([l2_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1)\n #Lp2.head()\n Lad2=Ld2.groupby(['state']).mean()\n Lad1=pd.merge(st,Lad1,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n Lad2=pd.merge(st,Lad2,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n Lad=pd.merge(Lad1,Lad2,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n #print(Lad)\n Lad['state_n']=bs1*Lad[str(j)+'_x']+bs2*Lad[str(j)+'_y']\n Lad=Lad.iloc[:,[0,3]]\n Lad.columns=['state',str(j)]\n Lad.index=Lad['state']\n Lad=Lad.iloc[:,1]\n #print(Lad.head())\n s_l1_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score'])\n s_l1_avg=pd.merge(st,s_l1_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l2_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score'])\n s_l2_avg=pd.merge(st,s_l2_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l_avg=pd.merge(s_l1_avg,s_l2_avg,how=\"left\",left_on=['state'],right_on='state')\n #print(s_l_avg)\n s_l_avg['score_n']=bs1*s_l_avg['score'+'_x']+bs2*s_l_avg['score'+'_y']\n s_l_avg=s_l_avg.iloc[:,[0,3]]\n s_l_avg.columns=['state','score']\n\n else:\n #states and distances \n Ld=pd.concat([l_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1)\n #print(Ld)\n #and average disances per state\n 
Lad=Ld.groupby(['state']).mean()#str(j)\n Lad=pd.merge(st,Lad,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n\n Lad=Lad.reset_index()\n\n s_l_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score'])\n s_l_avg=pd.merge(st,s_l_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l_avg=s_l_avg.reset_index()\n\n #Lad.head()\n #print(Lad)\n\n #Lad\n\n #s_l_avg\n\n #print(dist_local)\n if len(prnt_ind2)>0:\n\n ## state local prob and avg distance\n dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state'])\n dist_local.columns=['state','local_prob','cur_node_l']\n #dist_local\n\n dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state'])\n else :\n #bs1*s_l_avg['score'+'_x']+s_l_avg*Lad['score'+'_y']\n dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state'])\n dist_local=dist_local.iloc[:,[0,1,4]]\n dist_local.columns=['state','local_prob','cur_node_l']\n #dist_local\n #print(s_l_avg)\n\n dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state'])\n dist_local=dist_local.iloc[:,[0,1,2,4]]\n\n\n\n #dist_local\n\n dist_local.columns=['state','local_prob','cur_node_l','score_l']\n\n dist_local['local_prob']=dist_local['local_prob']/np.sum(dist_local['local_prob'])\n\n #print(dist_local)\n\n h_reg=prob_pdf(round(round(dist_local['score_l'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(dist_local['score_l'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n pbreg.columns=['state','pbreg']\n #pbreg\n\n h_reg=prob_pdf_m(round(round(dist_local['score_l'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n 
l_reg=prob_pdf_m(round(round(dist_local['score_l'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n dist_local['pbreg_l']=pbreg['pbreg']\n dist_local['pbm_l']=pbm['pbm']\n #dist_local\n\n ## global prob\n #glb_p=pd.DataFrame(node_attr['state'].value_counts()/np.sum(node_attr['state'].value_counts()))\n #glb_p=glb_p.reset_index()\n #glb_p.columns=['state','g_p']\n st=[\"high\",\"low\"]\n st=pd.DataFrame(st)\n st.columns=['state']\n #Index in node attributes df['partitions'] == jth row partition column \n p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j,9]])]\n r_node_attr = node_attr.iloc[p_region_ind,:]\n\n r_node_attr_score=r_node_attr['score'].copy()\n r_node_attr_score=r_node_attr_score.reset_index().iloc[:,1:]\n for tnr in range(0,r_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<r_node_attr_score['score'][tnr]:\n r_node_attr['state'][r_node_attr.index[tnr]]='high'\n else:\n r_node_attr['state'][r_node_attr.index[tnr]]='low'\n\n\n\n\n glb_p=pd.DataFrame(r_node_attr['state'].value_counts()/np.sum(r_node_attr['state'].value_counts()))\n\n\n glb_p=glb_p.reset_index()\n glb_p.columns=['state','g_p']\n\n t_glb_p=pd.merge(st,glb_p,how=\"left\",left_on=['state'],right_on='state')\n t_glb_p=t_glb_p.fillna(0.01)\n glb_p=t_glb_p\n\n #print(glb_p)\n #states and distances \n gd=pd.concat([r_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[p_region_ind,:].index),j] ],axis=1)\n #print(gd)\n\n #and average disances per state\n gad=gd.groupby(['state']).mean(str(j))\n gad=pd.merge(st,gad,how=\"left\",left_on=['state'],right_on='state')\n #gad.reset_index(inplace=True)\n #print(gad)\n 
s_g_avg=pd.DataFrame(r_node_attr.groupby(['state']).mean()['score'])\n s_g_avg=pd.merge(st,s_g_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n #s_g_avg\n\n ## state local prob and avg distance\n dist_global=pd.merge(glb_p,gad, how='left', left_on=['state'], right_on = ['state'])\n dist_global=pd.merge(dist_global,s_g_avg, how='left', left_on=['state'], right_on = ['state'])\n\n #dist_local\n\n dist_global.columns=['state','glob_prob','cur_node_g','score_g']\n\n dist_global['glob_prob'] =dist_global['glob_prob']/np.sum(dist_global['glob_prob'])\n #print(dist_global)\n\n h_reg=prob_pdf(round(round(dist_global['score_g'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(dist_global['score_g'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n pbreg.columns=['state','pbreg']\n #pbreg\n\n h_reg=prob_pdf_m(round(round(dist_global['score_g'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n l_reg=prob_pdf_m(round(round(dist_global['score_g'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n dist_global['pbreg_g']=pbreg['pbreg']\n dist_global['pbm_g']=pbm['pbm']\n #dist_global\n\n #print('glb_p')\n if (((i+1)*(j+1)) % 5000) ==0: print(dist_global)\n ## all memetic\n dist_local_global=pd.merge(dist_global,dist_local, how='left', left_on=['state'], right_on = ['state'])\n dist_local_global=dist_local_global.fillna(0.01)\n\n 
#dist_local_global['m_p']=dist_local_global.product(axis=1)/np.sum(dist_local_global.product(axis=1))\n #print(dist_local_global)\n #\n #################################################################################################### \n ########################################## All_ Pressures ##########################################\n ####################################################################################################\n #\n # All presures\n all_p = pd.merge(mimetic_p,dist_local_global,how='left', left_on=['state'], right_on = ['state'])\n all_p=all_p.fillna(0.01)\n #all_p = pd.merge(all_p,mimetic_p,how='left', left_on=['state'], right_on = ['state'])\n #all_p \n #= all_p.iloc[:,[0,4,5,6]]\n\n #0.25*all_p.iloc[:,3:5].product(axis=1)\n\n #all_p.iloc[:,3:5]\n\n #w1=w2=w3=w4=0.25\n #all_p\n\n #w1*all_p['tier_p'][0]*all_p['cur_node'][0]*all_p['score_m'][0]\n\n #w1=w2=w3=w4=0.25\n\n all_p_tpd=all_p.copy()\n #all_p_tpd\n all_p_new=pd.DataFrame(all_p_tpd['state'])\n #print(all_p_new)\n all_p_new['tier_p']=(all_p_tpd['tier_p']*all_p_tpd['cur_node'])/np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node'])\n all_p_new['glob_prob']=(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])/np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])\n all_p_new['local_prob']=(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])/np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])\n all_p_new['pbreg']=(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])/np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])\n all_p_new['pbm']=(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])/np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])\n all_p_new['score_m']=all_p_tpd['score_m']\n all_p_new['score_l']=all_p_tpd['score_l']\n all_p_new['score_g']=all_p_tpd['score_g']\n\n #pd.DataFrame(all_p_new)\n all_p=all_p_new\n\n\n \"\"\"\n if r_on==1: \n rpbr =[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]]\n else:\n rpbr =[1,1,1]\n if 
m_on==1: \n mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]]\n else:\n mpbm =[1,1,1]\n\n ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4))\n ptotall=1-ptotalh\n ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal'])\n ptot.index=all_p.index\n all_p['ptotal']=ptot['ptotal']\n #all_p\n \"\"\"\n if r_on==1: \n rpbr =all_p['pbreg'][0]\n else:\n rpbr =0\n if m_on==1: \n mpbm =all_p['pbm'][0]\n else:\n mpbm =0\n rpmp=(all_p['pbreg']*all_p['pbm'])/np.sum(all_p['pbreg']*all_p['pbm'])\n if r_on==0:\n rpmp[0]=1\n rpmp[1]=1\n \n ###### multivariate normal\n nsd2=list()\n for repeat in range(0,100):\n\n nsd=list()\n for mni in range(0,3):\n\n nsd.append(np.random.normal(0,1))\n nsd2.append(np.random.multivariate_normal([0]*3,([nsd]*3)))\n #for ni,nsd2i in enumerate(nsd2):\n # nsd2[ni]=np.round_(nsd2i,2)\n nsd2=list(np.round_(pd.DataFrame(nsd2).mean(axis=0),2))\n ## 2\n if (j==0):\n nsd3=list()\n for repeat in range(0,100):\n\n nsd=list()\n for mni in range(0,3):\n\n nsd.append(np.random.normal(0,1))\n nsd3.append(np.random.multivariate_normal([0]*3,([nsd]*3)))\n #for ni,nsd2i in enumerate(nsd2):\n # nsd2[ni]=np.round_(nsd2i,2)\n nsd3=list(np.round_(pd.DataFrame(nsd3).mean(axis=0),2))\n #### normal\n\n epsilon_l=list()\n for repeat in range(0,100):\n epsilon_l.append(np.random.normal(0,1))\n epsilon=np.mean(epsilon_l) \n ####\n\n \"\"\"\n if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)):\n w1=W[0]\n w2=W[1]\n w3=W[2]\n else:\n w1=W[3]\n w2=W[4]\n w3=W[5]\n \"\"\" \n ####\n if 
((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[0]\n w2=W[1]\n w3=W[2]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[3]\n w2=W[4]\n w3=W[5]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[6]\n w2=W[7]\n w3=W[8]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[9]\n w2=W[10]\n w3=W[11]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[12]\n w2=W[13]\n w3=W[14]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[15]\n w2=W[16]\n w3=W[17]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[18]\n w2=W[19]\n w3=W[20]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[21]\n w2=W[22]\n w3=W[23]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[24]\n w2=W[25]\n w3=W[26]\n else :\n w1=0.333\n w2=0.333\n w3=0.333\n\n #ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm)))\n #ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0]))\n ptotalh=((np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n 
(nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[0]))\n\n #ptotalh=ptotalh/np.sum(ptotalh)\n\n #ptotall=1-ptotalh\n ptotall=(1/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[1])\n \n ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal'])\n ptot.index=all_p.index\n all_p['ptotal']=ptot['ptotal']\n all_p['ptotal']=all_p['ptotal']/np.sum(all_p['ptotal'])\n #all_p\n\n\n #print(all_p)\n\n #0.6224593312018546\n \"\"\"\n d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0]\n \"\"\"\n \n if np.count_nonzero([w1,w2,w3])!=0:\n if all_p['ptotal'][0]>0.6224593312018546: \n #0.6224593312018546:\n d_s_ind=0\n elif all_p['ptotal'][0]<0.6224593312018546:\n #0.6224593312018546:\n d_s_ind=1\n else:\n d_s_ind = 1 if np.random.random()<0.5 else 0\n else:\n if all_p['ptotal'][0]>0.5:\n d_s_ind=0\n elif all_p['ptotal'][0]<0.5:\n d_s_ind=1\n else:\n d_s_ind = 1 if np.random.random()<0.5 else 0\n \"\"\"u = np.random.uniform()\n if all_p['ptotal'][0]>u:\n d_s_ind=0\n else:\n d_s_ind=1\"\"\"\n\n #print(d_s_ind)\n \"\"\" \n if r_on==1: \n rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]]\n else:\n rpbr =[1,1,1]\n if m_on==1: \n mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]]\n else:\n mpbm =[1,1,1]\n \"\"\"\n if r_on==1: \n rpbr =all_p['pbreg'][d_s_ind]\n else:\n rpbr =0\n if m_on==1: \n mpbm =all_p['pbm'][d_s_ind]\n else:\n mpbm =0\n\n 
\"\"\"s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])\"\"\"\n \"\"\"s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)\"\"\"\n # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n # s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n\n if w1==0:\n s_av1=np.min([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n elif w2==0:\n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]])\n elif w3==0:\n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]])\n elif np.count_nonzero([w1,w2,w3])==1:\n s_av1=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]]\n s_av2=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]]\n elif np.count_nonzero([w1,w2,w3])==0:\n s_av1=node_attr['score'][j]\n s_av2=node_attr['score'][j]\n else:\n \n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n\n \n\n\n\n\n\n #s_av\n\n #region_ind\n\n #print(all_p)\n probs_mat[i,j]=np.max(all_p['ptotal'])\n if i==0:\n 
probs_mat2[i,j]=np.max(all_p['ptotal'])\n else:\n probs_mat2[i+j,:]=probs_mat[i,j]\n\n ## hihest prob label\n #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0]\n #desired_state = all_p['state'][d_s_ind] \n #desired_state\n #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0]\n\n\n\n ##### draw attributes with given label\n\n \"\"\"sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)\"\"\"\n \"\"\"if s_av1==s_av2:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)\"\"\"\n if s_av1==s_av2:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12)\n else:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2)\n \n \n \n \n #################################################################################################### \n ########################################## Update attributes ######################################\n ####################################################################################################\n ## update node attributes \n for k,replc in enumerate(sample_df_1.values[0]):\n node_attr.iloc[j,k]=replc \n ## update edge attributes\n # node attr to edge attr\n df_3=cosine_similarity(node_attr.iloc[:,:8])\n df_4=pd.DataFrame(df_3)\n df_4.values[[np.arange(len(df_4))]*2] = np.nan\n #mat_data.head()\n edge_list_2=df_4.stack().reset_index()\n edge_list_2.columns=['source','target','weight']\n #edge_list_2.head()\n edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', 
left_on=['source','target'], right_on = ['source','target'])\n #edge_list_f.head()\n edge_list_f.drop('weight_x',axis=1,inplace=True)\n edge_list_f.columns=['source','target','weight']\n\n\n\n for k,replc in enumerate(node_attr.iloc[j,:].values):\n blanck_data[j,k]=replc \n\n for k,replc in enumerate(all_p.iloc[0,1:].values):\n blanck_data[j,k+13]=replc \n blanck_data[j,29]=j\n blanck_data[j,30]=i\n blanck_data[j,31]=all_p['state'][d_s_ind]\n\n\n #blanck_data2[:,:2,i]=np.array(edge_list_f) \n\n\n\n else:\n\n ####2\n #################################################################################################### \n ########################################## MIMETIC##################################################\n ####################################################################################################\n st=[\"high\",\"low\"]\n st=pd.DataFrame(st)\n st.columns=['state']\n\n #Index in node attributes df['partitions'] == jth row partition column \n p_tier_ind = [i for i, e in enumerate(list(node_attr['tier'])) if e in set([node_attr.iloc[j,10]])]\n t_node_attr = node_attr.iloc[p_tier_ind,:]\n #t_node_attr=t_node_attr.reset_index().iloc[:,1:]\n #t_node_attr.head()\n\n\n\n t_node_attr_score=t_node_attr['score'].copy()\n t_node_attr_score=t_node_attr_score.reset_index().iloc[:,1:]\n #t_node_attr_score\n\n #t_node_attr.index[tnr]\n\n for tnr in range(0,t_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<t_node_attr_score['score'][tnr]:\n t_node_attr['state'][t_node_attr.index[tnr]]='high'\n else:\n t_node_attr['state'][t_node_attr.index[tnr]]='low'\n\n tier_p=pd.DataFrame(t_node_attr['state'].value_counts()/np.sum(t_node_attr['state'].value_counts()))\n tier_p=tier_p.reset_index()\n tier_p.columns=['state','t_p']\n #tier_p\n\n t_tier_p=pd.merge(st,tier_p,how=\"left\",left_on=['state'],right_on='state')\n t_tier_p=t_tier_p.fillna(0.01)\n tier_p=t_tier_p\n #tier_p\n\n #d_tier.index\n\n #pd.DataFrame(node_attr.iloc[p_tier_ind,-2-2-1])\n 
#df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j].reset_index().iloc[:,-1]\n\n #states and distances \n #d_tier=pd.concat([node_attr.iloc[p_tier_ind,-2-2-1],\n # df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1)\n d_tier=pd.concat([t_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[p_tier_ind,:].index),j] ],axis=1)\n\n #print(Ld)\n #d_tier=d_tier.drop([j])\n #d_tier=d_tier.reset_index()\n\n d_tier=d_tier.fillna(1)\n\n #and average disances per state\n d_tier_avg=d_tier.groupby(['state']).mean(str(j))\n #d_tier_avg\n\n\n\n s_tier_avg=pd.DataFrame(t_node_attr.groupby(['state']).mean()['score'])\n s_tier_avg=pd.merge(st,s_tier_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n #s_tier_avg\n\n ## state local prob and avg distance\n mimetic_p=pd.merge(tier_p,d_tier_avg, how='left', left_on=['state'], right_on = ['state'])\n mimetic_p=pd.merge(mimetic_p,s_tier_avg, how='left', left_on=['state'], right_on = ['state'])\n\n #mimetic_p\n\n mimetic_p.columns=['state','tier_p','cur_node','score_m']\n mimetic_p['tier_p'] = mimetic_p['tier_p']/np.sum(mimetic_p['tier_p'])\n #mimetic_p\n\n #round(mimetic_p['score_m'][0])\n\n ################################################ regulatary mem\n region_ind = [i for i, e in enumerate(list(p_reg.columns)) if e in set([node_attr.iloc[j,9]])]\n ms_ind = [i for i, e in enumerate(list(p_med.columns)) if e in set([node_attr.iloc[j,12]])]\n\n h_reg=prob_pdf(round(round(mimetic_p['score_m'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(mimetic_p['score_m'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n pbreg.columns=['state','pbreg']\n #pbreg\n\n 
h_reg=prob_pdf_m(round(round(mimetic_p['score_m'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n l_reg=prob_pdf_m(round(round(mimetic_p['score_m'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n mimetic_p['pbreg_m']=pbreg['pbreg']\n mimetic_p['pbm_m']=pbm['pbm']\n #mimetic_p\n #################################################################################################### \n ########################################## Local & Global / inform reg & normative #################\n ####################################################################################################\n #Index in node attributes df for rows with target column == j\n \"\"\"prnt_ind = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,1]==j].iloc[:,0])]\"\"\" \n #Index in node attributes df for rows with target column == j\n prnt_ind2 = [i for i, e in enumerate(list(node_attr.index)) if e in set(edge_list_f.loc[edge_list_f.iloc[:,0]==j].iloc[:,1])] \n\n \"\"\"l_node_attr = node_attr.iloc[prnt_ind,:]\n l_node_attr_score=l_node_attr['score'].copy()\n l_node_attr_score=l_node_attr_score.reset_index().iloc[:,1:]\n\n\n #len(l_node_attr.iloc[:,-2-2-1])\n\n #l_node_attr.loc[j]\n\n for tnr in range(0,l_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<l_node_attr_score['score'][tnr]:\n l_node_attr['state'][l_node_attr.index[tnr]]='high'\n else:\n l_node_attr['state'][l_node_attr.index[tnr]]='low'\"\"\"\n\n l2_node_attr = node_attr.iloc[prnt_ind2,:]\n l2_node_attr_score=l2_node_attr['score'].copy()\n l2_node_attr_score=l2_node_attr_score.reset_index().iloc[:,1:]\n for tnr in range(0,l2_node_attr.shape[0]):\n if 
node_attr.iloc[j,:]['score']<l2_node_attr_score['score'][tnr]:\n l2_node_attr['state'][l2_node_attr.index[tnr]]='high'\n else:\n l2_node_attr['state'][l2_node_attr.index[tnr]]='low'\n\n\n\n #Lp1\n\n \"\"\"if len(prnt_ind2)>0:\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp1 = pd.DataFrame(l_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp1 = Lp1.reset_index()\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp2 = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp2 = Lp2.reset_index()\n Lp1=pd.merge(st,Lp1,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp2=pd.merge(st,Lp2,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp=pd.merge(Lp1,Lp2,how=\"left\",left_on=['state_x'],right_on='state_x')\n #print(Lp.head())\n Lp['state']=bs1*Lp['state_y_x']+bs2*Lp['state_y_y']\n Lp=Lp.iloc[:,[0,5]]\n Lp.columns=['index','state']\n #print(Lp1.head())\n #print(Lp2.head())\n\n\n else:\"\"\"\n #states prob of parent nodes(can also clculate d*count probabilities)\n Lp = pd.DataFrame(l2_node_attr.iloc[:,-2-2-1].value_counts()/np.sum(l2_node_attr.iloc[:,-2-2-1].value_counts()))\n Lp = Lp.reset_index()\n #print(Lp)\n Lp=pd.merge(st,Lp,how=\"left\",left_on=['state'],right_on='index').fillna(0.01)\n Lp=Lp.iloc[:,[0,2]]\n #print(Lp)\n Lp.columns=['index','state'] \n #print(Lp)\n \n\n \n #Lp.head()\n\n \"\"\"if len(prnt_ind2)>0:\n\n #states and distances \n Ld1=pd.concat([l_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind,:].index),j] ],axis=1)\n\n Lad1=Ld1.groupby(['state']).mean()\n\n #states and distances \n Ld2=pd.concat([l2_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1)\n #Lp2.head()\n Lad2=Ld2.groupby(['state']).mean()\n Lad1=pd.merge(st,Lad1,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n 
Lad2=pd.merge(st,Lad2,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n Lad=pd.merge(Lad1,Lad2,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n #print(Lad)\n Lad['state_n']=bs1*Lad[str(j)+'_x']+bs2*Lad[str(j)+'_y']\n Lad=Lad.iloc[:,[0,3]]\n Lad.columns=['state',str(j)]\n #print(Lad.head())\n s_l1_avg=pd.DataFrame(l_node_attr.groupby(['state']).mean()['score'])\n s_l1_avg=pd.merge(st,s_l1_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l2_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score'])\n s_l2_avg=pd.merge(st,s_l2_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l_avg=pd.merge(s_l1_avg,s_l2_avg,how=\"left\",left_on=['state'],right_on='state')\n #print(s_l_avg)\n s_l_avg['score_n']=bs1*s_l_avg['score'+'_x']+bs2*s_l_avg['score'+'_y']\n s_l_avg=s_l_avg.iloc[:,[0,3]]\n s_l_avg.columns=['state','score']\n\n else:\"\"\"\n #states and distances \n Ld=pd.concat([l2_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[prnt_ind2,:].index),j] ],axis=1)\n #print(Ld)\n #and average disances per state\n Lad=Ld.groupby(['state']).mean()#str(j)\n #print(Lad)\n Lad=pd.merge(st,Lad,how=\"left\",left_on=['state'],right_on='state').fillna(0.01)\n #print(Lad)\n\n Lad=Lad.reset_index()\n #print(Lad)\n\n s_l_avg=pd.DataFrame(l2_node_attr.groupby(['state']).mean()['score'])\n s_l_avg=pd.merge(st,s_l_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n s_l_avg=s_l_avg.reset_index()\n\n #Lad.head()\n #print(Lad)\n #Lad.index=Lad['state']\n #Lad=Lad.iloc[:,1:]\n #print(Lad)\n #Lad\n\n #s_l_avg\n\n #bs1*s_l_avg['score'+'_x']+s_l_avg*Lad['score'+'_y']\n\n ## state local prob and avg distance\n \n dist_local=pd.merge(Lp,Lad, how='left', left_on=['index'], right_on = ['state'])\n dist_local=dist_local.iloc[:,[0,1,4]]\n dist_local.columns=['state','local_prob','cur_node_l']\n #dist_local\n 
#print(s_l_avg)\n\n dist_local=pd.merge(dist_local,s_l_avg, how='left', left_on=['state'], right_on = ['state'])\n dist_local=dist_local.iloc[:,[0,1,2,4]]\n #print(dist_local)\n\n #dist_local=dist_local.drop(['index'])\n\n #dist_local\n\n dist_local.columns=['state','local_prob','cur_node_l','score_l']\n\n dist_local['local_prob']=dist_local['local_prob']/np.sum(dist_local['local_prob'])\n\n #print(dist_local)\n\n h_reg=prob_pdf(round(round(dist_local['score_l'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(dist_local['score_l'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n pbreg.columns=['state','pbreg']\n #pbreg\n\n h_reg=prob_pdf_m(round(round(dist_local['score_l'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n l_reg=prob_pdf_m(round(round(dist_local['score_l'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n dist_local['pbreg_l']=pbreg['pbreg']\n dist_local['pbm_l']=pbm['pbm']\n #dist_local\n\n ## global prob\n #glb_p=pd.DataFrame(node_attr['state'].value_counts()/np.sum(node_attr['state'].value_counts()))\n #glb_p=glb_p.reset_index()\n #glb_p.columns=['state','g_p']\n st=[\"high\",\"low\"]\n st=pd.DataFrame(st)\n st.columns=['state']\n #Index in node attributes df['partitions'] == jth row partition column \n p_region_ind = [i for i, e in enumerate(list(node_attr['partition'])) if e in set([node_attr.iloc[j,9]])]\n r_node_attr = node_attr.iloc[p_region_ind,:]\n\n r_node_attr_score=r_node_attr['score'].copy()\n 
r_node_attr_score=r_node_attr_score.reset_index().iloc[:,1:]\n for tnr in range(0,r_node_attr.shape[0]):\n if node_attr.iloc[j,:]['score']<r_node_attr_score['score'][tnr]:\n r_node_attr['state'][r_node_attr.index[tnr]]='high'\n else:\n r_node_attr['state'][r_node_attr.index[tnr]]='low'\n\n\n\n\n glb_p=pd.DataFrame(r_node_attr['state'].value_counts()/np.sum(r_node_attr['state'].value_counts()))\n\n\n glb_p=glb_p.reset_index()\n glb_p.columns=['state','g_p']\n\n t_glb_p=pd.merge(st,glb_p,how=\"left\",left_on=['state'],right_on='state')\n t_glb_p=t_glb_p.fillna(0.01)\n glb_p=t_glb_p\n\n #print(glb_p)\n #states and distances \n gd=pd.concat([r_node_attr.iloc[:,-2-2-1],\n df_4.iloc[list(node_attr.iloc[p_region_ind,:].index),j] ],axis=1)\n #print(gd)\n\n #and average disances per state\n gad=gd.groupby(['state']).mean(str(j))\n gad=pd.merge(st,gad,how=\"left\",left_on=['state'],right_on='state')\n #gad.reset_index(inplace=True)\n #print(gad)\n s_g_avg=pd.DataFrame(r_node_attr.groupby(['state']).mean()['score'])\n s_g_avg=pd.merge(st,s_g_avg, how='left', left_on=['state'], right_on = ['state']).fillna(node_attr.iloc[j,:]['score'])\n #s_g_avg\n\n ## state local prob and avg distance\n dist_global=pd.merge(glb_p,gad, how='left', left_on=['state'], right_on = ['state'])\n dist_global=pd.merge(dist_global,s_g_avg, how='left', left_on=['state'], right_on = ['state'])\n\n #dist_local\n\n dist_global.columns=['state','glob_prob','cur_node_g','score_g']\n\n dist_global['glob_prob'] =dist_global['glob_prob']/np.sum(dist_global['glob_prob'])\n #print(dist_global)\n\n h_reg=prob_pdf(round(round(dist_global['score_g'][0])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n l_reg=prob_pdf(round(round(dist_global['score_g'][1])),region_ind[0]+1)[0]/prob_pdf(round(node_attr.iloc[j,:]['score']),region_ind[0]+1)[0]\n pbreg=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbreg'])\n pbreg.index=['high','low']\n pbreg=pbreg.reset_index()\n 
pbreg.columns=['state','pbreg']\n #pbreg\n\n h_reg=prob_pdf_m(round(round(dist_global['score_g'][0])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n l_reg=prob_pdf_m(round(round(dist_global['score_g'][1])),ms_ind[0]+1)[0]/prob_pdf_m(round(node_attr.iloc[j,:]['score']),ms_ind[0]+1)[0]\n pbm=pd.DataFrame([h_reg/(h_reg+l_reg),l_reg/(h_reg+l_reg)],columns=['pbm'])\n pbm.index=['high','low']\n pbm=pbm.reset_index()\n pbm.columns=['state','pbm']\n #pbm\n pbreg.index=mimetic_p.index\n pbm.index=mimetic_p.index\n\n\n dist_global['pbreg_g']=pbreg['pbreg']\n dist_global['pbm_g']=pbm['pbm']\n #dist_global\n\n #print('glb_p')\n #if (((i+1)*(j+1)) % 5000) ==0: print(dist_global)\n ## all memetic\n dist_local_global=pd.merge(dist_global,dist_local, how='left', left_on=['state'], right_on = ['state'])\n dist_local_global=dist_local_global.fillna(0.01)\n\n #dist_local_global['m_p']=dist_local_global.product(axis=1)/np.sum(dist_local_global.product(axis=1))\n #print(dist_local_global)\n #\n #################################################################################################### \n ########################################## All_ Pressures ##########################################\n ####################################################################################################\n #\n # All presures\n all_p = pd.merge(mimetic_p,dist_local_global,how='left', left_on=['state'], right_on = ['state'])\n all_p=all_p.fillna(0.01)\n #all_p = pd.merge(all_p,mimetic_p,how='left', left_on=['state'], right_on = ['state'])\n #all_p \n #= all_p.iloc[:,[0,4,5,6]]\n\n #0.25*all_p.iloc[:,3:5].product(axis=1)\n\n #all_p.iloc[:,3:5]\n\n #w1=w2=w3=w4=0.25\n #all_p\n\n #w1*all_p['tier_p'][0]*all_p['cur_node'][0]*all_p['score_m'][0]\n\n #w1=w2=w3=w4=0.25\n all_p_tpd=all_p.copy()\n #all_p_tpd\n all_p_new=pd.DataFrame(all_p_tpd['state'])\n #print(all_p_new)\n 
all_p_new['tier_p']=(all_p_tpd['tier_p']*all_p_tpd['cur_node'])/np.sum(all_p_tpd['tier_p']*all_p_tpd['cur_node'])\n all_p_new['glob_prob']=(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])/np.sum(all_p_tpd['glob_prob']*all_p_tpd['cur_node_g'])\n all_p_new['local_prob']=(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])/np.sum(all_p_tpd['local_prob']*all_p_tpd['cur_node_l'])\n all_p_new['pbreg']=(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])/np.sum(all_p_tpd['pbreg_m']+all_p_tpd['pbreg_g']+all_p_tpd['pbreg_l'])\n all_p_new['pbm']=(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])/np.sum(all_p_tpd['pbm_m']+all_p_tpd['pbm_g']+all_p_tpd['pbm_l'])\n all_p_new['score_m']=all_p_tpd['score_m']\n all_p_new['score_l']=all_p_tpd['score_l']\n all_p_new['score_g']=all_p_tpd['score_g']\n\n #pd.DataFrame(all_p_new)\n all_p=all_p_new\n\n\n \"\"\"\n if r_on==1: \n rpbr =[all_p['pbreg_m'][0],all_p['pbreg_l'][0],all_p['pbreg_g'][0]]\n else:\n rpbr =[1,1,1]\n if m_on==1: \n mpbm =[all_p['pbm_m'][0],all_p['pbm_l'][0],all_p['pbm_g'][0]]\n else:\n mpbm =[1,1,1]\n\n ptotalh=np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4)/(1+np.exp((w1*all_p['tier_p'][0]*all_p['cur_node'][0]*rpbr[0]*mpbm[0]*all_p['score_m'][0]+w2*all_p['local_prob'][0]*all_p['cur_node_l'][0]*rpbr[1]*mpbm[1]*all_p['score_l'][0]+w3*all_p['glob_prob'][0]*all_p['cur_node_g'][0]*rpbr[2]*mpbm[2]*all_p['score_g'][0])/w4))\n ptotall=1-ptotalh\n ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal'])\n ptot.index=all_p.index\n all_p['ptotal']=ptot['ptotal']\n #all_p\n \"\"\"\n if r_on==1: \n rpbr =all_p['pbreg'][0]\n else:\n rpbr =0\n if m_on==1: \n mpbm =all_p['pbm'][0]\n else:\n mpbm =0\n rpmp=(all_p['pbreg']*all_p['pbm'])/np.sum(all_p['pbreg']*all_p['pbm'])\n if r_on==0:\n rpmp[0]=1\n rpmp[1]=1\n 
\n ###### multivariate normal\n nsd2=list()\n for repeat in range(0,100):\n\n nsd=list()\n for mni in range(0,3):\n\n nsd.append(np.random.normal(0,1))\n nsd2.append(np.random.multivariate_normal([0]*3,([nsd]*3)))\n #for ni,nsd2i in enumerate(nsd2):\n # nsd2[ni]=np.round_(nsd2i,2)\n nsd2=list(np.round_(pd.DataFrame(nsd2).mean(axis=0),2))\n ## 2\n if (j==0):\n nsd3=list()\n for repeat in range(0,100):\n\n nsd=list()\n for mni in range(0,3):\n\n nsd.append(np.random.normal(0,1))\n nsd3.append(np.random.multivariate_normal([0]*3,([nsd]*3)))\n #for ni,nsd2i in enumerate(nsd2):\n # nsd2[ni]=np.round_(nsd2i,2)\n nsd3=list(np.round_(pd.DataFrame(nsd3).mean(axis=0),2))\n #### normal\n\n epsilon_l=list()\n for repeat in range(0,100):\n epsilon_l.append(np.random.normal(0,1))\n epsilon=np.mean(epsilon_l) \n\n\n \"\"\"\n if ((node_attr.iloc[j,9]==rgn)&(node_attr.iloc[j,12]==mcp)):\n w1=W[0]\n w2=W[1]\n w3=W[2]\n else:\n w1=W[3]\n w2=W[4]\n w3=W[5]\n \"\"\" \n ####\n if ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[0]\n w2=W[1]\n w3=W[2]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[3]\n w2=W[4]\n w3=W[5]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[6]\n w2=W[7]\n w3=W[8]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[9]\n w2=W[10]\n w3=W[11]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[12]\n w2=W[13]\n w3=W[14]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[15]\n w2=W[16]\n w3=W[17]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[18]\n w2=W[19]\n w3=W[20]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[21]\n w2=W[22]\n w3=W[23]\n elif ((node_attr.iloc[j,9]==\"NrA\")&(node_attr.iloc[j,12]==\"high\")):\n w1=W[24]\n w2=W[25]\n w3=W[26]\n else :\n w1=0.333\n w2=0.333\n w3=0.333\n\n 
#ptotalh=np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm))/(1+np.exp((w1*all_p['tier_p'][0]+w2*all_p['local_prob'][0]+w3*all_p['glob_prob'][0]+w4*rpbr+w5*mpbm)))\n #ptotalh=((np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)/(1+np.exp((w1*(all_p['tier_p'][0]+alpha1)+w2*(all_p['local_prob'][0]+alpha2)+w3*(all_p['glob_prob'][0]+alpha3))/Tmp)))*(rpmp[0]))\n ptotalh=((np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[0]))\n\n #ptotalh=ptotalh/np.sum(ptotalh)\n\n #ptotall=1-ptotalh\n ptotall=(1/(1+np.exp(((w1+alpha1)*(all_p['tier_p'][0])+(w2+alpha2)*(all_p['local_prob'][0])+(w3+alpha3)*(all_p['glob_prob'][0]) +\n (nsd2[0]+nsd3[0])*(all_p['tier_p'][0])+(nsd2[1]+nsd3[1])*(all_p['local_prob'][0])+\n (nsd2[2]+nsd3[2])*(all_p['glob_prob'][0])+epsilon)/Tmp)))*(rpmp[1])\n \n ptot=pd.DataFrame([ptotalh,ptotall],columns=['ptotal'])\n ptot.index=all_p.index\n all_p['ptotal']=ptot['ptotal']\n all_p['ptotal']=all_p['ptotal']/np.sum(all_p['ptotal'])\n #all_p\n\n\n #print(all_p)\n\n #0.6224593312018546\n \"\"\"\n d_s_ind=np.where(all_p['ptotal']==np.max(all_p['ptotal']))[0][0]\n \"\"\"\n if np.count_nonzero([w1,w2,w3])!=0:\n if all_p['ptotal'][0]>0.6224593312018546:\n #0.6224593312018546:\n d_s_ind=0\n elif all_p['ptotal'][0]<0.6224593312018546:\n #0.6224593312018546:\n d_s_ind=1\n else:\n d_s_ind = 1 if np.random.random()<0.5 else 0\n else:\n if all_p['ptotal'][0]>0.5:\n d_s_ind=0\n elif 
all_p['ptotal'][0]<0.5:\n d_s_ind=1\n else:\n d_s_ind = 1 if np.random.random()<0.5 else 0\n \"\"\"u = np.random.uniform()\n if all_p['ptotal'][0]>u:\n d_s_ind=0\n else:\n d_s_ind=1\"\"\"\n \n\n\n #print(d_s_ind)\n \"\"\" \n if r_on==1: \n rpbr =[all_p['pbreg_m'][d_s_ind],all_p['pbreg_l'][d_s_ind],all_p['pbreg_g'][d_s_ind]]\n else:\n rpbr =[1,1,1]\n if m_on==1: \n mpbm =[all_p['pbm_m'][d_s_ind],all_p['pbm_l'][d_s_ind],all_p['pbm_g'][d_s_ind]]\n else:\n mpbm =[1,1,1]\n \"\"\"\n if r_on==1: \n rpbr =all_p['pbreg'][d_s_ind]\n else:\n rpbr =0\n if m_on==1: \n mpbm =all_p['pbm'][d_s_ind]\n else:\n mpbm =0\n\n \"\"\"s_av=(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]*all_p['score_m'][d_s_ind]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]*all_p['score_l'][d_s_ind]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2]*all_p['score_g'][d_s_ind])/(w1*all_p['tier_p'][d_s_ind]*all_p['cur_node'][d_s_ind]*rpbr[0]*mpbm[0]+w2*all_p['local_prob'][d_s_ind]*all_p['cur_node_l'][d_s_ind]*rpbr[1]*mpbm[1]+w3*all_p['glob_prob'][d_s_ind]*all_p['cur_node_g'][d_s_ind]*rpbr[2]*mpbm[2])\"\"\"\n \"\"\"s_av=(w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind])/(w1+w2+w3)\"\"\"\n # s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n # s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n\n if w1==0:\n s_av1=np.min([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n elif w2==0:\n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_g'][d_s_ind]])\n elif w3==0:\n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind]])\n elif np.count_nonzero([w1,w2,w3])==1:\n 
s_av1=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]]\n s_av2=[w1*all_p['score_m'][d_s_ind]+w2*all_p['score_l'][d_s_ind]+w3*all_p['score_g'][d_s_ind]]\n elif np.count_nonzero([w1,w2,w3])==0:\n s_av1=node_attr['score'][j]\n s_av2=node_attr['score'][j]\n else:\n \n s_av1=np.min([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n s_av2=np.max([all_p['score_m'][d_s_ind],all_p['score_l'][d_s_ind],all_p['score_g'][d_s_ind]])\n\n\n\n\n\n #s_av\n\n #region_ind\n\n #print(all_p)\n probs_mat[i,j]=np.max(all_p['ptotal'])\n if i==0:\n probs_mat2[i,j]=np.max(all_p['ptotal'])\n else:\n probs_mat2[i+j,:]=probs_mat[i,j]\n\n ## hihest prob label\n #desired_state = random.choices(list(all_p['state']),list(all_p['all']))[0]\n #desired_state = all_p['state'][d_s_ind] \n #desired_state\n #desired_state = list(all_p.loc[all_p['all']==np.max(all_p['all'])]['state'])[0]\n\n\n\n ##### draw attributes with given label\n\n \"\"\"sample_df_1=sample_lab_attr_new(np.float(N),region_ind[0],s_av,0.05*s_av)\"\"\"\n \"\"\"if s_av1==s_av2:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,s_av2+0.12)\"\"\"\n if s_av1==s_av2:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],s_av1+0.1,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,s_av2+0.12)\n else:\n if d_s_ind==0:\n \n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],(s_av1+s_av2)/2,100)\n else:\n sample_df_1=sample_lab_attr_new_B(np.float(N),region_ind[0],0,(s_av1+s_av2)/2)\n \n \n \n #################################################################################################### \n ########################################## Update attributes ######################################\n 
####################################################################################################\n ## update node attributes \n for k,replc in enumerate(sample_df_1.values[0]):\n node_attr.iloc[j,k]=replc \n ## update edge attributes\n # node attr to edge attr\n df_3=cosine_similarity(node_attr.iloc[:,:8])\n df_4=pd.DataFrame(df_3)\n df_4.values[[np.arange(len(df_4))]*2] = np.nan\n #mat_data.head()\n edge_list_2=df_4.stack().reset_index()\n edge_list_2.columns=['source','target','weight']\n #edge_list_2.head()\n edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], right_on = ['source','target'])\n #edge_list_f.head()\n edge_list_f.drop('weight_x',axis=1,inplace=True)\n edge_list_f.columns=['source','target','weight']\n\n\n\n for k,replc in enumerate(node_attr.iloc[j,:].values):\n blanck_data[j,k]=replc \n\n for k,replc in enumerate(all_p.iloc[0,1:].values):\n blanck_data[j,k+13]=replc \n blanck_data[j,29]=j\n blanck_data[j,30]=i\n blanck_data[j,31]=all_p['state'][d_s_ind] \n\n #blanck_data2[:,:2,i]=np.array(edge_list_f) \n\n blanck_data_tot[:,:,i]=pd.DataFrame(blanck_data)\n\n \n\n#if i>= 2:\n #if i%5==0:\n #probs_mat_pr.append(np.prod(np.log(probs_mat[i,:]),axis=1))\n \n edge_list_f.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+str(i)+\"_edge_attr.csv\")\n reshaped_bd = np.vstack(blanck_data_tot[:,:,i] for i in range(num_sim))\n reshaped_bd_df=pd.DataFrame(reshaped_bd)\n reshaped_bd_df.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+ \"other_node_attr.csv\")\n\n \n print('Complete')\n ",
"_____no_output_____"
]
],
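The cell above picks each node's next state by pushing the three institutional pressures (mimetic `tier_p`, local-normative `local_prob`, global `glob_prob`) through a temperature-scaled logistic and thresholding the result. Note that the hard-coded cutoff `0.6224593312018546` is exactly `sigmoid(0.5)`, i.e. the `ptotal` produced when all three pressures equal 0.5 under weights summing to one. Below is a minimal, self-contained sketch of that decision rule, ignoring the noise terms (`nsd2`, `nsd3`, `epsilon`) and the regulatory/media factor `rpmp` for clarity; the function and argument names are illustrative, not taken from the notebook:

```python
import numpy as np

def state_probs(tier_p, local_p, glob_p, w=(1/3, 1/3, 1/3), Tmp=1.0):
    """Temperature-scaled logistic over the two states ('high' vs 'low').

    tier_p, local_p, glob_p: probability of the 'high' state under the
    mimetic, local-normative and global pressures (hypothetical names
    mirroring the tier_p / local_prob / glob_prob columns above).
    """
    z = w[0] * tier_p + w[1] * local_p + w[2] * glob_p
    p_high = np.exp(z / Tmp) / (1.0 + np.exp(z / Tmp))
    return p_high, 1.0 - p_high

def choose_state(p_high, threshold=0.5):
    # Deterministic threshold rule with ties broken by a fair coin flip,
    # matching the d_s_ind logic in the simulation loop above.
    if p_high > threshold:
        return 0  # index of 'high'
    if p_high < threshold:
        return 1  # index of 'low'
    return 0 if np.random.random() < 0.5 else 1

p_high, p_low = state_probs(0.9, 0.8, 0.7)
```

With equal weights and all pressures at 0.5, `state_probs` returns exactly the 0.6224593312018546 cutoff used in the cell, which is why that constant only makes sense as a threshold when the weights are active (`np.count_nonzero([w1, w2, w3]) != 0`); otherwise the cell falls back to 0.5.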
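After each node update, the cell rebuilds every edge weight as the cosine similarity between the endpoints' first eight attribute columns (`cosine_similarity` on `node_attr.iloc[:, :8]`, diagonal set to `NaN`, then `stack()` into a source/target/weight list merged back onto the edge list). A small stand-alone version of that step, using plain NumPy in place of scikit-learn's `cosine_similarity` (names are illustrative):

```python
import numpy as np
import pandas as pd

def attrs_to_edge_weights(node_attr):
    """Pairwise cosine similarity of node attribute rows, as an edge list.

    Mirrors the cosine_similarity -> stack() step in the simulation:
    the diagonal is NaN'd out so stack() drops self-similarities.
    """
    X = node_attr.to_numpy(dtype=float)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # row-normalize
    sim = U @ U.T                                     # cosine similarity
    np.fill_diagonal(sim, np.nan)                     # drop self-edges
    edges = pd.DataFrame(sim).stack().reset_index()
    edges.columns = ["source", "target", "weight"]
    return edges

# Three toy nodes: the first two identical, the third orthogonal.
demo = pd.DataFrame([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = attrs_to_edge_weights(demo)
```

For the three toy nodes this yields six directed pairs, with weight 1.0 between the identical nodes and 0.0 against the orthogonal one.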
[
[
"### Parallelization of simulation",
"_____no_output_____"
]
],
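`process_func` below runs one complete scenario (network construction, the simulation loop, and a per-scenario `sc_<k>/` output folder), so independent scenario rows can be dispatched to a worker pool. A minimal sketch of that dispatch pattern — `run_scenario` is a hypothetical stand-in for `process_func`, and the thread-backed `multiprocessing.dummy.Pool` is used here only to keep the sketch portable; a real run over CPU-bound scenarios would use `multiprocessing.Pool` with the same `map` call:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool; same API as multiprocessing.Pool

def run_scenario(run_iter):
    # Hypothetical stand-in for process_func(run_iter): in the notebook each
    # scenario reads its row of scn_params and writes its own sc_<k>/ outputs.
    return run_iter * run_iter

# One task per scenario row; results come back in scenario order.
with Pool(4) as pool:
    results = pool.map(run_scenario, range(8))
```

Because each scenario writes to its own folder, the workers share no mutable state and the pool size can be tuned to the number of available cores.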
[
[
"\n\ndef process_func(run_iter):\n #print('@@@@@ run iter @@@@@ --' + str(run_iter))\n \n stc=scn_params.iloc[run_iter,20]\n Tmp=scn_params.iloc[run_iter,21]\n if stc==1:\n nr=[0.22,0.35,0.43]\n er=[0.38,0.13,0.50]\n asa=[0.22,0.06,0.72]\n else:\n nr=[0.33,0.335,0.335]\n er=[0.33,0.335,0.335]\n asa=[0.33,0.335,0.335]\n \n #W=[scn_params.iloc[run_iter,5],scn_params.iloc[run_iter,6],scn_params.iloc[run_iter,7],scn_params.iloc[run_iter,8],scn_params.iloc[run_iter,9],scn_params.iloc[run_iter,10]]\n W=[scn_params.iloc[run_iter,23],scn_params.iloc[run_iter,24],scn_params.iloc[run_iter,25],scn_params.iloc[run_iter,26],\n scn_params.iloc[run_iter,27],scn_params.iloc[run_iter,28],scn_params.iloc[run_iter,29],scn_params.iloc[run_iter,30],\n scn_params.iloc[run_iter,31],scn_params.iloc[run_iter,32],scn_params.iloc[run_iter,33],scn_params.iloc[run_iter,34],\n scn_params.iloc[run_iter,35],scn_params.iloc[run_iter,36],scn_params.iloc[run_iter,37],scn_params.iloc[run_iter,38],\n scn_params.iloc[run_iter,39],scn_params.iloc[run_iter,40],scn_params.iloc[run_iter,41],scn_params.iloc[run_iter,42],\n scn_params.iloc[run_iter,43],scn_params.iloc[run_iter,44],scn_params.iloc[run_iter,45],scn_params.iloc[run_iter,46],\n scn_params.iloc[run_iter,47],scn_params.iloc[run_iter,48],scn_params.iloc[run_iter,49]]\n\n N=scn_params.iloc[run_iter,0]\n bs_n=scn_params.iloc[run_iter,3]\n m_size=scn_params.iloc[run_iter,4]\n bs1=scn_params.iloc[run_iter,1]\n bs2=scn_params.iloc[run_iter,2]\n rgn=scn_params.iloc[run_iter,13]\n mcp=scn_params.iloc[run_iter,14]\n \n ################################################### create network\n network_created=create_network(N,nr,er,asa,bs_n,m_size)\n #graph\n g=network_created[2]\n #centrality\n deg_cent = nx.degree_centrality(g)\n in_deg_cent = nx.in_degree_centrality(g)\n out_deg_cent = nx.out_degree_centrality(g)\n eigen_cent = nx.eigenvector_centrality(g)\n #katz_cent = nx.katz_centrality(g)\n closeness_cent = nx.closeness_centrality(g)\n #betw_cent = 
nx.betweenness_centrality(g)\n #vote_cent = nx.voterank(g)\n deg=pd.DataFrame(list(deg_cent.values()),columns=['deg'])\n indeg=pd.DataFrame(list(in_deg_cent.values()),columns=['indeg'])\n outdeg=pd.DataFrame(list(out_deg_cent.values()),columns=['outdeg'])\n eigencent=pd.DataFrame(list(eigen_cent.values()),columns=['eigdeg'])\n closeness=pd.DataFrame(list(closeness_cent.values()),columns=['closedeg'])\n all_net_p=pd.concat([deg,indeg,outdeg,closeness,eigencent],axis=1)\n #tier and ms\n nodes_frame=network_created[1]\n #edge list\n edge_list_df_new=network_created[0]\n edge_list_df=edge_list_df_new.copy()\n\n\n n_regions_list=[int(0.46*N),int( 0.16*N),int( 0.38*N)]\n if (len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])!=N):\n if (len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])-N)>0:\n\n n_regions_list[0] = n_regions_list[0]+len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])-N\n else:\n n_regions_list[0] = n_regions_list[0]-len(n_regions_list[0]*[\"NrA\"]+n_regions_list[1]*[\"Eur\"]+n_regions_list[2]*['Asia'])+N\n\n #print(n_regions_list)\n #till partition\n ###################################################initial attributes at random one at a time\n #### method 1\n \n\n \"\"\"init_samples = sample_lab_attr_all_init(np.float(N),30,10)\"\"\"\n init_samples = sample_lab_attr_all_init(np.float(N))\n #### method 2\n \"\"\"init_samples1 = initial_random_attr(n_regions_list[0],np.array([0.2,0.3,0.5]))\n init_samples1=init_samples1.reset_index().iloc[:,1:]\n init_samples2 = initial_random_attr(n_regions_list[1],np.array([0.3,0.3,0.4]))\n init_samples2=init_samples2.reset_index().iloc[:,1:]\n init_samples3 = initial_random_attr(n_regions_list[2],np.array([0.5,0.3,0.2]))\n init_samples3=init_samples3.reset_index().iloc[:,1:]\n\n init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)\"\"\"\n \n ### method 3\n \"\"\"init_samples1 = 
sample_lab_attr_all(n_regions_list[0],1)\n init_samples1=init_samples1.reset_index().iloc[:,1:]\n init_samples2 = sample_lab_attr_all(n_regions_list[1],2)\n init_samples2=init_samples2.reset_index().iloc[:,1:]\n init_samples3 = sample_lab_attr_all(n_regions_list[2],3)\n init_samples3=init_samples3.reset_index().iloc[:,1:]\n\n init_samples=pd.concat([init_samples1,init_samples2,init_samples3],axis=0)\"\"\"\n ############\n \n #init_samples.head()\n init_samples=init_samples.reset_index()\n #init_samples.head()\n init_samples=init_samples.iloc[:,1:]\n #init_samples.index\n node_attr=init_samples\n node_attr['state']=\"high\"\n\n node_attr['partition']=\"\"\n for i in range(0,node_attr.shape[0]):\n if i<n_regions_list[0]:\n node_attr['partition'][i]='NrA'\n elif i< (n_regions_list[0]+n_regions_list[1]):\n node_attr['partition'][i]='Eur'\n else:\n node_attr['partition'][i]='Asia'\n\n #tier and MS merge with attributes\n node_attr = pd.concat([node_attr,nodes_frame.iloc[:,2:]],axis=1)\n #node_attr.columns\n\n node_attr.columns=['X1..Commitment...Governance', 'X2..Traceability.and.Risk.Assessment',\n 'X3..Purchasing.Practices', 'X4..Recruitment', 'X5..Worker.Voice',\n 'X6..Monitoring', 'X7..Remedy', 'score', 'state', 'partition','tier','ms','ms2']\n #\n\n #node_attr.info()\n\n\n\n # region wise reg assumption and market size assumption\n #p_reg_org=p_reg.copy()\n #p_med_org=p_med.copy()\n\n #\n init_node_attrs_df=node_attr.copy()\n init_edge_attrs_df=edge_list_df.copy()\n\n import os\n os.mkdir(folder_location+\"sc_\"+str(run_iter+1))\n\n\n node_attr.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+str(0)+ \"_node_attr.csv\")\n\n df_3=cosine_similarity(node_attr.iloc[:,:8])\n df_4=pd.DataFrame(df_3)\n df_4.values[[np.arange(len(df_4))]*2] = np.nan\n #mat_data.head()\n edge_list_2=df_4.stack().reset_index()\n edge_list_2.columns=['source','target','weight']\n #edge_list_2.head()\n edge_list_f=pd.merge(edge_list_df, edge_list_2, how='left', left_on=['source','target'], 
right_on = ['source','target'])\n #edge_list_f.head()\n edge_list_f.drop('weight_x',axis=1,inplace=True)\n edge_list_f.columns=['source','target','weight']\n\n edge_list_f.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+\"initial_edge_attr.csv\")\n\n ##### run simulation for all\n\n print(\"@@@@@@@@@@@@@@@ -- \"+str(run_iter))\n \"\"\"\n w1=scn_params.iloc[run_iter,5]\n w2=scn_params.iloc[run_iter,6]\n w3=scn_params.iloc[run_iter,7]\n w4=scn_params.iloc[run_iter,8] \n w5=scn_params.iloc[run_iter,9] \n \"\"\"\n r_on=scn_params.iloc[run_iter,15]\n m_on=scn_params.iloc[run_iter,16]\n alpha1=scn_params.iloc[run_iter,17]\n alpha2=scn_params.iloc[run_iter,18]\n alpha3=scn_params.iloc[run_iter,19]\n \n if N==500:\n num_sim=20\n else:\n num_sim=20\n probs_mat=np.zeros((num_sim,N))\n\n probs_mat2=np.zeros((((num_sim-1)*N)+1,N))\n\n ## Initial node and edge attributes\n\n node_attr=init_node_attrs_df.copy()\n edge_list_df=init_edge_attrs_df.copy()\n ################################################## simulation\n simulation_continous(node_attr=node_attr,edge_list_df=edge_list_df,num_sim=num_sim,W=W,bs1=bs1,bs2=bs2,N=N,r_on=r_on,m_on=m_on,p_reg=p_reg,p_med=p_med,probs_mat=probs_mat,probs_mat2=probs_mat2,run_iter=run_iter,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3,Tmp=Tmp,rgn=rgn,mcp=mcp)\n\n lik_probs_mat=pd.DataFrame(probs_mat)\n lik_probs_mat2=pd.DataFrame(probs_mat2)\n \n lik_probs_mat.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+\"lik_probs_mat.csv\")\n lik_probs_mat2.to_csv(folder_location+\"sc_\"+str(run_iter+1)+\"/\"+\"lik_probs_mat2.csv\")\n del lik_probs_mat\n\n ",
"Overwriting magic_functions.py\n"
]
],
[
[
"### Running simulation",
"_____no_output_____"
]
],
[
[
"#scenarios\nscn_params=pd.read_csv('simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/testscenarios_uniform_parallel_W_alpha_new_sense_simple.csv')\n# Organizational data \ndata_orgs=pd.read_csv('C:/Users/Krishna/OneDrive/Documents/IIM_R1/proj2/cosine_input.csv')\n\nfolder_location='simulation_log/testscenarios_uniform_parallel_W_alpha_new_sense_simple_new_missing_reg/'\n\n\nfrom magic_functions import process_func\nframes_list = range(0,31)#range(0,8)\nwith Pool(max_pool) as p:\n pool_outputs = list(\n tqdm(\n p.imap(process_func,\n frames_list),\n total=len(frames_list)\n )\n ) \n ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d088a5008101d1b4bba400e76ad4b694aec51e54 | 15,529 | ipynb | Jupyter Notebook | src/Ensemble.ipynb | mulargui/kaggle-Classify-forest-types | ab657002f61080932107e4ec3f210b2a106f841d | [ "MIT" ] | null | null | null | src/Ensemble.ipynb | mulargui/kaggle-Classify-forest-types | ab657002f61080932107e4ec3f210b2a106f841d | [ "MIT" ] | null | null | null | src/Ensemble.ipynb | mulargui/kaggle-Classify-forest-types | ab657002f61080932107e4ec3f210b2a106f841d | [ "MIT" ] | null | null | null | 15,529 | 15,529 | 0.668169 | [
[
[
"I started this competition investigating neural networks with this kernel https://www.kaggle.com/mulargui/keras-nn\nNow switching to using ensembles in this new kernel. As of today V6 is the most performant version.\nYou can find all my notes and versions at https://github.com/mulargui/kaggle-Classify-forest-types",
"_____no_output_____"
]
],
[
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n#load data\ndftrain=pd.read_csv('/kaggle/input/learn-together/train.csv')\ndftest=pd.read_csv('/kaggle/input/learn-together/test.csv')\n\n####### DATA PREPARATION #####\n#split train data in features and labels\ny = dftrain.Cover_Type\nx = dftrain.drop(['Id','Cover_Type'], axis=1)\n\n# split test data in features and Ids\nIds = dftest.Id\nx_predict = dftest.drop('Id', axis=1)\n\n# one data set with all features\nX = pd.concat([x,x_predict],keys=[0,1])",
"_____no_output_____"
],
[
"###### FEATURE ENGINEERING #####\n#https://www.kaggle.com/mancy7/simple-eda\n#Soil_Type7, Soil_Type15 are non-existent in the training set, nothing to learn\n#I have problems with np.where if I do this, postponed\n#X.drop([\"Soil_Type7\", \"Soil_Type15\"], axis = 1, inplace=True)\n\n#https://www.kaggle.com/evimarp/top-6-roosevelt-national-forest-competition\nfrom itertools import combinations\nfrom bisect import bisect\nX['Euclidean_distance_to_hydro'] = (X.Vertical_Distance_To_Hydrology**2 \n + X.Horizontal_Distance_To_Hydrology**2)**.5\n\ncols = [\n 'Horizontal_Distance_To_Roadways',\n 'Horizontal_Distance_To_Fire_Points',\n 'Horizontal_Distance_To_Hydrology',\n]\nX['distance_mean'] = X[cols].mean(axis=1)\nX['distance_sum'] = X[cols].sum(axis=1)\nX['distance_road_fire'] = X[cols[:2]].mean(axis=1)\nX['distance_hydro_fire'] = X[cols[1:]].mean(axis=1)\nX['distance_road_hydro'] = X[[cols[0], cols[2]]].mean(axis=1)\n \nX['distance_sum_road_fire'] = X[cols[:2]].sum(axis=1)\nX['distance_sum_hydro_fire'] = X[cols[1:]].sum(axis=1)\nX['distance_sum_road_hydro'] = X[[cols[0], cols[2]]].sum(axis=1)\n \nX['distance_dif_road_fire'] = X[cols[0]] - X[cols[1]]\nX['distance_dif_hydro_road'] = X[cols[2]] - X[cols[0]]\nX['distance_dif_hydro_fire'] = X[cols[2]] - X[cols[1]]\n \n# Vertical distances measures\ncolv = ['Elevation', 'Vertical_Distance_To_Hydrology']\nX['Vertical_dif'] = X[colv[0]] - X[colv[1]]\nX['Vertical_sum'] = X[colv].sum(axis=1)\n \nSHADES = ['Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm']\n \nX['shade_noon_diff'] = X['Hillshade_9am'] - X['Hillshade_Noon']\nX['shade_3pm_diff'] = X['Hillshade_Noon'] - X['Hillshade_3pm']\nX['shade_all_diff'] = X['Hillshade_9am'] - X['Hillshade_3pm']\nX['shade_sum'] = X[SHADES].sum(axis=1)\nX['shade_mean'] = X[SHADES].mean(axis=1)\n \nX['ElevationHydro'] = X['Elevation'] - 0.25 * X['Euclidean_distance_to_hydro']\nX['ElevationV'] = X['Elevation'] - X['Vertical_Distance_To_Hydrology']\nX['ElevationH'] = X['Elevation'] - 0.19 * 
X['Horizontal_Distance_To_Hydrology']\n\nX['Elevation2'] = X['Elevation']**2\nX['ElevationLog'] = np.log1p(X['Elevation'])\n\nX['Aspect_cos'] = np.cos(np.radians(X.Aspect))\nX['Aspect_sin'] = np.sin(np.radians(X.Aspect))\n#df['Slope_sin'] = np.sin(np.radians(df.Slope))\nX['Aspectcos_Slope'] = X.Slope * X.Aspect_cos\n#df['Aspectsin_Slope'] = df.Slope * df.Aspect_sin\n \ncardinals = [i for i in range(45, 361, 90)]\npoints = ['N', 'E', 'S', 'W']\nX['Cardinal'] = X.Aspect.apply(lambda x: points[bisect(cardinals, x) % 4])\nd = {'N': 0, 'E': 1, 'S': 0, 'W':-1}\nX['Cardinal'] = X.Cardinal.apply(lambda x: d[x])\n\n#https://www.kaggle.com/jakelj/basic-ensemble-model\nX['Avg_shade'] = ((X['Hillshade_9am'] + X['Hillshade_Noon'] + X['Hillshade_3pm']) / 3)\nX['Morn_noon_int'] = ((X['Hillshade_9am'] + X['Hillshade_Noon']) / 2)\nX['noon_eve_int'] = ((X['Hillshade_3pm'] + X['Hillshade_Noon']) / 2)\n\n#adding features based on https://douglas-fraser.com/forest_cover_management.pdf pages 21,22\n#note: not all climatic and geologic codes have a soil type\ncolumns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6']\nX['Climatic2'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type7', 'Soil_Type8']\nX['Climatic3'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type9', 'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13']\nX['Climatic4'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type14', 'Soil_Type15']\nX['Climatic5'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type16', 'Soil_Type17', 'Soil_Type18']\nX['Climatic6'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type19', 'Soil_Type20', 'Soil_Type21', 'Soil_Type22', 'Soil_Type23', 'Soil_Type24',\n 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30',\n 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34']\nX['Climatic7'] = np.select([X[columns].sum(1).gt(0)], 
[1])\ncolumns=['Soil_Type35', 'Soil_Type36', 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40']\nX['Climatic8'] = np.select([X[columns].sum(1).gt(0)], [1])\n\ncolumns=['Soil_Type14', 'Soil_Type15', 'Soil_Type16', 'Soil_Type17', 'Soil_Type19', 'Soil_Type20',\n 'Soil_Type21']\nX['Geologic1'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type9', 'Soil_Type22', 'Soil_Type23']\nX['Geologic2'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type7', 'Soil_Type8']\nX['Geologic5'] = np.select([X[columns].sum(1).gt(0)], [1])\ncolumns=['Soil_Type1', 'Soil_Type2', 'Soil_Type3', 'Soil_Type4', 'Soil_Type5', 'Soil_Type6',\n 'Soil_Type10', 'Soil_Type11', 'Soil_Type12', 'Soil_Type13', 'Soil_Type18', 'Soil_Type24',\n 'Soil_Type25', 'Soil_Type26', 'Soil_Type27', 'Soil_Type28', 'Soil_Type29', 'Soil_Type30',\n 'Soil_Type31', 'Soil_Type32', 'Soil_Type33', 'Soil_Type34', 'Soil_Type35', 'Soil_Type36', \n 'Soil_Type37', 'Soil_Type38', 'Soil_Type39', 'Soil_Type40']\nX['Geologic7'] = np.select([X[columns].sum(1).gt(0)], [1])\n\n#Reversing One-Hot-Encoding to Categorical attributes, several articles recommend it for decision tree algorithms\n#Doing it for Soil_Type, Wilderness_Area, Geologic and Climatic\nX['Soil_Type']=np.where(X.loc[:, 'Soil_Type1':'Soil_Type40'])[1] +1\nX.drop(X.loc[:,'Soil_Type1':'Soil_Type40'].columns, axis=1, inplace=True)\n\nX['Wilderness_Area']=np.where(X.loc[:, 'Wilderness_Area1':'Wilderness_Area4'])[1] +1\nX.drop(X.loc[:,'Wilderness_Area1':'Wilderness_Area4'].columns, axis=1, inplace=True)\n\nX['Climatic']=np.where(X.loc[:, 'Climatic2':'Climatic8'])[1] +1\nX.drop(X.loc[:,'Climatic2':'Climatic8'].columns, axis=1, inplace=True)\n\nX['Geologic']=np.where(X.loc[:, 'Geologic1':'Geologic7'])[1] +1\nX.drop(X.loc[:,'Geologic1':'Geologic7'].columns, axis=1, inplace=True)\n\nfrom sklearn.preprocessing import StandardScaler\nStandardScaler(copy=False).fit_transform(X)\n\n# Adding Gaussian Mixture features to perform some unsupervised learning 
hints from the full data\n#https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover \n#https://www.kaggle.com/stevegreenau/stacking-multiple-classifiers-clustering\nfrom sklearn.mixture import GaussianMixture\nX['GM'] = GaussianMixture(n_components=15).fit_predict(X)\n\n#https://www.kaggle.com/arateris/2-layer-k-fold-learning-forest-cover \n# Add PCA features\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=0.99).fit(X)\ntrans = pca.transform(X)\n\nfor i in range(trans.shape[1]):\n col_name= 'pca'+str(i+1)\n X[col_name] = trans[:,i]\n\n#https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers\n# Scale and bin features\nfrom sklearn.preprocessing import MinMaxScaler\nMinMaxScaler((0, 100),copy=False).fit_transform(X)\n#X = np.floor(X).astype('int8')\n\nprint(\"Completed feature engineering!\")",
"Completed feature engineering!\n"
],
[
"#break it down again in train and test\nx,x_predict = X.xs(0),X.xs(1)",
"_____no_output_____"
],
[
"###### THIS IS THE ENSEMBLE MODEL SECTION ######\n#https://www.kaggle.com/kwabenantim/forest-cover-stacking-multiple-classifiers\nimport random\nrandomstate = 1\nrandom.seed(randomstate)\nnp.random.seed(randomstate)\n\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nab_clf = AdaBoostClassifier(n_estimators=200,\n base_estimator=DecisionTreeClassifier(\n min_samples_leaf=2,\n random_state=randomstate),\n random_state=randomstate)\n\n#max_features = min(30, x.columns.size)\nmax_features = 30\nfrom sklearn.ensemble import ExtraTreesClassifier\net_clf = ExtraTreesClassifier(n_estimators=300,\n min_samples_leaf=2,\n min_samples_split=2,\n max_depth=50,\n max_features=max_features,\n random_state=randomstate,\n n_jobs=1)\n\nfrom lightgbm import LGBMClassifier\nlg_clf = LGBMClassifier(n_estimators=300,\n num_leaves=128,\n verbose=-1,\n random_state=randomstate,\n n_jobs=1)\n\nfrom sklearn.ensemble import RandomForestClassifier\nrf_clf = RandomForestClassifier(n_estimators=300,\n random_state=randomstate,\n n_jobs=1)\n\n#Added a KNN classifier to the ensemble\n#https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model\nfrom sklearn.neighbors import KNeighborsClassifier\nknn_clf = KNeighborsClassifier(n_neighbors=y.nunique(), n_jobs=1)\n\n#added several more classifiers at once\n#https://www.kaggle.com/edumunozsala/feature-eng-and-a-simple-stacked-model\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nbag_clf = BaggingClassifier(base_estimator=DecisionTreeClassifier(criterion = 'entropy', max_depth=None, \n min_samples_split=2, min_samples_leaf=1,max_leaf_nodes=None,\n max_features='auto',\n random_state = randomstate),\n n_estimators=500,max_features=0.75, max_samples=1.0, random_state=randomstate,n_jobs=1,verbose=0)\n\nfrom sklearn.linear_model import LogisticRegression\nlr_clf = LogisticRegression(max_iter=1000,\n n_jobs=1,\n solver= 'lbfgs',\n multi_class 
= 'multinomial',\n random_state=randomstate,\n verbose=0)\n\n#https://www.kaggle.com/bustam/6-models-for-forest-classification\nfrom catboost import CatBoostClassifier\ncat_clf = CatBoostClassifier(n_estimators =300, \n eval_metric='Accuracy',\n metric_period=200,\n max_depth = None, \n random_state=randomstate,\n verbose=0)\n\n#https://www.kaggle.com/jakelj/basic-ensemble-model\nfrom sklearn.experimental import enable_hist_gradient_boosting\nfrom sklearn.ensemble import HistGradientBoostingClassifier\nhbc_clf = HistGradientBoostingClassifier(max_iter = 500, max_depth =25, random_state = randomstate)\n\nensemble = [('AdaBoostClassifier', ab_clf),\n ('ExtraTreesClassifier', et_clf),\n ('LGBMClassifier', lg_clf),\n #('KNNClassifier', knn_clf),\n ('BaggingClassifier', bag_clf),\n #('LogRegressionClassifier', lr_clf),\n #('CatBoostClassifier', cat_clf),\n #('HBCClassifier', hbc_clf),\n ('RandomForestClassifier', rf_clf)\n]\n\n#Cross-validating classifiers\nfrom sklearn.model_selection import cross_val_score\nfor label, clf in ensemble:\n score = cross_val_score(clf, x, y,\n cv=10,\n scoring='accuracy',\n verbose=0,\n n_jobs=-1)\n print(\"Accuracy: %0.2f (+/- %0.2f) [%s]\" \n % (score.mean(), score.std(), label))\n\n# Fitting stack\nfrom mlxtend.classifier import StackingCVClassifier\nstack = StackingCVClassifier(classifiers=[ab_clf, et_clf, lg_clf, \n bag_clf,\n rf_clf],\n meta_classifier=rf_clf,\n cv=10,\n stratify=True,\n shuffle=True,\n use_probas=True,\n use_features_in_secondary=True,\n verbose=0,\n random_state=randomstate)\nstack = stack.fit(x, y)\n\nprint(\"Completed modeling!\")",
"Accuracy: 0.80 (+/- 0.04) [AdaBoostClassifier]\n"
],
[
"#make predictions\ny_predict = stack.predict(x_predict)\ny_predict = pd.Series(y_predict, index=x_predict.index, dtype=y.dtype)\n\nprint(\"Completed predictions!\")",
"Completed predictions!\n"
],
[
"# Save predictions to a file for submission\noutput = pd.DataFrame({'Id': Ids,\n 'Cover_Type': y_predict})\noutput.to_csv('submission.csv', index=False)\n\n#create a link to download the file \nfrom IPython.display import FileLink\nFileLink(r'submission.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d088a95737e4aa97540a948f305537b38e1cf70e | 145,386 | ipynb | Jupyter Notebook | code/basic_gendataset.ipynb | cpslab-snu/deep_learning | 3be936b4462c0c082dccde00eee8180988c6c8ed | [ "MIT" ] | null | null | null | code/basic_gendataset.ipynb | cpslab-snu/deep_learning | 3be936b4462c0c082dccde00eee8180988c6c8ed | [ "MIT" ] | null | null | null | code/basic_gendataset.ipynb | cpslab-snu/deep_learning | 3be936b4462c0c082dccde00eee8180988c6c8ed | [ "MIT" ] | null | null | null | 373.742931 | 28,546 | 0.923287 | [
[
[
"## DATASET GENERATION",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport os\nfrom scipy.misc import imread, imresize\nimport matplotlib.pyplot as plt\n%matplotlib inline\ncwd = os.getcwd()\nprint (\"PACKAGES LOADED\") \nprint (\"CURRENT FOLDER IS [%s]\" % (cwd) )",
"PACKAGES LOADED\nCURRENT FOLDER IS [/home/sj/notebooks/github/deep-learning-cpslab/code]\n"
]
],
[
[
"### CONFIGURATION",
"_____no_output_____"
]
],
[
[
"# FOLDER LOCATIONS\npaths = [\"../../img_dataset/celebs/Arnold_Schwarzenegger\"\n , \"../../img_dataset/celebs/Junichiro_Koizumi\"\n , \"../../img_dataset/celebs/Vladimir_Putin\"\n , \"../../img_dataset/celebs/George_W_Bush\"]\ncategories = ['Terminator', 'Koizumi', 'Putin', 'Bush']\n# CONFIGURATIONS\nimgsize = [64, 64]\nuse_gray = 1\ndata_name = \"custom_data\"\n\nprint (\"YOUR IMAGES SHOULD BE AT\")\nfor i, path in enumerate(paths):\n print (\" [%d/%d] %s\" % (i, len(paths), path)) \nprint (\"DATA WILL BE SAVED TO \\n [%s]\" \n % (cwd + '/data/' + data_name + '.npz'))",
"YOUR IMAGES SHOULD BE AT\n [0/4] ../../img_dataset/celebs/Arnold_Schwarzenegger\n [1/4] ../../img_dataset/celebs/Junichiro_Koizumi\n [2/4] ../../img_dataset/celebs/Vladimir_Putin\n [3/4] ../../img_dataset/celebs/George_W_Bush\nDATA WILL BE SAVED TO \n [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]\n"
]
],
[
[
"### RGB2GRAY",
"_____no_output_____"
]
],
[
[
"def rgb2gray(rgb):\n    # Use == for value comparison; 'is' on small ints only works by CPython caching accident\n    if len(rgb.shape) == 3:\n        return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n    else:\n        return rgb",
"_____no_output_____"
]
],
[
[
"### LOAD IMAGES",
"_____no_output_____"
]
],
[
[
"nclass = len(paths)\nvalid_exts = [\".jpg\",\".gif\",\".png\",\".tga\", \".jpeg\"]\nimgcnt = 0\nfor i, relpath in zip(range(nclass), paths):\n    path = cwd + \"/\" + relpath\n    flist = os.listdir(path)\n    for f in flist:\n        if os.path.splitext(f)[1].lower() not in valid_exts:\n            continue\n        fullpath = os.path.join(path, f)\n        currimg = imread(fullpath)\n        # CONVERT TO GRAY (IF REQUIRED)\n        if use_gray:\n            grayimg = rgb2gray(currimg)\n        else:\n            grayimg = currimg\n        # RESIZE\n        graysmall = imresize(grayimg, [imgsize[0], imgsize[1]])/255.\n        grayvec = np.reshape(graysmall, (1, -1))\n        # SAVE (use == for value comparison, not 'is')\n        curr_label = np.eye(nclass, nclass)[i:i+1, :]\n        if imgcnt == 0:\n            totalimg = grayvec\n            totallabel = curr_label\n        else:\n            totalimg = np.concatenate((totalimg, grayvec), axis=0)\n            totallabel = np.concatenate((totallabel, curr_label), axis=0)\n        imgcnt = imgcnt + 1\nprint (\"TOTAL %d IMAGES\" % (imgcnt))",
"TOTAL 681 IMAGES\n"
]
],
[
[
"### DIVIDE INTO TRAINING AND TEST",
"_____no_output_____"
]
],
[
[
"def print_shape(string, x):\n print (\"SHAPE OF [%s] IS [%s]\" % (string, x.shape,))\n \nrandidx = np.random.randint(imgcnt, size=imgcnt)\ntrainidx = randidx[0:int(4*imgcnt/5)]\ntestidx = randidx[int(4*imgcnt/5):imgcnt]\ntrainimg = totalimg[trainidx, :]\ntrainlabel = totallabel[trainidx, :]\ntestimg = totalimg[testidx, :]\ntestlabel = totallabel[testidx, :]\nprint_shape(\"totalimg\", totalimg)\nprint_shape(\"totallabel\", totallabel)\nprint_shape(\"trainimg\", trainimg)\nprint_shape(\"trainlabel\", trainlabel)\nprint_shape(\"testimg\", testimg)\nprint_shape(\"testlabel\", testlabel)",
"SHAPE OF [totalimg] IS [(681, 4096)]\nSHAPE OF [totallabel] IS [(681, 4)]\nSHAPE OF [trainimg] IS [(544, 4096)]\nSHAPE OF [trainlabel] IS [(544, 4)]\nSHAPE OF [testimg] IS [(137, 4096)]\nSHAPE OF [testlabel] IS [(137, 4)]\n"
]
],
[
[
"### SAVE TO NPZ",
"_____no_output_____"
]
],
[
[
"savepath = cwd + \"/data/\" + data_name + \".npz\"\nnp.savez(savepath, trainimg=trainimg, trainlabel=trainlabel\n , testimg=testimg, testlabel=testlabel\n , imgsize=imgsize, use_gray=use_gray, categories=categories)\nprint (\"SAVED TO [%s]\" % (savepath))",
"SAVED TO [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]\n"
]
],
[
[
"### LOAD NPZ",
"_____no_output_____"
]
],
[
[
"# LOAD\ncwd = os.getcwd()\nloadpath = cwd + \"/data/\" + data_name + \".npz\"\nl = np.load(loadpath)\nprint (l.files)\n\n# Parse data\ntrainimg_loaded = l['trainimg']\ntrainlabel_loaded = l['trainlabel']\ntestimg_loaded = l['testimg']\ntestlabel_loaded = l['testlabel']\ncategories_loaded = l['categories']\n\nprint (\"[%d] TRAINING IMAGES\" % (trainimg_loaded.shape[0]))\nprint (\"[%d] TEST IMAGES\" % (testimg_loaded.shape[0]))\nprint (\"LOADED FROM [%s]\" % (savepath))",
"['trainlabel', 'imgsize', 'trainimg', 'testimg', 'testlabel', 'use_gray', 'categories']\n[544] TRAINING IMAGES\n[137] TEST IMAGES\nLOADED FROM [/home/sj/notebooks/github/deep-learning-cpslab/code/data/custom_data.npz]\n"
]
],
[
[
"### PLOT LOADED DATA",
"_____no_output_____"
]
],
[
[
"ntrain_loaded = trainimg_loaded.shape[0]\nbatch_size = 5\nrandidx = np.random.randint(ntrain_loaded, size=batch_size)\nfor i in randidx: \n    currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], -1))\n    currlabel_onehot = trainlabel_loaded[i, :]\n    currlabel = np.argmax(currlabel_onehot) \n    # Plot from the loaded arrays (the original indexed trainimg, the pre-save array)\n    if use_gray:\n        currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], -1))\n        plt.matshow(currimg, cmap=plt.get_cmap('gray'))\n        plt.colorbar()\n    else:\n        currimg = np.reshape(trainimg_loaded[i, :], (imgsize[0], imgsize[1], 3))\n        plt.imshow(currimg)\n    title_string = (\"[%d] CLASS-%d (%s)\" \n        % (i, currlabel, categories_loaded[currlabel]))\n    plt.title(title_string) \n    plt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d088ace9da021cd7a5efa0d50b93c77db5f0e12b | 82,397 | ipynb | Jupyter Notebook | latest-articles.ipynb | freep-searcher/latest-articles | babf5044710b3de9a036dd6cb8663f74fcd72fe5 | [ "Apache-2.0" ] | null | null | null | latest-articles.ipynb | freep-searcher/latest-articles | babf5044710b3de9a036dd6cb8663f74fcd72fe5 | [ "Apache-2.0" ] | null | null | null | latest-articles.ipynb | freep-searcher/latest-articles | babf5044710b3de9a036dd6cb8663f74fcd72fe5 | [ "Apache-2.0" ] | null | null | null | 81.259369 | 28,736 | 0.676238 | [
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nTOP_POSTER_LIMIT = 10\nDOMAIN_LIMIT = 25\nREPLIES_THRESHOLD = 150\n\ndf = pd.read_csv(\"data/latest-articles.csv\")\ndf[\"timestamp\"] = pd.to_datetime(df[\"timestamp\"], unit='s')\ndf[\"title\"] = df[\"title\"].astype(\"category\")\ndf[\"posted_by\"] = df[\"posted_by\"].astype(\"category\")\ndf[\"source_url\"] = df[\"source_url\"].astype(\"category\")",
"_____no_output_____"
],
[
"# A thread is considered sourced if it has a URL link.\nsourced = df[df.source_url != \"\"]\n\n# This is not entirely accurate since some vanity posts have links.\nvanity = df[df.source_url == \"\"]",
"_____no_output_____"
],
[
"def extract_domain(url) -> str:\n return (\n url.\n replace(\"www.\",\"\").\n replace(\"http://\", \"\").\n replace(\"https://\", \"\").\n split(\"/\")[0]\n )\n\n\nsourced.insert(len(sourced.columns)-1, \"domain\", (\n sourced.\n source_url.\n apply(extract_domain).\n astype(\"category\")\n))",
"_____no_output_____"
],
[
"# Draws a chart that shows where discussion for a particular thread originated from.\ndata = sourced[[\"source_url\", \"domain\"]].drop_duplicates()\ntop_sourced_domains = data.domain.value_counts().iloc[:DOMAIN_LIMIT].index\nsns.countplot(\n y=\"domain\",\n data=data,\n order=top_sourced_domains,\n)",
"_____no_output_____"
],
[
"# Draws a chart that shows who posted a topic to be discussed that has a URL source.\ndata = sourced[[\"source_url\", \"posted_by\"]].drop_duplicates()\ntop_sourced_posters = data.posted_by.value_counts().iloc[:TOP_POSTER_LIMIT].index\nsns.countplot(\n y=\"posted_by\",\n data=data,\n order=top_sourced_posters,\n)",
"_____no_output_____"
],
[
"# Comment this if you don't want to see everything.\npd.set_option(\"display.max_rows\", None, \"display.max_columns\", None)\n\n# Draws a table that shows which freepers posted topics from a sourced link that originated\n# from a particular domain.\nwho_is_posting_from_what = (\n sourced[[\"posted_by\", \"source_url\", \"domain\"]].\n query(\"posted_by in @top_sourced_posters and domain in @top_sourced_domains\").\n drop_duplicates().\n groupby([\"domain\", \"posted_by\"])\n)\nwho_is_posting_from_what[[\"domain\"]].describe()",
"_____no_output_____"
],
[
"# Display which posts got the most replies before\n# they aged out from the front page.\n\n(sourced[['timestamp', 'title', 'replies']].\n loc[sourced.replies > REPLIES_THRESHOLD].\n groupby('title').\n max().\n sort_values('replies', ascending=False).\n dropna())",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d088afa0eba7b0ede27dcb3524b58a8c6ffa01b5 | 20,446 | ipynb | Jupyter Notebook | debug/.ipynb_checkpoints/pl_debug-checkpoint.ipynb | yunqu/PYNQ-Debug-Notebooks | 8de079d2db2b875a0e57c0c185a8e12ecdb38069 | [ "MIT" ] | 4 | 2018-07-19T14:03:20.000Z | 2021-12-24T15:10:19.000Z | debug/.ipynb_checkpoints/pl_debug-checkpoint.ipynb | yunqu/PYNQ-Debug-Notebooks | 8de079d2db2b875a0e57c0c185a8e12ecdb38069 | [ "MIT" ] | null | null | null | debug/.ipynb_checkpoints/pl_debug-checkpoint.ipynb | yunqu/PYNQ-Debug-Notebooks | 8de079d2db2b875a0e57c0c185a8e12ecdb38069 | [ "MIT" ] | 1 | 2019-11-21T05:58:15.000Z | 2019-11-21T05:58:15.000Z | 30.838612 | 121 | 0.431233 | [
[
[
"from pynq import Overlay\nfrom pynq import PL\nfrom pprint import pprint",
"_____no_output_____"
],
[
"pprint(PL.ip_dict)\nprint(PL.timestamp)",
"{'audio/d_axi_pdm_1': {'addr_range': 65536,\n 'phys_addr': 1136656384,\n 'state': None,\n 'type': 'xilinx.com:user:d_axi_pdm:1.2'},\n 'btns_gpio': {'addr_range': 65536,\n 'phys_addr': 1092681728,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'mb_bram_ctrl_1': {'addr_range': 65536,\n 'phys_addr': 1073741824,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'mb_bram_ctrl_2': {'addr_range': 65536,\n 'phys_addr': 1107296256,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'mb_bram_ctrl_3': {'addr_range': 65536,\n 'phys_addr': 1140850688,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'rgbleds_gpio': {'addr_range': 65536,\n 'phys_addr': 1092878336,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'swsleds_gpio': {'addr_range': 65536,\n 'phys_addr': 1092616192,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'system_interrupts': {'addr_range': 65536,\n 'phys_addr': 1098907648,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_intc:4.1'},\n 'tracebuffer_arduino/axi_dma_0': {'addr_range': 65536,\n 'phys_addr': 2151743488,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_dma:7.1'},\n 'tracebuffer_arduino/trace_cntrl_0': {'addr_range': 65536,\n 'phys_addr': 2210398208,\n 'state': None,\n 'type': 'xilinx.com:hls:trace_cntrl:1.2'},\n 'tracebuffer_pmods/axi_dma_0': {'addr_range': 65536,\n 'phys_addr': 2151677952,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_dma:7.1'},\n 'tracebuffer_pmods/trace_cntrl_0': {'addr_range': 65536,\n 'phys_addr': 2210463744,\n 'state': None,\n 'type': 'xilinx.com:hls:trace_cntrl:1.2'},\n 'video/axi_dynclk_0': {'addr_range': 65536,\n 'phys_addr': 1136721920,\n 'state': None,\n 'type': 'digilentinc.com:ip:axi_dynclk:1.0'},\n 'video/axi_gpio_video': {'addr_range': 65536,\n 'phys_addr': 1092747264,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'video/axi_vdma_0': {'addr_range': 65536,\n 'phys_addr': 1124073472,\n 'state': None,\n 'type': 
'xilinx.com:ip:axi_vdma:6.2'},\n 'video/hdmi_out_hpd_video': {'addr_range': 65536,\n 'phys_addr': 1092812800,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'video/v_tc_0': {'addr_range': 65536,\n 'phys_addr': 1136787456,\n 'state': None,\n 'type': 'xilinx.com:ip:v_tc:6.1'},\n 'video/v_tc_1': {'addr_range': 65536,\n 'phys_addr': 1136852992,\n 'state': None,\n 'type': 'xilinx.com:ip:v_tc:6.1'}}\n2017/4/11 17:35:18 +360817\n"
],
[
"ol2 = Overlay('base.bit')\nol2.download()",
"_____no_output_____"
],
[
"pprint(PL.ip_dict)\nprint(PL.timestamp)",
"{'audio/d_axi_pdm_1': {'addr_range': 65536,\n 'phys_addr': 1136656384,\n 'state': None,\n 'type': 'xilinx.com:user:d_axi_pdm:1.2'},\n 'btns_gpio': {'addr_range': 65536,\n 'phys_addr': 1092681728,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'mb_bram_ctrl_1': {'addr_range': 65536,\n 'phys_addr': 1073741824,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'mb_bram_ctrl_2': {'addr_range': 65536,\n 'phys_addr': 1107296256,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'mb_bram_ctrl_3': {'addr_range': 65536,\n 'phys_addr': 1140850688,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_bram_ctrl:4.0'},\n 'rgbleds_gpio': {'addr_range': 65536,\n 'phys_addr': 1092878336,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'swsleds_gpio': {'addr_range': 65536,\n 'phys_addr': 1092616192,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'system_interrupts': {'addr_range': 65536,\n 'phys_addr': 1098907648,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_intc:4.1'},\n 'tracebuffer_arduino/axi_dma_0': {'addr_range': 65536,\n 'phys_addr': 2151743488,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_dma:7.1'},\n 'tracebuffer_arduino/trace_cntrl_0': {'addr_range': 65536,\n 'phys_addr': 2210398208,\n 'state': None,\n 'type': 'xilinx.com:hls:trace_cntrl:1.2'},\n 'tracebuffer_pmods/axi_dma_0': {'addr_range': 65536,\n 'phys_addr': 2151677952,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_dma:7.1'},\n 'tracebuffer_pmods/trace_cntrl_0': {'addr_range': 65536,\n 'phys_addr': 2210463744,\n 'state': None,\n 'type': 'xilinx.com:hls:trace_cntrl:1.2'},\n 'video/axi_dynclk_0': {'addr_range': 65536,\n 'phys_addr': 1136721920,\n 'state': None,\n 'type': 'digilentinc.com:ip:axi_dynclk:1.0'},\n 'video/axi_gpio_video': {'addr_range': 65536,\n 'phys_addr': 1092747264,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'video/axi_vdma_0': {'addr_range': 65536,\n 'phys_addr': 1124073472,\n 'state': None,\n 'type': 
'xilinx.com:ip:axi_vdma:6.2'},\n 'video/hdmi_out_hpd_video': {'addr_range': 65536,\n 'phys_addr': 1092812800,\n 'state': None,\n 'type': 'xilinx.com:ip:axi_gpio:2.0'},\n 'video/v_tc_0': {'addr_range': 65536,\n 'phys_addr': 1136787456,\n 'state': None,\n 'type': 'xilinx.com:ip:v_tc:6.1'},\n 'video/v_tc_1': {'addr_range': 65536,\n 'phys_addr': 1136852992,\n 'state': None,\n 'type': 'xilinx.com:ip:v_tc:6.1'}}\n2017/4/11 17:37:6 +742671\n"
],
[
"PL.interrupt_controllers",
"_____no_output_____"
],
[
"PL.gpio_dict",
"_____no_output_____"
],
[
"a = PL.ip_dict\nfor i,j in enumerate(a):\n print(i,j,a[j])",
"_____no_output_____"
],
[
"a['SEG_rgbled_gpio_Reg']",
"_____no_output_____"
],
[
"b = [value for key, value in a.items() if 'mb_bram_ctrl' in key.lower()]",
"_____no_output_____"
],
[
"print(b)",
"_____no_output_____"
],
[
"addr_base,addr_range,state = a['SEG_rgbled_gpio_Reg']",
"_____no_output_____"
],
[
"addr_base",
"_____no_output_____"
],
[
"a = [None]\na*10",
"_____no_output_____"
],
[
"import re\ntcl_name = 'parse.txt'\n \npat1 = 'connect_bd_net'\npat2 = '[get_bd_pins processing_system7_0/GPIO_O]'\nresult = {}\ngpio_pool1 = set()\ngpio_pool2 = set()\nwith open(tcl_name, 'r') as f:\n for line in f:\n if not line.startswith('#') and (pat1 in line) and (pat2 in line):\n gpio_pool1 = gpio_pool1.union(set(re.findall(\n '\\[get_bd_pins (.+?)/Din\\]', line, re.IGNORECASE)))\n \nwhile gpio_pool1:\n gpio_net = gpio_pool1.pop()\n if not gpio_net in gpio_pool2:\n pat3 = '[get_bd_pins ' + gpio_net + '/Din]'\n gpio_pool2.add(gpio_net)\n with open(tcl_name, 'r') as f:\n for line in f:\n if not line.startswith('#') and (pat1 in line) and \\\n (pat3 in line):\n gpio_pool1 = gpio_pool1.union(set(re.findall(\n '\\[get_bd_pins (.+?)/Din\\]', line, re.IGNORECASE)))\n gpio_pool1.discard(gpio_net)\n \ngpio_list = list(gpio_pool2)\nprint(gpio_list)",
"_____no_output_____"
],
[
"\"\"\"\nindex = 0\nmatch = []\nfor i in gpio_list:\n pat4 = \"create_bd_cell -type ip -vlnv (.+?) \" + i + \"($| )\"\n with open(tcl_name, 'r') as f:\n for line in f:\n if not line.startswith('#'):\n m = re.search(pat4, line, re.IGNORECASE)\n if m:\n match.append(m.group(2))\n continue\nprint(match)\n\"\"\"\nwith open('parse.txt') as f:\n file_str = ''.join(line.replace('\\n',' ').replace('\\r','') for line in f\n if not line.startswith('#'))\n print(file_str)\n for j in gpio_list:\n pat5 = \"set_property -dict \\[ list \\\\\\\\ \"+\\\n \"CONFIG.DIN_FROM {([0-9]+)} \\\\\\\\ \"+\\\n \"CONFIG.DIN_TO {([0-9]+)} \\\\\\\\ \"+\\\n \"CONFIG.DIN_WIDTH {([0-9]+)} \\\\\\\\ \"+\\\n \"CONFIG.DOUT_WIDTH {([0-9]+)} \\\\\\\\ \"+\\\n \"\\] \\$\" + j\n print(pat5)\n m = re.search(pat5,file_str,re.IGNORECASE)\n if m:\n index = m.group(1)\n result[j] = [int(index), None]\n \nprint(result)",
"_____no_output_____"
],
[
"str1 = 'create_bd_cell -type ip -vlnv xilinx.com:ip:xlslice:1.0 mb3_timer_capture_4'\nstr2 = 'set mb3_timer_capture_5 [ create_bd_cell -type ip -vlnv xilinx.com:ip:xlslice:1.0 mb3_timer_capture_5 ]'\npat1 = \"create_bd_cell -type ip -vlnv (.+?) (.+?)($| )\"\nmatch1 = re.search(pat1, str2, re.IGNORECASE)\nmatch1.group(2)",
"_____no_output_____"
],
[
"with open('parse.txt') as f:\n data=''.join(line.replace('\\n',' ').replace('\\r','') for line in f)\nprint(data)",
"_____no_output_____"
],
[
"str1 = \"[123 456\\ $2]\"\npat1 = \"\\[(.+?) (.+?)\\\\\\\\ \\$(.+?)]\"\nm = re.search(pat1, str1, re.IGNORECASE)\nif m:\n print(m.group(1))\n print(m.group(2))\n print(type(m.group(1)))",
"_____no_output_____"
],
[
"a = [1,2,3]\nprint(a[-1])",
"_____no_output_____"
],
[
"print(a)",
"_____no_output_____"
],
[
"import re\nprop_name_regex = \"CONFIG.DIN_FROM {([0-9]+)} \\\\\\\\\"\nstr1 = \"CONFIG.DIN_FROM {13} \\\\\"\nm = re.search(prop_name_regex,str1)\nif m:\n print(m.group(1))",
"_____no_output_____"
],
[
"a = {1:'mb_1_reset', 2:'mb_2_reset'}\nres = dict((v,[k,None]) for k,v in a.items() if k>1)\nprint(res)",
"_____no_output_____"
],
[
"a = {1:'mb_1_reset', 2:'mb_2_reset'}\nb = a.copy()\na.clear()\nprint(b)",
"_____no_output_____"
],
[
"a = {1:['mb_1_reset',None], 2:['mb_2_reset','running']}\na = {i:j for i,j in a.items() if j[1] is not None}\nprint(a)",
"_____no_output_____"
],
[
"import re\nstr1 = \" set processing_system7_0 [ create_bd_cell -type ip -vlnv \"+\\\n \"xilinx.com:ip:processing_system7:5.5 processing_system7_0 ]\"\nip_regex = \"create_bd_cell -type ip -vlnv \" + \\\n \"(.+?):ip:(.+?):(.+?) (.+?) \"\nm = re.search(ip_regex,str1)\nprint(m.groups())",
"_____no_output_____"
],
[
"import numpy as np\na = np.random.randint(0,32,10,dtype=np.uint32)\nprint(a)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d088b1f8fd551e73375ba8eb758ccf39569d84c7 | 53,822 | ipynb | Jupyter Notebook | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | YuxinShi0423/deep-learning-v2-pytorch | 2b0e62909baf23712998a7743c8718ecdb674c8a | [
"MIT"
] | null | null | null | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | YuxinShi0423/deep-learning-v2-pytorch | 2b0e62909baf23712998a7743c8718ecdb674c8a | [
"MIT"
] | null | null | null | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | YuxinShi0423/deep-learning-v2-pytorch | 2b0e62909baf23712998a7743c8718ecdb674c8a | [
"MIT"
] | null | null | null | 105.533333 | 24,000 | 0.747167 | [
[
[
"# Classifying Fashion-MNIST\n\nNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.\n\n<img src='assets/fashion-mnist-sprite.png' width=500px>\n\nIn this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.\n\nFirst off, let's load the dataset through torchvision.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torchvision import datasets, transforms\nimport helper\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)",
"Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to C:\\Users\\yuxinshi/.pytorch/F_MNIST_data/FashionMNIST\\raw\\train-images-idx3-ubyte.gz\n"
]
],
[
[
"Here we can see one of the images.",
"_____no_output_____"
]
],
[
[
"image, label = next(iter(trainloader))\nhelper.imshow(image[0,:]);",
"_____no_output_____"
]
],
[
[
"## Building the network\n\nHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.",
"_____no_output_____"
]
],
[
[
"# TODO: Define your network architecture here\nfrom torch import nn, optim\nimport torch.nn.functional as F",
"_____no_output_____"
],
[
"class Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n\n def forward(self, x):\n x = x.view(x.shape[0], -1)\n\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = F.log_softmax(self.fc4(x), dim = 1)\n\n return x",
"_____no_output_____"
]
],
[
[
"# Train the network\n\nNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).\n\nThen write the training code. Remember the training pass is a fairly straightforward process:\n\n* Make a forward pass through the network to get the logits \n* Use the logits to calculate the loss\n* Perform a backward pass through the network with `loss.backward()` to calculate the gradients\n* Take a step with the optimizer to update the weights\n\nBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.",
"_____no_output_____"
]
],
[
[
"# TODO: Create the network, define the criterion and optimizer\nmodel = Classifier()\nprint(list(model.parameters()))",
"[Parameter containing:\ntensor([[-0.0282, 0.0159, 0.0345, ..., 0.0217, -0.0063, -0.0297],\n [-0.0238, 0.0345, -0.0294, ..., -0.0127, 0.0211, -0.0308],\n [-0.0150, 0.0003, -0.0198, ..., -0.0341, -0.0184, 0.0036],\n ...,\n [-0.0191, -0.0063, -0.0220, ..., -0.0073, -0.0348, -0.0242],\n [ 0.0029, 0.0050, -0.0220, ..., -0.0171, 0.0272, 0.0312],\n [-0.0222, -0.0215, -0.0261, ..., -0.0145, -0.0306, -0.0036]],\n requires_grad=True), Parameter containing:\ntensor([ 0.0026, 0.0303, -0.0099, 0.0204, -0.0035, -0.0136, -0.0237, 0.0269,\n 0.0004, 0.0165, 0.0281, -0.0086, -0.0018, -0.0324, 0.0120, 0.0334,\n 0.0104, -0.0019, -0.0324, -0.0085, 0.0068, 0.0046, -0.0203, 0.0249,\n -0.0270, 0.0184, -0.0233, -0.0253, -0.0086, -0.0159, -0.0034, -0.0232,\n -0.0181, 0.0343, -0.0252, -0.0126, 0.0086, 0.0041, -0.0245, 0.0172,\n -0.0137, 0.0288, 0.0301, -0.0249, 0.0123, 0.0275, -0.0240, -0.0260,\n -0.0327, 0.0067, -0.0002, -0.0134, -0.0092, -0.0219, -0.0312, -0.0062,\n 0.0093, 0.0275, 0.0173, -0.0178, 0.0035, 0.0017, 0.0021, 0.0002,\n -0.0257, 0.0026, -0.0178, 0.0345, -0.0229, 0.0105, -0.0347, -0.0003,\n 0.0089, -0.0042, 0.0289, -0.0121, 0.0145, 0.0253, -0.0305, 0.0124,\n 0.0071, -0.0300, 0.0250, -0.0013, -0.0234, 0.0181, 0.0197, -0.0085,\n -0.0133, -0.0036, 0.0158, 0.0156, -0.0157, -0.0267, 0.0009, -0.0212,\n -0.0297, 0.0346, 0.0345, -0.0039, -0.0221, -0.0149, 0.0063, -0.0345,\n -0.0149, 0.0348, 0.0067, 0.0118, 0.0202, 0.0160, -0.0316, 0.0081,\n 0.0079, -0.0082, 0.0039, -0.0233, 0.0030, 0.0254, 0.0211, -0.0292,\n 0.0338, -0.0259, 0.0073, -0.0072, 0.0131, 0.0062, 0.0285, 0.0010,\n 0.0012, 0.0193, -0.0167, 0.0188, 0.0316, -0.0058, -0.0258, 0.0024,\n 0.0320, 0.0068, -0.0273, -0.0063, 0.0123, -0.0109, 0.0010, 0.0282,\n 0.0011, 0.0191, -0.0315, 0.0268, -0.0278, 0.0238, -0.0316, -0.0067,\n -0.0201, 0.0318, -0.0214, 0.0194, 0.0112, -0.0232, 0.0351, 0.0199,\n 0.0064, 0.0347, -0.0330, 0.0115, 0.0086, -0.0119, 0.0211, -0.0153,\n -0.0297, -0.0333, -0.0237, -0.0344, -0.0332, 0.0124, 0.0336, 0.0050,\n 
0.0348, -0.0196, 0.0116, -0.0026, 0.0128, 0.0356, -0.0196, 0.0169,\n 0.0074, -0.0240, -0.0238, -0.0054, -0.0266, -0.0328, -0.0303, -0.0232,\n 0.0127, -0.0332, 0.0329, 0.0004, -0.0169, 0.0217, 0.0336, -0.0072,\n 0.0187, 0.0241, 0.0017, -0.0068, 0.0250, -0.0185, -0.0074, -0.0118,\n 0.0287, -0.0238, -0.0108, -0.0040, 0.0174, -0.0052, -0.0089, -0.0070,\n -0.0271, 0.0064, -0.0296, 0.0271, -0.0348, 0.0188, -0.0267, -0.0118,\n -0.0140, 0.0267, -0.0313, -0.0240, -0.0043, -0.0202, -0.0186, 0.0145,\n -0.0209, 0.0113, 0.0328, -0.0040, 0.0088, 0.0166, -0.0178, -0.0120,\n -0.0166, -0.0226, -0.0333, -0.0228, -0.0224, -0.0342, 0.0118, -0.0130,\n -0.0293, -0.0021, -0.0322, -0.0108, 0.0166, -0.0044, -0.0187, -0.0193],\n requires_grad=True), Parameter containing:\ntensor([[-0.0024, -0.0308, -0.0253, ..., -0.0612, -0.0276, -0.0356],\n [-0.0170, 0.0610, -0.0298, ..., -0.0014, 0.0186, -0.0002],\n [ 0.0203, 0.0057, -0.0152, ..., 0.0314, 0.0533, -0.0541],\n ...,\n [-0.0338, 0.0046, -0.0438, ..., -0.0389, 0.0343, -0.0220],\n [ 0.0372, 0.0053, -0.0568, ..., -0.0196, -0.0164, 0.0197],\n [-0.0049, -0.0136, 0.0083, ..., 0.0275, -0.0363, -0.0335]],\n requires_grad=True), Parameter containing:\ntensor([ 0.0429, -0.0553, -0.0199, 0.0152, 0.0299, -0.0533, 0.0573, 0.0067,\n 0.0508, 0.0374, 0.0459, -0.0518, -0.0595, 0.0149, 0.0166, 0.0038,\n -0.0098, 0.0615, 0.0597, -0.0433, 0.0238, 0.0522, 0.0283, 0.0212,\n -0.0591, 0.0194, 0.0518, -0.0115, 0.0128, 0.0094, 0.0424, 0.0068,\n 0.0090, 0.0395, -0.0587, -0.0088, -0.0567, -0.0002, -0.0226, -0.0448,\n 0.0071, 0.0615, -0.0485, -0.0423, 0.0465, -0.0084, -0.0248, -0.0537,\n -0.0471, -0.0601, 0.0147, 0.0229, -0.0494, 0.0255, -0.0099, -0.0208,\n -0.0071, -0.0416, 0.0210, -0.0437, 0.0293, -0.0007, 0.0267, -0.0272,\n 0.0446, -0.0301, -0.0210, -0.0074, -0.0301, 0.0360, 0.0491, 0.0063,\n -0.0285, -0.0251, 0.0187, -0.0509, -0.0089, 0.0565, 0.0157, -0.0600,\n -0.0228, 0.0143, -0.0608, 0.0363, 0.0539, -0.0200, -0.0466, 0.0292,\n 0.0475, 0.0555, -0.0266, -0.0573, 
0.0410, 0.0054, -0.0490, -0.0328,\n -0.0046, 0.0192, 0.0022, -0.0426, 0.0407, 0.0529, 0.0135, 0.0364,\n 0.0052, 0.0572, -0.0290, -0.0422, -0.0262, -0.0213, -0.0614, -0.0059,\n 0.0121, -0.0064, 0.0422, 0.0500, -0.0568, 0.0584, 0.0052, 0.0613,\n -0.0244, -0.0082, 0.0487, -0.0611, 0.0091, -0.0350, -0.0488, -0.0584],\n requires_grad=True), Parameter containing:\ntensor([[-0.0192, -0.0067, 0.0817, ..., 0.0276, 0.0859, -0.0664],\n [ 0.0501, -0.0027, -0.0869, ..., -0.0134, -0.0783, 0.0673],\n [-0.0094, 0.0842, -0.0137, ..., 0.0192, 0.0827, -0.0643],\n ...,\n [ 0.0533, 0.0746, -0.0028, ..., -0.0157, -0.0513, 0.0043],\n [ 0.0815, 0.0265, 0.0691, ..., -0.0580, -0.0882, 0.0383],\n [-0.0343, -0.0864, 0.0123, ..., 0.0420, -0.0262, 0.0148]],\n requires_grad=True), Parameter containing:\ntensor([ 0.0058, -0.0590, 0.0500, 0.0332, 0.0006, 0.0734, 0.0697, -0.0835,\n -0.0056, 0.0211, 0.0600, -0.0197, -0.0258, -0.0633, 0.0692, 0.0697,\n -0.0435, -0.0410, -0.0718, -0.0417, 0.0526, 0.0369, 0.0333, -0.0525,\n -0.0266, 0.0818, -0.0740, -0.0182, 0.0341, -0.0285, 0.0788, 0.0703,\n 0.0214, -0.0561, 0.0671, -0.0534, 0.0648, 0.0274, -0.0247, 0.0790,\n 0.0439, -0.0671, 0.0213, 0.0455, 0.0860, -0.0635, -0.0751, -0.0697,\n 0.0676, 0.0702, 0.0218, 0.0700, 0.0449, 0.0466, -0.0653, 0.0175,\n 0.0195, 0.0631, -0.0804, 0.0005, -0.0827, -0.0262, 0.0520, -0.0052],\n requires_grad=True), Parameter containing:\ntensor([[-0.0207, 0.0650, 0.0347, -0.0187, -0.0356, 0.1135, 0.0450, -0.0284,\n 0.0226, -0.0663, 0.0362, -0.0798, 0.0684, 0.0781, -0.1081, 0.0447,\n -0.0771, 0.0110, -0.1202, 0.1168, 0.0080, -0.0938, -0.0231, -0.0264,\n 0.0741, 0.1125, 0.0639, -0.0823, 0.0288, 0.0337, 0.0904, 0.0564,\n -0.0633, -0.1022, 0.0453, 0.0571, 0.0529, -0.0602, -0.0734, -0.0667,\n -0.0543, -0.0054, 0.0095, 0.0618, 0.1174, -0.0606, -0.0267, -0.0890,\n -0.0521, 0.0604, 0.0484, -0.0510, 0.0227, -0.0453, -0.0741, -0.0179,\n -0.0867, -0.0534, 0.1002, -0.0898, -0.0597, 0.0603, -0.1132, 0.0923],\n [ 0.0321, 0.0903, -0.0264, -0.0928, 
-0.0495, 0.0525, -0.0478, -0.0381,\n 0.1156, -0.1195, 0.1001, -0.0127, -0.1029, 0.1110, -0.0751, 0.0060,\n -0.0098, 0.0302, 0.0790, 0.0852, -0.0544, -0.0736, -0.0356, -0.0281,\n -0.0183, 0.0493, -0.0126, -0.0347, -0.0613, -0.0591, -0.0307, 0.0757,\n -0.0453, 0.0632, -0.0811, 0.0624, 0.0912, -0.0486, -0.1218, 0.1014,\n -0.0062, 0.0731, -0.1246, 0.0628, 0.1219, 0.0352, -0.0634, 0.1147,\n 0.0207, -0.0013, -0.0739, -0.1029, -0.0965, 0.0621, -0.0592, 0.0163,\n -0.1017, 0.0392, -0.0719, 0.1076, 0.0313, 0.0287, 0.0989, 0.0648],\n [-0.0559, 0.0963, 0.0484, -0.0431, 0.1098, -0.0709, -0.0216, 0.0096,\n 0.0535, 0.0197, -0.0433, 0.0038, -0.0455, -0.0134, 0.1108, 0.0971,\n 0.0923, 0.0819, 0.1108, -0.0762, -0.0004, -0.0714, -0.1237, 0.0652,\n 0.0038, 0.0640, 0.0050, -0.1130, -0.0588, 0.0588, -0.0547, 0.0353,\n -0.1049, 0.0957, -0.0018, -0.0347, -0.1234, -0.1024, 0.0400, 0.0824,\n -0.0237, -0.0080, -0.0699, -0.0608, 0.0594, -0.0908, -0.0322, 0.0282,\n 0.0275, -0.0547, -0.0718, -0.0447, 0.0525, -0.0052, 0.0791, 0.0381,\n 0.1161, -0.0236, -0.0432, -0.0950, 0.0202, 0.0729, 0.0653, -0.0183],\n [ 0.0665, -0.1237, -0.0648, -0.0376, -0.0699, 0.0659, 0.0280, -0.0134,\n -0.0116, 0.0894, 0.1029, -0.0099, -0.0164, 0.0648, 0.0478, -0.0206,\n -0.0773, 0.0742, 0.0852, -0.0763, 0.0360, 0.0481, -0.0368, -0.0502,\n -0.0113, -0.0807, -0.1124, -0.0438, -0.0840, 0.0111, -0.0627, -0.1105,\n -0.0226, 0.0535, -0.0696, -0.1215, 0.0935, -0.0630, -0.0284, 0.0483,\n -0.1045, -0.0900, 0.0754, -0.0331, 0.0850, 0.0903, -0.0551, 0.0786,\n 0.0377, 0.0400, -0.0155, -0.0498, 0.1247, -0.0934, -0.0108, 0.0376,\n 0.0744, 0.0274, -0.0036, -0.0597, 0.0221, -0.0973, -0.0558, -0.0558],\n [-0.0901, 0.1158, -0.0508, 0.0922, 0.0431, 0.0084, -0.0354, -0.0587,\n -0.0610, 0.0453, 0.0806, 0.0514, 0.0363, -0.0778, -0.0434, 0.0475,\n -0.0256, 0.0022, -0.0849, -0.0676, -0.1195, -0.0784, -0.0125, -0.1063,\n 0.0586, 0.1065, 0.0516, -0.1116, -0.0883, -0.1182, 0.0441, -0.0890,\n -0.0619, -0.0188, 0.0978, 0.0889, 0.0443, 0.0487, 
0.0937, -0.0002,\n -0.0671, 0.1106, -0.0380, -0.0041, -0.0978, 0.0126, 0.0271, -0.0451,\n 0.1095, -0.0146, 0.0793, -0.1232, 0.1037, 0.0918, 0.0086, 0.0366,\n 0.0954, 0.1071, -0.0656, -0.0038, -0.0667, -0.0730, -0.0869, 0.0237],\n [-0.1107, -0.0004, 0.1152, -0.0597, -0.1088, 0.0469, -0.0702, 0.1181,\n -0.0578, -0.0421, 0.0389, -0.0179, 0.0343, -0.0816, 0.0016, 0.1101,\n 0.0470, -0.1040, -0.0548, 0.0250, 0.0143, -0.0777, 0.0746, -0.0492,\n -0.0936, -0.1076, -0.0795, -0.0651, -0.0325, 0.0002, -0.0688, -0.0618,\n -0.0105, 0.0712, -0.0684, 0.0972, -0.0271, 0.0127, 0.0236, 0.0608,\n 0.1120, -0.0776, -0.0186, 0.1031, 0.0188, 0.0022, 0.0263, -0.0788,\n -0.0160, -0.1005, 0.1088, -0.1246, -0.0930, 0.0166, 0.0745, -0.0466,\n -0.1108, 0.0169, -0.1128, 0.0298, -0.0945, 0.0185, 0.1180, 0.0619],\n [ 0.0469, 0.1020, -0.0939, 0.0091, -0.0073, -0.0415, -0.1144, 0.1165,\n 0.0393, 0.0130, -0.0325, -0.0600, 0.1101, 0.1205, 0.0093, 0.0441,\n -0.0151, 0.1056, 0.0674, 0.1063, 0.0542, -0.1003, -0.0336, -0.0125,\n -0.0008, -0.0572, -0.0442, 0.0724, -0.0658, -0.1009, -0.0905, -0.0387,\n -0.0521, 0.0813, 0.1120, 0.0200, 0.1214, -0.0940, -0.0709, -0.0728,\n -0.1164, -0.0905, 0.1048, -0.0929, -0.0484, -0.1028, -0.0610, 0.0966,\n 0.0790, -0.1026, -0.0207, 0.0928, 0.0420, 0.0823, -0.1008, 0.0292,\n 0.1209, -0.1219, -0.0822, 0.0292, 0.0104, 0.0894, 0.0216, 0.0519],\n [-0.0868, -0.1046, 0.0236, 0.0240, 0.0451, -0.0729, -0.0354, 0.0130,\n -0.0657, 0.1214, 0.0227, -0.0089, 0.1010, 0.0091, 0.1139, -0.0426,\n -0.0447, -0.0049, -0.0563, -0.0315, -0.0874, 0.0657, 0.0285, -0.0427,\n -0.1204, -0.0224, -0.0155, 0.0695, -0.0184, -0.0351, 0.0039, 0.0346,\n 0.0311, 0.0440, 0.1060, 0.0882, 0.0222, 0.0272, -0.1218, 0.0590,\n -0.1120, -0.0040, 0.0507, -0.1105, 0.0749, 0.0133, -0.0161, 0.0548,\n -0.0259, 0.0807, -0.0553, -0.0689, -0.0899, 0.0786, 0.1113, 0.0046,\n 0.1147, 0.0686, 0.0904, -0.0996, -0.0171, 0.0572, 0.0963, 0.0732],\n [ 0.1188, 0.1249, 0.0270, 0.0900, -0.0223, -0.0640, 0.0127, -0.0244,\n 0.0927, 
0.0727, -0.0932, -0.1187, -0.0969, -0.0240, -0.0252, 0.0542,\n -0.0024, 0.0098, 0.0332, -0.0385, 0.0535, 0.0217, 0.0482, -0.0891,\n 0.0434, 0.0565, -0.0370, -0.0541, 0.0335, -0.0351, -0.0642, -0.1232,\n -0.0244, -0.1088, 0.0427, -0.0655, -0.0933, 0.0833, 0.0784, 0.0923,\n 0.1246, -0.0208, -0.0144, 0.1175, -0.0354, 0.0760, 0.0038, 0.0577,\n 0.1222, 0.1246, -0.0431, 0.0626, 0.0577, -0.0086, -0.1239, -0.0988,\n -0.0334, -0.0812, 0.1014, 0.1079, 0.0846, -0.1206, -0.0529, 0.0299],\n [-0.1008, 0.1057, -0.1125, -0.0659, 0.1201, -0.0423, -0.0071, 0.1246,\n 0.0661, 0.1132, -0.0138, -0.1238, -0.0231, 0.0654, -0.0820, 0.0942,\n -0.0231, -0.1208, 0.0337, -0.0209, -0.1000, -0.1075, -0.0103, 0.0851,\n 0.0940, -0.0449, 0.0569, -0.0765, -0.0560, 0.0396, -0.0278, 0.0552,\n 0.0122, 0.0713, 0.1248, 0.0274, -0.0082, -0.0897, -0.0945, 0.0707,\n -0.0863, -0.0313, 0.0075, -0.0574, 0.0657, -0.0507, -0.0186, 0.0069,\n -0.1058, 0.0964, 0.0162, -0.0178, -0.0129, 0.0237, -0.0017, 0.0868,\n -0.0746, 0.0071, -0.0950, 0.1043, -0.0342, -0.0991, -0.0489, -0.0332]],\n requires_grad=True), Parameter containing:\ntensor([-0.0496, 0.0328, 0.1237, 0.0567, 0.0334, 0.0637, 0.0639, -0.0371,\n 0.0948, -0.0103], requires_grad=True)]\n"
],
[
"criterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr = 0.003)",
"_____no_output_____"
],
[
"# TODO: Train the network here\nepochs = 5\n\nfor e in range(epochs):\n running_loss = 0\n \n \n for images, labels in trainloader:\n optimizer.zero_grad()\n \n output = model.forward(images)\n loss = criterion(output, labels)\n \n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n else:\n print(f\"Training loss: {running_loss/len(trainloader)}\")",
"Training loss: 2.13386084542854\nTraining loss: 1.2510359490604035\nTraining loss: 0.8050704490083621\nTraining loss: 0.6793777945199246\nTraining loss: 0.6184414262964781\n"
],
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\n\n# Test out your network!\n\ndataiter = iter(testloader)\nimages, labels = next(dataiter)\nimg = images[0]\n# Convert 2D image to 1D vector\nimg = img.resize_(1, 784)\n\n# TODO: Calculate the class probabilities (softmax) for img\nps = torch.exp(model(img))\n\n# Plot the image and probabilities\nhelper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d088b602e537b511b47cbacfde1e4839fb8d3372 | 6,308 | ipynb | Jupyter Notebook | examples/usage.ipynb | nikosavola/cirq-on-iqm | 990bb1f3629aaa5a8af14302335ec283dfe4eae1 | [
"Apache-2.0"
] | null | null | null | examples/usage.ipynb | nikosavola/cirq-on-iqm | 990bb1f3629aaa5a8af14302335ec283dfe4eae1 | [
"Apache-2.0"
] | null | null | null | examples/usage.ipynb | nikosavola/cirq-on-iqm | 990bb1f3629aaa5a8af14302335ec283dfe4eae1 | [
"Apache-2.0"
] | null | null | null | 23.803774 | 279 | 0.547559 | [
[
[
"import cirq\nfrom cirq_iqm import Adonis, circuit_from_qasm\nfrom cirq_iqm.iqm_gates import IsingGate, XYGate",
"_____no_output_____"
]
],
[
[
"# The Adonis architecture\n\nQubit connectivity:\n```\n QB1\n |\nQB4 - QB3 - QB2\n |\n QB5\n```\nConstruct an `IQMDevice` instance representing the Adonis architecture",
"_____no_output_____"
]
],
[
[
"adonis = Adonis()\nprint(adonis.NATIVE_GATES)\nprint(adonis.NATIVE_GATE_INSTANCES)\nprint(adonis.qubits)",
"_____no_output_____"
]
],
[
[
"# Creating a quantum circuit\n\nCreate a quantum circuit and insert native gates",
"_____no_output_____"
]
],
[
[
"a, b, c = adonis.qubits[:3]\ncircuit = cirq.Circuit(device=adonis)\ncircuit.append(cirq.X(a))\ncircuit.append(cirq.PhasedXPowGate(phase_exponent=0.3, exponent=0.5)(c))\ncircuit.append(cirq.CZ(a, c))\ncircuit.append(cirq.YPowGate(exponent=1.1)(c))\nprint(circuit)",
"_____no_output_____"
]
],
[
[
"-----\nInsert non-native gates, which are immediately decomposed into native ones",
"_____no_output_____"
]
],
[
[
"circuit.append(IsingGate(0.2)(a, c))\ncircuit.append(XYGate(0.5)(a, c))\ncircuit.append(cirq.HPowGate(exponent=-0.4)(a))\nprint(circuit)",
"_____no_output_____"
]
],
[
[
"# Optimizing a quantum circuit\n\nUse the `IQMDevice.simplify_circuit` method to run a sequence of optimization passes on a circuit",
"_____no_output_____"
]
],
[
[
"circuit = cirq.Circuit(device=adonis)\ncircuit.append(cirq.H(a))\ncircuit.append(cirq.CNOT(a, c))\ncircuit.append(cirq.measure(a, c, key='result'))\nprint(circuit)",
"_____no_output_____"
],
[
"adonis.simplify_circuit(circuit)\nprint(circuit)",
"_____no_output_____"
]
],
[
[
"# Simulating a quantum circuit\n\nCircuits that contain IQM-native gates can be simulated using the standard Cirq simulators",
"_____no_output_____"
]
],
[
[
"sim = cirq.Simulator()\nsamples = sim.run(circuit, repetitions=100)\n\nprint('Samples:')\nprint(samples.histogram(key='result'))\nprint('\\nState before the measurement:')\nresult = sim.simulate(circuit[:-1])\nprint(result)",
"_____no_output_____"
]
],
[
[
"Note that the above output vector represents the state before the measurement in the optimized circuit, not the original one, which would have the same phase for both terms. `IQMDevice.simplify_circuit` has eliminated a `ZPowGate` which has no effect on the measurement.\n\n---\n\n# Creating a quantum circuit from an OpenQASM 2.0 program\n\nThe OpenQASM standard gate set has been extended with the IQM native gates",
"_____no_output_____"
]
],
[
[
"qasm_program = \"\"\"\n OPENQASM 2.0;\n include \"qelib1.inc\";\n qreg q[3];\n creg meas[3];\n rx(1.7) q[1];\n h q[0];\n cx q[1], q[2];\n ising(-0.6) q[0], q[2]; // OpenQASM extension\n\"\"\"\ncircuit = circuit_from_qasm(qasm_program)\nprint(circuit)",
"_____no_output_____"
]
],
[
[
"Decompose the circuit for the Adonis architecture",
"_____no_output_____"
]
],
[
[
"decomposed = adonis.map_circuit(circuit)\nprint(decomposed)",
"_____no_output_____"
]
],
[
[
"See the `examples` directory for more examples.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d088b70b9a7b61249d85250f8dfdfe6d78f89738 | 46,351 | ipynb | Jupyter Notebook | Write Up.ipynb | jrapudg/Advanced-Finding-Lane-Lines | 73c5691a33eda25ec3833222da6e8a1bc4db194b | [
"MIT"
] | null | null | null | Write Up.ipynb | jrapudg/Advanced-Finding-Lane-Lines | 73c5691a33eda25ec3833222da6e8a1bc4db194b | [
"MIT"
] | null | null | null | Write Up.ipynb | jrapudg/Advanced-Finding-Lane-Lines | 73c5691a33eda25ec3833222da6e8a1bc4db194b | [
"MIT"
] | null | null | null | 54.659198 | 677 | 0.584755 | [
[
[
"# Advanced Lane Finding Project\n## The goals / steps of this project are the following:\n* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n* Apply a distortion correction to raw images.\n* Use color transforms, gradients, etc., to create a thresholded binary image.\n* Apply a perspective transform to rectify binary image (\"birds-eye view\").\n* Detect lane pixels and fit to find the lane boundary.\n* Determine the curvature of the lane and vehicle position with respect to center.\n* Warp the detected lane boundaries back onto the original image.\n* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n\n[//]: # (Image References)\n\n[image1]: ./writeup_images/image_1_chess_distorsion.png \"Distorsion Correction Chessborad\"\n[image2]: ./writeup_images/image_2_distorsion_straight_lines.png \"Distorsion Correction\"\n[image3]: ./writeup_images/image_3_warped.png \"Warped Image\"\n\n[image4]: ./writeup_images/image_4_color_1.png\n[image5]: ./writeup_images/image_5_color.png\n[image6]: ./writeup_images/image_6_color.png\n\n[image7]: ./writeup_images/image_7_thresh.png\n[image8]: ./writeup_images/image_8_thresh.png\n[image9]: ./writeup_images/image_9_thresh.png\n[image10]: ./writeup_images/image_10_thresh.png\n[image11]: ./writeup_images/image_11_thresh.png\n[image12]: ./writeup_images/image_12_thresh.png\n[image13]: ./writeup_images/image_13_thresh.png\n[image14]: ./writeup_images/image_14_thresh.png\n[image15]: ./writeup_images/image_15_thresh.png\n[image16]: ./writeup_images/image_16_thresh.png\n[image17]: ./writeup_images/image_17_thresh.png\n[image18]: ./writeup_images/image_18_thresh.png\n[image19]: ./writeup_images/image_19_thresh.png\n[image20]: ./writeup_images/image_20_thresh.png\n[image21]: ./writeup_images/image_window_final_1.png\n[image22]: ./writeup_images/image_window_final_2.png\n[image23]: 
./writeup_images/image_window_final_3.png\n[image24]: ./writeup_images/final_1.png\n[image25]: ./writeup_images/final_2.png\n[image26]: ./writeup_images/final_3.png\n[image27]: ./writeup_images/final_4.png\n[image28]: ./writeup_images/final_5.png\n[image29]: ./writeup_images/final_6.png\n\n[video1]: ./project_video_FINAL.mp4 \"Video1\"\n[video2]: ./challenge_video_output_FINAL.mp4 \"Video2\"\n\n\n### Camera Calibration\n\n#### 1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.\n\nThe code for this step is as follows:\n```python\ndef camera_calibration(img, objpoints, imgpoints):\n original = img.copy()\n gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n # Find the chessboard corners\n ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n\n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp)\n imgpoints.append(corners)\n\n # Draw and display the corners\n img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n f.subplots_adjust(hspace = .2, wspace=.05)\n #cv2.imwrite('origin'+str(i+1)+'.jpg',original)\n #cv2.imwrite('corners_detected'+str(i+1)+'.jpg',img)\n ax1.imshow(original)\n ax1.set_title('Original Image '+str(i+1), fontsize=30)\n ax2.imshow(img)\n ax2.set_title('Corners detected '+str(i+1),fontsize=30)\n \n return objpoints, imgpoints\n\ndef cal_matrix(imge, objpoints, imgpoints):\n # Use cv2.calibrateCamera() and cv2.undistort()\n\n ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, imge.shape[:-1],None,None)\n \n return mtx, dist\n\ndef undistort(img,mtx,dist):\n undist = cv2.undistort(img,mtx,dist,None,mtx)\n return undist\n\nobjp = np.zeros((6*9,3), np.float32)\nobjp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n\n#Arrays to store object points and image points from all the images.\nobjpoints = [] # 3d points in real world space\nimgpoints = [] # 2d 
points in image plane.\n\n# Make a list of calibration images\nimages = glob.glob('camera_cal/calibration*.jpg')\n\n# Step through the list and search for chessboard corners\nfor i, fname in enumerate(images):\n img = mpimg.imread(fname)\n objpoints, imgpoints = camera_calibration(img, objpoints, imgpoints)\n\n \ndst = mpimg.imread('camera_cal/calibration1.jpg')\nframe = mpimg.imread('test_images/test4.jpg')\n\nmtx, dist = cal_matrix(dst, objpoints, imgpoints)\n\n#Chessboard image Undistortion\nudst = undistort(dst, mtx,dist)\n#Frame of the Video Undistortion\nundistorted = undistort(frame, mtx,dist)\n\n```\n\nI start by preparing \"object points\", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, `objp` is just a replicated array of coordinates, and `objpoints` will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. `imgpoints` will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection. This is done using the `camera_calibration(img, objpoints, imgpoints)` function.\n\nI then used the output `objpoints` and `imgpoints` to compute the camera calibration (mtx) and distortion coefficients (dst) using the `cal_matrix(imge, objpoints, imgpoints)` function. I applied this distortion correction to the test image of the chessboard using the `undistort(img,mtx,dist)` function and obtained this result: \n\n![alt text][image1]\n\n\n### Pipeline (single images)\n\n#### 1. Provide an example of a distortion-corrected image.\n\nTo demonstrate this step, I will describe how I apply the distortion correction to one of the test images. 
As described above, once the mtx and dist matrices are calculated, the distortion correction of the previous image is made using the `undistort(img,mtx,dist)` function, obtaining this result: \n![alt text][image2]\n\n\n#### 2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.\n\nI used a combination of color and gradient thresholds to generate a binary image as follows:\n```python\ndef abs_sobel_thresh(img, orient='x', sobel_kernel=3, thresh=(50,100)):\n # Calculate directional gradient\n # Apply threshold\n # Apply the following steps to img\n # 1) Convert to grayscale\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # 2) Take the derivative in x or y given orient = 'x' or 'y'\n if (orient=='x'):\n sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0)\n elif (orient == 'y'):\n sobel = cv2.Sobel(gray, cv2.CV_64F, 0, 1)\n # 3) Take the absolute value of the derivative or gradient\n abs_sobel = np.absolute(sobel)\n # 4) Scale to 8-bit (0 - 255) then convert to type = np.uint8\n scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))\n # 5) Create a mask of 1's where the scaled gradient magnitude \n # is > thresh_min and < thresh_max\n grad_binary = np.zeros_like(scaled_sobel)\n grad_binary[(scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1\n\n return grad_binary\n\ndef mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):\n # Calculate gradient magnitude\n # Apply threshold\n # Use the b channel of the Lab color space\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)[:,:,2]\n # Take both Sobel x and y gradients\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # Calculate the gradient magnitude\n gradmag = np.sqrt(sobelx**2 + sobely**2)\n # Rescale to 8 bit\n scale_factor = np.max(gradmag)/255 \n gradmag = (gradmag/scale_factor).astype(np.uint8) \n # Create a binary 
image of ones where threshold is met, zeros otherwise\n mag_binary = np.zeros_like(gradmag)\n mag_binary[(gradmag >= mag_thresh[0]) & (gradmag <= mag_thresh[1])] = 1\n\n return mag_binary\n\ndef dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):\n # Calculate gradient direction\n # Apply threshold\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Calculate the x and y gradients\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # Take the absolute value of the gradient direction, \n # apply a threshold, and create a binary image result\n absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))\n dir_binary = np.zeros_like(absgraddir)\n dir_binary[(absgraddir >= thresh[0]) & (absgraddir <= thresh[1])] = 1\n return dir_binary\n\ndef img_thresh(img, s_sobel_thresh=(8, 100), sx_thresh=(10, 100)):\n img = np.copy(img)\n ksize = 3\n # Apply each of the thresholding functions\n # Sobel x\n gradx = abs_sobel_thresh(img, orient='x', sobel_kernel=ksize, thresh=(50, 150))\n # Sobel y\n grady = abs_sobel_thresh(img, orient='y', sobel_kernel=ksize, thresh=(50, 150))\n # Magnitud\n mag_binary = mag_thresh(img, sobel_kernel=ksize, mag_thresh=(30, 100))\n # Dir\n dir_binary = dir_threshold(img, sobel_kernel=ksize, thresh=(0.7, 1.3))\n\n combined_grad = np.zeros_like(dir_binary)\n combined_grad[((gradx == 1) & (grady == 1)) | ((mag_binary == 1) & (dir_binary == 1))] = 1\n\n #def color_thresh_combined(img, s_thresh, l_thresh, v_thresh, b_thresh):\n v_thresh = [230,255]\n s_thresh = [235,255]\n l_thresh = [215,255]\n b_thresh = [230,255]\n lab_b_thresh = [195,255]\n \n\n hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n V_binary = hsv[:,:,2]\n V_binary = V_binary*(255/np.max(V_binary))\n V_thresh_binary= np.zeros_like(V_binary)\n V_thresh_binary[(V_binary >= v_thresh[0]) & (V_binary <= v_thresh[1])] = 1\n\n hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)\n S_binary = hls[:,:,2]\n max_sat = 
np.max(S_binary)\n if max_sat >= 245:\n S_binary = S_binary*(210/np.max(S_binary))\n # Threshold x gradient\n S_thresh_binary= np.zeros_like(S_binary)\n S_thresh_binary[(S_binary >= s_thresh[0]) & (S_binary <= s_thresh[1])] = 1\n\n\n luv = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)\n L_binary = luv[:,:,0]\n max_l = np.max(L_binary)\n L_binary = L_binary*(255/np.max(L_binary))\n # Threshold x gradient\n L_thresh_binary= np.zeros_like(L_binary)\n L_thresh_binary[(L_binary >= l_thresh[0]) & (L_binary <= l_thresh[1])] = 1\n\n lab = cv2.cvtColor(img, cv2.COLOR_RGB2Lab)\n LAB_B_binary = lab[:,:,2]\n max_value = np.max(LAB_B_binary)\n if ((max_value <= 190)&((max_l < 252)|(max_sat < 220))):\n if (max_value <= 170):\n LAB_B_binary = LAB_B_binary*(210/np.max(LAB_B_binary))\n else:\n LAB_B_binary = LAB_B_binary*(255/np.max(LAB_B_binary)) \n lab_B_thresh_binary= np.zeros_like(LAB_B_binary)\n lab_B_thresh_binary[(LAB_B_binary >= lab_b_thresh[0]) & (LAB_B_binary <= lab_b_thresh[1])] = 1\n \n \n B_binary = img[:,:,0]\n max_blue = np.max(B_binary)\n #print(max_blue)\n # Threshold x gradient\n if max_blue <= 238:\n B_binary= B_binary*(255/np.max(B_binary))\n B_thresh_binary = np.zeros_like(B_binary)\n B_thresh_binary[(B_binary >= b_thresh[0]) & (B_binary <= b_thresh[1])] = 1\n\n color_binary= np.zeros_like(B_binary)\n color_binary[((V_thresh_binary == 1) | (S_thresh_binary == 1) | (L_thresh_binary == 1) | (B_thresh_binary == 1))] = 1\n \n # Sobel x\n sobelx = cv2.Sobel(L_binary, cv2.CV_64F, 1, 0) # Take the derivative in x\n abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal\n scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))\n \n # Threshold x gradient\n sxbinary = np.zeros_like(scaled_sobel)\n sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1\n \n # Threshold color channel\n s_binary = np.zeros_like(S_binary)\n sobel = cv2.Sobel(S_binary , cv2.CV_64F, 1, 0)\n abs_sobel = np.absolute(sobel)\n # 4) Scale 
to 8-bit (0 - 255) then convert to type = np.uint8\n scaled_sobel_s = np.uint8(255*abs_sobel/np.max(abs_sobel))\n s_binary[(scaled_sobel_s >= s_sobel_thresh[0]) & (scaled_sobel_s <= s_sobel_thresh[1])] = 1\n # Stack each channel\n #color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255\n combined = np.zeros_like(sxbinary)\n combined[((color_binary == 1)|(lab_B_thresh_binary == 1)|((s_binary == 1) & (sxbinary == 1)))] = 1\n \n return combined\n```\n\nRegarding color transforms, I use and filter the Value Channel of the HSV color space with a threshold of v_thresh = [230,255], the Saturation Channel of the HLS color space with a threshold of s_thresh = [235,255], the Light Channel of the LUV color space with a threshold of l_thresh = [215,255] and the Blue Channel of the RGB color space with a threshold of b_thresh = [230,255].\n \nHere are some examples of my output for this step.\n\n### Image 1\n![alt text][image4]\n### Image 2\n![alt text][image5]\n### Image 3\n![alt text][image6]\n\nAs can be seen, this works well for images 1 and 2. However, when the image is dark or shadowed as in Image 3 (from the challenge video), the color space thresholds are not enough to find the lanes. Therefore, I use the gradient of the Saturation Channel and the Light Channel with thresholds of s_sobel_thresh = (8, 100) and sx_thresh = (10, 100), respectively.\n \nThe color `Yellow` of the lines is something we can also take advantage of. In the LAB color space, positive values of the B Channel represent the color yellow. 
Thus, the B Channel of the LAB color space with a threshold of lab_b_thresh = [195,255] is used.\n\nHere are some examples of my output for this step.\n\n### Image 4\n![alt text][image7]\n### Image 5\n![alt text][image8]\n### Image 6\n![alt text][image9]\n### Image 7\n![alt text][image10]\n### Image 8\n![alt text][image11]\n### Image 9\n![alt text][image12]\n### Image 10\n![alt text][image13]\n### Image 11\n![alt text][image14]\n### Image 12\n![alt text][image15]\n### Image 13\n![alt text][image16]\n### Image 14\n![alt text][image17]\n### Image 15\n![alt text][image18]\n### Image 16\n![alt text][image19]\n### Image 17\n![alt text][image20]\n\n\nFrom these images, one can observe that the `Light Sobel Threshold` and `Saturation Sobel Threshold` images are really noisy. However, if they are combined with a logical `and` operator, as in the `Light & Saturation Thresholds` image, I can easily identify lines in shadowed and dark images.\nThe `Combined Thresholds` image shows the combination of all color spaces, the B channel of the LAB color space and the Light and Saturation Sobel binary images joined with an `or` operator.\n\n#### 3. 
Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.\n\nThe code for my perspective transform includes a function called `perspective_transform(img, offset = 320)`, which appears as follows:\n```python\n\ndef perspective_transform(img, offset = 320):\n #define 4 source points src = np.float32([[,],[,],[,],[,]])\n #Note: you could pick any four of the detected corners \n # as long as those four corners define a rectangle\n #One especially smart way to do this would be to use four well-chosen\n # corners that were automatically detected during the undistortion steps\n #We recommend using the automatic detection of corners in your code\n src = np.float32([(0.451*img.shape[1], 0.6388*img.shape[0]), (0.1585*img.shape[1], img.shape[0]), (0.88*img.shape[1], img.shape[0]), (0.55*img.shape[1], 0.6388*img.shape[0])])\n # For destination points, I'm arbitrarily choosing some points to be\n # a nice fit for displaying our warped result \n # again, not exact, but close enough for our purposes\n dst = np.float32([(offset, 0), (offset, img.shape[0]), (img.shape[1]-offset, img.shape[0]), (img.shape[1]-offset, 0)]) # d) use cv2.getPerspectiveTransform() to get M, the transform matrix\n M = cv2.getPerspectiveTransform(src,dst)\n inv_M = cv2.getPerspectiveTransform(dst,src)\n # e) use cv2.warpPerspective() to warp your image to a top-down view\n warped = cv2.warpPerspective(img,M,(img.shape[1], img.shape[0]),flags=cv2.INTER_LINEAR)\n return warped, inv_M\n\n```\n\n\n The function takes as inputs an image (`img`), as well as the offset used to define the destination points (`dst`). The source (`src`) points are defined inside the function. 
I chose to hardcode the source and destination points in the following manner:\n```python\noffset = 320\nsrc = np.float32([(0.451*img.shape[1], 0.6388*img.shape[0]),\n (0.1585*img.shape[1], img.shape[0]), \n (0.88*img.shape[1], img.shape[0]), \n (0.55*img.shape[1], 0.6388*img.shape[0])])\n # For destination points, I'm arbitrarily choosing some points to be\n # a nice fit for displaying our warped result \n # again, not exact, but close enough for our purposes\ndst = np.float32([(offset, 0), \n (offset, img.shape[0]), \n (img.shape[1]-offset, img.shape[0]), \n (img.shape[1]-offset, 0)]) \n # d) use cv2.getPerspectiveTransform() to get M, the transform matrix\n```\n\nThis resulted in the following source and destination points:\n\n| Source | Destination | \n|:-------------:|:-------------:| \n| 585, 460 | 320, 0 | \n| 203, 720 | 320, 720 |\n| 1127, 720 | 960, 720 |\n| 695, 460 | 960, 0 |\n\nI verified that my perspective transform was working as expected by drawing the test image and its warped counterpart and checking that the lines appear parallel in the warped image.\n\n![alt text][image3]\n\n#### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial.\n\nThe identification of the lane-line pixels is done by two functions. 
First, the `find_lane_pixels(binary_warped)` function which code is:\n```python\n\ndef find_lane_pixels(binary_warped):\n # Take a histogram of the bottom half of the image\n histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)\n # Create an output image to draw on and visualize the result\n out_img = np.dstack((binary_warped, binary_warped, binary_warped))\n # Find the peak of the left and right halves of the histogram\n # These will be the starting point for the left and right lines\n midpoint = np.int(histogram.shape[0]//2)\n leftx_base = np.argmax(histogram[:midpoint])\n rightx_base = np.argmax(histogram[midpoint:]) + midpoint\n\n # HYPERPARAMETERS\n # Choose the number of sliding windows\n nwindows = 9\n # Set the width of the windows +/- margin\n margin = 100\n # Set minimum number of pixels found to recenter window\n minpix = 30\n\n # Set height of windows - based on nwindows above and image shape\n window_height = np.int(binary_warped.shape[0]//nwindows)\n # Identify the x and y positions of all nonzero pixels in the image\n nonzero = binary_warped.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n # Current positions to be updated later for each window in nwindows\n leftx_current = leftx_base\n rightx_current = rightx_base\n\n # Create empty lists to receive left and right lane pixel indices\n left_lane_inds = []\n right_lane_inds = []\n\n # Step through the windows one by one\n for window in range(nwindows):\n # Identify window boundaries in x and y (and right and left)\n win_y_low = binary_warped.shape[0] - (window+1)*window_height\n win_y_high = binary_warped.shape[0] - window*window_height\n ### TO-DO: Find the four below boundaries of the window ###\n \n win_xleft_low = leftx_current - margin # Update this\n win_xleft_high = leftx_current + margin # Update this\n win_xright_low = rightx_current - margin # Update this\n win_xright_high = rightx_current + margin # Update this\n \n # Draw the windows on the 
visualization image\n cv2.rectangle(out_img,(win_xleft_low,win_y_low),\n (win_xleft_high,win_y_high),(0,255,0), 2) \n cv2.rectangle(out_img,(win_xright_low,win_y_low),\n (win_xright_high,win_y_high),(0,255,0), 2) \n \n ### TO-DO: Identify the nonzero pixels in x and y within the window ###\n good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]\n \n good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]\n \n # Append these indices to the lists\n left_lane_inds.append(good_left_inds)\n right_lane_inds.append(good_right_inds)\n \n ### TO-DO: If you found > minpix pixels, recenter next window ###\n ### (`right` or `leftx_current`) on their mean position ###\n if len(good_left_inds) > minpix:\n leftx_current = np.int(np.mean(nonzerox[good_left_inds]))\n if len(good_right_inds) > minpix: \n rightx_current = np.int(np.mean(nonzerox[good_right_inds]))\n\n # Concatenate the arrays of indices (previously was a list of lists of pixels)\n try:\n left_lane_inds = np.concatenate(left_lane_inds)\n right_lane_inds = np.concatenate(right_lane_inds)\n except ValueError:\n # Avoids an error if the above is not implemented fully\n pass\n\n # Extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds] \n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n \n out_img[lefty, leftx] = [255, 0, 0]\n out_img[righty, rightx] = [0, 0, 255]\n \n left_fit, right_fit = (None, None)\n \n # Fit a second order polynomial to each\n if len(leftx) != 0:\n left_fit = np.polyfit(lefty, leftx, 2)\n if len(rightx) != 0:\n right_fit = np.polyfit(righty, rightx, 2)\n \n return left_fit, right_fit, leftx, lefty, rightx, righty, out_img\n```\nIt takes as input the binary_warped image and performs a histogram filter to find the x coordinates of the peaks 
where there are more pixels. Then, from the bottom to the top of the image, a search is performed through sliding windows. The number of windows is preset and the starting points are the x coordinates previously found in the histogram step. During this search, each time that pixels are found, the window is re-centered for the next step. Finally, I fit my lane lines with a second-order polynomial using `np.polyfit(y, x, 2)`, like this:\n\n### Image 1 Window_Search\n\n![alt text][image21]\n\n### Image 2 Window_Search\n![alt text][image22]\n\nThe second function is used with prior information. Once you have found a polynomial for the lane-lines, it is not necessary to do a blind search. The `search_around_poly(binary_warped, left_fit_search, right_fit_search)` function searches a region defined by the previous polynomial fit and a margin. Its code is:\n\n```python\ndef search_around_poly(binary_warped, left_fit_search, right_fit_search):\n # HYPERPARAMETER\n # Choose the width of the margin around the previous polynomial to search\n # The quiz grader expects 100 here, but feel free to tune on your own! 
\n margin = 80\n\n nonzero = binary_warped.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n \n ### TO-DO: Set the area of search based on activated x-values ###\n ### within the +/- margin of our polynomial function ###\n ### Hint: consider the window areas for the similarly named variables ###\n ### in the previous quiz, but change the windows to our new search area ###\n left_lane_inds = ((nonzerox > (left_fit_search[0]*(nonzeroy**2) + left_fit_search[1]*nonzeroy + \n left_fit_search[2] - margin)) & (nonzerox < (left_fit_search[0]*(nonzeroy**2) + \n left_fit_search[1]*nonzeroy + left_fit_search[2] + margin)))\n right_lane_inds = ((nonzerox > (right_fit_search[0]*(nonzeroy**2) + right_fit_search[1]*nonzeroy + \n right_fit_search[2] - margin)) & (nonzerox < (right_fit_search[0]*(nonzeroy**2) + \n right_fit_search[1]*nonzeroy + right_fit_search[2] + margin)))\n \n # Again, extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds] \n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n \n ## Visualization ##\n # Create an image to draw on and an image to show the selection window\n out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255\n window_img = np.zeros_like(out_img)\n # Color in left and right line pixels\n out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]\n out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]\n\n # Fit new polynomials\n left_fit_new, right_fit_new = (None, None)\n if len(leftx) != 0:\n # Fit a second order polynomial to each\n left_fit_new = np.polyfit(lefty, leftx, 2)\n if len(rightx) != 0:\n right_fit_new = np.polyfit(righty, rightx, 2)\n\n ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])\n \n if left_fit_new is not None:\n left_fitx = left_fit_new[0]*ploty**2 + left_fit_new[1]*ploty + left_fit_new[2]\n left_line_window1 = 
np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])\n left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, \n ploty])))])\n left_line_pts = np.hstack((left_line_window1, left_line_window2))\n cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))\n \n if right_fit_new is not None:\n right_fitx = right_fit_new[0]*ploty**2 + right_fit_new[1]*ploty + right_fit_new[2]\n right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])\n right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, \n ploty])))])\n right_line_pts = np.hstack((right_line_window1, right_line_window2))\n cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))\n\n # Draw the lane onto the warped blank image\n \n result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)\n\n \n return left_fit_new, right_fit_new, leftx, lefty, rightx, righty, result\n```\n![alt text][image23]\n\n#### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.\n\nThe radius of curvature is calculated in the `calculate_radius_of_curvature` method of the `Line` class (shown in the Discussion section below). The pixel-space fit is first converted to world space using ym_per_pix = 30/720 and xm_per_pix = 3.7/700 meters per pixel, and the radius R = (1+(2Ay+B)^2)^(3/2)/|2A| is then evaluated at the bottom of the image. The position of the vehicle with respect to center is obtained with the `calculate_line_base_pos` method, which measures the horizontal distance between each line's base point and the center of the image.\n\n#### 6. 
Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.\n\nI implemented this step in lines as follows:\n\n```python\ndef draw_lines(img, inv_M, left_fit, right_fit):\n ploty = np.linspace(0, img.shape[0]-1, img.shape[0])\n left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]\n right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]\n \n margin = 50\n out_img = np.zeros_like(img).astype(np.uint8)\n left_line_window0 = np.array([np.flipud(np.transpose(np.vstack([left_fitx-margin, ploty])))]) \n left_line_window1 = np.array([np.transpose(np.vstack([left_fitx, ploty]))])\n left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) \n right_line_window3 = np.array([np.transpose(np.vstack([right_fitx+margin, ploty]))]) \n \n central_line_pts = np.hstack((left_line_window1, left_line_window2))\n \n left_side_pts = np.hstack((left_line_window1,left_line_window0))\n right_side_pts = np.hstack((left_line_window2, right_line_window3))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(out_img, np.int_([left_side_pts]), (255,255,0))\n cv2.fillPoly(out_img, np.int_([right_side_pts]), (255,255, 0))\n cv2.fillPoly(out_img, np.int_([central_line_pts]), (0,255, 0))\n warped_image = cv2.warpPerspective(out_img,inv_M,(out_img.shape[1], out_img.shape[0]),flags=cv2.INTER_LINEAR)\n result = cv2.addWeighted(img, 1, warped_image , 0.3, 0)\n \n # Plot the polynomial lines onto the image\n #plt.plot(left_fitx, ploty, color='yellow')\n #plt.plot(right_fitx, ploty, color='yellow')\n return result\n\ndef draw_info(img,left_curverad, right_curverad, center_difference, side_position):\n # Display radius of curvature and vehicle offset\n cv2.putText(img, 'Coded by Juan ALVAREZ', (10, 50), cv2.FONT_HERSHEY_PLAIN, 2,\n (255, 63, 150), 4)\n # Display radius of curvature and vehicle offset\n cv2.putText(img, 'Radius of Curvature of Left line is ' + 
str(round(left_curverad/1000, 3)) + '(Km)', (10, 100), cv2.FONT_HERSHEY_PLAIN, 2,\n (255, 63, 150), 4)\n cv2.putText(img, 'Radius of Curvature of Right line is ' + str(round(right_curverad/1000, 3)) + '(Km)', (10, 150), cv2.FONT_HERSHEY_PLAIN, 2,\n (255, 63, 150), 4)\n cv2.putText(img, 'Vehicle is ' + str(abs(round(center_difference, 3))) + 'm ' + side_position + ' of center', (10, 200), cv2.FONT_HERSHEY_PLAIN, \n 2, (255, 63, 150), 4) \n return img\n```\nI use two functions: `draw_lines(img, inv_M, left_fit, right_fit)` and `draw_info(img,left_curverad, right_curverad, center_difference, side_position)`. \n\nThe first one performs the inverse perspective transform with the matrix inv_M and plots the lines and the region between them on the original image. The second one draws my name, the radius of curvature and the position of the car with respect to the center of the camera and the detected lines.\n\nHere is an example of my result on a test image:\n\n![alt text][image25]\n\n---\n\n### Pipeline (video)\n\n#### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).\n\nHere's a [link to my video result of the project video](./project_video_FINAL.mp4)\n\nHere's a [link to my video result of the challenge video](./challenge_video_output_FINAL.mp4)\n\n---\n\n### Discussion\n\n#### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?\n\nHere I'll talk about the approach I took, what techniques I used, what worked and why, where the pipeline might fail and how I might improve it if I were going to pursue this project further. \n\nFirst, through thresholding color spaces and gradients of the image and combining them, the lines were found. 
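As a small standalone illustration of why the binary masks above are combined the way they are, here is a minimal sketch on toy arrays (illustration only, not part of the project pipeline; the mask values are made up):

```python
import numpy as np

# Two hypothetical binary masks, as produced by a color or a gradient threshold
color_binary = np.array([[1, 0, 1],
                         [0, 0, 1]])
grad_binary = np.array([[1, 1, 0],
                        [0, 0, 1]])

# A logical 'and' keeps only pixels that pass both thresholds (suppresses noise)
both = np.zeros_like(color_binary)
both[(color_binary == 1) & (grad_binary == 1)] = 1

# A logical 'or' keeps pixels that pass either threshold (recovers faint lines)
either = np.zeros_like(color_binary)
either[(color_binary == 1) | (grad_binary == 1)] = 1

print(both.tolist())    # [[1, 0, 0], [0, 0, 1]]
print(either.tolist())  # [[1, 1, 1], [0, 0, 1]]
```

This is the same trade-off exploited by `img_thresh` above: the noisy Sobel masks are joined with `and`, and the masks from the different color channels are joined with `or`.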
This thresholding approach presented problems when the frame was darker and blurrier. Dark shadows represent a huge challenge for my approach. Furthermore, when a car approaches the line, it affects the line detection and modifies the fitting. I also made a Line class to keep track of the lines during the video, calculate some attributes of the line and perform a sanity check before drawing a new line. The sanity check consists of verifying that the detected lines:\n\n* are parallel\n* have a similar curvature with respect to the last line detected\n* have a similar horizontal distance to the car\n\nI save the fit values of the lines of the last 5 detections. If a line is not detected, the first of the 5 records is deleted.\n\nThe code for the line class is:\n\n```python\nclass Line():\n \n def __init__(self):\n # was the line detected in the last iteration?\n self.detected = False \n # x values of the last n fits of the line\n self.recent_xfitted = [] \n #average x values of the fitted line over the last n iterations\n self.bestx = None \n #polynomial coefficients averaged over the last n iterations\n self.best_fit = None \n #polynomial coefficients for the most recent fit\n self.current_fit = [np.array([False])] \n #radius of curvature of the line in some units\n self.radius_of_curvature = None \n #distance in meters of vehicle center from the line\n self.line_base_pos = None \n #difference in fit coefficients between last and new fits\n self.diffs = np.array([0,0,0], dtype='float') \n #plot y\n self.ploty = None\n #coordinates of base position\n self.base_xy = None\n #x values for detected line pixels\n self.allx = None \n #y values for detected line pixels\n self.ally = None\n self.reset = False\n \n def calculate_radius_of_curvature(self):\n if self.best_fit is not None and self.ploty is not None:\n ym_per_pix = 30/720 # meters per pixel in y dimension\n xm_per_pix = 3.7/700 # meters per pixel in x dimension\n\n\n # Define y-value where we want radius of 
curvature\n # We'll choose the maximum y-value, corresponding to the bottom of the image\n y_eval = np.max(self.ploty)\n\n if self.allx is not None:\n # Fit new polynomials to x,y in world space\n fit_world = np.polyfit(self.ploty*ym_per_pix, self.bestx*xm_per_pix, 2)\n\n # Calculate the new radii of curvature\n # R = (1+(2*A*y+B)**2)**(3/2) / |2*A|, with y converted to meters\n radius = (1+(2*fit_world[0]*y_eval*ym_per_pix+fit_world[1])**2)**(3/2)/(np.absolute(2*fit_world[0]))\n return radius\n \n def update_radius_of_curvature(self, radius):\n self.radius_of_curvature = radius\n \n def calculate_line_base_pos(self, center):\n self.line_base_pos = self.base_xy[0] - center\n \n def update_base_xy(self):\n y_val = np.max(self.ploty)\n x_val = self.best_fit[0]*y_val**2 + self.best_fit[1]*y_val + self.best_fit[2]\n self.base_xy = (x_val, y_val)\n \n def update_line_fit(self, line_fit, x_coordinates, y_coordinates):\n # add a found fit to the line, up to n\n if line_fit is not None:\n \n if self.best_fit is not None:\n # if we have a best fit, see how this new fit compares\n self.diffs = abs(line_fit-self.best_fit)\n if (self.diffs[0] > 0.001 or \\\n self.diffs[1] > 1 or \\\n self.diffs[2] > 100) and \\\n len(self.current_fit) > 0:\n # bad fit! abort! abort! ... 
well, unless there are no fits in the current_fit queue, then we'll take it\n self.detected = False\n else:\n self.detected = True\n \n self.allx = x_coordinates\n self.ally = y_coordinates\n \n fitx = line_fit[0]*self.ploty**2 + line_fit[1]*self.ploty + line_fit[2]\n \n self.recent_xfitted.append(fitx)\n self.current_fit.append(line_fit)\n \n if len(self.current_fit) > 5:\n # throw out old fits, keep newest n\n self.current_fit = self.current_fit[len(self.current_fit)-5:]\n self.recent_xfitted = self.recent_xfitted[len(self.recent_xfitted)-5:]\n \n self.best_fit = np.average(self.current_fit, axis=0)\n radius = self.calculate_radius_of_curvature()\n self.radius_of_curvature = radius\n self.update_base_xy()\n \n self.bestx = np.average(self.recent_xfitted, axis=0)\n else:\n self.detected = True\n \n self.current_fit = [line_fit]\n \n self.allx = x_coordinates\n self.ally = y_coordinates\n fitx = line_fit[0]*self.ploty**2 + line_fit[1]*self.ploty + line_fit[2]\n self.recent_xfitted = [fitx]\n \n self.bestx = fitx\n \n self.best_fit = line_fit\n radius = self.calculate_radius_of_curvature()\n self.radius_of_curvature = radius\n self.update_base_xy()\n \n # or remove one from the history, if not found\n else:\n self.detected = False\n \n if len(self.current_fit) > 1:\n # delete last line_fit\n self.current_fit = self.current_fit[:len(self.current_fit)-1]\n self.best_fit = np.average(self.current_fit, axis=0)\n self.recent_xfitted = self.recent_xfitted[:len(self.recent_xfitted)-1]\n self.bestx = np.average(self.recent_xfitted, axis=0)\n\n```\nThe sanity check is done by this code:\n\n```python\n#SANITY CHECK\nif left_fit is not None and right_fit is not None:\n # calculate x-intercept (bottom of image, x=image_height) for fits\n \n left_fit_bottom = left_fit[0]*height**2 + left_fit[1]*height + left_fit[2]\n right_fit_bottom = right_fit[0]*height**2 + right_fit[1]*height + right_fit[2]\n interception_bottom_difference = abs(right_fit_bottom-left_fit_bottom)\n\n 
left_fit_middle = left_fit[0]*(height/6)**2 + left_fit[1]*(height/6) + left_fit[2]\n right_fit_middle = right_fit[0]*(height/6)**2 + right_fit[1]*(height/6) + right_fit[2]\n interception_middle_difference = abs(right_fit_middle-left_fit_middle)\n\n if (abs(0.43*width - interception_bottom_difference) > 0.15*width) or (abs(0.42*width - interception_middle_difference) > 0.2*width):\n left_fit = None\n right_fit = None\n\n else:\n if (Left.radius_of_curvature is not None) and (abs(measure_radius_of_curvature(leftx, lefty) - Left.radius_of_curvature) > 3*Left.radius_of_curvature):\n left_fit = None\n if (Right.radius_of_curvature is not None) and (abs(measure_radius_of_curvature(rightx, righty) - Right.radius_of_curvature) > 3*Right.radius_of_curvature):\n right_fit = None \n\n```\n\nThis helped a lot in accurately deciding whether or not to draw a line. More robust line-detection methods would improve the results even further. ",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d088bae9ee4b20042d6e3cc84397e758cddedda9 | 247,322 | ipynb | Jupyter Notebook | extra_capsnets.ipynb | jbagnato/handson-ml | 6a3072a12567495d5402a8fc66c60458fe06ca54 | [
"Apache-2.0"
] | 2 | 2019-04-10T04:51:02.000Z | 2020-08-23T22:39:04.000Z | extra_capsnets.ipynb | shafaypro/handson-ml | 6a3072a12567495d5402a8fc66c60458fe06ca54 | [
"Apache-2.0"
] | null | null | null | extra_capsnets.ipynb | shafaypro/handson-ml | 6a3072a12567495d5402a8fc66c60458fe06ca54 | [
"Apache-2.0"
] | 3 | 2019-08-11T21:20:40.000Z | 2020-02-05T20:07:42.000Z | 97.717108 | 50,612 | 0.831681 | [
[
[
"# Capsule Networks (CapsNets)",
"_____no_output_____"
],
[
"Based on the paper: [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829), by Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017).",
"_____no_output_____"
],
[
"Inspired in part from Huadong Liao's implementation: [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow).",
"_____no_output_____"
],
[
"# Introduction",
"_____no_output_____"
],
[
"Watch [this video](https://www.youtube.com/embed/pPN8d0E3900) to understand the key ideas behind Capsule Networks:",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\n\n# Display the video in an iframe:\nHTML(\"\"\"<iframe width=\"560\" height=\"315\"\n src=\"https://www.youtube.com/embed/pPN8d0E3900\"\n frameborder=\"0\"\n allowfullscreen></iframe>\"\"\")",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
],
[
"To support both Python 2 and Python 3:",
"_____no_output_____"
]
],
[
[
"from __future__ import division, print_function, unicode_literals",
"_____no_output_____"
]
],
[
[
"To plot pretty figures:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"We will need NumPy and TensorFlow:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf",
"_____no_output_____"
]
],
[
[
"# Reproducibility",
"_____no_output_____"
],
[
"Let's reset the default graph, in case you re-run this notebook without restarting the kernel:",
"_____no_output_____"
]
],
[
[
"tf.reset_default_graph()",
"_____no_output_____"
]
],
[
[
"Let's set the random seeds so that this notebook always produces the same output:",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\ntf.set_random_seed(42)",
"_____no_output_____"
]
],
[
[
"# Load MNIST",
"_____no_output_____"
],
[
"Yes, I know, it's MNIST again. But hopefully this powerful idea will work as well on larger datasets; time will tell.",
"_____no_output_____"
]
],
[
[
"from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"/tmp/data/\")",
"Extracting /tmp/data/train-images-idx3-ubyte.gz\nExtracting /tmp/data/train-labels-idx1-ubyte.gz\nExtracting /tmp/data/t10k-images-idx3-ubyte.gz\nExtracting /tmp/data/t10k-labels-idx1-ubyte.gz\n"
]
],
[
[
"Let's look at what these hand-written digit images look like:",
"_____no_output_____"
]
],
[
[
"n_samples = 5\n\nplt.figure(figsize=(n_samples * 2, 3))\nfor index in range(n_samples):\n plt.subplot(1, n_samples, index + 1)\n sample_image = mnist.train.images[index].reshape(28, 28)\n plt.imshow(sample_image, cmap=\"binary\")\n plt.axis(\"off\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"And these are the corresponding labels:",
"_____no_output_____"
]
],
[
[
"mnist.train.labels[:n_samples]",
"_____no_output_____"
]
],
[
[
"Now let's build a Capsule Network to classify these images. Here's the overall architecture, enjoy the ASCII art! ;-)\nNote: for readability, I left out two arrows: Labels → Mask, and Input Images → Reconstruction Loss.",
"_____no_output_____"
],
[
"```\n Loss\n ↑\n ┌─────────┴─────────┐\n Labels → Margin Loss Reconstruction Loss\n ↑ ↑\n Length Decoder\n ↑ ↑ \n Digit Capsules ────Mask────┘\n ↖↑↗ ↖↑↗ ↖↑↗\n Primary Capsules\n ↑ \n Input Images\n```",
"_____no_output_____"
],
[
"We are going to build the graph starting from the bottom layer, and gradually move up, left side first. Let's go!",
"_____no_output_____"
],
[
"# Input Images",
"_____no_output_____"
],
[
"Let's start by creating a placeholder for the input images (28×28 pixels, 1 color channel = grayscale).",
"_____no_output_____"
]
],
[
[
"X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name=\"X\")",
"_____no_output_____"
]
],
[
[
"# Primary Capsules",
"_____no_output_____"
],
[
"The first layer will be composed of 32 maps of 6×6 capsules each, where each capsule will output an 8D activation vector:",
"_____no_output_____"
]
],
[
[
"caps1_n_maps = 32\ncaps1_n_caps = caps1_n_maps * 6 * 6 # 1152 primary capsules\ncaps1_n_dims = 8",
"_____no_output_____"
]
],
[
[
"To compute their outputs, we first apply two regular convolutional layers:",
"_____no_output_____"
]
],
[
[
"conv1_params = {\n \"filters\": 256,\n \"kernel_size\": 9,\n \"strides\": 1,\n \"padding\": \"valid\",\n \"activation\": tf.nn.relu,\n}\n\nconv2_params = {\n \"filters\": caps1_n_maps * caps1_n_dims, # 256 convolutional filters\n \"kernel_size\": 9,\n \"strides\": 2,\n \"padding\": \"valid\",\n \"activation\": tf.nn.relu\n}",
"_____no_output_____"
],
[
"conv1 = tf.layers.conv2d(X, name=\"conv1\", **conv1_params)\nconv2 = tf.layers.conv2d(conv1, name=\"conv2\", **conv2_params)",
"_____no_output_____"
]
],
[
[
"Note: since we used a kernel size of 9 and no padding (for some reason, that's what `\"valid\"` means), the image shrunk by 9-1=8 pixels after each convolutional layer (28×28 to 20×20, then 20×20 to 12×12), and since we used a stride of 2 in the second convolutional layer, the image size was divided by 2. This is how we end up with 6×6 feature maps.",
"_____no_output_____"
],
[
"Next, we reshape the output to get a bunch of 8D vectors representing the outputs of the primary capsules. The output of `conv2` is an array containing 32×8=256 feature maps for each instance, where each feature map is 6×6. So the shape of this output is (_batch size_, 6, 6, 256). We want to chop the 256 into 32 vectors of 8 dimensions each. We could do this by reshaping to (_batch size_, 6, 6, 32, 8). However, since this first capsule layer will be fully connected to the next capsule layer, we can simply flatten the 6×6 grids. This means we just need to reshape to (_batch size_, 6×6×32, 8).",
"_____no_output_____"
]
],
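The feature-map sizes quoted above can be double-checked with a tiny helper (a hypothetical `conv_output_size()`, not part of this notebook) that computes the output size of a "valid" convolution:

```python
# Hypothetical helper to verify the 28x28 -> 20x20 -> 6x6 arithmetic above.
def conv_output_size(input_size, kernel_size, stride):
    # "valid" padding means no zero-padding around the input.
    return (input_size - kernel_size) // stride + 1

size_after_conv1 = conv_output_size(28, kernel_size=9, stride=1)               # 20
size_after_conv2 = conv_output_size(size_after_conv1, kernel_size=9, stride=2)  # 6
```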
[
[
"caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],\n name=\"caps1_raw\")",
"_____no_output_____"
]
],
[
[
"Now we need to squash these vectors. Let's define the `squash()` function, based on equation (1) from the paper:\n\n$\\operatorname{squash}(\\mathbf{s}) = \\dfrac{\\|\\mathbf{s}\\|^2}{1 + \\|\\mathbf{s}\\|^2} \\dfrac{\\mathbf{s}}{\\|\\mathbf{s}\\|}$\n\nThe `squash()` function will squash all vectors in the given array, along the given axis (by default, the last axis).\n\n**Caution**, a nasty bug is waiting to bite you: the derivative of $\\|\\mathbf{s}\\|$ is undefined when $\\|\\mathbf{s}\\|=0$, so we can't just use `tf.norm()`, or else it will blow up during training: if a vector is zero, the gradients will be `nan`, so when the optimizer updates the variables, they will also become `nan`, and from then on you will be stuck in `nan` land. The solution is to implement the norm manually by computing the square root of the sum of squares plus a tiny epsilon value: $\\|\\mathbf{s}\\| \\approx \\sqrt{\\sum\\limits_i{{s_i}^2}\\,\\,+ \\epsilon}$.",
"_____no_output_____"
]
],
[
[
"def squash(s, axis=-1, epsilon=1e-7, name=None):\n with tf.name_scope(name, default_name=\"squash\"):\n squared_norm = tf.reduce_sum(tf.square(s), axis=axis,\n keep_dims=True)\n safe_norm = tf.sqrt(squared_norm + epsilon)\n squash_factor = squared_norm / (1. + squared_norm)\n unit_vector = s / safe_norm\n return squash_factor * unit_vector",
"_____no_output_____"
]
],
[
[
"Now let's apply this function to get the output $\\mathbf{u}_i$ of each primary capsules $i$ :",
"_____no_output_____"
]
],
[
[
"caps1_output = squash(caps1_raw, name=\"caps1_output\")",
"_____no_output_____"
]
],
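As a quick sanity check, here is a NumPy re-implementation of the same formula (a hypothetical `squash_np`, not part of this notebook): every squashed vector's norm stays strictly below 1, and a zero vector comes out as zero without producing `nan`, thanks to the epsilon trick:

```python
import numpy as np

def squash_np(s, axis=-1, epsilon=1e-7):
    # Same formula as the TensorFlow squash() above, with the
    # epsilon-protected norm to avoid nan gradients at zero.
    squared_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    safe_norm = np.sqrt(squared_norm + epsilon)
    return (squared_norm / (1. + squared_norm)) * (s / safe_norm)

vectors = np.random.default_rng(0).normal(size=(5, 8))
norms = np.linalg.norm(squash_np(vectors), axis=-1)

zero_case = squash_np(np.zeros((1, 8)))  # no nan, just zeros
```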
[
[
"Great! We have the output of the first capsule layer. It wasn't too hard, was it? However, computing the next layer is where the fun really begins.",
"_____no_output_____"
],
[
"# Digit Capsules",
"_____no_output_____"
],
[
"To compute the output of the digit capsules, we must first compute the predicted output vectors (one for each primary / digit capsule pair). Then we can run the routing by agreement algorithm.",
"_____no_output_____"
],
[
"## Compute the Predicted Output Vectors",
"_____no_output_____"
],
[
"The digit capsule layer contains 10 capsules (one for each digit) of 16 dimensions each:",
"_____no_output_____"
]
],
[
[
"caps2_n_caps = 10\ncaps2_n_dims = 16",
"_____no_output_____"
]
],
[
[
"For each capsule $i$ in the first layer, we want to predict the output of every capsule $j$ in the second layer. For this, we will need a transformation matrix $\\mathbf{W}_{i,j}$ (one for each pair of capsules ($i$, $j$)), then we can compute the predicted output $\\hat{\\mathbf{u}}_{j|i} = \\mathbf{W}_{i,j} \\, \\mathbf{u}_i$ (equation (2)-right in the paper). Since we want to transform an 8D vector into a 16D vector, each transformation matrix $\\mathbf{W}_{i,j}$ must have a shape of (16, 8).",
"_____no_output_____"
],
[
"To compute $\\hat{\\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$), we will use a nice feature of the `tf.matmul()` function: you probably know that it lets you multiply two matrices, but you may not know that it also lets you multiply higher dimensional arrays. It treats the arrays as arrays of matrices, and it performs itemwise matrix multiplication. For example, suppose you have two 4D arrays, each containing a 2×3 grid of matrices. The first contains matrices $\\mathbf{A}, \\mathbf{B}, \\mathbf{C}, \\mathbf{D}, \\mathbf{E}, \\mathbf{F}$ and the second contains matrices $\\mathbf{G}, \\mathbf{H}, \\mathbf{I}, \\mathbf{J}, \\mathbf{K}, \\mathbf{L}$. If you multiply these two 4D arrays using the `tf.matmul()` function, this is what you get:\n\n$\n\\pmatrix{\n\\mathbf{A} & \\mathbf{B} & \\mathbf{C} \\\\\n\\mathbf{D} & \\mathbf{E} & \\mathbf{F}\n} \\times\n\\pmatrix{\n\\mathbf{G} & \\mathbf{H} & \\mathbf{I} \\\\\n\\mathbf{J} & \\mathbf{K} & \\mathbf{L}\n} = \\pmatrix{\n\\mathbf{AG} & \\mathbf{BH} & \\mathbf{CI} \\\\\n\\mathbf{DJ} & \\mathbf{EK} & \\mathbf{FL}\n}\n$",
"_____no_output_____"
],
[
"We can apply this function to compute $\\hat{\\mathbf{u}}_{j|i}$ for every pair of capsules ($i$, $j$) like this (recall that there are 6×6×32=1152 capsules in the first layer, and 10 in the second layer):\n$\n\\pmatrix{\n \\mathbf{W}_{1,1} & \\mathbf{W}_{1,2} & \\cdots & \\mathbf{W}_{1,10} \\\\\n \\mathbf{W}_{2,1} & \\mathbf{W}_{2,2} & \\cdots & \\mathbf{W}_{2,10} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\mathbf{W}_{1152,1} & \\mathbf{W}_{1152,2} & \\cdots & \\mathbf{W}_{1152,10}\n} \\times\n\\pmatrix{\n \\mathbf{u}_1 & \\mathbf{u}_1 & \\cdots & \\mathbf{u}_1 \\\\\n \\mathbf{u}_2 & \\mathbf{u}_2 & \\cdots & \\mathbf{u}_2 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\mathbf{u}_{1152} & \\mathbf{u}_{1152} & \\cdots & \\mathbf{u}_{1152}\n}\n=\n\\pmatrix{\n\\hat{\\mathbf{u}}_{1|1} & \\hat{\\mathbf{u}}_{2|1} & \\cdots & \\hat{\\mathbf{u}}_{10|1} \\\\\n\\hat{\\mathbf{u}}_{1|2} & \\hat{\\mathbf{u}}_{2|2} & \\cdots & \\hat{\\mathbf{u}}_{10|2} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\hat{\\mathbf{u}}_{1|1152} & \\hat{\\mathbf{u}}_{2|1152} & \\cdots & \\hat{\\mathbf{u}}_{10|1152}\n}\n$\n",
"_____no_output_____"
],
[
"The shape of the first array is (1152, 10, 16, 8), and the shape of the second array is (1152, 10, 8, 1). Note that the second array must contain 10 identical copies of the vectors $\\mathbf{u}_1$ to $\\mathbf{u}_{1152}$. To create this array, we will use the handy `tf.tile()` function, which lets you create an array containing many copies of a base array, tiled in any way you want.",
"_____no_output_____"
],
[
"Oh, wait a second! We forgot one dimension: _batch size_. Say we feed 50 images to the capsule network, it will make predictions for these 50 images simultaneously. So the shape of the first array must be (50, 1152, 10, 16, 8), and the shape of the second array must be (50, 1152, 10, 8, 1). The first layer capsules actually already output predictions for all 50 images, so the second array will be fine, but for the first array, we will need to use `tf.tile()` to have 50 copies of the transformation matrices.",
"_____no_output_____"
],
[
"Okay, let's start by creating a trainable variable of shape (1, 1152, 10, 16, 8) that will hold all the transformation matrices. The first dimension of size 1 will make this array easy to tile. We initialize this variable randomly using a normal distribution with a standard deviation to 0.01.",
"_____no_output_____"
]
],
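The "arrays of matrices" behavior described above can be checked with NumPy, whose `np.matmul` batches over leading dimensions just like `tf.matmul` (illustrative shapes, mirroring the (16, 8) transformation matrices and (8, 1) capsule output vectors):

```python
import numpy as np

rng = np.random.default_rng(42)
first = rng.normal(size=(2, 3, 16, 8))   # a 2x3 grid of (16, 8) matrices
second = rng.normal(size=(2, 3, 8, 1))   # a 2x3 grid of (8, 1) column vectors

product = np.matmul(first, second)       # itemwise matrix multiplication

# Each cell of the grid is an ordinary matrix product:
manual = first[1, 2] @ second[1, 2]
```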
[
[
"init_sigma = 0.01\n\nW_init = tf.random_normal(\n shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),\n stddev=init_sigma, dtype=tf.float32, name=\"W_init\")\nW = tf.Variable(W_init, name=\"W\")",
"_____no_output_____"
]
],
[
[
"Now we can create the first array by repeating `W` once per instance:",
"_____no_output_____"
]
],
[
[
"batch_size = tf.shape(X)[0]\nW_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name=\"W_tiled\")",
"_____no_output_____"
]
],
[
[
"That's it! On to the second array, now. As discussed earlier, we need to create an array of shape (_batch size_, 1152, 10, 8, 1), containing the output of the first layer capsules, repeated 10 times (once per digit, along the third dimension, which is axis=2). The `caps1_output` array has a shape of (_batch size_, 1152, 8), so we first need to expand it twice, to get an array of shape (_batch size_, 1152, 1, 8, 1), then we can repeat it 10 times along the third dimension:",
"_____no_output_____"
]
],
[
[
"caps1_output_expanded = tf.expand_dims(caps1_output, -1,\n name=\"caps1_output_expanded\")\ncaps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,\n name=\"caps1_output_tile\")\ncaps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],\n name=\"caps1_output_tiled\")",
"_____no_output_____"
]
],
[
[
"Let's check the shape of the first array:",
"_____no_output_____"
]
],
[
[
"W_tiled",
"_____no_output_____"
]
],
[
[
"Good, and now the second:",
"_____no_output_____"
]
],
[
[
"caps1_output_tiled",
"_____no_output_____"
]
],
[
[
"Yes! Now, to get all the predicted output vectors $\\hat{\\mathbf{u}}_{j|i}$, we just need to multiply these two arrays using `tf.matmul()`, as explained earlier: ",
"_____no_output_____"
]
],
[
[
"caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,\n name=\"caps2_predicted\")",
"_____no_output_____"
]
],
[
[
"Let's check the shape:",
"_____no_output_____"
]
],
[
[
"caps2_predicted",
"_____no_output_____"
]
],
[
[
"Perfect, for each instance in the batch (we don't know the batch size yet, hence the \"?\") and for each pair of first and second layer capsules (1152×10) we have a 16D predicted output column vector (16×1). We're ready to apply the routing by agreement algorithm!",
"_____no_output_____"
],
[
"## Routing by agreement",
"_____no_output_____"
],
[
"First let's initialize the raw routing weights $b_{i,j}$ to zero:",
"_____no_output_____"
]
],
[
[
"raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],\n dtype=np.float32, name=\"raw_weights\")",
"_____no_output_____"
]
],
[
[
"We will see why we need the last two dimensions of size 1 in a minute.",
"_____no_output_____"
],
[
"### Round 1",
"_____no_output_____"
],
[
"First, let's apply the softmax function to compute the routing weights, $\\mathbf{c}_{i} = \\operatorname{softmax}(\\mathbf{b}_i)$ (equation (3) in the paper):",
"_____no_output_____"
]
],
[
[
"routing_weights = tf.nn.softmax(raw_weights, dim=2, name=\"routing_weights\")",
"_____no_output_____"
]
],
[
[
"Now let's compute the weighted sum of all the predicted output vectors for each second-layer capsule, $\\mathbf{s}_j = \\sum\\limits_{i}{c_{i,j}\\hat{\\mathbf{u}}_{j|i}}$ (equation (2)-left in the paper):",
"_____no_output_____"
]
],
[
[
"weighted_predictions = tf.multiply(routing_weights, caps2_predicted,\n name=\"weighted_predictions\")\nweighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,\n name=\"weighted_sum\")",
"_____no_output_____"
]
],
[
[
"There are a couple important details to note here:\n* To perform elementwise matrix multiplication (also called the Hadamard product, noted $\\circ$), we use the `tf.multiply()` function. It requires `routing_weights` and `caps2_predicted` to have the same rank, which is why we added two extra dimensions of size 1 to `routing_weights`, earlier.\n* The shape of `routing_weights` is (_batch size_, 1152, 10, 1, 1) while the shape of `caps2_predicted` is (_batch size_, 1152, 10, 16, 1). Since they don't match on the fourth dimension (1 _vs_ 16), `tf.multiply()` automatically _broadcasts_ the `routing_weights` 16 times along that dimension. If you are not familiar with broadcasting, a simple example might help:\n\n $ \\pmatrix{1 & 2 & 3 \\\\ 4 & 5 & 6} \\circ \\pmatrix{10 & 100 & 1000} = \\pmatrix{1 & 2 & 3 \\\\ 4 & 5 & 6} \\circ \\pmatrix{10 & 100 & 1000 \\\\ 10 & 100 & 1000} = \\pmatrix{10 & 200 & 3000 \\\\ 40 & 500 & 6000} $",
"_____no_output_____"
],
[
"And finally, let's apply the squash function to get the outputs of the second layer capsules at the end of the first iteration of the routing by agreement algorithm, $\\mathbf{v}_j = \\operatorname{squash}(\\mathbf{s}_j)$ :",
"_____no_output_____"
]
],
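The broadcasting example above is easy to reproduce with NumPy, which follows the same rules as `tf.multiply`:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
b = np.array([[10, 100, 1000]])  # shape (1, 3): broadcast along the first axis

hadamard = a * b  # elementwise (Hadamard) product with broadcasting
```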
[
[
"caps2_output_round_1 = squash(weighted_sum, axis=-2,\n name=\"caps2_output_round_1\")",
"_____no_output_____"
],
[
"caps2_output_round_1",
"_____no_output_____"
]
],
[
[
"Good! We have ten 16D output vectors for each instance, as expected.",
"_____no_output_____"
],
[
"### Round 2",
"_____no_output_____"
],
[
"First, let's measure how close each predicted vector $\\hat{\\mathbf{u}}_{j|i}$ is to the actual output vector $\\mathbf{v}_j$ by computing their scalar product $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$.",
"_____no_output_____"
],
[
"* Quick math reminder: if $\\vec{a}$ and $\\vec{b}$ are two vectors of equal length, and $\\mathbf{a}$ and $\\mathbf{b}$ are their corresponding column vectors (i.e., matrices with a single column), then $\\mathbf{a}^T \\mathbf{b}$ (i.e., the matrix multiplication of the transpose of $\\mathbf{a}$, and $\\mathbf{b}$) is a 1×1 matrix containing the scalar product of the two vectors $\\vec{a}\\cdot\\vec{b}$. In Machine Learning, we generally represent vectors as column vectors, so when we talk about computing the scalar product $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$, this actually means computing ${\\hat{\\mathbf{u}}_{j|i}}^T \\mathbf{v}_j$.",
"_____no_output_____"
],
[
"Since we need to compute the scalar product $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$ for each instance, and for each pair of first and second level capsules $(i, j)$, we will once again take advantage of the fact that `tf.matmul()` can multiply many matrices simultaneously. This will require playing around with `tf.tile()` to get all dimensions to match (except for the last 2), just like we did earlier. So let's look at the shape of `caps2_predicted`, which holds all the predicted output vectors $\\hat{\\mathbf{u}}_{j|i}$ for each instance and each pair of capsules:",
"_____no_output_____"
]
],
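The column-vector convention above can be checked with a tiny NumPy example (the values are made up for illustration, not taken from the network):

```python
import numpy as np

u_hat = np.array([[1.0], [2.0], [3.0]])  # a predicted output, as a column vector
v = np.array([[4.0], [5.0], [6.0]])      # an actual output, as a column vector

# u_hat^T v is a 1x1 matrix whose single entry is the scalar product:
scalar_product = np.matmul(u_hat.T, v)
```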
[
[
"caps2_predicted",
"_____no_output_____"
]
],
[
[
"And now let's look at the shape of `caps2_output_round_1`, which holds 10 outputs vectors of 16D each, for each instance:",
"_____no_output_____"
]
],
[
[
"caps2_output_round_1",
"_____no_output_____"
]
],
[
[
"To get these shapes to match, we just need to tile the `caps2_output_round_1` array 1152 times (once per primary capsule) along the second dimension:",
"_____no_output_____"
]
],
[
[
"caps2_output_round_1_tiled = tf.tile(\n caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],\n name=\"caps2_output_round_1_tiled\")",
"_____no_output_____"
]
],
[
[
"And now we are ready to call `tf.matmul()` (note that we must tell it to transpose the matrices in the first array, to get ${\\hat{\\mathbf{u}}_{j|i}}^T$ instead of $\\hat{\\mathbf{u}}_{j|i}$):",
"_____no_output_____"
]
],
[
[
"agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,\n transpose_a=True, name=\"agreement\")",
"_____no_output_____"
]
],
[
[
"We can now update the raw routing weights $b_{i,j}$ by simply adding the scalar product $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$ we just computed: $b_{i,j} \\gets b_{i,j} + \\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$ (see Procedure 1, step 7, in the paper).",
"_____no_output_____"
]
],
[
[
"raw_weights_round_2 = tf.add(raw_weights, agreement,\n name=\"raw_weights_round_2\")",
"_____no_output_____"
]
],
[
[
"The rest of round 2 is the same as in round 1:",
"_____no_output_____"
]
],
[
[
"routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,\n dim=2,\n name=\"routing_weights_round_2\")\nweighted_predictions_round_2 = tf.multiply(routing_weights_round_2,\n caps2_predicted,\n name=\"weighted_predictions_round_2\")\nweighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,\n axis=1, keep_dims=True,\n name=\"weighted_sum_round_2\")\ncaps2_output_round_2 = squash(weighted_sum_round_2,\n axis=-2,\n name=\"caps2_output_round_2\")",
"_____no_output_____"
]
],
[
[
"We could go on for a few more rounds, by repeating exactly the same steps as in round 2, but to keep things short, we will stop here:",
"_____no_output_____"
]
],
[
[
"caps2_output = caps2_output_round_2",
"_____no_output_____"
]
],
[
[
"### Static or Dynamic Loop?",
"_____no_output_____"
],
[
"In the code above, we created different operations in the TensorFlow graph for each round of the routing by agreement algorithm. In other words, it's a static loop.\n\nSure, instead of copy/pasting the code several times, we could have written a `for` loop in Python, but this would not change the fact that the graph would end up containing different operations for each routing iteration. It's actually okay since we generally want less than 5 routing iterations, so the graph won't grow too big.\n\nHowever, you may prefer to implement the routing loop within the TensorFlow graph itself rather than using a Python `for` loop. To do this, you would need to use TensorFlow's `tf.while_loop()` function. This way, all routing iterations would reuse the same operations in the graph, it would be a dynamic loop.\n\nFor example, here is how to build a small loop that computes the sum of squares from 1 to 100:",
"_____no_output_____"
]
],
[
[
"def condition(input, counter):\n return tf.less(counter, 100)\n\ndef loop_body(input, counter):\n output = tf.add(input, tf.square(counter))\n return output, tf.add(counter, 1)\n\nwith tf.name_scope(\"compute_sum_of_squares\"):\n counter = tf.constant(1)\n sum_of_squares = tf.constant(0)\n\n result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])\n \n\nwith tf.Session() as sess:\n print(sess.run(result))",
"(328350, 100)\n"
]
],
[
[
"As you can see, the `tf.while_loop()` function expects the loop condition and body to be provided _via_ two functions. These functions will be called only once by TensorFlow, during the graph construction phase, _not_ while executing the graph. The `tf.while_loop()` function stitches together the graph fragments created by `condition()` and `loop_body()` with some additional operations to create the loop.\n\nAlso note that during training, TensorFlow will automagically handle backpropagation through the loop, so you don't need to worry about that.",
"_____no_output_____"
],
[
"Of course, we could have used this one-liner instead! ;-)",
"_____no_output_____"
]
],
[
[
"sum([i**2 for i in range(1, 100 + 1)])",
"_____no_output_____"
]
],
[
[
"Joke aside, apart from reducing the graph size, using a dynamic loop instead of a static loop can help reduce how much GPU RAM you use (if you are using a GPU). Indeed, if you set `swap_memory=True` when calling the `tf.while_loop()` function, TensorFlow will automatically check GPU RAM usage at each loop iteration, and it will take care of swapping memory between the GPU and the CPU when needed. Since CPU memory is much cheaper and more abundant than GPU RAM, this can really make a big difference.",
"_____no_output_____"
],
[
"# Estimated Class Probabilities (Length)",
"_____no_output_____"
],
[
"The lengths of the output vectors represent the class probabilities, so we could just use `tf.norm()` to compute them, but as we saw when discussing the squash function, it would be risky, so instead let's create our own `safe_norm()` function:",
"_____no_output_____"
]
],
[
[
"def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):\n with tf.name_scope(name, default_name=\"safe_norm\"):\n squared_norm = tf.reduce_sum(tf.square(s), axis=axis,\n keep_dims=keep_dims)\n return tf.sqrt(squared_norm + epsilon)",
"_____no_output_____"
],
[
"y_proba = safe_norm(caps2_output, axis=-2, name=\"y_proba\")",
"_____no_output_____"
]
],
[
[
"To predict the class of each instance, we can just select the one with the highest estimated probability. To do this, let's start by finding its index using `tf.argmax()`:",
"_____no_output_____"
]
],
[
[
"y_proba_argmax = tf.argmax(y_proba, axis=2, name=\"y_proba\")",
"_____no_output_____"
]
],
[
[
"Let's look at the shape of `y_proba_argmax`:",
"_____no_output_____"
]
],
[
[
"y_proba_argmax",
"_____no_output_____"
]
],
[
[
"That's what we wanted: for each instance, we now have the index of the longest output vector. Let's get rid of the last two dimensions by using `tf.squeeze()` which removes dimensions of size 1. This gives us the capsule network's predicted class for each instance:",
"_____no_output_____"
]
],
[
[
"y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name=\"y_pred\")",
"_____no_output_____"
],
[
"y_pred",
"_____no_output_____"
]
],
[
[
"Okay, we are now ready to define the training operations, starting with the losses.",
"_____no_output_____"
],
[
"# Labels",
"_____no_output_____"
],
[
"First, we will need a placeholder for the labels:",
"_____no_output_____"
]
],
[
[
"y = tf.placeholder(shape=[None], dtype=tf.int64, name=\"y\")",
"_____no_output_____"
]
],
[
[
"# Margin loss",
"_____no_output_____"
],
[
"The paper uses a special margin loss to make it possible to detect two or more different digits in each image:\n\n$ L_k = T_k \\max(0, m^{+} - \\|\\mathbf{v}_k\\|)^2 + \\lambda (1 - T_k) \\max(0, \\|\\mathbf{v}_k\\| - m^{-})^2$\n\n* $T_k$ is equal to 1 if the digit of class $k$ is present, or 0 otherwise.\n* In the paper, $m^{+} = 0.9$, $m^{-} = 0.1$ and $\\lambda = 0.5$.\n* Note that there was an error in the video (at 15:47): the max operations are squared, not the norms. Sorry about that.",
"_____no_output_____"
]
],
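Here is what the margin loss looks like for a single class, sketched in NumPy (an illustration with made-up norms, not the notebook's TensorFlow implementation): a long output vector costs nothing when the digit is present, and is penalized when it is absent.

```python
import numpy as np

m_plus, m_minus, lambda_ = 0.9, 0.1, 0.5

def margin_loss_per_class(v_norm, T):
    # T is 1 if the digit of this class is present, 0 otherwise.
    present = T * np.maximum(0., m_plus - v_norm) ** 2
    absent = lambda_ * (1. - T) * np.maximum(0., v_norm - m_minus) ** 2
    return present + absent

loss_present_long = margin_loss_per_class(v_norm=0.95, T=1)  # long vector, digit present
loss_absent_long = margin_loss_per_class(v_norm=0.95, T=0)   # long vector, digit absent
```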
[
[
"m_plus = 0.9\nm_minus = 0.1\nlambda_ = 0.5",
"_____no_output_____"
]
],
[
[
"Since `y` will contain the digit classes, from 0 to 9, to get $T_k$ for every instance and every class, we can just use the `tf.one_hot()` function:",
"_____no_output_____"
]
],
[
[
"T = tf.one_hot(y, depth=caps2_n_caps, name=\"T\")",
"_____no_output_____"
]
],
[
[
"A small example should make it clear what this does:",
"_____no_output_____"
]
],
[
[
"with tf.Session():\n print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))",
"[[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]\n"
]
],
[
[
"Now let's compute the norm of the output vector for each output capsule and each instance. First, let's verify the shape of `caps2_output`:",
"_____no_output_____"
]
],
[
[
"caps2_output",
"_____no_output_____"
]
],
[
[
"The 16D output vectors are in the second to last dimension, so let's use the `safe_norm()` function with `axis=-2`:",
"_____no_output_____"
]
],
[
[
"caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,\n name=\"caps2_output_norm\")",
"_____no_output_____"
]
],
[
[
"Now let's compute $\\max(0, m^{+} - \\|\\mathbf{v}_k\\|)^2$, and reshape the result to get a simple matrix of shape (_batch size_, 10):",
"_____no_output_____"
]
],
[
[
"present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),\n name=\"present_error_raw\")\npresent_error = tf.reshape(present_error_raw, shape=(-1, 10),\n name=\"present_error\")",
"_____no_output_____"
]
],
[
[
"Next let's compute $\\max(0, \\|\\mathbf{v}_k\\| - m^{-})^2$ and reshape it:",
"_____no_output_____"
]
],
[
[
"absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),\n name=\"absent_error_raw\")\nabsent_error = tf.reshape(absent_error_raw, shape=(-1, 10),\n name=\"absent_error\")",
"_____no_output_____"
]
],
[
[
"We are ready to compute the loss for each instance and each digit:",
"_____no_output_____"
]
],
[
[
"L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,\n name=\"L\")",
"_____no_output_____"
]
],
[
[
"Now we can sum the digit losses for each instance ($L_0 + L_1 + \\cdots + L_9$), and compute the mean over all instances. This gives us the final margin loss:",
"_____no_output_____"
]
],
[
[
"margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name=\"margin_loss\")",
"_____no_output_____"
]
],
[
[
"# Reconstruction",
"_____no_output_____"
],
[
"Now let's add a decoder network on top of the capsule network. It is a regular 3-layer fully connected neural network which will learn to reconstruct the input images based on the output of the capsule network. This will force the capsule network to preserve all the information required to reconstruct the digits, across the whole network. This constraint regularizes the model: it reduces the risk of overfitting the training set, and it helps generalize to new digits.",
"_____no_output_____"
],
[
"## Mask",
"_____no_output_____"
],
[
"The paper mentions that during training, instead of sending all the outputs of the capsule network to the decoder network, we must send only the output vector of the capsule that corresponds to the target digit. All the other output vectors must be masked out. At inference time, we must mask all output vectors except for the longest one, i.e., the one that corresponds to the predicted digit. You can see this in the paper's figure 2 (at 18:15 in the video): all output vectors are masked out, except for the reconstruction target's output vector.",
"_____no_output_____"
],
[
"We need a placeholder to tell TensorFlow whether we want to mask the output vectors based on the labels (`True`) or on the predictions (`False`, the default):",
"_____no_output_____"
]
],
[
[
"mask_with_labels = tf.placeholder_with_default(False, shape=(),\n name=\"mask_with_labels\")",
"_____no_output_____"
]
],
[
[
"Now let's use `tf.cond()` to define the reconstruction targets as the labels `y` if `mask_with_labels` is `True`, or `y_pred` otherwise.",
"_____no_output_____"
]
],
[
[
"reconstruction_targets = tf.cond(mask_with_labels, # condition\n lambda: y, # if True\n lambda: y_pred, # if False\n name=\"reconstruction_targets\")",
"_____no_output_____"
]
],
[
[
"Note that the `tf.cond()` function expects the if-True and if-False tensors to be passed _via_ functions: these functions will be called just once during the graph construction phase (not during the execution phase), similar to `tf.while_loop()`. This allows TensorFlow to add the necessary operations to handle the conditional evaluation of the if-True or if-False tensors. However, in our case, the tensors `y` and `y_pred` are already created by the time we call `tf.cond()`, so unfortunately TensorFlow will consider both `y` and `y_pred` to be dependencies of the `reconstruction_targets` tensor. The `reconstruction_targets` tensor will end up with the correct value, but:\n1. whenever we evaluate a tensor that depends on `reconstruction_targets`, the `y_pred` tensor will be evaluated (even if `mask_with_labels` is `True`). This is not a big deal because computing `y_pred` adds no computing overhead during training, since we need it anyway to compute the margin loss. And during testing, if we are doing classification, we won't need reconstructions, so `reconstruction_targets` won't be evaluated at all.\n2. we will always need to feed a value for the `y` placeholder (even if `mask_with_labels` is `False`). This is a bit annoying, but we can pass an empty array, because TensorFlow won't use it anyway (it just does not know it yet when it checks for dependencies).",
"_____no_output_____"
],
[
"Now that we have the reconstruction targets, let's create the reconstruction mask. It should be equal to 1.0 for the target class, and 0.0 for the other classes, for each instance. For this we can just use the `tf.one_hot()` function:",
"_____no_output_____"
]
],
[
[
"reconstruction_mask = tf.one_hot(reconstruction_targets,\n depth=caps2_n_caps,\n name=\"reconstruction_mask\")",
"_____no_output_____"
]
],
[
[
"Let's check the shape of `reconstruction_mask`:",
"_____no_output_____"
]
],
[
[
"reconstruction_mask",
"_____no_output_____"
]
],
[
[
"Let's compare this to the shape of `caps2_output`:",
"_____no_output_____"
]
],
[
[
"caps2_output",
"_____no_output_____"
]
],
[
[
"Mmh, its shape is (_batch size_, 1, 10, 16, 1). We want to multiply it by the `reconstruction_mask`, but the shape of the `reconstruction_mask` is (_batch size_, 10). We must reshape it to (_batch size_, 1, 10, 1, 1) to make multiplication possible:",
"_____no_output_____"
]
],
[
[
"reconstruction_mask_reshaped = tf.reshape(\n reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],\n name=\"reconstruction_mask_reshaped\")",
"_____no_output_____"
]
],
[
[
"At last! We can apply the mask:",
"_____no_output_____"
]
],
[
[
"caps2_output_masked = tf.multiply(\n caps2_output, reconstruction_mask_reshaped,\n name=\"caps2_output_masked\")",
"_____no_output_____"
],
[
"caps2_output_masked",
"_____no_output_____"
]
],
[
[
"One last reshape operation to flatten the decoder's inputs:",
"_____no_output_____"
]
],
[
[
"decoder_input = tf.reshape(caps2_output_masked,\n [-1, caps2_n_caps * caps2_n_dims],\n name=\"decoder_input\")",
"_____no_output_____"
]
],
[
[
"This gives us an array of shape (_batch size_, 160):",
"_____no_output_____"
]
],
[
[
"decoder_input",
"_____no_output_____"
]
],
[
[
"## Decoder",
"_____no_output_____"
],
[
"Now let's build the decoder. It's quite simple: two dense (fully connected) ReLU layers followed by a dense output sigmoid layer:",
"_____no_output_____"
]
],
[
[
"n_hidden1 = 512\nn_hidden2 = 1024\nn_output = 28 * 28",
"_____no_output_____"
],
[
"with tf.name_scope(\"decoder\"):\n hidden1 = tf.layers.dense(decoder_input, n_hidden1,\n activation=tf.nn.relu,\n name=\"hidden1\")\n hidden2 = tf.layers.dense(hidden1, n_hidden2,\n activation=tf.nn.relu,\n name=\"hidden2\")\n decoder_output = tf.layers.dense(hidden2, n_output,\n activation=tf.nn.sigmoid,\n name=\"decoder_output\")",
"_____no_output_____"
]
],
[
[
"## Reconstruction Loss",
"_____no_output_____"
],
[
"Now let's compute the reconstruction loss. It is just the squared difference between the input image and the reconstructed image:",
"_____no_output_____"
]
],
[
[
"X_flat = tf.reshape(X, [-1, n_output], name=\"X_flat\")\nsquared_difference = tf.square(X_flat - decoder_output,\n name=\"squared_difference\")\nreconstruction_loss = tf.reduce_sum(squared_difference,\n name=\"reconstruction_loss\")",
"_____no_output_____"
]
],
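As a quick sanity check (not part of the original notebook), the same sum-of-squared-differences can be computed on made-up 4-pixel "images" instead of the 28×28 digits:

```python
import numpy as np

# Toy version of the reconstruction loss above: sum of squared pixel
# differences between a flattened input and the decoder's output.
X_flat = np.array([[0.0, 1.0, 0.5, 0.25]])
decoder_output = np.array([[0.0, 0.5, 0.5, 0.0]])
reconstruction_loss = np.sum(np.square(X_flat - decoder_output))
print(reconstruction_loss)  # 0.25 + 0.0625 = 0.3125
```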
[
[
"## Final Loss",
"_____no_output_____"
],
[
"The final loss is the sum of the margin loss and the reconstruction loss (scaled down by a factor of 0.0005 to ensure the margin loss dominates training):",
"_____no_output_____"
]
],
[
[
"alpha = 0.0005\n\nloss = tf.add(margin_loss, alpha * reconstruction_loss, name=\"loss\")",
"_____no_output_____"
]
],
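A quick arithmetic check (hypothetical values, not from the notebook's run) shows why this scaling keeps the margin loss dominant:

```python
# With alpha = 0.0005, even a reconstruction loss of 200 contributes
# only 0.1 to the total, so a margin loss of a few tenths dominates.
alpha = 0.0005
margin_loss = 0.4            # hypothetical margin loss value
reconstruction_loss = 200.0  # hypothetical reconstruction loss value
loss = margin_loss + alpha * reconstruction_loss
print(loss)  # 0.5
```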
[
[
"# Final Touches",
"_____no_output_____"
],
[
"## Accuracy",
"_____no_output_____"
],
[
"To measure our model's accuracy, we need to count the number of instances that are properly classified. For this, we can simply compare `y` and `y_pred`, cast the resulting booleans to float32 (0.0 for False, 1.0 for True), and compute the mean over all the instances:",
"_____no_output_____"
]
],
[
[
"correct = tf.equal(y, y_pred, name=\"correct\")\naccuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")",
"_____no_output_____"
]
],
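The same computation can be sketched in plain NumPy (hypothetical labels and predictions, not part of the original notebook):

```python
import numpy as np

# Accuracy = mean of correct predictions: compare, cast booleans to
# float32, take the mean. 3 of these 4 toy predictions are correct.
y = np.array([3, 0, 7, 2])
y_pred = np.array([3, 1, 7, 2])
correct = (y == y_pred)
accuracy = np.mean(correct.astype(np.float32))
print(accuracy)  # 0.75
```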
[
[
"## Training Operations",
"_____no_output_____"
],
[
"The paper mentions that the authors used the Adam optimizer with TensorFlow's default parameters:",
"_____no_output_____"
]
],
[
[
"optimizer = tf.train.AdamOptimizer()\ntraining_op = optimizer.minimize(loss, name=\"training_op\")",
"_____no_output_____"
]
],
[
[
"## Init and Saver",
"_____no_output_____"
],
[
"And let's add the usual variable initializer, as well as a `Saver`:",
"_____no_output_____"
]
],
[
[
"init = tf.global_variables_initializer()\nsaver = tf.train.Saver()",
"_____no_output_____"
]
],
[
[
"And... we're done with the construction phase! Please take a moment to celebrate. :)",
"_____no_output_____"
],
[
"# Training",
"_____no_output_____"
],
[
"Training our capsule network is pretty standard. For simplicity, we won't do any fancy hyperparameter tuning, dropout or anything, we will just run the training operation over and over again, displaying the loss, and at the end of each epoch, measure the accuracy on the validation set, display it, and save the model if the validation loss is the lowest found so far (this is a basic way to implement early stopping, without actually stopping). Hopefully the code should be self-explanatory, but here are a few details to note:\n* if a checkpoint file exists, it will be restored (this makes it possible to interrupt training, then restart it later from the last checkpoint),\n* we must not forget to feed `mask_with_labels=True` during training,\n* during testing, we let `mask_with_labels` default to `False` (but we still feed the labels since they are required to compute the accuracy),\n* the images loaded _via_ `mnist.train.next_batch()` are represented as `float32` arrays of shape \\[784\\], but the input placeholder `X` expects a `float32` array of shape \\[28, 28, 1\\], so we must reshape the images before we feed them to our model,\n* we evaluate the model's loss and accuracy on the full validation set (5,000 instances). To view progress and support systems that don't have a lot of RAM, the code evaluates the loss and accuracy on one batch at a time, and computes the mean loss and mean accuracy at the end.\n\n*Warning*: if you don't have a GPU, training will take a very long time (at least a few hours). With a GPU, it should take just a few minutes per epoch (e.g., 6 minutes on an NVidia GeForce GTX 1080Ti).",
"_____no_output_____"
]
],
[
[
"n_epochs = 10\nbatch_size = 50\nrestore_checkpoint = True\n\nn_iterations_per_epoch = mnist.train.num_examples // batch_size\nn_iterations_validation = mnist.validation.num_examples // batch_size\nbest_loss_val = np.infty\ncheckpoint_path = \"./my_capsule_network\"\n\nwith tf.Session() as sess:\n if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):\n saver.restore(sess, checkpoint_path)\n else:\n init.run()\n\n for epoch in range(n_epochs):\n for iteration in range(1, n_iterations_per_epoch + 1):\n X_batch, y_batch = mnist.train.next_batch(batch_size)\n # Run the training operation and measure the loss:\n _, loss_train = sess.run(\n [training_op, loss],\n feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n y: y_batch,\n mask_with_labels: True})\n print(\"\\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}\".format(\n iteration, n_iterations_per_epoch,\n iteration * 100 / n_iterations_per_epoch,\n loss_train),\n end=\"\")\n\n # At the end of each epoch,\n # measure the validation loss and accuracy:\n loss_vals = []\n acc_vals = []\n for iteration in range(1, n_iterations_validation + 1):\n X_batch, y_batch = mnist.validation.next_batch(batch_size)\n loss_val, acc_val = sess.run(\n [loss, accuracy],\n feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n y: y_batch})\n loss_vals.append(loss_val)\n acc_vals.append(acc_val)\n print(\"\\rEvaluating the model: {}/{} ({:.1f}%)\".format(\n iteration, n_iterations_validation,\n iteration * 100 / n_iterations_validation),\n end=\" \" * 10)\n loss_val = np.mean(loss_vals)\n acc_val = np.mean(acc_vals)\n print(\"\\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}\".format(\n epoch + 1, acc_val * 100, loss_val,\n \" (improved)\" if loss_val < best_loss_val else \"\"))\n\n # And save the model if it improved:\n if loss_val < best_loss_val:\n save_path = saver.save(sess, checkpoint_path)\n best_loss_val = loss_val",
"Epoch: 1 Val accuracy: 98.7000% Loss: 0.416563 (improved)\nEpoch: 2 Val accuracy: 99.0400% Loss: 0.291740 (improved)\nEpoch: 3 Val accuracy: 99.1200% Loss: 0.241666 (improved)\nEpoch: 4 Val accuracy: 99.2800% Loss: 0.211442 (improved)\nEpoch: 5 Val accuracy: 99.3200% Loss: 0.196026 (improved)\nEpoch: 6 Val accuracy: 99.3600% Loss: 0.186166 (improved)\nEpoch: 7 Val accuracy: 99.3400% Loss: 0.179290 (improved)\nEpoch: 8 Val accuracy: 99.3800% Loss: 0.173593 (improved)\nEpoch: 9 Val accuracy: 99.3600% Loss: 0.169071 (improved)\nEpoch: 10 Val accuracy: 99.3400% Loss: 0.165477 (improved)\n"
]
],
[
[
"Training is finished: we reached over 99.3% accuracy on the validation set after just 5 epochs, so things are looking good. Now let's evaluate the model on the test set.",
"_____no_output_____"
],
[
"# Evaluation",
"_____no_output_____"
]
],
[
[
"n_iterations_test = mnist.test.num_examples // batch_size\n\nwith tf.Session() as sess:\n saver.restore(sess, checkpoint_path)\n\n loss_tests = []\n acc_tests = []\n for iteration in range(1, n_iterations_test + 1):\n X_batch, y_batch = mnist.test.next_batch(batch_size)\n loss_test, acc_test = sess.run(\n [loss, accuracy],\n feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n y: y_batch})\n loss_tests.append(loss_test)\n acc_tests.append(acc_test)\n print(\"\\rEvaluating the model: {}/{} ({:.1f}%)\".format(\n iteration, n_iterations_test,\n iteration * 100 / n_iterations_test),\n end=\" \" * 10)\n loss_test = np.mean(loss_tests)\n acc_test = np.mean(acc_tests)\n print(\"\\rFinal test accuracy: {:.4f}% Loss: {:.6f}\".format(\n acc_test * 100, loss_test))",
"INFO:tensorflow:Restoring parameters from ./my_capsule_network\nFinal test accuracy: 99.4300% Loss: 0.165047 \n"
]
],
[
[
"We reach 99.43% accuracy on the test set. Pretty nice. :)",
"_____no_output_____"
],
[
"# Predictions",
"_____no_output_____"
],
[
"Now let's make some predictions! We first fix a few images from the test set, then we start a session, restore the trained model, evaluate `caps2_output` to get the capsule network's output vectors, `decoder_output` to get the reconstructions, and `y_pred` to get the class predictions:",
"_____no_output_____"
]
],
[
[
"n_samples = 5\n\nsample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])\n\nwith tf.Session() as sess:\n saver.restore(sess, checkpoint_path)\n caps2_output_value, decoder_output_value, y_pred_value = sess.run(\n [caps2_output, decoder_output, y_pred],\n feed_dict={X: sample_images,\n y: np.array([], dtype=np.int64)})",
"INFO:tensorflow:Restoring parameters from ./my_capsule_network\n"
]
],
[
[
"Note: we feed `y` with an empty array, but TensorFlow will not use it, as explained earlier.",
"_____no_output_____"
],
[
"And now let's plot the images and their labels, followed by the corresponding reconstructions and predictions:",
"_____no_output_____"
]
],
[
[
"sample_images = sample_images.reshape(-1, 28, 28)\nreconstructions = decoder_output_value.reshape([-1, 28, 28])\n\nplt.figure(figsize=(n_samples * 2, 3))\nfor index in range(n_samples):\n plt.subplot(1, n_samples, index + 1)\n plt.imshow(sample_images[index], cmap=\"binary\")\n plt.title(\"Label:\" + str(mnist.test.labels[index]))\n plt.axis(\"off\")\n\nplt.show()\n\nplt.figure(figsize=(n_samples * 2, 3))\nfor index in range(n_samples):\n plt.subplot(1, n_samples, index + 1)\n plt.title(\"Predicted:\" + str(y_pred_value[index]))\n plt.imshow(reconstructions[index], cmap=\"binary\")\n plt.axis(\"off\")\n \nplt.show()\n",
"_____no_output_____"
]
],
[
[
"The predictions are all correct, and the reconstructions look great. Hurray!",
"_____no_output_____"
],
[
"# Interpreting the Output Vectors",
"_____no_output_____"
],
[
"Let's tweak the output vectors to see what their pose parameters represent.",
"_____no_output_____"
],
[
"First, let's check the shape of the `caps2_output_value` NumPy array:",
"_____no_output_____"
]
],
[
[
"caps2_output_value.shape",
"_____no_output_____"
]
],
[
[
"Let's create a function that will tweak each of the 16 pose parameters (dimensions) in all output vectors. Each tweaked output vector will be identical to the original output vector, except that one of its pose parameters will be incremented by a value varying from -0.5 to 0.5. By default there will be 11 steps (-0.5, -0.4, ..., +0.4, +0.5). This function will return an array of shape (_tweaked pose parameters_=16, _steps_=11, _batch size_=5, 1, 10, 16, 1):",
"_____no_output_____"
]
],
[
[
"def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):\n    steps = np.linspace(min, max, n_steps) # -0.5, -0.4, ..., +0.5\n    pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15\n    tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])\n    tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps\n    output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]\n    return tweaks + output_vectors_expanded",
"_____no_output_____"
]
],
[
[
"Let's compute all the tweaked output vectors and reshape the result to (_parameters_×_steps_×_instances_, 1, 10, 16, 1) so we can feed the array to the decoder:",
"_____no_output_____"
]
],
[
[
"n_steps = 11\n\ntweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)\ntweaked_vectors_reshaped = tweaked_vectors.reshape(\n [-1, 1, caps2_n_caps, caps2_n_dims, 1])",
"_____no_output_____"
]
],
[
[
"Now let's feed these tweaked output vectors to the decoder and get the reconstructions it produces:",
"_____no_output_____"
]
],
[
[
"tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)\n\nwith tf.Session() as sess:\n saver.restore(sess, checkpoint_path)\n decoder_output_value = sess.run(\n decoder_output,\n feed_dict={caps2_output: tweaked_vectors_reshaped,\n mask_with_labels: True,\n y: tweak_labels})",
"INFO:tensorflow:Restoring parameters from ./my_capsule_network\n"
]
],
[
[
"Let's reshape the decoder's output so we can easily iterate on the output dimension, the tweak steps, and the instances:",
"_____no_output_____"
]
],
[
[
"tweak_reconstructions = decoder_output_value.reshape(\n [caps2_n_dims, n_steps, n_samples, 28, 28])",
"_____no_output_____"
]
],
[
[
"Lastly, let's plot all the reconstructions, for the first 3 output dimensions, for each tweaking step (column) and each digit (row):",
"_____no_output_____"
]
],
[
[
"for dim in range(3):\n print(\"Tweaking output dimension #{}\".format(dim))\n plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))\n for row in range(n_samples):\n for col in range(n_steps):\n plt.subplot(n_samples, n_steps, row * n_steps + col + 1)\n plt.imshow(tweak_reconstructions[dim, col, row], cmap=\"binary\")\n plt.axis(\"off\")\n plt.show()",
"Tweaking output dimension #0\n"
]
],
[
[
"# Conclusion",
"_____no_output_____"
],
[
"I tried to make the code in this notebook as flat and linear as possible, to make it easier to follow, but of course in practice you would want to wrap the code in nice reusable functions and classes. For example, you could try implementing your own `PrimaryCapsuleLayer` and `DenseRoutingCapsuleLayer` classes, with parameters for the number of capsules, the number of routing iterations, whether to use a dynamic loop or a static loop, and so on. For an example of a modular implementation of Capsule Networks based on TensorFlow, take a look at the [CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow) project.\n\nThat's all for today, I hope you enjoyed this notebook!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d088c9e32de882db487eb0ee8c041ba5b63ca365 | 25,030 | ipynb | Jupyter Notebook | Sentimental Analysis/Detecting-Depression-in-Tweets/Depression_detection_tweets.ipynb | Manas-Garg/sentimental-analysis | 1223f3a41d738ea144d991b5de4cd8ac304bb092 | [
"MIT"
] | null | null | null | Sentimental Analysis/Detecting-Depression-in-Tweets/Depression_detection_tweets.ipynb | Manas-Garg/sentimental-analysis | 1223f3a41d738ea144d991b5de4cd8ac304bb092 | [
"MIT"
] | null | null | null | Sentimental Analysis/Detecting-Depression-in-Tweets/Depression_detection_tweets.ipynb | Manas-Garg/sentimental-analysis | 1223f3a41d738ea144d991b5de4cd8ac304bb092 | [
"MIT"
] | null | null | null | 27.535754 | 157 | 0.532321 | [
[
[
"# Detecting depression in Tweets using Bayes' Theorem",
"_____no_output_____"
],
[
"# Installing and importing libraries",
"_____no_output_____"
]
],
[
[
"!pip install wordcloud\n!pip install nltk\nimport nltk\nnltk.download('punkt')\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem import PorterStemmer\nimport matplotlib.pyplot as plt\nfrom wordcloud import WordCloud\nfrom math import log, sqrt\nimport pandas as pd\nimport numpy as np\nimport re\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Loading the Data",
"_____no_output_____"
]
],
[
[
"tweets = pd.read_csv('sentiment_tweets3.csv')\ntweets.head(20)",
"_____no_output_____"
],
[
"tweets.drop(['Unnamed: 0'], axis = 1, inplace = True)\n",
"_____no_output_____"
],
[
"tweets['label'].value_counts()",
"_____no_output_____"
],
[
"tweets.info()",
"_____no_output_____"
]
],
[
[
"# Splitting the Data in Training and Testing Sets",
"_____no_output_____"
],
[
"As you can see, I used almost all the data for training: 98% and the rest for testing.",
"_____no_output_____"
]
],
[
[
"totalTweets = 8000 + 2314\ntrainIndex, testIndex = list(), list()\nfor i in range(tweets.shape[0]):\n if np.random.uniform(0, 1) < 0.98:\n trainIndex += [i]\n else:\n testIndex += [i]\ntrainData = tweets.iloc[trainIndex]\ntestData = tweets.iloc[testIndex]",
"_____no_output_____"
],
[
"tweets.info()",
"_____no_output_____"
],
[
"trainData['label'].value_counts()",
"_____no_output_____"
],
[
"trainData.head()",
"_____no_output_____"
],
[
"testData['label'].value_counts()",
"_____no_output_____"
],
[
"testData.head()",
"_____no_output_____"
]
],
[
[
"# Wordcloud Analysis",
"_____no_output_____"
]
],
[
[
"depressive_words = ' '.join(list(tweets[tweets['label'] == 1]['message']))\ndepressive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap=\"Blues\").generate(depressive_words)\nplt.figure(figsize = (10, 8), facecolor = 'k')\nplt.imshow(depressive_wc)\nplt.axis('off')\nplt.tight_layout(pad = 0)\nplt.show()",
"_____no_output_____"
],
[
"positive_words = ' '.join(list(tweets[tweets['label'] == 0]['message']))\npositive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap=\"Blues\").generate(positive_words)\nplt.figure(figsize = (10, 8), facecolor = 'k')\nplt.imshow(positive_wc)\nplt.axis('off'), \nplt.tight_layout(pad = 0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Pre-processing the data for the training: Tokenization, stemming, and removal of stop words",
"_____no_output_____"
]
],
[
[
"def process_message(message, lower_case = True, stem = True, stop_words = True, gram = 2):\n if lower_case:\n message = message.lower()\n words = word_tokenize(message)\n words = [w for w in words if len(w) > 2]\n if gram > 1:\n w = []\n for i in range(len(words) - gram + 1):\n w += [' '.join(words[i:i + gram])]\n return w\n if stop_words:\n sw = stopwords.words('english')\n words = [word for word in words if word not in sw]\n if stem:\n stemmer = PorterStemmer()\n words = [stemmer.stem(word) for word in words] \n return words",
"_____no_output_____"
],
[
"class TweetClassifier(object):\n def __init__(self, trainData, method = 'tf-idf'):\n self.tweets, self.labels = trainData['message'], trainData['label']\n self.method = method\n\n def train(self):\n self.calc_TF_and_IDF()\n if self.method == 'tf-idf':\n self.calc_TF_IDF()\n else:\n self.calc_prob()\n\n def calc_prob(self):\n self.prob_depressive = dict()\n self.prob_positive = dict()\n for word in self.tf_depressive:\n self.prob_depressive[word] = (self.tf_depressive[word] + 1) / (self.depressive_words + \\\n len(list(self.tf_depressive.keys())))\n for word in self.tf_positive:\n self.prob_positive[word] = (self.tf_positive[word] + 1) / (self.positive_words + \\\n len(list(self.tf_positive.keys())))\n self.prob_depressive_tweet, self.prob_positive_tweet = self.depressive_tweets / self.total_tweets, self.positive_tweets / self.total_tweets \n\n\n def calc_TF_and_IDF(self):\n noOfMessages = self.tweets.shape[0]\n self.depressive_tweets, self.positive_tweets = self.labels.value_counts()[1], self.labels.value_counts()[0]\n self.total_tweets = self.depressive_tweets + self.positive_tweets\n self.depressive_words = 0\n self.positive_words = 0\n self.tf_depressive = dict()\n self.tf_positive = dict()\n self.idf_depressive = dict()\n self.idf_positive = dict()\n for i in range(noOfMessages):\n message_processed = process_message(self.tweets.iloc[i])\n count = list() #To keep track of whether the word has ocured in the message or not.\n #For IDF\n for word in message_processed:\n if self.labels.iloc[i]:\n self.tf_depressive[word] = self.tf_depressive.get(word, 0) + 1\n self.depressive_words += 1\n else:\n self.tf_positive[word] = self.tf_positive.get(word, 0) + 1\n self.positive_words += 1\n if word not in count:\n count += [word]\n for word in count:\n if self.labels.iloc[i]:\n self.idf_depressive[word] = self.idf_depressive.get(word, 0) + 1\n else:\n self.idf_positive[word] = self.idf_positive.get(word, 0) + 1\n\n def calc_TF_IDF(self):\n self.prob_depressive = dict()\n 
self.prob_positive = dict()\n self.sum_tf_idf_depressive = 0\n self.sum_tf_idf_positive = 0\n for word in self.tf_depressive:\n self.prob_depressive[word] = (self.tf_depressive[word]) * log((self.depressive_tweets + self.positive_tweets) \\\n / (self.idf_depressive[word] + self.idf_positive.get(word, 0)))\n self.sum_tf_idf_depressive += self.prob_depressive[word]\n for word in self.tf_depressive:\n self.prob_depressive[word] = (self.prob_depressive[word] + 1) / (self.sum_tf_idf_depressive + len(list(self.prob_depressive.keys())))\n \n for word in self.tf_positive:\n self.prob_positive[word] = (self.tf_positive[word]) * log((self.depressive_tweets + self.positive_tweets) \\\n / (self.idf_depressive.get(word, 0) + self.idf_positive[word]))\n self.sum_tf_idf_positive += self.prob_positive[word]\n for word in self.tf_positive:\n self.prob_positive[word] = (self.prob_positive[word] + 1) / (self.sum_tf_idf_positive + len(list(self.prob_positive.keys())))\n \n \n self.prob_depressive_tweet, self.prob_positive_tweet = self.depressive_tweets / self.total_tweets, self.positive_tweets / self.total_tweets \n \n def classify(self, processed_message):\n pDepressive, pPositive = 0, 0\n for word in processed_message: \n if word in self.prob_depressive:\n pDepressive += log(self.prob_depressive[word])\n else:\n if self.method == 'tf-idf':\n pDepressive -= log(self.sum_tf_idf_depressive + len(list(self.prob_depressive.keys())))\n else:\n pDepressive -= log(self.depressive_words + len(list(self.prob_depressive.keys())))\n if word in self.prob_positive:\n pPositive += log(self.prob_positive[word])\n else:\n if self.method == 'tf-idf':\n pPositive -= log(self.sum_tf_idf_positive + len(list(self.prob_positive.keys()))) \n else:\n pPositive -= log(self.positive_words + len(list(self.prob_positive.keys())))\n pDepressive += log(self.prob_depressive_tweet)\n pPositive += log(self.prob_positive_tweet)\n return pDepressive >= pPositive\n \n def predict(self, testData):\n result = dict()\n 
for (i, message) in enumerate(testData):\n processed_message = process_message(message)\n result[i] = int(self.classify(processed_message))\n return result",
"_____no_output_____"
],
[
"def metrics(labels, predictions):\n true_pos, true_neg, false_pos, false_neg = 0, 0, 0, 0\n for i in range(len(labels)):\n true_pos += int(labels.iloc[i] == 1 and predictions[i] == 1)\n true_neg += int(labels.iloc[i] == 0 and predictions[i] == 0)\n false_pos += int(labels.iloc[i] == 0 and predictions[i] == 1)\n false_neg += int(labels.iloc[i] == 1 and predictions[i] == 0)\n precision = true_pos / (true_pos + false_pos)\n recall = true_pos / (true_pos + false_neg)\n Fscore = 2 * precision * recall / (precision + recall)\n accuracy = (true_pos + true_neg) / (true_pos + true_neg + false_pos + false_neg)\n\n print(\"Precision: \", precision)\n print(\"Recall: \", recall)\n print(\"F-score: \", Fscore)\n print(\"Accuracy: \", accuracy)",
"_____no_output_____"
],
[
"sc_tf_idf = TweetClassifier(trainData, 'tf-idf')\nsc_tf_idf.train()\npreds_tf_idf = sc_tf_idf.predict(testData['message'])\nmetrics(testData['label'], preds_tf_idf)",
"_____no_output_____"
],
[
"sc_bow = TweetClassifier(trainData, 'bow')\nsc_bow.train()\npreds_bow = sc_bow.predict(testData['message'])\nmetrics(testData['label'], preds_bow)",
"_____no_output_____"
]
],
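The decision rule inside `classify()` can be illustrated with a minimal stand-alone sketch (made-up word probabilities, priors, and smoothing constant — none of these values come from the notebook): sum the log-probability of each word per class, add the log prior, and predict "depressive" when its score is at least as high.

```python
from math import log

# Hypothetical per-class word probabilities and class priors.
prob_depressive = {'sad': 0.3, 'tired': 0.2}
prob_positive = {'sad': 0.05, 'happy': 0.4}
prior_dep, prior_pos = 0.2, 0.8
unseen = 1e-6  # smoothing stand-in for words absent from a class

def classify(words):
    # Log-space avoids underflow when multiplying many small probabilities.
    p_dep = sum(log(prob_depressive.get(w, unseen)) for w in words) + log(prior_dep)
    p_pos = sum(log(prob_positive.get(w, unseen)) for w in words) + log(prior_pos)
    return p_dep >= p_pos

print(classify(['sad', 'tired']))  # True
print(classify(['happy']))         # False
```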
[
[
"# Predictions with TF-IDF",
"_____no_output_____"
],
[
"# Depressive Tweets",
"_____no_output_____"
]
],
[
[
"pm = process_message('Lately I have been feeling unsure of myself as a person & an artist')\nsc_tf_idf.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('Extreme sadness, lack of energy, hopelessness')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
],
[
"pm = process_message('Hi hello depression and anxiety are the worst')\nsc_tf_idf.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('I am officially done with @kanyewest')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
],
[
"pm = process_message('Feeling down...')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
],
[
"pm = process_message('My depression will not let me work out')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
]
],
[
[
"# Positive Tweets",
"_____no_output_____"
]
],
[
[
"pm = process_message('Loving how me and my lovely partner is talking about what we want.')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
],
[
"pm = process_message('Very rewarding when a patient hugs you and tells you they feel great after changing the diet and daily habits')\nsc_tf_idf.classify(pm)\n",
"_____no_output_____"
],
[
"pm = process_message('Happy Thursday everyone. Thought today was Wednesday so super happy tomorrow is Friday yayyyyy')\nsc_tf_idf.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('It’s the little things that make me smile. Got our new car today and this arrived with it')\nsc_tf_idf.classify(pm)",
"_____no_output_____"
]
],
[
[
"# Predictions with Bag-of-Words (BOW)",
"_____no_output_____"
],
[
"# Depressive tweets",
"_____no_output_____"
]
],
[
[
"pm = process_message('Hi hello depression and anxiety are the worst')\nsc_bow.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('My depression will not let me work out')\nsc_bow.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('Feeling down...')\nsc_bow.classify(pm)",
"_____no_output_____"
]
],
[
[
"# Positive Tweets",
"_____no_output_____"
]
],
[
[
"pm = process_message('Loving how me and my lovely partner is talking about what we want.')\nsc_bow.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('Very rewarding when a patient hugs you and tells you they feel great after changing the diet and daily habits')\nsc_bow.classify(pm)",
"_____no_output_____"
],
[
"pm = process_message('Happy Thursday everyone. Thought today was Wednesday so super happy tomorrow is Friday yayyyyy')\nsc_bow.classify(pm)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d088ce515fdbed8e9e37f02ce88be0c04f365d2a | 350,068 | ipynb | Jupyter Notebook | notebooks/Sig-mu_vae.ipynb | Anmol42/IDP-sem4 | b7780b7e67a0d6a6a8f973a9eaf0c12b212f1ee3 | [
"MIT"
] | 1 | 2021-07-22T18:34:58.000Z | 2021-07-22T18:34:58.000Z | notebooks/Sig-mu_vae.ipynb | Anmol42/IDP-sem4 | b7780b7e67a0d6a6a8f973a9eaf0c12b212f1ee3 | [
"MIT"
] | null | null | null | notebooks/Sig-mu_vae.ipynb | Anmol42/IDP-sem4 | b7780b7e67a0d6a6a8f973a9eaf0c12b212f1ee3 | [
"MIT"
] | null | null | null | 572.942717 | 317,474 | 0.937461 | [
[
[
"<a href=\"https://colab.research.google.com/github/Anmol42/IDP-sem4/blob/main/notebooks/Sig-mu_vae.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\nimport torch.nn as nn\nimport matplotlib.pyplot as plt\nimport torch.nn.functional as F\nimport torchvision.transforms as transforms\nimport numpy as np\nfrom torch.utils.data.dataloader import DataLoader",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"!unzip -q /content/drive/MyDrive/Datasets/faces.zip ## Silenced the unzip action",
"_____no_output_____"
],
[
"from skimage.io import imread_collection\npath = \"/content/faces/*.jpg\"\ntrain_ds = imread_collection(path)",
"_____no_output_____"
],
[
"from skimage.io import imread_collection\nfrom skimage.color import rgb2lab,lab2rgb\nfrom skimage.transform import resize\n\ndef get_img_data(path):\n train_ds = imread_collection(path)\n images = torch.zeros(len(train_ds),3,128,128)\n for i,im in enumerate(train_ds):\n im = resize(im, (128,128,3),\n anti_aliasing=True)\n image = rgb2lab(im)\n image = torch.Tensor(image)\n image = image.permute(2,0,1)\n images[i]=image\n \n return images\n\ndef normalize_data(data):\n data[:,0] = data[:,0]/100\n data[:,1:] = data[:,1:]/128\n return data",
"_____no_output_____"
],
[
"images = get_img_data(path)\nimages = normalize_data(images)\nbatch_size = 100",
"_____no_output_____"
],
[
"class component(nn.Module):\n def __init__(self):\n super(component,self).__init__()\n self.conv1 = nn.Sequential(nn.Conv2d(1,8,kernel_size=3,padding=1,stride=2),\n nn.BatchNorm2d(8),\n nn.LeakyReLU())\n self.conv2 = nn.Sequential(nn.Conv2d(8,16,kernel_size=5,padding=2,stride=2),\n nn.BatchNorm2d(16),\n nn.LeakyReLU())\n self.conv3 = nn.Sequential(nn.Conv2d(16,32,kernel_size=3,padding=1,stride=2),\n nn.BatchNorm2d(32),\n nn.LeakyReLU())\n self.conv4 = nn.Sequential(nn.Conv2d(32,64,kernel_size=5,padding=2,stride=2), #size is 8x8 at this point\n nn.LeakyReLU())\n # BottleNeck\n self.bottleneck = nn.Sequential(nn.Conv2d(64,128,kernel_size=3,stride=2,padding=1),\n nn.LeakyReLU()) # size 4x4\n self.linear = nn.Linear(128*4*4,256)\n def forward(self,xb,z):\n out1 = self.conv1(xb)\n out2 = self.conv2(out1)\n out3 = self.conv3(out2)\n out4 = self.conv4(out3)\n out5 = self.bottleneck(out4)\n out5 = out5.view(z.shape[0],-1)\n out6 = self.linear(out5)\n return out6",
"_____no_output_____"
],
[
"## generator model\nclass generator(nn.Module):\n def __init__(self,component): # z is input noise\n super(generator,self).__init__()\n self.sigma = component()\n self.mu = component()\n self.deconv7 = nn.Sequential(nn.ConvTranspose2d(256,128,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv6 = nn.Sequential(nn.ConvTranspose2d(128,64,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv5 = nn.Sequential(nn.ConvTranspose2d(64,64,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv4 = nn.Sequential(nn.ConvTranspose2d(64,32,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv3 = nn.Sequential(nn.ConvTranspose2d(32,16,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv2 = nn.Sequential(nn.ConvTranspose2d(16,8,kernel_size=4,stride=2,padding=1),\n nn.ReLU())\n self.deconv1 = nn.Sequential(nn.ConvTranspose2d(8,2,kernel_size=4,stride=2,padding=1),\n nn.Tanh())\n self.linear = nn.Linear(128*4*4,512)\n\n def forward(self,xb,z):\n sig = self.sigma(xb,z)\n mm = self.mu(xb,z)\n noise = z*sig + mm\n out5 = self.deconv7(noise.unsqueeze(2).unsqueeze(2))\n out5 = self.deconv6(out5)\n out5 = self.deconv5(out5)\n out5 = self.deconv4(out5)\n out5 = self.deconv3(out5)\n out5 = self.deconv2(out5)\n out5 = self.deconv1(out5)\n return torch.cat((xb,out5),1)\n",
"_____no_output_____"
],
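The line `noise = z*sig + mm` in `generator.forward` above is the VAE reparameterization trick: a sample `z*sigma + mu` keeps the stochastic draw differentiable with respect to `sigma` and `mu`. A minimal standalone sketch of the idea (illustrative shapes, independent of the notebook's models):

```python
import torch

def reparameterize(mu, log_sigma):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness lives in eps,
    # so gradients can flow back into mu and log_sigma.
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_sigma) * eps

mu = torch.zeros(4, 8, requires_grad=True)
log_sigma = torch.zeros(4, 8, requires_grad=True)
z = reparameterize(mu, log_sigma)
z.sum().backward()  # gradients reach mu and log_sigma through the sample
print(z.shape)
```

Had the sample been drawn directly from `N(mu, sigma)`, no gradient would reach the two encoder branches; the trick is what makes the `sigma`/`mu` components above trainable.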
[
"## discriminator\nclass discriminator(nn.Module):\n def __init__(self):\n super(discriminator,self).__init__()\n self.network = nn.Sequential(\n nn.Conv2d(3,8,kernel_size=3,stride=1),\n nn.MaxPool2d(kernel_size=2),\n nn.ReLU(),\n nn.Conv2d(8,16,kernel_size=5),\n nn.MaxPool2d(kernel_size=2),\n nn.ReLU(),\n nn.Conv2d(16,32,kernel_size=3),\n nn.MaxPool2d(kernel_size=2),\n nn.ReLU(),\n nn.Conv2d(32,64,kernel_size=3),\n nn.MaxPool2d(kernel_size=2),\n nn.ReLU(),\n nn.Flatten()\n )\n self.linear1 = nn.Linear(64*25,128)\n self.linear2 = nn.Linear(128,1)\n def forward(self,x):\n out = self.network(x)\n out = self.linear1(out)\n out = self.linear2(out)\n out = torch.sigmoid(out)\n return out",
"_____no_output_____"
],
[
"gen_model = generator(component)\ndis_model = discriminator()",
"_____no_output_____"
],
[
"train_dl = DataLoader(images[:10000],batch_size,shuffle=True,pin_memory=True,num_workers=2)\nval_dl = DataLoader(images[10000:11000],batch_size, num_workers=2,pin_memory=True)\ntest_dl = DataLoader(images[11000:],batch_size,num_workers=2)",
"_____no_output_____"
],
[
"bceloss = nn.BCEWithLogitsLoss()\n#minimise this # t is whether the image is fake or real; x is prob vect of patches being real/fake.\ndef loss_inf(x,t): # probability vector from discriminator as input\n return int(t)*(bceloss(x,torch.ones_like(x))) + (1-int(t))*bceloss(x,torch.zeros_like(x))",
"_____no_output_____"
],
[
"l1loss = nn.L1Loss()\ndef gen_loss(x,y):\n return l1loss(x,y)",
"_____no_output_____"
],
[
"def to_device(data, device):\n \"\"\"Move tensor(s) to chosen device\"\"\"\n if isinstance(data, (list,tuple)):\n return [to_device(x, device) for x in data]\n return data.to(device, non_blocking=True)\n\nclass DeviceDataLoader():\n \"\"\"Wrap a dataloader to move data to a device\"\"\"\n def __init__(self, dl, device):\n self.dl = dl\n self.device = device\n \n def __iter__(self):\n \"\"\"Yield a batch of data after moving it to device\"\"\"\n for b in self.dl: \n yield to_device(b, self.device)\n\n def __len__(self):\n \"\"\"Number of batches\"\"\"\n return len(self.dl)",
"_____no_output_____"
],
[
"train_dl = DeviceDataLoader(train_dl,'cuda')\nval_dl = DeviceDataLoader(val_dl,'cuda')\ntest_dl = DeviceDataLoader(test_dl,'cuda')\ngen_model.to('cuda')\ndis_model.to('cuda')",
"_____no_output_____"
],
[
"def fit(epochs,lr_g,lr_d,generator,discriminator,batch_size,opt_func=torch.optim.Adam):\n gen_optimize = opt_func(generator.parameters(),lr_g)\n dis_optimize = opt_func(discriminator.parameters(),lr_d)\n train_g_history,train_d_history = [],[]\n val_g_history, val_d_history = [],[]\n for epoch in range(epochs):\n epoch_loss_g = torch.zeros(1).to('cuda')\n epoch_loss_d = torch.zeros(1).to('cuda')\n noise = torch.randn(batch_size,256).to('cuda')\n for batch in train_dl:\n for i in range(5):\n out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme \n real_score = discriminator(batch) # how real is the og input image\n fake_score = discriminator(out) # how real is the generated image \n loss_d = loss_inf(real_score,1) + loss_inf(fake_score,0)# discriminator\n #print(loss_d.item())\n dis_optimize.zero_grad() # clear gradients BEFORE backward, not between backward and step\n loss_d.backward()\n dis_optimize.step()\n \n out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme \n real_score = discriminator(batch) # how real is the og input image\n fake_score = discriminator(out) # how real is the generated image \n\n loss_g = 4*gen_loss(out,batch) + loss_inf(fake_score,1)\n gen_optimize.zero_grad() # drop generator grads accumulated during the discriminator updates\n loss_g.backward()\n gen_optimize.step()\n\n with torch.no_grad():\n epoch_loss_g += loss_g\n epoch_loss_d += loss_d\n train_d_history.append(epoch_loss_d)\n train_g_history.append(epoch_loss_g)\n epoch_loss_g = 0\n epoch_loss_d = 0\n for batch in val_dl:\n with torch.no_grad():\n out = generator(batch[:,0].unsqueeze(1),noise) # gives a,b channel for LAB color scheme \n real_score = discriminator(batch) # how real is the og input image\n fake_score = discriminator(out) # how real is the generated image \n loss_d = loss_inf(real_score,1) + loss_inf(fake_score,0)# discriminator\n loss_g = 4*gen_loss(out,batch) + loss_inf(fake_score,1)\n epoch_loss_g += loss_g\n epoch_loss_d += loss_d\n\n \n val_g_history.append(epoch_loss_g.item())\n val_d_history.append(epoch_loss_d.item())\n if epoch % 3 == 0:\n print(\"Gen Epoch Loss\",epoch_loss_g)\n print(\"Discriminator Epoch loss\",epoch_loss_d)\n return train_d_history,train_g_history,val_d_history,val_g_history\n",
"_____no_output_____"
],
[
"loss_h = fit(6,0.001,0.001,gen_model,dis_model,batch_size,opt_func=torch.optim.Adam)",
"Gen Epoch Loss tensor(6.4444, device='cuda:0')\nDiscriminator Epoch loss tensor(14.5036, device='cuda:0')\nGen Epoch Loss tensor(6.2706, device='cuda:0')\nDiscriminator Epoch loss tensor(14.5036, device='cuda:0')\n"
],
[
"import matplotlib.pyplot as plt\nplt.plot(loss_h[1])",
"_____no_output_____"
],
[
"from skimage.color import rgb2lab,lab2rgb,rgb2gray\ndef tensor_to_pic(tensor : torch.Tensor) -> np.ndarray:\n tensor[0] *= 100\n tensor[1:]*= 128\n image = tensor.permute(1,2,0).detach().cpu().numpy()\n image = lab2rgb(image)\n return image\n\ndef show_images(n,dataset = images,gen=gen_model,dis=dis_model) -> None:\n gen_model.eval()\n dis_model.eval()\n z = torch.randn(1,256).to('cuda')\n #z = torch.ones_like(z)\n \n image_tensor = dataset[n].to('cuda')\n gen_tensor = gen(image_tensor[0].unsqueeze(0).unsqueeze(0),z)[0]\n image = tensor_to_pic(image_tensor)\n #print(torch.sum(gen_tensor))\n gray = np.zeros_like(image) \n bw = rgb2gray(image)\n gray[:,:,0],gray[:,:,1],gray[:,:,2] = bw,bw,bw\n gen_image = tensor_to_pic(gen_tensor)\n to_be_shown = np.concatenate((gray,gen_image,image),axis=1)\n plt.figure(figsize=(15,15))\n plt.imshow(to_be_shown)\n plt.show()\n \n",
"_____no_output_____"
],
[
"i = np.random.randint(3500,20000)\nprint(i)\nshow_images(i) ## Shows generated and coloured images side by side",
"10755\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d088d07e68b6a5e0073b2e7d9d3384b35ce5b5de | 56,700 | ipynb | Jupyter Notebook | PyomoTutorial/microgrid/Microgrid_problem_0.ipynb | ReinboldV/PyomoGallery | c6fabd222065641d7057c84c3d134765d6488f17 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | PyomoTutorial/microgrid/Microgrid_problem_0.ipynb | ReinboldV/PyomoGallery | c6fabd222065641d7057c84c3d134765d6488f17 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | PyomoTutorial/microgrid/Microgrid_problem_0.ipynb | ReinboldV/PyomoGallery | c6fabd222065641d7057c84c3d134765d6488f17 | [
"BSD-2-Clause-FreeBSD"
] | 2 | 2020-07-10T16:29:54.000Z | 2020-09-04T13:44:06.000Z | 169.760479 | 46,868 | 0.900071 | [
[
[
"# MicroGrid Energy Management\n\n## Summary\n\nThe goal of the Microgrid problem is to compute an optimal power flow within the distributed sources, loads, storage units and a main grid. On a given time horizon $H$, the optimal power flow problem aims to find the optimal command of the components, e.g. charging/discharging for storage, selling/buying for external power sources, turning on/off intelligent loads, etc., knowing that the power balance must be respected $\\forall t\\in[0,H]$. \n\nThis problem can be formulated as a mixed integer linear program, for which constraints, variables and objectives are organized using pyomo blocks. \n\n<img src=\"figures/mg_pv_bat_eol_house.png\" width=\"500\">\n\n## Problem Statement\n\nThe Energy Management problem can be formulated mathematically as a mixed integer linear problem using the following model. \n\nPlan : \n1. Definition of sets\n2. Definition of distributed sources, loads and storage units.\n3. Definition of the global constraint (Power balance)\n4. Definition of the objective\n\n### Sets\n\nThe microgrid modelling requires only one Continuous Set : the time. We note $H$ the horizon in seconds. ",
"_____no_output_____"
]
],
[
[
"from pyomo.environ import *\nfrom pyomo.dae import ContinuousSet, Integral\n\nH = 60*60*24 # Time horizon in seconds\n\nm = AbstractModel()\nm.time = ContinuousSet(initialize=(0, H))",
"_____no_output_____"
]
],
[
[
"### Blocks\n\nThe microgrid is created by connecting units together, such as batteries, loads, renewable sources, etc. In the pyomo vocabulary, such a component is called a block. \n\nAs a first step, the microgrid is constituted of a renewable power source (PV panel), a critical load, and a connection to the main grid.\n\nA quick description of the useful blocks follows : \n\n - **Maingrid** : A block that describes the model of the distribution grid connection; a base version, named `AbsMainGridV0`, is available in `microgrids.maingrids`.\n - **Renewable Power Source** : A block that describes the model of a PV panel. This will be modeled by a deterministic power profile using a `Param` indexed by the time. Such a block is available in `microgrids.sources.AbsFixedPowerSource`. \n - **Power Load** : A block that describes the model of a critical load. This will be modeled by a deterministic power profile using a `Param` indexed by the time. Such a block is available in `microgrids.sources.AbsFixedPowerLoad`. \n \nBlocks are added to the main problem as follows : ",
"_____no_output_____"
]
],
[
[
"from batteries import AbsBatteryV0\nfrom maingrids import AbsMainGridV0\nfrom sources import AbsFixedPowerLoad, AbsFixedPowerSource\n\nm.mg = AbsMainGridV0()\nm.s = AbsFixedPowerSource()\nm.l = AbsFixedPowerLoad()",
"_____no_output_____"
]
],
[
[
"Each block is described by a set of constraints, variables, parameters and expressions.\n\nOne can print any pyomo object using the `pprint` method. Example : \n \n    m.mg.pprint()\nOne can access the documentation of any object using the built-in attribute `doc` or the `help` function (for inheritance). Pop-up documentation shortcut : `Shift+Tab`.\n\n    print(m.mg.doc)\n    help(m.mg)\n \nLet's have a look at the maingrid block : ",
"_____no_output_____"
],
[
"### Global Power Constraint\n\nThe electrical connection between blocks is modelled using constraints, aka Kirchhoff’s Laws or power balance : $$\\sum P_{sources}(t) = \\sum P_{loads}(t), \\forall t \\in [0, H] $$",
"_____no_output_____"
]
],
[
[
"@m.Constraint(m.time)\ndef power_balance(m, t):\n return m.mg.p[t] + m.s.p[t] == m.l.p[t]",
"_____no_output_____"
]
],
[
[
"### Objective\n\nAs a first hypothesis, we will only consider a fixed selling/buying price of energy $c$, such that : $$J = \\int_{0}^{H} c\\, p_{mg}(t)\\, dt.$$\n\nPyomo allows you to make integrals over a continuous set as follows : \n",
"_____no_output_____"
]
],
[
[
"m.int = Integral(m.time, wrt=m.time, rule=lambda m, i: m.mg.inst_cost[i])\nm.obj = Objective(expr=m.int)",
"_____no_output_____"
]
],
[
[
"## Instantiate the problem, discretize it and solve it\n\n`m` is a pyomo Abstract Model, as we saw in the pyomo tutorial, which means that the structure of the problem is now completely defined and may be used for different scenarios or cases. The following steps concern :\n\n1. Loading data (scenario, predictions and sizing of the components)\n2. Instantiating the problem\n3. Discretization\n4. Solving\n\n### Loading data\n\nParameters, loads and source profiles are already defined in the file `data/data_models.py`. One can load it and plot the PV and load profiles as follows :",
"_____no_output_____"
]
],
[
[
"%run data/data_models.py\ndf_s[['P_pv', 'P_load_1']].plot(figsize=(15,3))",
"/home/admin/anaconda3/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py:1172: UserWarning: Converting to PeriodArray/Index representation will drop timezone information.\n \"will drop timezone information.\", UserWarning)\n"
]
],
[
[
"### 2. Problem Instantiation\n\nThe abstract model is instantiated with the previously defined data dictionary, as follows :",
"_____no_output_____"
]
],
[
[
"inst = m.create_instance(data)",
"_____no_output_____"
]
],
[
[
"### Discretization\n\nAfter instantiation, one can discretize the problem equations over the time horizon. In the following, we choose a number of finite elements $nfe = 96$, i.e. one element every $15$ min for $H = 1\\ day$.",
"_____no_output_____"
]
],
[
[
"from pyomo.environ import TransformationFactory\ninst = m.create_instance(data)\n\nnfe = int(60*60*24/(15*60))  # 96 finite elements, i.e. one every 15 min\nTransformationFactory('dae.finite_difference').apply_to(inst, nfe=nfe)",
"_____no_output_____"
]
],
[
[
"### Solving\n\nThe problem is solved as follows :",
"_____no_output_____"
]
],
[
[
"opt = SolverFactory(\"glpk\")\nres = opt.solve(inst, load_solutions=True)",
"_____no_output_____"
]
],
[
[
"## Post-Processing",
"_____no_output_____"
]
],
[
[
"from utils import pplot\nindex = pd.date_range(start = TSTART, end = TEND, periods = nfe+1)\npplot(inst.mg.p, inst.l.p, inst.s.p, index = index, marker='x', figsize=(10,5))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08910e6740969d4bfdcf703fb9e0e08e3fce7c1 | 30,693 | ipynb | Jupyter Notebook | tutorials/color-excess/color-excess.ipynb | jvictor42/astropy-tutorials | 2826eb5c21d0cd4a9719c50fb60c811d7a37d439 | [
"BSD-3-Clause"
] | 210 | 2015-01-04T20:22:20.000Z | 2022-03-29T23:39:09.000Z | tutorials/color-excess/color-excess.ipynb | keflavich/astropy-tutorials | 18b480182156886f6057eb2127c07f2a8f8e8c5c | [
"BSD-3-Clause"
] | 448 | 2015-01-04T17:11:43.000Z | 2022-03-31T14:58:54.000Z | tutorials/color-excess/color-excess.ipynb | keflavich/astropy-tutorials | 18b480182156886f6057eb2127c07f2a8f8e8c5c | [
"BSD-3-Clause"
] | 150 | 2015-03-16T16:14:36.000Z | 2022-02-08T23:47:21.000Z | 33.914917 | 696 | 0.612257 | [
[
[
"# Analyzing interstellar reddening and calculating synthetic photometry",
"_____no_output_____"
],
[
"## Authors\n\nKristen Larson, Lia Corrales, Stephanie T. Douglas, Kelle Cruz\n\nInput from Emir Karamehmetoglu, Pey Lian Lim, Karl Gordon, Kevin Covey",
"_____no_output_____"
],
[
"## Learning Goals\n- Investigate extinction curve shapes\n- Deredden spectral energy distributions and spectra\n- Calculate photometric extinction and reddening\n- Calculate synthetic photometry for a dust-reddened star by combining `dust_extinction` and `synphot`\n- Convert from frequency to wavelength with `astropy.unit` equivalencies\n- Unit support for plotting with `astropy.visualization`\n\n\n## Keywords\ndust extinction, synphot, astroquery, units, photometry, extinction, physics, observational astronomy\n\n## Companion Content\n\n* [Bessell & Murphy (2012)](https://ui.adsabs.harvard.edu/#abs/2012PASP..124..140B/abstract)\n\n\n\n## Summary\n\nIn this tutorial, we will look at some extinction curves from the literature, use one of those curves to deredden an observed spectrum, and practice invoking a background source flux in order to calculate magnitudes from an extinction model.\n\nThe primary libraries we'll be using are [dust_extinction](https://dust-extinction.readthedocs.io/en/latest/) and [synphot](https://synphot.readthedocs.io/en/latest/), which are [Astropy affiliated packages](https://www.astropy.org/affiliated/). \n\nWe recommend installing the two packages in this fashion:\n```\npip install synphot\npip install dust_extinction\n```\nThis tutorial requires v0.7 or later of `dust_extinction`. To ensure that all commands work properly, make sure you have the correct version installed. If you have v0.6 or earlier installed, run the following command to upgrade\n```\npip install dust_extinction --upgrade\n```",
"_____no_output_____"
]
],
[
[
"import pathlib\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np\nimport astropy.units as u\nfrom astropy.table import Table\nfrom dust_extinction.parameter_averages import CCM89, F99\nfrom synphot import units, config\nfrom synphot import SourceSpectrum,SpectralElement,Observation,ExtinctionModel1D\nfrom synphot.models import BlackBodyNorm1D\nfrom synphot.spectrum import BaseUnitlessSpectrum\nfrom synphot.reddening import ExtinctionCurve\nfrom astroquery.simbad import Simbad\nfrom astroquery.mast import Observations\nimport astropy.visualization",
"_____no_output_____"
]
],
[
[
"# Introduction",
"_____no_output_____"
],
[
"Dust in the interstellar medium (ISM) extinguishes background starlight. The wavelength dependence of the extinction is such that short-wavelength light is extinguished more than long-wavelength light, and we call this effect *reddening*.\n\nIf you're new to extinction, here is a brief introduction to the types of quantities involved.\nThe fractional change to the flux of starlight is \n$$\n\\frac{dF_\\lambda}{F_\\lambda} = -\\tau_\\lambda\n$$\n\nwhere $\\tau$ is the optical depth and depends on wavelength. Integrating along the line of sight, the resultant flux is an exponential function of optical depth,\n$$\n\\tau_\\lambda = -\\ln\\left(\\frac{F_\\lambda}{F_{\\lambda,0}}\\right).\n$$\n\nWith an eye to how we define magnitudes, we usually change the base from $e$ to 10, \n$$\n\\tau_\\lambda = -2.303\\log\\left(\\frac{F_\\lambda}{F_{\\lambda,0}}\\right),\n$$\n\nand define an extinction $A_\\lambda = 1.086 \\,\\tau_\\lambda$ so that\n$$\nA_\\lambda = -2.5\\log\\left(\\frac{F_\\lambda}{F_{\\lambda,0}}\\right).\n$$\n\n\nThere are two basic take-home messages from this derivation:\n\n* Extinction introduces a multiplying factor $10^{-0.4 A_\\lambda}$ to the flux.\n* Extinction is defined relative to the flux without dust, $F_{\\lambda,0}$.\n",
"_____no_output_____"
],
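As a quick numerical check of the relations above (a standalone sketch, not part of the tutorial's code): one magnitude of extinction multiplies the flux by $10^{-0.4} \approx 0.398$, corresponding to an optical depth $\tau \approx 0.921$.

```python
import numpy as np

def flux_factor(A):
    """Multiplicative factor 10**(-0.4 A) applied to the flux by extinction A (mag)."""
    return 10.0 ** (-0.4 * A)

def extinction_mag(flux, flux0):
    """A_lambda = -2.5 log10(F / F_0), from observed and unextinguished flux."""
    return -2.5 * np.log10(flux / flux0)

A = 1.0
F = flux_factor(A)   # ~0.398: one magnitude of extinction dims the flux to ~40%
tau = A / 1.086      # optical depth, ~0.921
print(F, tau, extinction_mag(F, 1.0))
```

Applying `extinction_mag` to the output of `flux_factor` recovers the input extinction, which is just the round trip between the last two equations above.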
[
"Once astropy and the affiliated packages are installed, we can import from them as needed:",
"_____no_output_____"
],
[
"# Example 1: Investigate Extinction Models",
"_____no_output_____"
],
[
"The `dust_extinction` package provides various models for extinction $A_\\lambda$ normalized to $A_V$. The shapes of normalized curves are relatively (and perhaps surprisingly) uniform in the Milky Way. The little variation that exists is often parameterized by the ratio of extinction ($A_V$) to reddening in the blue-visual ($E_{B-V}$),\n$$\nR_V \\equiv \\frac{A_V}{E_{B-V}}\n$$\n\nwhere $E_{B-V}$ is differential extinction $A_B-A_V$. In this example, we show the $R_V$-parameterization for the Clayton, Cardelli, & Mathis (1989, CCM) and the Fitzpatrick (1999) models. [More model options are available in the `dust_extinction` documentation.](https://dust-extinction.readthedocs.io/en/latest/dust_extinction/model_flavors.html)",
"_____no_output_____"
]
],
[
[
"# Create wavelengths array.\nwav = np.arange(0.1, 3.0, 0.001)*u.micron\n\nfor model in [CCM89, F99]:\n for R in (2.0,3.0,4.0):\n # Initialize the extinction model\n ext = model(Rv=R)\n plt.plot(1/wav, ext(wav), label=model.name+' R='+str(R))\n \nplt.xlabel('$\\lambda^{-1}$ ($\\mu$m$^{-1}$)')\nplt.ylabel('A($\\lambda$) / A(V)')\nplt.legend(loc='best')\nplt.title('Some Extinction Laws')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Astronomers studying the ISM often display extinction curves against inverse wavelength (wavenumber) to show the ultraviolet variation, as we do here. Infrared extinction varies much less and approaches zero at long wavelength in the absence of wavelength-independent, or grey, extinction.",
"_____no_output_____"
],
[
"# Example 2: Deredden a Spectrum",
"_____no_output_____"
],
[
"Here we deredden (unextinguish) the IUE ultraviolet spectrum and optical photometry of the star $\\rho$ Oph (HD 147933).\n\nFirst, we will use astroquery to fetch the archival [IUE spectrum from MAST](https://archive.stsci.edu/iue/):",
"_____no_output_____"
]
],
[
[
"download_dir = pathlib.Path('~/.astropy/cache/astroquery/Mast').expanduser()\ndownload_dir.mkdir(exist_ok=True)\n\nobsTable = Observations.query_object(\"HD 147933\", radius=\"1 arcsec\")\nobsTable_spec = obsTable[obsTable['dataproduct_type'] == 'spectrum']\nobsTable_spec",
"_____no_output_____"
],
[
"obsids = obsTable_spec[39]['obsid']\ndataProductsByID = Observations.get_product_list(obsids)\nmanifest = Observations.download_products(dataProductsByID, \n download_dir=str(download_dir))",
"_____no_output_____"
]
],
[
[
"We read the downloaded files into an astropy table:",
"_____no_output_____"
]
],
[
[
"t_lwr = Table.read(download_dir / 'mastDownload/IUE/lwr05639/lwr05639mxlo_vo.fits')\nprint(t_lwr)",
"_____no_output_____"
]
],
[
[
"The `.quantity` extension in the next lines will read the Table columns into Quantity vectors. Quantities keep the units of the Table column attached to the numpy array values.",
"_____no_output_____"
]
],
[
[
"wav_UV = t_lwr['WAVE'][0,].quantity\nUVflux = t_lwr['FLUX'][0,].quantity",
"_____no_output_____"
]
],
[
[
"Now, we use astroquery again to fetch photometry from Simbad to go with the IUE spectrum:",
"_____no_output_____"
]
],
[
[
"custom_query = Simbad()\ncustom_query.add_votable_fields('fluxdata(U)','fluxdata(B)','fluxdata(V)')\nphot_table=custom_query.query_object('HD 147933')\nUmag=phot_table['FLUX_U']\nBmag=phot_table['FLUX_B']\nVmag=phot_table['FLUX_V']",
"_____no_output_____"
]
],
[
[
"To convert the photometry to flux, we look up some [properties of the photometric passbands](http://ned.ipac.caltech.edu/help/photoband.lst), including the flux of a magnitude zero star through each passband, also known as the zero-point of the passband.",
"_____no_output_____"
]
],
[
[
"wav_U = 0.3660 * u.micron \nzeroflux_U_nu = 1.81E-23 * u.Watt/(u.m*u.m*u.Hz)\nwav_B = 0.4400 * u.micron\nzeroflux_B_nu = 4.26E-23 * u.Watt/(u.m*u.m*u.Hz)\nwav_V = 0.5530 * u.micron\nzeroflux_V_nu = 3.64E-23 * u.Watt/(u.m*u.m*u.Hz)",
"_____no_output_____"
]
],
[
[
"The zero-points that we found for the optical passbands are not in the same units as the IUE fluxes. To make matters worse, the zero-point fluxes are $F_\\nu$ and the IUE fluxes are $F_\\lambda$. To convert between them, the wavelength is needed. Fortunately, astropy provides an easy way to make the conversion with *equivalencies*:",
"_____no_output_____"
]
],
[
[
"zeroflux_U = zeroflux_U_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, \n equivalencies=u.spectral_density(wav_U))\nzeroflux_B = zeroflux_B_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, \n equivalencies=u.spectral_density(wav_B))\nzeroflux_V = zeroflux_V_nu.to(u.erg/u.AA/u.cm/u.cm/u.s, \n equivalencies=u.spectral_density(wav_V))",
"_____no_output_____"
]
],
[
[
"Now we can convert from photometry to flux using the definition of magnitude:\n$$\nF=F_0\\ 10^{-0.4\\, m}\n$$",
"_____no_output_____"
]
],
[
[
"Uflux = zeroflux_U * 10.**(-0.4*Umag)\nBflux = zeroflux_B * 10.**(-0.4*Bmag)\nVflux = zeroflux_V * 10.**(-0.4*Vmag)",
"_____no_output_____"
]
],
[
[
"Using astropy quantities allow us to take advantage of astropy's unit support in plotting. [Calling `astropy.visualization.quantity_support` explicitly turns the feature on.](http://docs.astropy.org/en/stable/units/quantity.html#plotting-quantities) Then, when quantity objects are passed to matplotlib plotting functions, the axis labels are automatically labeled with the unit of the quantity. In addition, quantities are converted automatically into the same units when combining multiple plots on the same axes.\n",
"_____no_output_____"
]
],
[
[
"astropy.visualization.quantity_support()\n\nplt.plot(wav_UV,UVflux,'m',label='UV')\nplt.plot(wav_V,Vflux,'ko',label='U, B, V')\nplt.plot(wav_B,Bflux,'ko')\nplt.plot(wav_U,Uflux,'ko')\nplt.legend(loc='best')\nplt.ylim(0,3E-10)\nplt.title('rho Oph')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Finally, we initialize the extinction model, choosing values $R_V = 5$ and $E_{B-V} = 0.5$. This star is famous in the ISM community for having large-$R_V$ dust in the line of sight.",
"_____no_output_____"
]
],
[
[
"Rv = 5.0 # Usually around 3, but about 5 for this star.\nEbv = 0.5\next = F99(Rv=Rv)",
"_____no_output_____"
]
],
[
[
"To extinguish (redden) a spectrum, multiply by the `ext.extinguish` function. To unextinguish (deredden), divide by the same `ext.extinguish`, as we do here:",
"_____no_output_____"
]
],
[
[
"plt.semilogy(wav_UV,UVflux,'m',label='UV')\nplt.semilogy(wav_V,Vflux,'ko',label='U, B, V')\nplt.semilogy(wav_B,Bflux,'ko')\nplt.semilogy(wav_U,Uflux,'ko')\n\nplt.semilogy(wav_UV,UVflux/ext.extinguish(wav_UV,Ebv=Ebv),'b',\n label='dereddened: EBV=0.5, RV=5')\nplt.semilogy(wav_V,Vflux/ext.extinguish(wav_V,Ebv=Ebv),'ro',\n label='dereddened: EBV=0.5, RV=5')\nplt.semilogy(wav_B,Bflux/ext.extinguish(wav_B,Ebv=Ebv),'ro')\nplt.semilogy(wav_U,Uflux/ext.extinguish(wav_U,Ebv=Ebv),'ro')\n\nplt.legend(loc='best')\nplt.title('rho Oph')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Notice that, by dereddening the spectrum, the absorption feature at 2175 Angstrom is removed. This feature can also be seen as the prominent bump in the extinction curves in Example 1. That we have smoothly removed the 2175 Angstrom feature suggests that the values we chose, $R_V = 5$ and $E_{B-V} = 0.5$, are a reasonable model for the foreground dust.\n\nThose experienced with dereddening should notice that `dust_extinction` returns $A_\\lambda/A_V$, while other routines like the IDL fm_unred procedure often return $A_\\lambda/E_{B-V}$ by default and need to be divided by $R_V$ in order to compare directly with `dust_extinction`.",
"_____no_output_____"
],
[
"# Example 3: Calculate Color Excess with `synphot`",
"_____no_output_____"
],
[
"Calculating broadband *photometric* extinction is harder than it might look at first. All we have to do is look up $A_\\lambda$ for a particular passband, right? Under the right conditions, yes. In general, no.\n\nRemember that we have to integrate over a passband to get synthetic photometry,\n$$\nA = -2.5\\log\\left(\\frac{\\int W_\\lambda F_{\\lambda,0} 10^{-0.4A_\\lambda} d\\lambda}{\\int W_\\lambda F_{\\lambda,0} d\\lambda} \\right),\n$$\n\nwhere $W_\\lambda$ is the fraction of incident energy transmitted through a filter. See the detailed appendix in [Bessell & Murphy (2012)](https://ui.adsabs.harvard.edu/#abs/2012PASP..124..140B/abstract)\n for an excellent review of the issues and common misunderstandings in synthetic photometry.\n\nThere is an important point to be made here. The expression above does not simplify any further. Strictly speaking, it is impossible to convert spectral extinction $A_\\lambda$ into a magnitude system without knowing the wavelength dependence of the source's original flux across the filter in question. As a special case, if we assume that the source flux is constant in the band (i.e. $F_\\lambda = F$), then we can cancel these factors out from the integrals, and extinction in magnitudes becomes the weighted average of the extinction factor across the filter in question. In that special case, $A_\\lambda$ at $\\lambda_{\\rm eff}$ is a good approximation for magnitude extinction.\n\nIn this example, we will demonstrate the more general calculation of photometric extinction. We use a blackbody curve for the flux before the dust, apply an extinction curve, and perform synthetic photometry to calculate extinction and reddening in a magnitude system.\n",
"_____no_output_____"
],
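To make the point concrete, here is a toy numerical sketch (with an invented filter curve and extinction law, not the `synphot` calculation used below) showing that band-averaged extinction depends on the shape of the source spectrum:

```python
import numpy as np

wav = np.linspace(0.4, 0.6, 201)                  # micron, a toy "V-like" band
W = np.exp(-0.5 * ((wav - 0.5) / 0.04) ** 2)      # toy filter transmission
A_lam = 1.0 + 2.0 * (0.5 - wav)                   # toy extinction law (mag)

def band_extinction(F):
    # A = -2.5 log10( sum W F 10^(-0.4 A_lam) / sum W F );
    # the wavelength grid is uniform, so the step size cancels in the ratio.
    num = np.sum(W * F * 10 ** (-0.4 * A_lam))
    den = np.sum(W * F)
    return -2.5 * np.log10(num / den)

flat = np.ones_like(wav)      # flat F_lambda across the band
steep = (0.5 / wav) ** 4      # spectrum rising steeply toward the blue
print(band_extinction(flat), band_extinction(steep))
```

The flat spectrum gives roughly the extinction at the band center, while the blue-steep spectrum weights the short-wavelength (more extinguished) side of the filter and yields a larger band extinction, which is why $A_\lambda$ at $\lambda_{\rm eff}$ is only a special-case approximation.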
[
"First, let's get the filter transmission curves:",
"_____no_output_____"
]
],
[
[
"# Optional, for when the STScI ftp server is not answering:\nconfig.conf.vega_file = 'http://ssb.stsci.edu/cdbs/calspec/alpha_lyr_stis_008.fits'\nconfig.conf.johnson_u_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_u_004_syn.fits'\nconfig.conf.johnson_b_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_b_004_syn.fits'\nconfig.conf.johnson_v_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_v_004_syn.fits'\nconfig.conf.johnson_r_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_r_003_syn.fits'\nconfig.conf.johnson_i_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_i_003_syn.fits'\nconfig.conf.bessel_j_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_j_003_syn.fits'\nconfig.conf.bessel_h_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_h_004_syn.fits'\nconfig.conf.bessel_k_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_k_003_syn.fits'\n\nu_band = SpectralElement.from_filter('johnson_u')\nb_band = SpectralElement.from_filter('johnson_b')\nv_band = SpectralElement.from_filter('johnson_v')\nr_band = SpectralElement.from_filter('johnson_r')\ni_band = SpectralElement.from_filter('johnson_i')\nj_band = SpectralElement.from_filter('bessel_j')\nh_band = SpectralElement.from_filter('bessel_h')\nk_band = SpectralElement.from_filter('bessel_k')",
"_____no_output_____"
]
],
[
[
"If you are running this with your own python, see the [synphot documentation](https://synphot.readthedocs.io/en/latest/#installation-and-setup) on how to install your own copy of the necessary files.",
"_____no_output_____"
],
[
"Next, let's make a background flux to which we will apply extinction. Here we make a 10,000 K blackbody using the model mechanism from within `synphot` and normalize it to $V$ = 10 in the Vega-based magnitude system.",
"_____no_output_____"
]
],
[
[
"# First, create a blackbody at some temperature.\nsp = SourceSpectrum(BlackBodyNorm1D, temperature=10000)\n# sp.plot(left=1, right=15000, flux_unit='flam', title='Blackbody')\n\n# Get the Vega spectrum as the zero point flux.\nvega = SourceSpectrum.from_vega()\n# vega.plot(left=1, right=15000)\n\n# Normalize the blackbody to some chosen magnitude, say V = 10.\nvmag = 10.\nv_band = SpectralElement.from_filter('johnson_v')\nsp_norm = sp.normalize(vmag * units.VEGAMAG, v_band, vegaspec=vega)\nsp_norm.plot(left=1, right=15000, flux_unit='flam', title='Normed Blackbody')",
"_____no_output_____"
]
],
[
[
"Now we initialize the extinction model and choose an extinction of $A_V$ = 2. To get the `dust_extinction` model working with `synphot`, we create a wavelength array and make a spectral element with the extinction model as a lookup table.",
"_____no_output_____"
]
],
[
[
"# Initialize the extinction model and choose the extinction, here Av = 2.\next = CCM89(Rv=3.1)\nAv = 2.\n\n# Create a wavelength array. \nwav = np.arange(0.1, 3, 0.001)*u.micron\n\n# Make the extinction model in synphot using a lookup table.\nex = ExtinctionCurve(ExtinctionModel1D, \n points=wav, lookup_table=ext.extinguish(wav, Av=Av))\nsp_ext = sp_norm*ex\nsp_ext.plot(left=1, right=15000, flux_unit='flam',\n title='Normed Blackbody with Extinction')",
"_____no_output_____"
]
],
[
[
"Synthetic photometry refers to modeling an observation of a star by multiplying the theoretical model for the astronomical flux through a certain filter response function, then integrating.",
"_____no_output_____"
]
],
[
[
"# \"Observe\" the star through the filter and integrate to get photometric mag.\nsp_obs = Observation(sp_ext, v_band)\nsp_obs_before = Observation(sp_norm, v_band)\n# sp_obs.plot(left=1, right=15000, flux_unit='flam',\n# title='Normed Blackbody with Extinction through V Filter')",
"_____no_output_____"
]
],
[
[
"Next, `synphot` performs the integration and computes magnitudes in the Vega system.",
"_____no_output_____"
]
],
[
[
"sp_stim_before = sp_obs_before.effstim(flux_unit='vegamag', vegaspec=vega)\nsp_stim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)\nprint('before dust, V =', np.round(sp_stim_before,1))\nprint('after dust, V =', np.round(sp_stim,1))\n\n# Calculate extinction and compare to our chosen value.\nAv_calc = sp_stim - sp_stim_before\nprint('$A_V$ = ', np.round(Av_calc,1))",
"_____no_output_____"
]
],
[
[
"This is a good check for us to do. We normalized our spectrum to $V$ = 10 mag and added 2 mag of visual extinction, so the synthetic photometry procedure should reproduce these chosen values, and it does. Now we are ready to find the extinction in other passbands. ",
"_____no_output_____"
],
[
"We calculate the new photometry for the rest of the Johnson optical and the Bessell infrared filters. We calculate extinction $A = \\Delta m$ and plot color excess, $E(\\lambda - V) = A_\\lambda - A_V$. \n\nNotice that `synphot` calculates the effective wavelength of the observations for us, which is very useful for plotting the results. We show reddening with the model extinction curve for comparison in the plot.",
"_____no_output_____"
]
],
[
[
"bands = [u_band,b_band,v_band,r_band,i_band,j_band,h_band,k_band]\n\nfor band in bands:\n # Calculate photometry with dust:\n sp_obs = Observation(sp_ext, band, force='extrap')\n obs_effstim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)\n # Calculate photometry without dust:\n sp_obs_i = Observation(sp_norm, band, force='extrap')\n obs_i_effstim = sp_obs_i.effstim(flux_unit='vegamag', vegaspec=vega)\n \n # Extinction = mag with dust - mag without dust\n # Color excess = extinction at lambda - extinction at V\n color_excess = obs_effstim - obs_i_effstim - Av_calc\n plt.plot(sp_obs_i.effective_wavelength(), color_excess,'or')\n print(np.round(sp_obs_i.effective_wavelength(),1), ',', \n np.round(color_excess,2))\n\n# Plot the model extinction curve for comparison \nplt.plot(wav,Av*ext(wav)-Av,'--k')\nplt.ylim([-2,2])\nplt.xlabel('$\\lambda$ (Angstrom)')\nplt.ylabel('E($\\lambda$-V)')\nplt.title('Reddening of T=10,000K Background Source with Av=2')\nplt.show() ",
"_____no_output_____"
]
],
[
[
"## Exercise\nTry changing the blackbody temperature to something very hot or very cool. Are the color excess values the same? Have the effective wavelengths changed?\n\nNote that the photometric extinction changes because the filter transmission is not uniform. The observed throughput of the filter depends on the shape of the background source flux.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d08918670b9d1148178d69aef05c0bf997a160fb | 545 | ipynb | Jupyter Notebook | _build/html/_sources/chapters/unscented/unscented_filter.ipynb | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | 10 | 2022-03-22T21:28:03.000Z | 2022-03-29T17:42:06.000Z | chapters/unscented/unscented_filter.ipynb | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | null | null | null | chapters/unscented/unscented_filter.ipynb | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | 1 | 2022-03-23T02:15:23.000Z | 2022-03-23T02:15:23.000Z | 16.515152 | 34 | 0.52844 | [
[
[
"# Unscented filtering",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0891d45f896f8f3fd0ab9f11cff64f14581952b | 341,940 | ipynb | Jupyter Notebook | plot_01_ert_2d_mod_inv.ipynb | ruboerner/pg | 54661f3732e1b7b87c76898045412a1a759435c4 | [
"CC0-1.0"
] | 1 | 2020-05-24T07:16:06.000Z | 2020-05-24T07:16:06.000Z | plot_01_ert_2d_mod_inv.ipynb | ruboerner/pg | 54661f3732e1b7b87c76898045412a1a759435c4 | [
"CC0-1.0"
] | null | null | null | plot_01_ert_2d_mod_inv.ipynb | ruboerner/pg | 54661f3732e1b7b87c76898045412a1a759435c4 | [
"CC0-1.0"
] | 1 | 2022-03-09T11:56:00.000Z | 2022-03-09T11:56:00.000Z | 503.59352 | 90,560 | 0.944511 | [
[
[
"# Checkout www.pygimli.org for more examples\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# 2D ERT modeling and inversion\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\nimport pygimli as pg\nimport pygimli.meshtools as mt\nfrom pygimli.physics import ert",
"_____no_output_____"
]
],
[
[
"Create geometry definition for the modelling domain.\n\nworldMarker=True indicates the default boundary conditions for the ERT\n\n",
"_____no_output_____"
]
],
[
[
"world = mt.createWorld(start=[-50, 0], end=[50, -50], layers=[-1, -8],\n worldMarker=True)",
"_____no_output_____"
]
],
[
[
"Create some heterogeneous circular anomaly\n\n",
"_____no_output_____"
]
],
[
[
"block = mt.createCircle(pos=[-4.0, -5.0], radius=[1, 1.8], marker=4,\n boundaryMarker=10, area=0.01)",
"_____no_output_____"
],
[
"circle = mt.createCircle(pos=[4.0, -5.0], radius=[1, 1.8], marker=5,\n boundaryMarker=10, area=0.01)",
"_____no_output_____"
],
[
"poly = mt.createPolygon([(1,-4), (2,-1.5), (4,-2), (5,-2),\n (8,-3), (5,-3.5), (3,-4.5)], isClosed=True,\n addNodes=3, interpolate='spline', marker=5)",
"_____no_output_____"
]
],
[
[
"Merge geometry definition into a Piecewise Linear Complex (PLC)\n\n",
"_____no_output_____"
]
],
[
[
"geom = world + block + circle # + poly",
"_____no_output_____"
]
],
[
[
"Optional: show the geometry\n\n",
"_____no_output_____"
]
],
[
[
"pg.show(geom)",
"_____no_output_____"
]
],
[
[
"Create a Dipole Dipole ('dd') measuring scheme with 21 electrodes.\n\n",
"_____no_output_____"
]
],
[
[
"scheme = ert.createData(elecs=np.linspace(start=-20, stop=20, num=42),\n schemeName='dd')",
"_____no_output_____"
]
],
[
[
"Put all electrode (aka sensors) positions into the PLC to enforce mesh\nrefinement. Due to experience, its convenient to add further refinement\nnodes in a distance of 10% of electrode spacing to achieve sufficient\nnumerical accuracy.\n\n",
"_____no_output_____"
]
],
[
[
"for p in scheme.sensors():\n geom.createNode(p)\n geom.createNode(p - [0, 0.01])\n\n# Create a mesh for the finite element modelling with appropriate mesh quality.\nmesh = mt.createMesh(geom, quality=34)\n\n# Create a map to set resistivity values in the appropriate regions\n# [[regionNumber, resistivity], [regionNumber, resistivity], [...]\nrhomap = [[1, 50.],\n [2, 50.],\n [3, 50.],\n [4, 150.],\n [5, 15]]\n\n# Take a look at the mesh and the resistivity distribution\npg.show(mesh, data=rhomap, label=pg.unit('res'), showMesh=True)",
"_____no_output_____"
]
],
[
[
"Perform the modeling with the mesh and the measuring scheme itself\nand return a data container with apparent resistivity values,\ngeometric factors and estimated data errors specified by the noise setting.\nThe noise is also added to the data. Here 1% plus 1µV.\nNote, we force a specific noise seed as we want reproducable results for\ntesting purposes.\n\n",
"_____no_output_____"
]
],
[
[
"data = ert.simulate(mesh, scheme=scheme, res=rhomap, noiseLevel=1,\n noiseAbs=1e-6, seed=1337, verbose=False)\n\npg.info(np.linalg.norm(data['err']), np.linalg.norm(data['rhoa']))\npg.info('Simulated data', data)\npg.info('The data contains:', data.dataMap().keys())\n\npg.info('Simulated rhoa (min/max)', min(data['rhoa']), max(data['rhoa']))\npg.info('Selected data noise %(min/max)', min(data['err'])*100, max(data['err'])*100)",
"29/11/21 - 07:23:15 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - 0.29294827187374417 1379.7571227859778\n29/11/21 - 07:23:15 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Simulated data Data: Sensors: 42 data: 780, nonzero entries: ['a', 'b', 'err', 'k', 'm', 'n', 'rhoa', 'valid']\n29/11/21 - 07:23:15 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - The data contains: ['a', 'b', 'err', 'i', 'ip', 'iperr', 'k', 'm', 'n', 'r', 'rhoa', 'u', 'valid']\n29/11/21 - 07:23:15 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Simulated rhoa (min/max) 44.872181820362094 55.08720265284357\n29/11/21 - 07:23:15 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Selected data noise %(min/max) 1.0000367418238607 1.3995249661503184\n"
],
[
"# data['k']",
"_____no_output_____"
]
],
[
[
"Optional: you can filter all values and tokens in the data container.\nIts possible that there are some negative data values due to noise and\nhuge geometric factors. So we need to remove them.\n\n",
"_____no_output_____"
]
],
[
[
"data.remove(data['rhoa'] < 0)\n# data.remove(data['k'] < -20000.0)\npg.info('Filtered rhoa (min/max)', min(data['rhoa']), max(data['rhoa']))\n\n# You can save the data for further use\ndata.save('simple.dat')\n\n# You can take a look at the data\nert.show(data, cMap=\"RdBu_r\")",
"24/11/21 - 13:43:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Filtered rhoa (min/max) 44.872181820362094 55.08720265284357\n"
]
],
[
[
"Initialize the ERTManager, e.g. with a data container or a filename.\n\n",
"_____no_output_____"
]
],
[
[
"mgr = ert.ERTManager('simple.dat')",
"_____no_output_____"
]
],
[
[
"Run the inversion with the preset data. The Inversion mesh will be created\nwith default settings.\n\n",
"_____no_output_____"
]
],
[
[
"inv = mgr.invert(lam=10, verbose=False)\n#np.testing.assert_approx_equal(mgr.inv.chi2(), 0.7, significant=1)",
"24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Found 2 regions.\n24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Region with smallest marker (1) set to background\n24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Creating forward mesh from region infos.\n24/11/21 - 13:41:23 - Core - \u001b[0;33;49mWARNING\u001b[0m - Region Nr: 1 is background and should not get a model transformation.\n24/11/21 - 13:41:23 - Core - \u001b[0;33;49mWARNING\u001b[0m - Region Nr: 1 is background and should not get a model control.\n24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Creating refined mesh (H2) to solve forward task.\n24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Set default startmodel to median(data values)=49.532539205485705\n24/11/21 - 13:41:23 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Created startmodel from forward operator: 633 [49.532539205485705,...,49.532539205485705]\n"
]
],
[
[
"Let the ERTManger show you the model of the last successful run and how it\nfits the data. Shows data, model response, and model.\n\n",
"_____no_output_____"
]
],
[
[
"mgr.showResultAndFit(cMap=\"RdBu_r\")\nmeshPD = pg.Mesh(mgr.paraDomain) # Save copy of para mesh for plotting later",
"_____no_output_____"
]
],
[
[
"You can also provide your own mesh (e.g., a structured grid if you like them)\nNote, that x and y coordinates needs to be in ascending order to ensure that\nall the cells in the grid have the correct orientation, i.e., all cells need\nto be numbered counter-clockwise and the boundary normal directions need to\npoint outside.\n\n",
"_____no_output_____"
]
],
[
[
"inversionDomain = pg.createGrid(x=np.linspace(start=-21, stop=21, num=43),\n y=-pg.cat([0], pg.utils.grange(0.5, 8, n=8))[::-1],\n marker=2)",
"_____no_output_____"
]
],
[
[
"The inversion domain for ERT problems needs a boundary that represents the\nfar regions in the subsurface of the halfspace.\nGive a cell marker lower than the marker for the inversion region, the lowest\ncell marker in the mesh will be the inversion boundary region by default.\n\n",
"_____no_output_____"
]
],
[
[
"grid = pg.meshtools.appendTriangleBoundary(inversionDomain, marker=1,\n xbound=50, ybound=50)\npg.show(grid, markers=True)\n\n#pg.show(grid, markers=True)",
"_____no_output_____"
]
],
[
[
"The Inversion can be called with data and mesh as argument as well\n\n",
"_____no_output_____"
]
],
[
[
"model = mgr.invert(data, mesh=grid, lam=10, verbose=False)\n# np.testing.assert_approx_equal(mgr.inv.chi2(), 0.951027, significant=3)",
"24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Found 2 regions.\n24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Region with smallest marker (1) set to background\n24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Creating forward mesh from region infos.\n24/11/21 - 13:41:27 - Core - \u001b[0;33;49mWARNING\u001b[0m - Region Nr: 1 is background and should not get a model transformation.\n24/11/21 - 13:41:27 - Core - \u001b[0;33;49mWARNING\u001b[0m - Region Nr: 1 is background and should not get a model control.\n24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Creating refined mesh (H2) to solve forward task.\n24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Set default startmodel to median(data values)=49.53253920548567\n24/11/21 - 13:41:27 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Created startmodel from forward operator: 336 [49.53253920548567,...,49.53253920548567]\n"
]
],
[
[
"You can of course get access to mesh and model and plot them for your own.\nNote that the cells of the parametric domain of your mesh might be in\na different order than the values in the model array if regions are used.\nThe manager can help to permutate them into the right order.\n\n",
"_____no_output_____"
]
],
[
[
"# np.testing.assert_approx_equal(mgr.inv.chi2(), 1.4, significant=2)\n\nmaxC = 150\n\nmodelPD = mgr.paraModel(model) # do the mapping\npg.show(mgr.paraDomain, modelPD, label='Model', cMap='RdBu_r',\n logScale=True, cMin=15, cMax=maxC)\n\npg.info('Inversion stopped with chi² = {0:.3}'.format(mgr.fw.chi2()))\n\nfig, (ax1, ax2, ax3) = plt.subplots(3,1, sharex=True, sharey=True, figsize=(8,7))\n\npg.show(mesh, rhomap, ax=ax1, hold=True, cMap=\"RdBu_r\", logScale=True,\n orientation=\"vertical\", cMin=15, cMax=maxC)\npg.show(meshPD, inv, ax=ax2, hold=True, cMap=\"RdBu_r\", logScale=True,\n orientation=\"vertical\", cMin=15, cMax=maxC)\nmgr.showResult(ax=ax3, cMin=15, cMax=maxC, cMap=\"RdBu_r\", orientation=\"vertical\")\n\nlabels = [\"True model\", \"Inversion unstructured mesh\", \"Inversion regular grid\"]\nfor ax, label in zip([ax1, ax2, ax3], labels):\n ax.set_xlim(mgr.paraDomain.xmin(), mgr.paraDomain.xmax())\n ax.set_ylim(mgr.paraDomain.ymin(), mgr.paraDomain.ymax())\n ax.set_title(label)",
"24/11/21 - 13:43:44 - pyGIMLi - \u001b[0;32;49mINFO\u001b[0m - Inversion stopped with chi² = 0.944\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0891e3d00dc841932b946a11fa8d004b621202c | 32,706 | ipynb | Jupyter Notebook | week03_lm/seminar.ipynb | ivkrasovskiy/nlp_course | bd5b4eddad6a28d41d19b4e64ec0fe9295d002e7 | [
"MIT"
] | null | null | null | week03_lm/seminar.ipynb | ivkrasovskiy/nlp_course | bd5b4eddad6a28d41d19b4e64ec0fe9295d002e7 | [
"MIT"
] | null | null | null | week03_lm/seminar.ipynb | ivkrasovskiy/nlp_course | bd5b4eddad6a28d41d19b4e64ec0fe9295d002e7 | [
"MIT"
] | null | null | null | 45.111724 | 2,911 | 0.565309 | [
[
[
"### N-gram language models or how to write scientific papers (4 pts)\n\nWe shall train our language model on a corpora of [ArXiv](http://arxiv.org/) articles and see if we can generate a new one!\n\n\n\n_data by neelshah18 from [here](https://www.kaggle.com/neelshah18/arxivdataset/)_\n\n_Disclaimer: this has nothing to do with actual science. But it's fun, so who cares?!_",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"# Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w\n# !wget \"https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1\" -O arxivData.json.tar.gz\n# !tar -xvzf arxivData.json.tar.gz\ndata = pd.read_json(\"./arxivData.json\")\ndata.sample(n=5)",
"_____no_output_____"
],
[
"# assemble lines: concatenate title and description\nlines = data.apply(lambda row: row['title'] + ' ; ' + row['summary'], axis=1).tolist()\n\nsorted(lines, key=len)[:3]",
"_____no_output_____"
]
],
[
[
"### Tokenization\n\nYou know the dril. The data is messy. Go clean the data. Use WordPunctTokenizer or something.\n",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import WordPunctTokenizer\n# Task: convert lines (in-place) into strings of space-separated tokens. import & use WordPunctTokenizer\ntokenizer = WordPunctTokenizer()\nlines = [' '.join(tokenizer.tokenize(line.lower())) for line in lines]",
"_____no_output_____"
],
[
"assert sorted(lines, key=len)[0] == \\\n 'differential contrastive divergence ; this paper has been retracted .'\nassert sorted(lines, key=len)[2] == \\\n 'p = np ; we claim to resolve the p =? np problem via a formal argument for p = np .'",
"_____no_output_____"
]
],
[
[
"### N-Gram Language Model (1point)\n\nA language model is a probabilistic model that estimates text probability: the joint probability of all tokens $w_t$ in text $X$: $P(X) = P(w_1, \\dots, w_T)$.\n\nIt can do so by following the chain rule:\n$$P(w_1, \\dots, w_T) = P(w_1)P(w_2 \\mid w_1)\\dots P(w_T \\mid w_1, \\dots, w_{T-1}).$$ \n\nThe problem with such approach is that the final term $P(w_T \\mid w_1, \\dots, w_{T-1})$ depends on $n-1$ previous words. This probability is impractical to estimate for long texts, e.g. $T = 1000$.\n\nOne popular approximation is to assume that next word only depends on a finite amount of previous words:\n\n$$P(w_t \\mid w_1, \\dots, w_{t - 1}) = P(w_t \\mid w_{t - n + 1}, \\dots, w_{t - 1})$$\n\nSuch model is called __n-gram language model__ where n is a parameter. For example, in 3-gram language model, each word only depends on 2 previous words. \n\n$$\n P(w_1, \\dots, w_n) = \\prod_t P(w_t \\mid w_{t - n + 1}, \\dots, w_{t - 1}).\n$$\n\nYou can also sometimes see such approximation under the name of _n-th order markov assumption_.",
"_____no_output_____"
],
[
"The first stage to building such a model is counting all word occurences given N-1 previous words",
"_____no_output_____"
]
],
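Before implementing the counters below, here is a tiny sketch of the factorization with a hand-made bigram table (the probabilities are invented for illustration; the real table is built from the corpus in the next cells):

```python
import numpy as np

# A hand-made bigram table P(w_t | w_{t-1}); "_UNK_" pads the empty prefix.
probs = {
    ("_UNK_",): {"deep": 0.5, "neural": 0.5},
    ("deep",): {"learning": 0.9, "neural": 0.1},
    ("learning",): {"_EOS_": 1.0},
}

def sentence_logprob(tokens):
    # log P(w_1, ..., w_T, EOS) = sum_t log P(w_t | w_{t-1}) under the 2-gram assumption
    logp, prev = 0.0, "_UNK_"
    for token in tokens + ["_EOS_"]:
        logp += np.log(probs[(prev,)][token])
        prev = token
    return logp

print(np.exp(sentence_logprob(["deep", "learning"])))  # 0.5 * 0.9 * 1.0 = 0.45
```

Summing log-probabilities instead of multiplying raw probabilities is the same trick the perplexity section below relies on to avoid float underflow.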
[
[
"from tqdm import tqdm\nfrom collections import defaultdict, Counter\n\n# special tokens: \n# - unk represents absent tokens, \n# - eos is a special token after the end of sequence\n\nUNK, EOS = \"_UNK_\", \"_EOS_\"\n\ndef count_ngrams(lines, n):\n \"\"\"\n Count how many times each word occured after (n - 1) previous words\n :param lines: an iterable of strings with space-separated tokens\n :returns: a dictionary { tuple(prefix_tokens): {next_token_1: count_1, next_token_2: count_2}}\n\n When building counts, please consider the following two edge cases\n - if prefix is shorter than (n - 1) tokens, it should be padded with UNK. For n=3,\n empty prefix: \"\" -> (UNK, UNK)\n short prefix: \"the\" -> (UNK, the)\n long prefix: \"the new approach\" -> (new, approach)\n - you should add a special token, EOS, at the end of each sequence\n \"... with deep neural networks .\" -> (..., with, deep, neural, networks, ., EOS)\n count the probability of this token just like all others.\n \"\"\"\n counts = defaultdict(Counter)\n # counts[(word1, word2)][word3] = how many times word3 occured after (word1, word2)\n if n == 1:\n for line in lines:\n counts.update(((word,) for word in line.split()))\n\n for i in range(1, n+1, 1):\n for line in lines:\n splitted = line.split()\n line_len = len(splitted)\n splitted.append(EOS)\n\n for j in range(line_len):\n try:\n current_slice = splitted[j:j+i]\n counts.update(tuple(current_slice[:-1])\n\n \n return counts\n",
"_____no_output_____"
],
[
"# let's test it\ndummy_lines = sorted(lines, key=len)[:100]\ndummy_counts = count_ngrams(dummy_lines, n=3)\nassert set(map(len, dummy_counts.keys())) == {2}, \"please only count {n-1}-grams\"\nassert len(dummy_counts[('_UNK_', '_UNK_')]) == 78\nassert dummy_counts['_UNK_', 'a']['note'] == 3\nassert dummy_counts['p', '=']['np'] == 2\nassert dummy_counts['author', '.']['_EOS_'] == 1",
"_____no_output_____"
]
],
[
[
"Once we can count N-grams, we can build a probabilistic language model.\nThe simplest way to compute probabilities is in proporiton to counts:\n\n$$ P(w_t | prefix) = { Count(prefix, w_t) \\over \\sum_{\\hat w} Count(prefix, \\hat w) } $$",
"_____no_output_____"
]
],
[
[
"class NGramLanguageModel: \n def __init__(self, lines, n):\n \"\"\" \n Train a simple count-based language model: \n compute probabilities P(w_t | prefix) given ngram counts\n \n :param n: computes probability of next token given (n - 1) previous words\n :param lines: an iterable of strings with space-separated tokens\n \"\"\"\n assert n >= 1\n self.n = n\n \n counts = count_ngrams(lines, self.n)\n \n # compute token proabilities given counts\n self.probs = defaultdict(Counter)\n # probs[(word1, word2)][word3] = P(word3 | word1, word2)\n \n # populate self.probs with actual probabilities\n <YOUR CODE>\n \n def get_possible_next_tokens(self, prefix):\n \"\"\"\n :param prefix: string with space-separated prefix tokens\n :returns: a dictionary {token : it's probability} for all tokens with positive probabilities\n \"\"\"\n prefix = prefix.split()\n prefix = prefix[max(0, len(prefix) - self.n + 1):]\n prefix = [ UNK ] * (self.n - 1 - len(prefix)) + prefix\n return self.probs[tuple(prefix)]\n \n def get_next_token_prob(self, prefix, next_token):\n \"\"\"\n :param prefix: string with space-separated prefix tokens\n :param next_token: the next token to predict probability for\n :returns: P(next_token|prefix) a single number, 0 <= P <= 1\n \"\"\"\n return self.get_possible_next_tokens(prefix).get(next_token, 0)",
"_____no_output_____"
]
],
[
[
"Let's test it!",
"_____no_output_____"
]
],
[
[
"dummy_lm = NGramLanguageModel(dummy_lines, n=3)\n\np_initial = dummy_lm.get_possible_next_tokens('') # '' -> ['_UNK_', '_UNK_']\nassert np.allclose(p_initial['learning'], 0.02)\nassert np.allclose(p_initial['a'], 0.13)\nassert np.allclose(p_initial.get('meow', 0), 0)\nassert np.allclose(sum(p_initial.values()), 1)\n\np_a = dummy_lm.get_possible_next_tokens('a') # '' -> ['_UNK_', 'a']\nassert np.allclose(p_a['machine'], 0.15384615)\nassert np.allclose(p_a['note'], 0.23076923)\nassert np.allclose(p_a.get('the', 0), 0)\nassert np.allclose(sum(p_a.values()), 1)\n\nassert np.allclose(dummy_lm.get_possible_next_tokens('a note')['on'], 1)\nassert dummy_lm.get_possible_next_tokens('a machine') == \\\n dummy_lm.get_possible_next_tokens(\"there have always been ghosts in a machine\"), \\\n \"your 3-gram model should only depend on 2 previous words\"",
"_____no_output_____"
]
],
[
[
"Now that you've got a working n-gram language model, let's see what sequences it can generate. But first, let's train it on the whole dataset.",
"_____no_output_____"
]
],
[
[
"lm = NGramLanguageModel(lines, n=3)",
"_____no_output_____"
]
],
[
[
"The process of generating sequences is... well, it's sequential. You maintain a list of tokens and iteratively add next token by sampling with probabilities.\n\n$ X = [] $\n\n__forever:__\n* $w_{next} \\sim P(w_{next} | X)$\n* $X = concat(X, w_{next})$\n\n\nInstead of sampling with probabilities, one can also try always taking most likely token, sampling among top-K most likely tokens or sampling with temperature. In the latter case (temperature), one samples from\n\n$$w_{next} \\sim {P(w_{next} | X) ^ {1 / \\tau} \\over \\sum_{\\hat w} P(\\hat w | X) ^ {1 / \\tau}}$$\n\nWhere $\\tau > 0$ is model temperature. If $\\tau << 1$, more likely tokens will be sampled with even higher probability while less likely tokens will vanish.",
"_____no_output_____"
]
],
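A minimal sketch of the temperature re-weighting on a toy distribution, independent of any trained `lm` (the three probabilities are invented):

```python
import numpy as np

def apply_temperature(probs, temperature):
    # p_i ** (1 / tau), renormalized; tau < 1 sharpens, tau > 1 flattens the distribution
    scaled = np.asarray(probs, dtype='float64') ** (1.0 / temperature)
    return scaled / scaled.sum()

p = [0.5, 0.4, 0.1]
print(apply_temperature(p, 1.0))  # unchanged
print(apply_temperature(p, 0.5))  # sharper: the top token's share grows
```

At $\tau = 0.5$ each probability is squared before renormalizing, so the gap between likely and unlikely tokens widens; as $\tau \to 0$ this converges to greedy argmax decoding.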
[
[
"def get_next_token(lm, prefix, temperature=1.0):\n \"\"\"\n return next token after prefix;\n :param temperature: samples proportionally to lm probabilities ^ (1 / temperature)\n if temperature == 0, always takes most likely token. Break ties arbitrarily.\n \"\"\"\n <YOUR CODE>",
"_____no_output_____"
],
[
"from collections import Counter\ntest_freqs = Counter([get_next_token(lm, 'there have') for _ in range(10000)])\nassert 250 < test_freqs['not'] < 450\nassert 8500 < test_freqs['been'] < 9500\nassert 1 < test_freqs['lately'] < 200\n\ntest_freqs = Counter([get_next_token(lm, 'deep', temperature=1.0) for _ in range(10000)])\nassert 1500 < test_freqs['learning'] < 3000\ntest_freqs = Counter([get_next_token(lm, 'deep', temperature=0.5) for _ in range(10000)])\nassert 8000 < test_freqs['learning'] < 9000\ntest_freqs = Counter([get_next_token(lm, 'deep', temperature=0.0) for _ in range(10000)])\nassert test_freqs['learning'] == 10000\n\nprint(\"Looks nice!\")",
"_____no_output_____"
]
],
[
[
"Let's have fun with this model",
"_____no_output_____"
]
],
[
[
"prefix = 'artificial' # <- your ideas :)\n\nfor i in range(100):\n prefix += ' ' + get_next_token(lm, prefix)\n if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:\n break\n \nprint(prefix)",
"_____no_output_____"
],
[
"prefix = 'bridging the' # <- more of your ideas\n\nfor i in range(100):\n prefix += ' ' + get_next_token(lm, prefix, temperature=0.5)\n if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:\n break\n \nprint(prefix)",
"_____no_output_____"
]
],
[
[
"__More in the homework:__ nucleous sampling, top-k sampling, beam search(not for the faint of heart).",
"_____no_output_____"
],
[
"### Evaluating language models: perplexity (1point)\n\nPerplexity is a measure of how well does your model approximate true probability distribution behind data. __Smaller perplexity = better model__.\n\nTo compute perplexity on one sentence, use:\n$$\n {\\mathbb{P}}(w_1 \\dots w_N) = P(w_1, \\dots, w_N)^{-\\frac1N} = \\left( \\prod_t P(w_t \\mid w_{t - n}, \\dots, w_{t - 1})\\right)^{-\\frac1N},\n$$\n\n\nOn the corpora level, perplexity is a product of probabilities of all tokens in all sentences to the power of 1, divided by __total length of all sentences__ in corpora.\n\nThis number can quickly get too small for float32/float64 precision, so we recommend you to first compute log-perplexity (from log-probabilities) and then take the exponent.",
"_____no_output_____"
]
],
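As a quick sanity check on the formula, here is a toy computation with hard-coded token probabilities (unrelated to any trained model): perplexity is the exponent of the negative mean log-probability, i.e. the inverse geometric mean of the per-token probabilities.

```python
import numpy as np

# Suppose a model assigned these probabilities to the 4 tokens of some sentence (incl. EOS):
token_probs = [0.25, 0.5, 0.125, 0.5]

# perplexity = exp(-mean log-prob) = (product of probs) ** (-1/N)
log_ppx = -np.mean(np.log(token_probs))
ppx = np.exp(log_ppx)
print(ppx)  # 2 ** (7/4), approx. 3.364
```

Working in log space first is what keeps the computation stable when the product of hundreds of small probabilities would underflow float64.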
[
[
"def perplexity(lm, lines, min_logprob=np.log(10 ** -50.)):\n \"\"\"\n :param lines: a list of strings with space-separated tokens\n :param min_logprob: if log(P(w | ...)) is smaller than min_logprop, set it equal to min_logrob\n :returns: corpora-level perplexity - a single scalar number from the formula above\n \n Note: do not forget to compute P(w_first | empty) and P(eos | full_sequence)\n \n PLEASE USE lm.get_next_token_prob and NOT lm.get_possible_next_tokens\n \"\"\"\n <YOUR CODE>\n \n return <...>",
"_____no_output_____"
],
[
"lm1 = NGramLanguageModel(dummy_lines, n=1)\nlm3 = NGramLanguageModel(dummy_lines, n=3)\nlm10 = NGramLanguageModel(dummy_lines, n=10)\n\nppx1 = perplexity(lm1, dummy_lines)\nppx3 = perplexity(lm3, dummy_lines)\nppx10 = perplexity(lm10, dummy_lines)\nppx_missing = perplexity(lm3, ['the jabberwock , with eyes of flame , ']) # thanks, L. Carrol\n\nprint(\"Perplexities: ppx1=%.3f ppx3=%.3f ppx10=%.3f\" % (ppx1, ppx3, ppx10))\n\nassert all(0 < ppx < 500 for ppx in (ppx1, ppx3, ppx10)), \"perplexity should be nonnegative and reasonably small\"\nassert ppx1 > ppx3 > ppx10, \"higher N models should overfit and \"\nassert np.isfinite(ppx_missing) and ppx_missing > 10 ** 6, \"missing words should have large but finite perplexity. \" \\\n \" Make sure you use min_logprob right\"\nassert np.allclose([ppx1, ppx3, ppx10], (318.2132342216302, 1.5199996213739575, 1.1838145037901249))",
"_____no_output_____"
]
],
[
[
"Now let's measure the actual perplexity: we'll split the data into train and test and score model on test data only.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\ntrain_lines, test_lines = train_test_split(lines, test_size=0.25, random_state=42)\n\nfor n in (1, 2, 3):\n lm = NGramLanguageModel(n=n, lines=train_lines)\n ppx = perplexity(lm, test_lines)\n print(\"N = %i, Perplexity = %.5f\" % (n, ppx))\n",
"_____no_output_____"
],
[
"# whoops, it just blew up :)",
"_____no_output_____"
]
],
[
[
"### LM Smoothing\n\nThe problem with our simple language model is that whenever it encounters an n-gram it has never seen before, it assigns it with the probabilitiy of 0. Every time this happens, perplexity explodes.\n\nTo battle this issue, there's a technique called __smoothing__. The core idea is to modify counts in a way that prevents probabilities from getting too low. The simplest algorithm here is Additive smoothing (aka [Lapace smoothing](https://en.wikipedia.org/wiki/Additive_smoothing)):\n\n$$ P(w_t | prefix) = { Count(prefix, w_t) + \\delta \\over \\sum_{\\hat w} (Count(prefix, \\hat w) + \\delta) } $$\n\nIf counts for a given prefix are low, additive smoothing will adjust probabilities to a more uniform distribution. Not that the summation in the denominator goes over _all words in the vocabulary_.\n\nHere's an example code we've implemented for you:",
"_____no_output_____"
]
],
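To make the formula concrete, here is a standalone toy sketch of additive smoothing over hypothetical counts (an illustration, not the `LaplaceLanguageModel` class itself):

```python
from collections import Counter

def laplace_prob(counts, token, vocab_size, delta=1.0):
    # P(w | prefix) = (Count(prefix, w) + delta) / (total + delta * |V|)
    total = sum(counts.values()) + delta * vocab_size
    return (counts[token] + delta) / total

counts = Counter({'cat': 3, 'dog': 1})   # hypothetical counts for one prefix
vocab = ['cat', 'dog', 'fish']           # 'fish' never followed this prefix
probs = {w: laplace_prob(counts, w, len(vocab)) for w in vocab}
print(probs)                  # unseen 'fish' now gets non-zero probability
print(sum(probs.values()))    # still a valid distribution over the vocabulary
```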
[
[
"class LaplaceLanguageModel(NGramLanguageModel): \n \"\"\" this code is an example, no need to change anything \"\"\"\n def __init__(self, lines, n, delta=1.0):\n self.n = n\n counts = count_ngrams(lines, self.n)\n self.vocab = set(token for token_counts in counts.values() for token in token_counts)\n self.probs = defaultdict(Counter)\n\n for prefix in counts:\n token_counts = counts[prefix]\n total_count = sum(token_counts.values()) + delta * len(self.vocab)\n self.probs[prefix] = {token: (token_counts[token] + delta) / total_count\n for token in token_counts}\n def get_possible_next_tokens(self, prefix):\n token_probs = super().get_possible_next_tokens(prefix)\n missing_prob_total = 1.0 - sum(token_probs.values())\n missing_prob = missing_prob_total / max(1, len(self.vocab) - len(token_probs))\n return {token: token_probs.get(token, missing_prob) for token in self.vocab}\n \n def get_next_token_prob(self, prefix, next_token):\n token_probs = super().get_possible_next_tokens(prefix)\n if next_token in token_probs:\n return token_probs[next_token]\n else:\n missing_prob_total = 1.0 - sum(token_probs.values())\n missing_prob_total = max(0, missing_prob_total) # prevent rounding errors\n return missing_prob_total / max(1, len(self.vocab) - len(token_probs))\n ",
"_____no_output_____"
],
[
"#test that it's a valid probability model\nfor n in (1, 2, 3):\n dummy_lm = LaplaceLanguageModel(dummy_lines, n=n)\n assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), \"I told you not to break anything! :)\"",
"_____no_output_____"
],
[
"for n in (1, 2, 3):\n lm = LaplaceLanguageModel(train_lines, n=n, delta=0.1)\n ppx = perplexity(lm, test_lines)\n print(\"N = %i, Perplexity = %.5f\" % (n, ppx))",
"_____no_output_____"
],
[
"# optional: try to sample tokens from such a model",
"_____no_output_____"
]
],
[
[
"### Kneser-Ney smoothing (2 points)\n\nAdditive smoothing is simple and reasonably good, but definitely not a state-of-the-art algorithm.\n\n\nYour final task in this notebook is to implement [Kneser-Ney](https://en.wikipedia.org/wiki/Kneser%E2%80%93Ney_smoothing) smoothing.\n\nIt can be computed recurrently, for n>1:\n\n$$P_{kn}(w_t | prefix_{n-1}) = { \\max(0, Count(prefix_{n-1}, w_t) - \\delta) \\over \\sum_{\\hat w} Count(prefix_{n-1}, \\hat w)} + \\lambda_{prefix_{n-1}} \\cdot P_{kn}(w_t | prefix_{n-2})$$\n\nwhere\n- $prefix_{n-1}$ is a tuple of {n-1} previous tokens\n- $\\lambda_{prefix_{n-1}}$ is a normalization constant chosen so that probabilities add up to 1\n- the lower-order term $P_{kn}(w_t | prefix_{n-2})$ corresponds to Kneser-Ney smoothing for the {n-1}-gram language model\n- the unigram $P_{kn}(w_t)$ is a special case: how likely it is to see $w_t$ in an unfamiliar context\n\nSee lecture slides or wiki for more detailed formulae.\n\n__Your task__ is to\n- implement KneserNeyLanguageModel\n- test it on 1-3 gram language models\n- find the optimal (within reason) smoothing delta for the 3-gram language model with Kneser-Ney smoothing",
"_____no_output_____"
]
],
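The recursion can be hard to read off the formula alone, so here is an illustrative bigram-only sketch on hypothetical counts: absolute discounting plus a continuation-count "unigram". It is deliberately *not* a solution to the task below, which requires the full recursive model.

```python
from collections import Counter

# hypothetical bigram counts: counts[prefix][token]
counts = {'san': Counter({'francisco': 4, 'diego': 2}),
          'the': Counter({'cat': 3, 'dog': 3})}
delta = 0.75

# continuation counts: in how many distinct contexts each token appears
followers = Counter(t for toks in counts.values() for t in toks)
n_bigram_types = sum(followers.values())
p_cont = {t: n / n_bigram_types for t, n in followers.items()}

def kn_bigram_prob(prefix, token):
    c = counts[prefix]
    total = sum(c.values())
    lam = delta * len(c) / total  # hands back exactly the discounted mass
    return max(0.0, c[token] - delta) / total + lam * p_cont.get(token, 0.0)

vocab = set(followers)
print(sum(kn_bigram_prob('san', w) for w in vocab))  # sums to 1 (up to rounding)
```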
[
[
"class KneserNeyLanguageModel(NGramLanguageModel): \n \"\"\" A template for Kneser-Ney language model. Default delta may be suboptimal. \"\"\"\n def __init__(self, lines, n, delta=1.0):\n self.n = n\n <YOUR CODE>\n \n def get_possible_next_tokens(self, prefix):\n <YOUR CODE>\n \n def get_next_token_prob(self, prefix, next_token):\n <YOUR CODE>",
"_____no_output_____"
],
[
"#test that it's a valid probability model\nfor n in (1, 2, 3):\n dummy_lm = KneserNeyLanguageModel(dummy_lines, n=n)\n assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), \"I told you not to break anything! :)\"",
"_____no_output_____"
],
[
"for n in (1, 2, 3):\n lm = KneserNeyLanguageModel(train_lines, n=n, delta=<...>)\n ppx = perplexity(lm, test_lines)\n print(\"N = %i, Perplexity = %.5f\" % (n, ppx))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0891ee5e4b1a2981250075c15940373a64fc8d2 | 173,165 | ipynb | Jupyter Notebook | notebooks/99_lst_Tests_Mataplotlib.ipynb | lukk60/RETrends | 8fa660b0ba1da3dd998a7b8daeeddaf79876d1dd | [
"FTL"
] | null | null | null | notebooks/99_lst_Tests_Mataplotlib.ipynb | lukk60/RETrends | 8fa660b0ba1da3dd998a7b8daeeddaf79876d1dd | [
"FTL"
] | null | null | null | notebooks/99_lst_Tests_Mataplotlib.ipynb | lukk60/RETrends | 8fa660b0ba1da3dd998a7b8daeeddaf79876d1dd | [
"FTL"
] | null | null | null | 440.62341 | 26,696 | 0.944989 | [
[
[
"# Introduction to matplotlib",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
],
[
"x = np.linspace(0, 10, 50)\ny = x**2",
"_____no_output_____"
],
[
"plt.plot(x,y)\nplt.title(\"Title\")\nplt.xlabel(\"X Label\")\nplt.ylabel(\"Y Label\")",
"_____no_output_____"
],
[
"plt.subplot(1, 2, 1)\nplt.plot(x,y, \"red\")\n\nplt.subplot(1,2, 2)\nplt.plot(y,x, \"green\")",
"_____no_output_____"
],
[
"fig = plt.figure()\nax = fig.add_axes([0.1, 0.2, 0.9, 0.9])\n\nax.plot(x,y, \"purple\")\nax.set_xlabel(\"XLAB\")",
"_____no_output_____"
],
[
"fig = plt.figure()\naxes1 = fig.add_axes([0.1, 0.2, 0.8, 0.8])\naxes2 = fig.add_axes([0.2, 0.5, 0.3, 0.3])\n\naxes1.plot(x,y)\naxes2.plot(y,x)\n\naxes1.set_xlabel(\"x\")\naxes2.set_xlabel(\"y\")",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(nrows=3, ncols=3)\n\naxes[0,1].plot(x,y)\naxes[1,2].plot(y,x)\n\n\nfig.set_size_inches(8,8)\nplt.tight_layout()",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(8,5))\nax = fig.add_axes([0,0,1,1])\nax.plot(x,x**2, label = \"xhoch2\")\nax.plot(x,x**3, \"blue\", label=\"xhoch3\", ls=\"--\", lw=2, marker = \"o\")\nax.set_xlim([0,4])\nax.legend()",
"_____no_output_____"
],
[
"x = np.random.randn(1000)\nplt.hist(x)",
"_____no_output_____"
],
[
"from datetime import datetime\nfig = plt.figure(figsize=(8,5))\n\nx = np.array([datetime(2019, 1, 1, i, 0) for i in range(24)])\ny = np.random.randint(100, size=x.shape)\n\nplt.plot(x,y)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(8,5))\nx = np.linspace(-2,2,1000)\ny = x*np.random.randn(1000)\n\nax.scatter(x,y)",
"_____no_output_____"
],
[
"import pandas as pd\n\nmydf = pd.DataFrame(np.random.rand(10,4), columns=[\"a\", \"b\", \"c\", \"d\"])\nmydf.plot.bar(figsize=(8,5))",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0892c893a499d79a55bd8a84630e3cc394681be | 34,728 | ipynb | Jupyter Notebook | group4/model/Bert-Model.ipynb | MubarikIbrahim/Deep-Text-Complexity-Metric-Challenge | 68bd6400a582e88fa5e2a625c4b7209780ebd6b2 | [
"MIT"
] | null | null | null | group4/model/Bert-Model.ipynb | MubarikIbrahim/Deep-Text-Complexity-Metric-Challenge | 68bd6400a582e88fa5e2a625c4b7209780ebd6b2 | [
"MIT"
] | null | null | null | group4/model/Bert-Model.ipynb | MubarikIbrahim/Deep-Text-Complexity-Metric-Challenge | 68bd6400a582e88fa5e2a625c4b7209780ebd6b2 | [
"MIT"
] | null | null | null | 36.944681 | 286 | 0.447218 | [
[
[
"# load data and write out sentence and target\nimport pandas as pd\n\nloaded_set = pd.read_excel(\"Dataset/\"+\"training.xlsx\")\nloaded_set['Sentence']",
"_____no_output_____"
],
[
"\r\nfrom transformers import AutoModel, AutoTokenizer\r\n# german tokens for bert\r\ntokenizer = AutoTokenizer.from_pretrained(\"dbmdz/bert-base-german-cased\")\r\n#model = AutoModel.from_pretrained(\"dbmdz/bert-base-german-cased\")\r\n\r\n\r\n\r\ntokens_num=[]\r\nfor sen in loaded_set['Sentence']:\r\n tokenized = (tokenizer.tokenize(sen)) \r\n tokens_num.append( ['[CLS]'] + tokenized + ['[SEP]']) \r\n \r\n# get max_seq length \r\nlens = [len(i) for i in tokens_num]\r\nmax_seq_length = max(lens)\r\nmax_seq_length = int(1.5*max_seq_length)\r\n#max_seq_length = 256\r\ntokens_num[0]",
"_____no_output_____"
],
[
"tokenizer.convert_tokens_to_ids(tokens_num[0])",
"_____no_output_____"
],
[
"def manual_features(x):\r\n letter_count = []\r\n avarange_letter_per_word = []\r\n num_words = []\r\n num_letters_array = []\r\n longest_word_length = []\r\n shortest_word_length = []\r\n genitiv = []\r\n akkusativ = []\r\n dativ = []\r\n dass = []\r\n\r\n for sen in x:\r\n current_sen_split = sen.split()\r\n num_words.append(len(current_sen_split))\r\n num_letters = []\r\n \r\n if \"des\" in sen:\r\n genitiv.append(1)\r\n else:\r\n genitiv.append(0)\r\n\r\n if \"dem\" in sen:\r\n akkusativ.append(1)\r\n else:\r\n akkusativ.append(0)\r\n\r\n if \"den\" in sen:\r\n dativ.append(1)\r\n else:\r\n dativ.append(0)\r\n\r\n if \"dass\" in sen:\r\n dass.append(1)\r\n else:\r\n dass.append(0)\r\n\r\n for y in range(len(current_sen_split)):\r\n current_word = current_sen_split[y]\r\n \r\n num_letters.append(len(current_word))\r\n \r\n current_lettercount = sum(num_letters)\r\n letter_count.append(current_lettercount) \r\n avarange_letter_per_word.append(current_lettercount/len(current_sen_split))\r\n longest_word_length.append(max(num_letters)) \r\n shortest_word_length.append(min(num_letters)) \r\n \r\n\r\n\r\n\r\n feature_dict = {\r\n 'dativ':dativ, \r\n 'akkusativ': akkusativ, \r\n 'genitiv': genitiv, \r\n 'dass': dass,\r\n 'num_words':num_words,\r\n 'letter_count':letter_count,\r\n 'avarange_letter_per_word':avarange_letter_per_word,\r\n 'longest_word_length':longest_word_length,\r\n 'shortest_word_length':shortest_word_length, \r\n }\r\n\r\n feature_dataframe = pd.DataFrame(data=feature_dict)\r\n scaler = StandardScaler()\r\n\r\n feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']] = scaler.fit_transform(feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']])\r\n\r\n feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']] = 
scaler.transform(feature_dataframe[['num_words', 'longest_word_length', 'shortest_word_length', 'letter_count', 'avarange_letter_per_word']])\r\n\r\n\r\n tensorX = tf.constant(feature_dataframe.values)\r\n\r\n return tensorX",
"_____no_output_____"
],
[
"import numpy as np\r\nfrom sklearn.preprocessing import StandardScaler\r\nscaler = StandardScaler()\r\ndef encode_names(n, tokenizer):\r\n tokens = list(tokenizer.tokenize(n))\r\n tokens.append('[SEP]')\r\n return tokenizer.convert_tokens_to_ids(tokens)\r\n\r\ndef bert_encode(string_list, tokenizer, max_seq_length):\r\n num_examples = len(string_list)\r\n \r\n\r\n\r\n letter_count = []\r\n avarange_letter_per_word = []\r\n num_words = []\r\n num_letters_array = []\r\n longest_word_length = []\r\n shortest_word_length = []\r\n genitiv = []\r\n akkusativ = []\r\n dativ = []\r\n dass = []\r\n\r\n for sen in string_list:\r\n current_sen_split = sen.split()\r\n num_words.append(len(current_sen_split))\r\n num_letters = []\r\n \r\n if \"des\" in sen:\r\n genitiv.append(1)\r\n else:\r\n genitiv.append(0)\r\n\r\n if \"dem\" in sen:\r\n akkusativ.append(1)\r\n else:\r\n akkusativ.append(0)\r\n\r\n if \"den\" in sen:\r\n dativ.append(1)\r\n else:\r\n dativ.append(0)\r\n\r\n if \"dass\" in sen:\r\n dass.append(1)\r\n else:\r\n dass.append(0)\r\n\r\n for y in range(len(current_sen_split)):\r\n current_word = current_sen_split[y]\r\n \r\n num_letters.append(len(current_word))\r\n \r\n current_lettercount = sum(num_letters)\r\n letter_count.append(current_lettercount) \r\n avarange_letter_per_word.append(current_lettercount/len(current_sen_split))\r\n longest_word_length.append(max(num_letters)) \r\n shortest_word_length.append(min(num_letters)) \r\n \r\n\r\n\r\n\r\n feature_dict = {\r\n 'num_words':num_words,\r\n 'avarange_letter_per_word':avarange_letter_per_word,\r\n 'longest_word_length':longest_word_length,\r\n }\r\n\r\n feature_dataframe = pd.DataFrame(data=feature_dict)\r\n scaler = StandardScaler()\r\n\r\n feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']] = scaler.fit_transform(feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']])\r\n\r\n feature_dataframe[['num_words', 'longest_word_length', 
'avarange_letter_per_word']] = scaler.transform(feature_dataframe[['num_words', 'longest_word_length', 'avarange_letter_per_word']])\r\n\r\n\r\n X_train_mF = tf.constant(feature_dataframe.values)\r\n\r\n \r\n\r\n\r\n\r\n string_tokens = tf.ragged.constant([\r\n encode_names(n, tokenizer) for n in np.array(string_list)])\r\n\r\n cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*string_tokens.shape[0]\r\n input_word_ids = tf.concat([cls, string_tokens], axis=-1)\r\n\r\n input_mask = tf.ones_like(input_word_ids).to_tensor(shape=(None, max_seq_length))\r\n\r\n type_cls = tf.zeros_like(cls)\r\n type_tokens = tf.ones_like(string_tokens)\r\n input_type_ids = tf.concat(\r\n [type_cls, type_tokens], axis=-1).to_tensor(shape=(None, max_seq_length))\r\n scaler_input_word_ids = scaler.fit_transform(input_type_ids) \r\n\r\n inputs = {\r\n #'sc': scaler_input_word_ids,\r\n #'input_word_ids': input_word_ids,\r\n 'input_word_ids': input_word_ids.to_tensor(shape=(None, max_seq_length)),\r\n 'input_mask': input_mask,\r\n 'input_type_ids': input_type_ids,\r\n 'X_train_mF': X_train_mF\r\n }\r\n\r\n return inputs",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\r\n\r\nx = loaded_set['Sentence']\r\ny = loaded_set['MOS']\r\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=32)\r\ny_train = round(y_train, 2)\r\ny_test = round(y_test, 2)\r\n\r\n",
"_____no_output_____"
],
[
"import tensorflow as tf\r\nX_train = bert_encode(x_train, tokenizer, max_seq_length)\r\nX_test = bert_encode(x_test, tokenizer, max_seq_length)\r\n\r\n",
"INFO:tensorflow:Enabling eager execution\nINFO:tensorflow:Enabling v2 tensorshape\nINFO:tensorflow:Enabling resource variables\nINFO:tensorflow:Enabling tensor equality\nINFO:tensorflow:Enabling control flow v2\n"
],
[
"import tensorflow_hub as hub\r\nbert_layer = hub.KerasLayer(\"https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2\",\r\n trainable=False)",
"INFO:absl:Using C:\\Users\\phili\\AppData\\Local\\Temp\\tfhub_modules to cache modules.\n"
],
[
"embedding_size = 768\r\nmax_seq_length = max_seq_length #length of the tokenised tensor\r\n\r\ninput_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,\r\n name=\"input_word_ids\")\r\ninput_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,\r\n name=\"input_mask\")\r\nsegment_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,\r\n name=\"segment_ids\")\r\n\r\nX_train_mF = tf.keras.layers.Input(shape=(3,), dtype=tf.int32,\r\n name=\"X_train_mF\")\r\n\r\npooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids]) \r\ndropout = tf.keras.layers.Dropout(0.2)(pooled_output)\r\nreshaped_bert = tf.keras.layers.Reshape((6,128))(dropout)\r\n\r\ndense_mf_1 = tf.keras.layers.Dense(20)(X_train_mF)\r\ndense_mf_2 = tf.keras.layers.Dense(128)(dense_mf_1)\r\n#dense_mf_3 = tf.keras.layers.Dense(24)(dropout_mf)\r\ndropout_mf = tf.keras.layers.Dropout(0.3)(dense_mf_2)\r\nreshaped_mf = tf.keras.layers.Reshape((1,128))(dense_mf_2)\r\n\r\n#concatinated_3 = tf.concat([concatinated_1, concatinated_2 ], 1)\r\n#reshaped_mf = tf.keras.layers.Reshape((1,24))(dense_mf_3)\r\n\r\nconcatinated = tf.concat([reshaped_bert, reshaped_mf], 1)\r\n\r\ngru_1_out = tf.keras.layers.GRU(200, return_sequences=True, activation='relu')(concatinated)\r\ngru_2_out = tf.keras.layers.GRU(100, return_sequences=True, activation='relu')(gru_1_out)\r\n\r\nflat = tf.keras.layers.Flatten()(gru_2_out)\r\ndropout_2 = tf.keras.layers.Dropout(0.3)(flat)\r\ndense_2 = tf.keras.layers.Dense(300)(dropout_2)\r\ndense_3 = tf.keras.layers.Dense(100)(dense_2)\r\ndense_4 = tf.keras.layers.Dense(50)(dense_3)\r\n\r\npred = tf.keras.layers.Dense(1)(dense_2)\r\n \r\n\r\n\r\n\r\nmodel = tf.keras.Model(\r\n inputs={\r\n 'input_word_ids': input_word_ids,\r\n 'input_mask': input_mask,\r\n 'input_type_ids': segment_ids,\r\n 'X_train_mF':X_train_mF\r\n },\r\n outputs=pred)",
"_____no_output_____"
],
[
"model.compile(optimizer= tf.keras.optimizers.Adam(0.001),\r\n loss= \"mean_absolute_error\",\r\n metrics= [\"mean_squared_error\"])\r\nmodel.summary()",
"Model: \"model_13\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_word_ids (InputLayer) [(None, 144)] 0 \n__________________________________________________________________________________________________\ninput_mask (InputLayer) [(None, 144)] 0 \n__________________________________________________________________________________________________\nsegment_ids (InputLayer) [(None, 144)] 0 \n__________________________________________________________________________________________________\nX_train_mF (InputLayer) [(None, 3)] 0 \n__________________________________________________________________________________________________\nkeras_layer (KerasLayer) [(None, 768), (None, 177853441 input_word_ids[0][0] \n input_mask[0][0] \n segment_ids[0][0] \n__________________________________________________________________________________________________\ndense_61 (Dense) (None, 20) 80 X_train_mF[0][0] \n__________________________________________________________________________________________________\ndropout_39 (Dropout) (None, 768) 0 keras_layer[13][0] \n__________________________________________________________________________________________________\ndense_62 (Dense) (None, 128) 2688 dense_61[0][0] \n__________________________________________________________________________________________________\nreshape_26 (Reshape) (None, 6, 128) 0 dropout_39[0][0] \n__________________________________________________________________________________________________\nreshape_27 (Reshape) (None, 1, 128) 0 dense_62[0][0] \n__________________________________________________________________________________________________\ntf.concat_13 (TFOpLambda) (None, 7, 128) 0 reshape_26[0][0] \n reshape_27[0][0] 
\n__________________________________________________________________________________________________\ngru_22 (GRU) (None, 7, 200) 198000 tf.concat_13[0][0] \n__________________________________________________________________________________________________\ngru_23 (GRU) (None, 7, 100) 90600 gru_22[0][0] \n__________________________________________________________________________________________________\nflatten_13 (Flatten) (None, 700) 0 gru_23[0][0] \n__________________________________________________________________________________________________\ndropout_41 (Dropout) (None, 700) 0 flatten_13[0][0] \n__________________________________________________________________________________________________\ndense_63 (Dense) (None, 300) 210300 dropout_41[0][0] \n__________________________________________________________________________________________________\ndense_66 (Dense) (None, 1) 301 dense_63[0][0] \n==================================================================================================\nTotal params: 178,355,410\nTrainable params: 501,969\nNon-trainable params: 177,853,441\n__________________________________________________________________________________________________\n"
],
[
"epochs = 50\r\nbatch_size = 15\r\n\r\nmodel.fit(X_train, y_train.values, epochs=epochs, batch_size=batch_size)\r\n",
"Epoch 1/50\n48/48 [==============================] - 131s 3s/step - loss: 1.1114 - mean_squared_error: 1.9650\nEpoch 2/50\n48/48 [==============================] - 125s 3s/step - loss: 0.9327 - mean_squared_error: 1.3308\nEpoch 3/50\n48/48 [==============================] - 129s 3s/step - loss: 0.8104 - mean_squared_error: 0.9939\nEpoch 4/50\n48/48 [==============================] - 126s 3s/step - loss: 0.7773 - mean_squared_error: 0.9162\nEpoch 5/50\n48/48 [==============================] - 120s 3s/step - loss: 0.7950 - mean_squared_error: 0.9642\nEpoch 6/50\n48/48 [==============================] - 117s 2s/step - loss: 0.7236 - mean_squared_error: 0.8378\nEpoch 7/50\n48/48 [==============================] - 120s 2s/step - loss: 0.7562 - mean_squared_error: 0.8821\nEpoch 8/50\n48/48 [==============================] - 119s 2s/step - loss: 0.7168 - mean_squared_error: 0.8207\nEpoch 9/50\n48/48 [==============================] - 123s 3s/step - loss: 0.6983 - mean_squared_error: 0.7799\nEpoch 10/50\n48/48 [==============================] - 125s 3s/step - loss: 0.6990 - mean_squared_error: 0.7690\nEpoch 11/50\n48/48 [==============================] - 124s 3s/step - loss: 0.6872 - mean_squared_error: 0.7229\nEpoch 12/50\n48/48 [==============================] - 120s 2s/step - loss: 0.6864 - mean_squared_error: 0.7431\nEpoch 13/50\n48/48 [==============================] - 119s 2s/step - loss: 0.6904 - mean_squared_error: 0.7499\nEpoch 14/50\n48/48 [==============================] - 118s 2s/step - loss: 0.6447 - mean_squared_error: 0.6648\nEpoch 15/50\n48/48 [==============================] - 119s 2s/step - loss: 0.6578 - mean_squared_error: 0.7066\nEpoch 16/50\n48/48 [==============================] - 120s 3s/step - loss: 0.6496 - mean_squared_error: 0.6580\nEpoch 17/50\n48/48 [==============================] - 119s 2s/step - loss: 0.6586 - mean_squared_error: 0.6766\nEpoch 18/50\n48/48 [==============================] - 119s 2s/step - loss: 0.6253 - mean_squared_error: 
0.6247\nEpoch 19/50\n48/48 [==============================] - 108s 2s/step - loss: 0.5878 - mean_squared_error: 0.5620\nEpoch 20/50\n48/48 [==============================] - 107s 2s/step - loss: 0.6064 - mean_squared_error: 0.5986\nEpoch 21/50\n48/48 [==============================] - 108s 2s/step - loss: 0.6574 - mean_squared_error: 0.6864\nEpoch 22/50\n48/48 [==============================] - 108s 2s/step - loss: 0.6055 - mean_squared_error: 0.6027\nEpoch 23/50\n48/48 [==============================] - 108s 2s/step - loss: 0.6012 - mean_squared_error: 0.5816\nEpoch 24/50\n48/48 [==============================] - 108s 2s/step - loss: 0.6010 - mean_squared_error: 0.5996\nEpoch 25/50\n48/48 [==============================] - 113s 2s/step - loss: 0.5823 - mean_squared_error: 0.5467\nEpoch 26/50\n48/48 [==============================] - 115s 2s/step - loss: 0.5872 - mean_squared_error: 0.5605\nEpoch 27/50\n48/48 [==============================] - 113s 2s/step - loss: 0.6192 - mean_squared_error: 0.6304\nEpoch 28/50\n48/48 [==============================] - 115s 2s/step - loss: 0.6294 - mean_squared_error: 0.6239\nEpoch 29/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5584 - mean_squared_error: 0.5142\nEpoch 30/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5839 - mean_squared_error: 0.5680\nEpoch 31/50\n48/48 [==============================] - 113s 2s/step - loss: 0.5518 - mean_squared_error: 0.5112\nEpoch 32/50\n48/48 [==============================] - 112s 2s/step - loss: 0.5660 - mean_squared_error: 0.5212\nEpoch 33/50\n48/48 [==============================] - 111s 2s/step - loss: 0.5438 - mean_squared_error: 0.5015\nEpoch 34/50\n48/48 [==============================] - 111s 2s/step - loss: 0.5630 - mean_squared_error: 0.5459\nEpoch 35/50\n48/48 [==============================] - 112s 2s/step - loss: 0.5288 - mean_squared_error: 0.4852\nEpoch 36/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5252 - 
mean_squared_error: 0.4758\nEpoch 37/50\n48/48 [==============================] - 116s 2s/step - loss: 0.5466 - mean_squared_error: 0.4966\nEpoch 38/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5199 - mean_squared_error: 0.4665\nEpoch 39/50\n48/48 [==============================] - 115s 2s/step - loss: 0.4954 - mean_squared_error: 0.4215\nEpoch 40/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5140 - mean_squared_error: 0.4403\nEpoch 41/50\n48/48 [==============================] - 119s 2s/step - loss: 0.5205 - mean_squared_error: 0.4563\nEpoch 42/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5129 - mean_squared_error: 0.4345\nEpoch 43/50\n48/48 [==============================] - 113s 2s/step - loss: 0.5243 - mean_squared_error: 0.4710\nEpoch 44/50\n48/48 [==============================] - 114s 2s/step - loss: 0.5421 - mean_squared_error: 0.5013\nEpoch 45/50\n48/48 [==============================] - 113s 2s/step - loss: 0.4887 - mean_squared_error: 0.4161\nEpoch 46/50\n48/48 [==============================] - 115s 2s/step - loss: 0.5105 - mean_squared_error: 0.4430\nEpoch 47/50\n48/48 [==============================] - 116s 2s/step - loss: 0.4901 - mean_squared_error: 0.4222\nEpoch 48/50\n26/48 [===============>..............] - ETA: 54s - loss: 0.4655 - mean_squared_error: 0.3741"
],
[
"import numpy as np\r\npred = model.predict(X_test)\r\nrounded_pred = np.around(pred, decimals=2)\r\nrounded_pred",
"_____no_output_____"
],
[
"def rmse(predictions, targets):\r\n return np.sqrt(((predictions - targets) ** 2).mean())\r\n\r\nrmse(rounded_pred.transpose(), y_test.values)\r\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08930aa362848249f98195cf5665be0d8df4e4e | 863,939 | ipynb | Jupyter Notebook | notebooks/parameter-tuning.ipynb | klane/nba | 466293df83b1f58d84aef245e590eb3a2a75546d | [
"MIT"
] | 78 | 2018-02-03T03:00:04.000Z | 2022-03-18T18:28:17.000Z | notebooks/parameter-tuning.ipynb | klane/nba | 466293df83b1f58d84aef245e590eb3a2a75546d | [
"MIT"
] | 363 | 2018-10-24T02:08:40.000Z | 2022-03-03T21:53:52.000Z | notebooks/parameter-tuning.ipynb | dragonGR/databall | febe309b85c10b464a47cc6d7e67f33b53a23de7 | [
"MIT"
] | 18 | 2018-03-09T04:50:51.000Z | 2022-01-24T16:28:51.000Z | 1,354.136364 | 141,346 | 0.945644 | [
[
[
"This page was created from a Jupyter notebook. The original notebook can be found [here](https://github.com/klane/databall/blob/master/notebooks/parameter-tuning.ipynb). It investigates tuning model parameters to achieve better performance. First we must import the necessary installed modules.",
"_____no_output_____"
]
],
[
[
"import itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom functools import partial\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom hyperopt import hp",
"_____no_output_____"
]
],
[
[
"Next we need to import a few local modules.",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport warnings\n\nwarnings.filterwarnings('ignore')\nmodule_path = os.path.abspath(os.path.join('..'))\n\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nfrom databall.database import Database\nfrom databall.plotting import format_538, plot_metrics, plot_matrix\nfrom databall.model_selection import calculate_metrics, optimize_params, train_test_split\nimport databall.util as util",
"_____no_output_____"
]
],
[
[
"Apply the FiveThirtyEight plot style.",
"_____no_output_____"
]
],
[
[
"plt.style.use('fivethirtyeight')",
"_____no_output_____"
]
],
[
[
"# Data\n\nAs before, we collect the stats and betting data from the database and create training and test sets where the 2016 season is reserved as the test set.",
"_____no_output_____"
]
],
[
[
"database = Database('../data/nba.db')\ngames = database.betting_stats(window=10)\nx_train, y_train, x_test, y_test = train_test_split(games, 2006, 2016,\n xlabels=util.stat_names() + ['SEASON'])",
"_____no_output_____"
]
],
[
[
"The stats below are the box score stats used during [feature selection](feature-selection.md). I decided to further explore these because they are readily available from multiple sources and do not require any calculation of advanced stats by users.",
"_____no_output_____"
]
],
[
[
"stats = ['FGM', 'FGA', 'FG3M', 'FG3A', 'FTM', 'FTA', 'OREB', 'DREB', 'AST', 'TOV', 'STL', 'BLK']\nstats = ['TEAM_' + s for s in stats] + ['POSSESSIONS']\nstats += [s + '_AWAY' for s in stats] + ['HOME_SPREAD']",
"_____no_output_____"
]
],
[
[
"# Logistic Regression\n\nThe plots below show `LogisticRegression` model performance using different combinations of three parameters in a grid search: `penalty` (type of norm), `class_weight` (where \"balanced\" indicates weights are inversely proportional to class frequencies and the default is one), and `dual` (flag to use the dual formulation, which changes the equation being optimized). For each combination, models were trained with different `C` values, which controls the inverse of the regularization strength.\n\nAll models have similar accuracy, ROC area, and precision/recall area for all `C` values tested. However, their individual precision and recall metrics change wildly with C. We are more interested in accuracy for this specific problem because accuracy directly controls profit. Using a grid search is not the most efficient parameter tuning method because grid searches do not use information from prior runs to aid future parameter choices. You are at the mercy of the selected grid points.",
"_____no_output_____"
]
],
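As a concrete illustration of why grids scale badly: every value list multiplies the number of fits, and no run informs the next. A minimal sketch (the parameter lists here are hypothetical, mirroring the ones discussed in the text):

```python
from itertools import product

penalties = ['l1', 'l2']
class_weights = [None, 'balanced']
C_values = [1e-8, 1e-6, 1e-4, 1e-2]

# Every combination is trained from scratch, regardless of earlier results
grid = list(product(penalties, class_weights, C_values))
print(len(grid))  # 2 * 2 * 4 = 16 models
```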
[
[
"# Create functions that return logistic regression models with different parameters\nmodels = [partial(LogisticRegression, penalty='l1'),\n partial(LogisticRegression, penalty='l1', class_weight='balanced'),\n partial(LogisticRegression),\n partial(LogisticRegression, class_weight='balanced'),\n partial(LogisticRegression, dual=True),\n partial(LogisticRegression, class_weight='balanced', dual=True)]\n\nstart = -8\nstop = -2\nC_vec = np.logspace(start=start, stop=stop, num=20)\nresults = calculate_metrics(models, x_train, y_train, stats, 'C', C_vec, k=6)\nlegend = ['L1 Norm', 'L1 Norm, Balanced Class', 'L2 Norm (Default)',\n 'L2 Norm, Balanced Class', 'L2 Norm, Dual Form', 'L2 Norm, Balanced Class, Dual Form']\n\nfig, ax = plot_metrics(C_vec, results, 'Regularization Parameter', log=True)\n\nax[-1].legend(legend, fontsize=16, bbox_to_anchor=(1.05, 1), borderaxespad=0)\n[a.set_xlim(10**start, 10**stop) for a in ax]\n[a.set_ylim(-0.05, 1.05) for a in ax]\ntitle = 'Grid searches are not the most efficient'\nsubtitle = 'Grid search of logistic regression hyperparameters'\nformat_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle, xoff=(-0.22, 3.45),\n yoff=(-1.54, -1.64), toff=(-.16, 1.25), soff=(-0.16, 1.12), n=100)\nplt.show()",
"_____no_output_____"
]
],
[
[
"An alternative solution is to use an optimization algorithm that minimizes a loss function to select the hyperparameters. I experimented with the hyperopt package for this, which accepts a parameter search space and loss function as its inputs. The search space consists of discrete choices and ranges on continuous variables. I swapped out the `class_weight` and `dual` variables in favor of `fit_intercept` and `intercept_scaling`, which controls whether to include an intercept in the `LogisticRegression` model and a scaling factor. The scaling factor can help reduce the effect of regularization on the intercept. I chose cross-validation accuracy as the loss function (actually 1-accuracy since the optimizer minimizes the loss function) since we are interested in increasing profits. The optimal hyperparameters are displayed below.",
"_____no_output_____"
]
],
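A log-uniform prior like the one used for `C` here simply draws the exponent uniformly, so the sample is uniform in log space over [1e-8, 1e-2]. A minimal standard-library sketch of that sampling trick (the helper name `loguniform` is illustrative, not hyperopt's API):

```python
import random

def loguniform(low_exp, high_exp, rng):
    # Draw uniformly in log10-space over [10**low_exp, 10**high_exp],
    # mirroring a prior like hp.loguniform('C', -8*log(10), -2*log(10)).
    return 10.0 ** rng.uniform(low_exp, high_exp)

rng = random.Random(0)
samples = [loguniform(-8, -2, rng) for _ in range(1000)]
```

Sampling in log space matters because plausible `C` values span six orders of magnitude; a plain uniform draw over [1e-8, 1e-2] would almost never propose values below 1e-3.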
[
[
"space_log = {}\nspace_log['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10))\nspace_log['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10))\nspace_log['penalty'] = hp.choice('penalty', ['l1', 'l2'])\nspace_log['fit_intercept'] = hp.choice('fit_intercept', [False, True])\nmodel = LogisticRegression()\nbest_log, param_log = optimize_params(model, x_train, y_train, stats, space_log, max_evals=1000)\nprint(best_log)",
"{'C': 0.0001943920615336294, 'fit_intercept': True, 'intercept_scaling': 134496.71823111628, 'penalty': 'l2'}\n"
]
],
[
[
"The search history is displayed below. The intercept scale factor tended toward high values, even though the default value is 1.0.",
"_____no_output_____"
]
],
[
[
"labels = ['Regularization', 'Intercept Scale', 'Penalty', 'Intercept']\nfig, ax = plot_matrix(param_log.index.values, param_log[[k for k in space_log.keys()]].values,\n 'Iteration', labels, 2, 2, logy=[True, True, False, False])\n\n[a.set_yticks([0, 1]) for a in ax[2:]]\nax[2].set_yticklabels(['L1', 'L2'])\nax[3].set_yticklabels(['False', 'True'])\n\ntitle = 'Hyperopt is more flexible than a grid search'\nsubtitle = 'Hyperopt search of logistic regression hyperparameters'\nformat_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,\n xoff=(-0.18, 2.25), yoff=(-1.42, -1.52), toff=(-.16, 1.25), soff=(-0.16, 1.12),\n n=80, bottomtick=np.nan)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The cross-validation accuracy history shows that many models performed about the same regardless of their parameter values, as evidenced by the band of points just below 51% accuracy. The optimizer was also unable to find a model that significantly improved accuracy.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12, 6))\nplt.plot(param_log.index.values, param_log['accuracy'], '.', markersize=5)\n\ntitle = 'Improvements are hard to come by'\nsubtitle = 'Accuracy of logistic regression hyperparameter optimization history'\nformat_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',\n title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),\n toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Support Vector Machine\n\nThe [`LinearSVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC) class is similar to a generic [`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) with a linear kernel, but is implemented with liblinear instead of libsvm. The documentation states that `LinearSVC` scales better to large sample sizes since `SVC`'s fit time complexity is more than quadratic with the number of samples. I initially tried `SVC`, but the training time was too costly. `LinearSVC` proved to be much faster for this application.\n\nThe code below sets up a `LinearSVC` hyperparameter search space using four parameters: `C` (penalty of the error term), `loss` (the loss function), `fit_intercept` (identical to `LogisticRegression`), and `intercept_scaling` (identical to `LogisticRegression`). I limited the number of evaluations to 500 to reduce the computational cost.",
"_____no_output_____"
]
],
[
[
"space_svm = {}\nspace_svm['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10))\nspace_svm['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10))\nspace_svm['loss'] = hp.choice('loss', ['hinge', 'squared_hinge'])\nspace_svm['fit_intercept'] = hp.choice('fit_intercept', [False, True])\nmodel = LinearSVC()\nbest_svm, param_svm = optimize_params(model, x_train, y_train, stats, space_svm, max_evals=500)\nprint(best_svm)",
"{'C': 3.2563857398383885e-06, 'fit_intercept': True, 'intercept_scaling': 242.79319791592195, 'loss': 'squared_hinge'}\n"
]
],
[
[
"The search history below is similar to the logistic regression history, but hyperopt appears to test more low intercept-scale values than before. This is also reflected in the drastic reduction of the optimal intercept scale compared to logistic regression.",
"_____no_output_____"
]
],
[
[
"labels = ['Regularization', 'Intercept Scale', 'Loss', 'Intercept']\nfig, ax = plot_matrix(param_svm.index.values, param_svm[[k for k in space_svm.keys()]].values,\n 'Iteration', labels, 2, 2, logy=[True, True, False, False])\n\n[a.set_yticks([0, 1]) for a in ax[2:]]\nax[2].set_yticklabels(['Hinge', 'Squared\\nHinge'])\nax[3].set_yticklabels(['False', 'True'])\n\ntitle = 'Hyperopt is more flexible than a grid search'\nsubtitle = 'Hyperopt search of support vector machine hyperparameters'\nformat_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,\n xoff=(-0.24, 2.25), yoff=(-1.42, -1.52), toff=(-.22, 1.25), soff=(-0.22, 1.12),\n n=80, bottomtick=np.nan)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The plot below shows the `LinearSVC` cross-validation accuracy history. There is a band of points similar to what we observed for logistic regression below 51% accuracy. The support vector machine model does not perform much better than logistic regression, and several points fall below 50% accuracy.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12, 6))\nplt.plot(param_svm.index.values, param_svm['accuracy'], '.', markersize=5)\n\ntitle = 'Improvements are hard to come by'\nsubtitle = 'Accuracy of support vector machine hyperparameter optimization history'\nformat_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',\n title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),\n toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Random Forest\n\nThe code below builds a [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) hyperparameter search space using the parameters `n_estimators` (number of decision trees in the forest), `class_weight` (identical to the `LogisticRegression` grid search), `criterion` (function to evaluate split quality), and `bootstrap` (controls whether bootstrap samples are used when building trees). I reduced the number of function evaluations to 100 in the interest of computational time.",
"_____no_output_____"
]
],
[
[
"space_rf = {}\nspace_rf['n_estimators'] = 10 + hp.randint('n_estimators', 40)\nspace_rf['criterion'] = hp.choice('criterion', ['gini', 'entropy'])\nspace_rf['class_weight'] = hp.choice('class_weight', [None, 'balanced'])\nspace_rf['bootstrap'] = hp.choice('bootstrap', [False, True])\nmodel = RandomForestClassifier(random_state=8)\nbest_rf, param_rf = optimize_params(model, x_train, y_train, stats, space_rf, max_evals=100)\nprint(best_rf)",
"{'bootstrap': True, 'class_weight': 'balanced', 'criterion': 'entropy', 'n_estimators': 34}\n"
]
],
[
[
"The random forest hyperparameter search history is displayed below.",
"_____no_output_____"
]
],
[
[
"labels = ['Estimators', 'Criterion', 'Class Weight', 'Bootstrap']\nfig, ax = plot_matrix(param_rf.index.values, param_rf[[k for k in space_rf.keys()]].values,\n 'Iteration', labels, 2, 2)\n\n[a.set_yticks([0, 1]) for a in ax[1:]]\nax[1].set_yticklabels(['Gini', 'Entropy'])\nax[2].set_yticklabels(['None', 'Balanced'])\nax[3].set_yticklabels(['False', 'True'])\n\ntitle = 'Hyperopt is more flexible than a grid search'\nsubtitle = 'Hyperopt search of random forest hyperparameters'\nformat_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,\n xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12),\n n=80, bottomtick=np.nan)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The cross-validation accuracy history shows the random forest model performs slightly worse than logistic regression.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12, 6))\nplt.plot(param_rf.index.values, param_rf['accuracy'], '.', markersize=5)\n\ntitle = 'Improvements are hard to come by'\nsubtitle = 'Accuracy of random forest hyperparameter optimization history'\nformat_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',\n title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),\n toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Neural Network\n\nThe code below builds a [`MLPClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier) hyperparameter search space using the parameters `hidden_layer_sizes` (number of neurons in each hidden layer), `alpha` (controls the L2 regularization similar to the `C` parameter in `LogisticRegression` and `LinearSVC`), `activation` (network activation function), and `solver` (the algorithm used to optimize network weights). The network structure was held to a single hidden layer. I kept the number of function evaluations at 100 in the interest of computational time.",
"_____no_output_____"
]
],
[
[
"space_mlp = {}\nspace_mlp['hidden_layer_sizes'] = 10 + hp.randint('hidden_layer_sizes', 40)\nspace_mlp['alpha'] = hp.loguniform('alpha', -8*np.log(10), 3*np.log(10))\nspace_mlp['activation'] = hp.choice('activation', ['relu', 'logistic', 'tanh'])\nspace_mlp['solver'] = hp.choice('solver', ['lbfgs', 'sgd', 'adam'])\nmodel = MLPClassifier()\nbest_mlp, param_mlp = optimize_params(model, x_train, y_train, stats, space_mlp, max_evals=100)\nprint(best_mlp)",
"{'activation': 'tanh', 'alpha': 5.700733605522687e-06, 'hidden_layer_sizes': 49, 'solver': 'lbfgs'}\n"
]
],
[
[
"The multi-layer perceptron hyperparameter search history is displayed below.",
"_____no_output_____"
]
],
[
[
"labels = ['Hidden Neurons', 'Regularization', 'Activation', 'Solver']\nfig, ax = plot_matrix(param_mlp.index.values, param_mlp[[k for k in space_mlp.keys()]].values,\n 'Iteration', labels, 2, 2, logy=[False, True, False, False])\n\n[a.set_yticks([0, 1, 2]) for a in ax[2:]]\nax[2].set_yticklabels(['RELU', 'Logistic', 'Tanh'])\nax[3].set_yticklabels(['LBFGS', 'SGD', 'ADAM'])\n\ntitle = 'Hyperopt is more flexible than a grid search'\nsubtitle = 'Hyperopt search of multi-layer perceptron hyperparameters'\nformat_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,\n xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12),\n n=80, bottomtick=np.nan)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The cross-validation history suggests the multi-layer perceptron performs the best of the four models, although the improvement is minor.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(12, 6))\nplt.plot(param_mlp.index.values, param_mlp['accuracy'], '.', markersize=5)\n\ntitle = 'Improvements are hard to come by'\nsubtitle = 'Accuracy of multi-layer perceptron hyperparameter optimization history'\nformat_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',\n title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),\n toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d089452bf467be6626e15d2fa203e1f56b0af5ff | 16,138 | ipynb | Jupyter Notebook | docs/src/tutorials/Clustering.ipynb | arfon/BetaML.jl | 6a02de0223931a9cc3053c13fc42c7b62352603e | [
"MIT"
] | 36 | 2020-06-08T20:21:04.000Z | 2022-03-26T05:57:47.000Z | docs/src/tutorials/Clustering.ipynb | arfon/BetaML.jl | 6a02de0223931a9cc3053c13fc42c7b62352603e | [
"MIT"
] | 19 | 2020-06-08T14:38:55.000Z | 2022-03-22T15:57:51.000Z | docs/src/tutorials/Clustering.ipynb | arfon/BetaML.jl | 6a02de0223931a9cc3053c13fc42c7b62352603e | [
"MIT"
] | 6 | 2020-06-17T17:00:59.000Z | 2022-03-26T05:57:39.000Z | 34.26327 | 297 | 0.475338 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d08966847da2c293bfca3821f60fd990aabe517a | 9,599 | ipynb | Jupyter Notebook | collation/el_collation.ipynb | enabling-languages/dinka | 981ffd07e7468f692c4d17472083a3c5485987f8 | [
"MIT"
] | 1 | 2018-11-13T13:34:58.000Z | 2018-11-13T13:34:58.000Z | collation/el_collation.ipynb | enabling-languages/dinka | 981ffd07e7468f692c4d17472083a3c5485987f8 | [
"MIT"
] | 6 | 2018-07-18T23:50:31.000Z | 2021-08-24T06:57:49.000Z | collation/el_collation.ipynb | enabling-languages/dinka | 981ffd07e7468f692c4d17472083a3c5485987f8 | [
"MIT"
] | null | null | null | 35.161172 | 3,005 | 0.436504 | [
[
[
"# Collation\n\nAn alternative to `sorted()` using `icu.Collator` and `icu.RuleBasedCollator`. Currently supports lists, tuples, strings, dataframes and series.\n\nCustom collation rules are defined for Dinka and Akan.\n\nAll icu::Collator supported locales are available.\n\n__TODO:__\n\n* add support for numpy arrays\n* add support for dicts [[1]](https://stackoverflow.com/questions/38793694/python-sort-a-list-of-objects-dictionaries-with-a-given-sortkey-function)\n* allow user to define collation rules and pass them to `el_collation.sorted_`\n* allow user to modify collation rules provided by ICU locales",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport el_collation as elcol\nimport random\n",
"_____no_output_____"
]
],
[
[
"## Custom Collation Rules (unsupported locales)\n\nThe following examples will be using predefined collation rules for Dinka.",
"_____no_output_____"
]
],
[
[
"# Set language\nlang = \"din-SS\"\n# Provide Dinka lexemes\nordered_lexemes_tuple = (\n 'abany',\n 'abaany',\n 'abaŋ',\n 'abenh',\n 'abeŋ',\n 'aber',\n 'abeer',\n 'abëër',\n 'abeeric',\n 'aberŋic',\n 'abuɔ̈c',\n 'abuɔk',\n 'abuɔɔk',\n 'abuɔ̈k',\n 'abur',\n 'acut',\n 'acuut',\n 'acuth',\n 'ago',\n 'agook',\n 'agol',\n 'akɔ̈r',\n 'akɔrcok',\n 'akuny',\n 'akuŋɛŋ'\n)\n\n# Ensure lexeme order is randomised\nrandom.seed(5)\nrandom_lexemes = tuple(random.sample(ordered_lexemes_tuple, len(ordered_lexemes_tuple)))\nrandom_lexemes",
"_____no_output_____"
],
[
"# Sort randomised tuple of Dinka lexemes\nsorted_lexemes = elcol.sorted_(random_lexemes, lang)\nsorted_lexemes",
"_____no_output_____"
]
],
[
[
"### Pandas dataframes\n\n",
"_____no_output_____"
]
],
[
[
"ddf = pd.read_csv(\"../word_frequency/unilex/din.txt\", sep='\\t', skiprows = range(2,5))\nrandom_ddf = ddf.sample(frac=1)\nsorted_ddf = elcol.sorted_(random_ddf, lang, random_ddf['Form'])\nsorted_ddf.head(30)",
"_____no_output_____"
]
],
[
[
"### Pandas series",
"_____no_output_____"
]
],
[
[
"random_words = random_ddf['Form']\nsorted_words = elcol.sorted_(random_words, lang, random_words)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
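The collation notebook above delegates Dinka ordering to ICU rule-based collators. The underlying idea — sorting by a user-defined alphabet rather than by code point — can be sketched in plain Python with a sort-key function. The alphabet below is a toy ordering that only loosely imitates the Dinka rules; real rules belong in `icu.RuleBasedCollator`:

```python
# Hypothetical alphabet: ASCII letters first, then ë and ŋ at the end.
ALPHABET = "abcdefghijklmnopqrstuvwxyzëŋ"
RANK = {ch: i for i, ch in enumerate(ALPHABET)}

def collation_key(word):
    # Map each character to its alphabet rank; characters outside the
    # alphabet sort after all known characters.
    return [RANK.get(ch, len(ALPHABET)) for ch in word]

words = ["abëër", "abaŋ", "abany", "aber"]
print(sorted(words, key=collation_key))
```

Because `sorted` compares the rank lists lexicographically, this reproduces the kind of ordering seen in the lexeme list above (e.g. `abany` before `abaŋ`), without any locale support.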
d08971b811e9ddc525b2b06b87f6a4fb154082e1 | 9,275 | ipynb | Jupyter Notebook | hw2/homework2.ipynb | ericschulman/math | 9d64d1443440dac0e259dfeb300c753ff53fc078 | [
"MIT"
] | 1 | 2020-04-02T16:20:25.000Z | 2020-04-02T16:20:25.000Z | hw2/homework2.ipynb | ericschulman/math387c | 9d64d1443440dac0e259dfeb300c753ff53fc078 | [
"MIT"
] | null | null | null | hw2/homework2.ipynb | ericschulman/math387c | 9d64d1443440dac0e259dfeb300c753ff53fc078 | [
"MIT"
] | null | null | null | 32.43007 | 392 | 0.447763 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0899230a77d08a6a6b61809e29f4d74051cd9f1 | 581,733 | ipynb | Jupyter Notebook | REMARKs/BayerLuetticke/notebooks/DCT.ipynb | MridulS/REMARK | 65544dfca901d0ac2dad928cf08e3a8530422e41 | [
"Apache-2.0"
] | null | null | null | REMARKs/BayerLuetticke/notebooks/DCT.ipynb | MridulS/REMARK | 65544dfca901d0ac2dad928cf08e3a8530422e41 | [
"Apache-2.0"
] | null | null | null | REMARKs/BayerLuetticke/notebooks/DCT.ipynb | MridulS/REMARK | 65544dfca901d0ac2dad928cf08e3a8530422e41 | [
"Apache-2.0"
] | null | null | null | 1,278.534066 | 273,808 | 0.956227 | [
[
[
"### What is DCT (discrete cosine transformation)?\n\n- This notebook creates arbitrary consumption functions on both 1-dimensional and 2-dimensional grids and illustrates how DCT approximates the full-grid function at different levels of accuracy. \n- This is used in the [DCT-Copula-Illustration notebook](DCT-Copula-Illustration.ipynb) to plot consumption functions approximated by DCT versus the original consumption function at full grids.\n- Written by Tao Wang\n- June 19, 2019",
"_____no_output_____"
]
],
[
[
"# Setup\ndef in_ipynb():\n try:\n if str(type(get_ipython())) == \"<class 'ipykernel.zmqshell.ZMQInteractiveShell'>\":\n return True\n else:\n return False\n except NameError:\n return False\n\n# Determine whether to make the figures inline (for spyder or jupyter)\n# vs whatever is the automatic setting that will apply if run from the terminal\nif in_ipynb():\n # %matplotlib inline generates a syntax error when run from the shell\n # so do this instead\n get_ipython().run_line_magic('matplotlib', 'inline') \nelse:\n get_ipython().run_line_magic('matplotlib', 'auto') ",
"_____no_output_____"
],
[
"# Import tools\nimport scipy.fftpack as sf # scipy discrete fourier transform\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy.linalg as lag\nfrom scipy import misc \nfrom matplotlib import cm",
"_____no_output_____"
],
[
"## DCT in 1 dimension\n\ngrids= np.linspace(0,100,100) # this represents the grids on which consumption function is defined, i.e. m or k\n\nc =grids + 50*np.cos(grids*2*np.pi/40) # this is an arbitrary example of a consumption function \nc_dct = sf.dct(c,norm='ortho') # setting norm='ortho' is important \nind=np.argsort(abs(c_dct))[::-1] # get indices of dct coefficients (absolute value) in descending order\n",
"_____no_output_____"
],
[
"## DCT in 1 dimension for different accuracy levels\n\nfig = plt.figure(figsize=(5,5))\nfig.suptitle('DCT compressed c function with different accuracy levels')\nlvl_lst = np.array([0.5,0.9,0.99])\nplt.plot(c,'r*',label='c at full grids')\nc_dct = sf.dct(c,norm='ortho') # setting norm='ortho' is important \nind=np.argsort(abs(c_dct))[::-1]\nfor idx in range(len(lvl_lst)):\n i = 1 # starts the loop that finds the needed indices so that a target level of approximation is achieved \n while lag.norm(c_dct[ind[0:i]].copy())/lag.norm(c_dct) < lvl_lst[idx]:\n i = i + 1\n needed = i\n print(\"For accuracy level of \"+str(lvl_lst[idx])+\", \"+str(needed)+\" basis functions used\")\n c_dct_rdc=c_dct.copy() # copy the DCT coefficients, not the original signal\n c_dct_rdc[ind[needed+1:]] = 0\n c_approx = sf.idct(c_dct_rdc,norm='ortho')\n plt.plot(c_approx,label=r'c approx at ${}$'.format(lvl_lst[idx]))\nplt.legend(loc=0)",
"For accuracy level of 0.5, 1 basis functions used\nFor accuracy level of 0.9, 2 basis functions used\nFor accuracy level of 0.99, 3 basis functions used\n"
],
[
"## Blockwise DCT. For illustration but not used in BayerLuetticke. \n## But it illustrates how doing dct in more finely divided blocks gives a better approximation\n\nsize = c.shape\nc_dct = np.zeros(size)\nc_approx=np.zeros(size)\n\nfig = plt.figure(figsize=(5,5))\nfig.suptitle('DCT compressed c function with different number of basis funcs')\nnbs_lst = np.array([20,50])\nplt.plot(c,'r*',label='c at full grids')\nfor i in range(len(nbs_lst)):\n delta = int(size[0]/nbs_lst[i])\n for pos in np.r_[:size[0]:delta]:\n c_dct[pos:(pos+delta)] = sf.dct(c[pos:(pos+delta)],norm='ortho')\n c_approx[pos:(pos+delta)]=sf.idct(c_dct[pos:(pos+delta)],norm='ortho')\n plt.plot(c_approx,label=r'Nb of blocks= ${}$'.format(nbs_lst[i]))\nplt.legend(loc=0)",
"_____no_output_____"
],
[
"# DCT in 2 dimensions \n\ndef dct2d(x):\n x0 = sf.dct(x.copy(),axis=0,norm='ortho')\n x_dct = sf.dct(x0.copy(),axis=1,norm='ortho')\n return x_dct\ndef idct2d(x):\n x0 = sf.idct(x.copy(),axis=1,norm='ortho')\n x_idct= sf.idct(x0.copy(),axis=0,norm='ortho')\n return x_idct\n\n# arbitrarily generate a consumption function at different grid points \ngrid0=20\ngrid1=20\ngrids0 = np.linspace(0,20,grid0)\ngrids1 = np.linspace(0,20,grid1)\n\nc2d = np.zeros([grid0,grid1])\n\n# create an arbitrary c function at 2-dimensional grids\nfor i in range(grid0):\n for j in range(grid1):\n c2d[i,j]= grids0[i]*grids1[j] - 50*np.sin(grids0[i]*2*np.pi/40)+10*np.cos(grids1[j]*2*np.pi/40)\n\n## do dct for 2-dimensional c at full grids \nc2d_dct=dct2d(c2d)\n\n## convert the 2d to 1d for easier manipulation \nc2d_dct_flt = c2d_dct.flatten(order='F') \nind2d = np.argsort(abs(c2d_dct_flt.copy()))[::-1] # get indices of dct coefficients (absolute value) \n # in descending order",
"_____no_output_____"
],
[
"# DCT in 2 dimensions for different levels of accuracy \n\nfig = plt.figure(figsize=(15,10))\nfig.suptitle('DCT compressed c function with different accuracy levels')\nlvl_lst = np.array([0.999,0.99,0.9,0.8,0.5])\nax=fig.add_subplot(2,3,1)\nax.imshow(c2d)\nax.set_title(r'$1$')\n\nfor idx in range(len(lvl_lst)):\n i = 1 \n while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:\n i += 1 \n needed = i\n print(\"For accuracy level of \"+str(lvl_lst[idx])+\", \"+str(needed)+\" basis functions are used\")\n c2d_dct_rdc=c2d_dct.copy()\n idx_urv = np.unravel_index(np.sort(ind2d[needed+1:]),(grid0,grid1),order='F')\n c2d_dct_rdc[idx_urv] = 0\n c2d_approx = idct2d(c2d_dct_rdc)\n ax = fig.add_subplot(2,3,idx+2)\n ax.set_title(r'${}$'.format(lvl_lst[idx]))\n ax.imshow(c2d_approx)",
"For accuracy level of 0.999, 10 basis functions are used\nFor accuracy level of 0.99, 5 basis functions are used\nFor accuracy level of 0.9, 3 basis functions are used\nFor accuracy level of 0.8, 2 basis functions are used\nFor accuracy level of 0.5, 1 basis functions are used\n"
],
[
"## surface plot of c at full grids and DCT approximations at different accuracy levels\n\nfig = plt.figure(figsize=(15,10))\nfig.suptitle('DCT compressed c function at different accuracy levels')\nlvl_lst = np.array([0.999,0.99,0.9,0.8,0.5])\nax=fig.add_subplot(2,3,1,projection='3d')\nax.plot_surface(grids0,grids1,c2d,cmap=cm.coolwarm)\nax.set_title(r'$1$')\n\nfor idx in range(len(lvl_lst)):\n i = 1 \n while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:\n i += 1 \n needed = i\n print(\"For accuracy level of \"+str(lvl_lst[idx])+\", \"+str(needed)+\" basis functions are used\")\n c2d_dct_rdc=c2d_dct.copy()\n idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1))\n c2d_dct_rdc[idx_urv] = 0\n c2d_approx = idct2d(c2d_dct_rdc)\n ax = fig.add_subplot(2,3,idx+2,projection='3d')\n ax.set_title(r'${}$'.format(lvl_lst[idx]))\n ax.plot_surface(grids0,grids1,c2d_approx,cmap=cm.coolwarm)",
"For accuracy level of 0.999, 10 basis functions are used\nFor accuracy level of 0.99, 5 basis functions are used\nFor accuracy level of 0.9, 3 basis functions are used\nFor accuracy level of 0.8, 2 basis functions are used\nFor accuracy level of 0.5, 1 basis functions are used\n"
],
[
"# surface plot of absolute value of differences between c at full grids and the approximation\n\nfig = plt.figure(figsize=(15,10))\nfig.suptitle('Differences (absolute value) between DCT compressed c and c at full grids at different accuracy levels')\nlvl_lst = np.array([0.999,0.99,0.9,0.8,0.5])\nax=fig.add_subplot(2,3,1,projection='3d')\nc2d_diff = abs(c2d-c2d)\nax.plot_surface(grids0,grids1,c2d_diff,cmap=cm.coolwarm)\nax.set_title(r'$1$')\nfor idx in range(len(lvl_lst)):\n i = 1 \n while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:\n i += 1 \n needed = i\n print(\"For accuracy level of \"+str(lvl_lst[idx])+\", \"+str(needed)+\" basis functions are used\")\n c2d_dct_rdc=c2d_dct.copy()\n idx_urv = np.unravel_index(ind2d[needed+1:],(grid0,grid1))\n c2d_dct_rdc[idx_urv] = 0\n c2d_approx = idct2d(c2d_dct_rdc)\n c2d_approx_diff = abs(c2d_approx - c2d)\n ax = fig.add_subplot(2,3,idx+2,projection='3d')\n ax.set_title(r'${}$'.format(lvl_lst[idx]))\n ax.plot_surface(grids0,grids1,c2d_approx_diff,cmap= 'OrRd',linewidth=1)\n ax.view_init(20, 90)",
"For accuracy level of 0.999, 10 basis functions are used\nFor accuracy level of 0.99, 5 basis functions are used\nFor accuracy level of 0.9, 3 basis functions are used\nFor accuracy level of 0.8, 2 basis functions are used\nFor accuracy level of 0.5, 1 basis functions are used\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
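The notebook above relies on `scipy.fftpack.dct`; to make the compression argument concrete without SciPy, here is a dependency-free orthonormal DCT-II (scaled to match `norm='ortho'`) applied to the same arbitrary consumption function, checking how much of the coefficient norm the few largest coefficients capture:

```python
import math

def dct_ortho(x):
    # Orthonormal DCT-II, matching scipy.fftpack.dct(x, norm='ortho').
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        out.append((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)) * s)
    return out

def norm_ratio(coeffs, k):
    # Share of the coefficient L2 norm captured by the k largest-magnitude coefficients.
    mags = sorted((abs(v) for v in coeffs), reverse=True)
    total = math.sqrt(sum(v * v for v in mags))
    return math.sqrt(sum(v * v for v in mags[:k])) / total

grids = [100.0 * i / 99 for i in range(100)]          # np.linspace(0, 100, 100)
c = [g + 50 * math.cos(g * 2 * math.pi / 40) for g in grids]
coeffs = dct_ortho(c)
```

Consistent with the notebook's printed output (1, 2, and 3 basis functions for the 0.5, 0.9, and 0.99 levels), a handful of coefficients carry essentially all of the norm of this smooth function — which is exactly why DCT truncation works as a dimension-reduction step.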
d089926154e4bdd8a0db4421b9a401570de5d537 | 267,919 | ipynb | Jupyter Notebook | working/EDA_WM-BrianMc-topics-Method2-heat_map-100-clusters-random62.ipynb | bdmckean/woot_math_analysis | d3308e413da2e41fff5060f10339b5ab4d01d690 | [
"MIT"
] | null | null | null | working/EDA_WM-BrianMc-topics-Method2-heat_map-100-clusters-random62.ipynb | bdmckean/woot_math_analysis | d3308e413da2e41fff5060f10339b5ab4d01d690 | [
"MIT"
] | null | null | null | working/EDA_WM-BrianMc-topics-Method2-heat_map-100-clusters-random62.ipynb | bdmckean/woot_math_analysis | d3308e413da2e41fff5060f10339b5ab4d01d690 | [
"MIT"
] | null | null | null | 69.679844 | 145,398 | 0.750096 | [
[
[
"import pymongo\nimport pandas as pd\nimport numpy as np\n\nfrom pymongo import MongoClient\nfrom bson.objectid import ObjectId\n\nimport datetime\n\nimport matplotlib.pyplot as plt\n\nfrom collections import defaultdict\n\n\n%matplotlib inline\nimport json\nplt.style.use('ggplot')\n\nimport seaborn as sns\n\nfrom math import log10, floor\n\nfrom time import time\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\n\n\n\nfrom sklearn.cluster import KMeans, MiniBatchKMeans",
"_____no_output_____"
]
],
[
[
"# CU Woot Math Method 2 for unsupervised discovery of new behavior traits\n## 1) Convert response field dictionary into a document\n## 2) Develop word vector using term frequency - inverse document frequency\n## 3) Use K-Means to cluster documents\n## 4) Map traits to clusters to validate technique\n\n\nIn the first results presented to Woot Math, a 100K sample of the entire data set was chosen. In this report, I'll start with the same type of analysis to develop the same heat map. In the meeting, Sean and Brent suggested using just one qual_id, repeating the experiment, and then looking at the samples in clusters without traits. I'll do that in a subsequent analysis.\n\n",
"_____no_output_____"
],
[
"## Part 1. Heat map with a 100K sample of all qual_ids",
"_____no_output_____"
]
],
[
[
"## Connect to local DB\n\nclient = MongoClient('localhost', 27017)\nprint (\"Setup db access\")",
"Setup db access\n"
],
[
"#\n# Get collections from mongodb\n#\n#db = client.my_test_db\ndb = client.test\n",
"_____no_output_____"
],
[
"chunk = 100000\nstart = 0\nend = start + chunk",
"_____no_output_____"
],
[
"#reponses = db.anon_student_task_responses.find({'correct':False})[start:end]\nreponses = db.anon_student_task_responses.find()[start:end]",
"_____no_output_____"
],
[
"df_responses = pd.DataFrame(list(reponses))",
"_____no_output_____"
],
[
"print (df_responses.shape)",
"(100000, 27)\n"
],
[
"## Make the documents to be analyzed",
"_____no_output_____"
],
[
"## Functions for turning dictionary into document\n\ndef make_string_from_list(key, elem_list):\n # Append key to each item in list\n ans = ''\n for elem in elem_list:\n ans += key + '_' + elem \n \n \n \n \n\ndef make_string(elem, key=None, top=True):\n ans = ''\n if not elem:\n return ans\n if top:\n top = False\n top_keys = []\n for idx in range(len(elem.keys())):\n top_keys.append(True)\n \n for idx, key in enumerate(elem.keys()):\n if top_keys[idx]:\n top = True\n top_keys[idx] = False\n ans += ' '\n else:\n top = False\n #print ('ans = ', ans)\n #print (type(elem[key]))\n if type(elem[key]) is str or\\\n type(elem[key]) is int:\n #print ('add value', elem[key])\n value = str(elem[key])\n #ans += key + '_' + value + ' ' + value + ' '\n ans += key + '_' + value + ' '\n elif type(elem[key]) is list:\n #print ('add list', elem[key])\n temp_elem = dict()\n for item in elem[key]:\n temp_elem[key] = item\n ans += make_string(temp_elem, top) \n elif type(elem[key]) is dict:\n #print ('add dict', elem[key])\n for item_key in elem[key].keys():\n temp_elem = dict()\n temp_elem[item_key] = elem[key][item_key]\n ans += key + '_' + make_string(temp_elem, top)\n elif type(elem[key]) is float:\n #print ('add dict', elem[key])\n sig = 2\n value = elem[key]\n value = round(value, sig-int(\n floor(log10(abs(value))))-1)\n value = str(value)\n #ans += key + '_' + value + ' ' + value + ' '\n ans += key + '_' + value + ' '\n # ans += ' ' + key + ' '\n #print ('not handled', elem[key])\n \n \n return ans",
"_____no_output_____"
],
[
"# Makes the cut & paste below easier\ndf3 = df_responses",
"_____no_output_____"
],
[
"df3['response_doc'] = df3['response'].map(make_string)\n",
"_____no_output_____"
],
[
"df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')\ndf3['response_doc'] = df3['response_doc'].map(lambda x: x.replace('/','_'))\ndf3['response_doc'] = df3['response_doc'] + ' ' + df3['txt'] \ndf3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')\ndf3['response_doc'] = df3['response_doc'].map(lambda x: x.replace(\"\\n\", \"\"))\ndf3['response_doc'] = df3['response_doc'].map(lambda x: x.replace(\"?\", \" \"))",
"_____no_output_____"
]
],
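Before running the full pipeline, the Method 2 steps — flatten a response into a token document, TF-IDF vectorize, K-Means cluster — can be exercised end-to-end on a few toy documents using the same sklearn classes imported above. The token strings below are hypothetical stand-ins for what `make_string` produces, not actual Woot Math responses:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_docs(docs, k=2, seed=0):
    # TF-IDF word vectors, then K-Means cluster assignments.
    X = TfidfVectorizer().fit_transform(docs)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    return km.fit_predict(X)

# Hypothetical mini response documents in the style make_string produces.
docs = [
    "fraction_input_value_1_2 num_1 den_2 fraction_cblock_total_count_1",
    "fraction_input_value_1_4 num_1 den_4 fraction_cblock_total_count_4",
    "numberline_associations position_720 pos_value obj_name_object",
    "numberline_associations position_360 pos_value obj_name_object",
]
labels = cluster_docs(docs)
```

With two clearly separable response styles, the two fraction-style documents should land in one cluster and the two number-line documents in the other — the small-scale version of the behavior the trait-to-cluster heat map checks at scale.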
[
[
"## Sample Documents",
"_____no_output_____"
]
],
[
[
"for idx in range(20):\n print (\"Sample number:\", idx, \"\\n\", df3.iloc[idx]['response_doc'])",
"Sample number: 0 \n fraction_cblock_chains_ right_442 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_2 fraction_cblock_chains_ left_97 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_2 lcm_sum_ __as3_type_Fraction plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_end_marker_noline.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_dog.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_cat_dog_trail.swf den_2 fraction_input_value_1_2 num_1 fraction_cblock_total_count_1 fraction_cblock_counts_ 1_2_1 whole_ Use the 1/2 pieces to figure out how far the dog traveled.Answer: 1/2 \nSample number: 1 \n fraction_cblock_total_count_4 plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_panda.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf input_4 fraction_cblock_chains_ right_856 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 pieces_1_4 pieces_1_4 pieces_1_4 fraction_cblock_chains_ left_165 fraction_cblock_chains_ lcm_sum_ numerator_4 lcm_sum_ denominator_4 lcm_sum_ __as3_type_Fraction numberline_associations_ numberline_associations_ position_720.0 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_object fraction_cblock_counts_ 1_4_4 Drag the panda to 4/4 of a yard from the start.Answer: 4/4 \nSample number: 2 \n fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_2 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_348 
fraction_cblock_chains_ pieces_1_8 pieces_1_8 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction fraction_cblock_chains_ left_590 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_6 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_705 fraction_cblock_chains_ pieces_1_6 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_6 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_348 fraction_cblock_chains_ pieces_1_4 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_866 fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_total_count_5 fraction_cblock_counts_ 1_1 fraction_cblock_counts_ 1_8_2 fraction_cblock_counts_ 1_6_1 fraction_cblock_counts_ 1_4_1 fraction_cblock_containment_ piece0_ lcm_sum_ numerator_2 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction piece0_ piece0_ pieces_1_8 pieces_1_8 piece0_ sum_ numerator_1 sum_ denominator_4 sum_ __as3_type_Fraction Model how many eighths are equal to one fourth.Answer: 2 \nSample number: 3 \n fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_521 fraction_cblock_chains_ pieces_1_2 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_521 fraction_cblock_chains_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 fraction_cblock_chains_ sum_ numerator_1 sum_ 
denominator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ left_176 fraction_cblock_chains_ lcm_sum_ numerator_1 lcm_sum_ denominator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ right_866 fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ sum_ numerator_1 sum_ denominator_1 sum_ __as3_type_Fraction fraction_cblock_total_count_6 fraction_cblock_counts_ 1_1 fraction_cblock_counts_ 1_2_1 fraction_cblock_counts_ 1_8_4 fraction_cblock_containment_ [Fraction] 1_2_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction [Fraction] 1_2_ [Fraction] 1_2_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 [Fraction] 1_2_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction Model how many halves are equal to four eighths.Answer: 1 \nSample number: 4 \n fraction_circle_containment_ [Fraction] 1_2_ lcm_sum_ numerator_4 lcm_sum_ denominator_8 lcm_sum_ __as3_type_Fraction [Fraction] 1_2_ [Fraction] 1_2_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 [Fraction] 1_2_ sum_ numerator_1 sum_ denominator_2 sum_ __as3_type_Fraction fraction_circle_total_count_6 fraction_circle_groups_ x_512 fraction_circle_groups_ y_300 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ pieces_1_2 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1 fraction_circle_groups_ chains_ right_180 chains_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 chains_ left_0 fraction_circle_counts_ 1_1 fraction_circle_counts_ 1_2_1 fraction_circle_counts_ 1_8_4 Cameron ate 4/8 of a pizza.Cover the pizza to model how many halves of a pizza he ate.Answer: 1 \nSample number: 5 \n image_object_groups_ total_6 image_object_groups_ on_3 image_object_groups_ url_assets_objects_singles_watch.swf image_object_groups_ off_3 Shade 1/2 of the 6 watches.Answer: 1/2 \nSample number: 6 \n Shade 1/4 of the circle.answer={:n=>3, :d=>12} \nSample number: 7 \n Shade 1/3 of the rectangle.answer={:n=>2, :d=>6} \nSample number: 8 \n fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1 
fraction_circle_groups_ chains_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_2 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ piece_0_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction piece_0_ piece_0_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 piece_0_ lcm_sum_ denominator_8 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1_2_1 fraction_circle_counts_ 1_8_4 fraction_circle_total_count_6 Drag one eighth pieces to cover all of the 1/2 piece.Answer: 4 \nSample number: 9 \n fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ chains_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1_2 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ [Fraction] 1_2_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1_2_ [Fraction] 1_2_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 [Fraction] 1_2_ lcm_sum_ denominator_8 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1_2_1 fraction_circle_counts_ 1_8_4 fraction_circle_total_count_6 Drag one half pieces to cover all of the 4/8 shown.Answer: 1 \nSample number: 10 \n fraction_circle_groups_ x_512 fraction_circle_groups_ scale_1.0 fraction_circle_groups_ chains_ pieces_1_4 pieces_1_4 chains_ left_0 chains_ right_180 fraction_circle_groups_ pieces_1_2 pieces_1_4 pieces_1_4 pieces_1 fraction_circle_groups_ y_300 fraction_circle_containment_ [Fraction] 1_2_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1_2_ [Fraction] 1_2_ pieces_1_4 pieces_1_4 [Fraction] 1_2_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_circle_counts_ 1_1 fraction_circle_counts_ 1_2_1 
fraction_circle_counts_ 1_4_2 fraction_circle_total_count_4 Drag one half pieces to cover all of the 2/4 shown.Answer: 1 \nSample number: 11 \n radio_choice_C radio_group_problem_ choice_C radio_group_problem_ text_3_6 radio_text_3_6 What fraction has 6 as the denominator () 6/7 () 4/5 () 3/6Answer: 3/6 \nSample number: 12 \n fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1458 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_10 pieces_1_10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1297 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1531 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1214 fraction_cblock_chains_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_10 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1424 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_10 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ 
pieces_1_10 pieces_1_10 pieces_1_10 pieces_1_10 fraction_cblock_chains_ left_544 fraction_cblock_chains_ right_820 fraction_cblock_chains_ sum_ denominator_84 sum_ numerator_73 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_84 lcm_sum_ numerator_73 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_7 pieces_1_7 pieces_1_6 pieces_1_6 pieces_1_4 fraction_cblock_chains_ left_1001 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ denominator_35 sum_ numerator_17 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_35 lcm_sum_ numerator_17 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_7 pieces_1_7 pieces_1_5 fraction_cblock_chains_ left_981 fraction_cblock_chains_ right_1316 fraction_cblock_chains_ sum_ denominator_28 sum_ numerator_11 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_28 lcm_sum_ numerator_11 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_7 pieces_1_4 fraction_cblock_chains_ left_1001 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ denominator_7 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_7 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_7 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1300 fraction_cblock_chains_ sum_ denominator_7 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_7 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_7 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1248 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1316 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ 
__as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1387 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1220 fraction_cblock_chains_ sum_ denominator_6 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1387 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_5 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_5 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1358 fraction_cblock_chains_ sum_ denominator_5 sum_ numerator_3 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_5 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_5 pieces_1_5 pieces_1_5 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1337 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1523 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1272 fraction_cblock_chains_ sum_ 
denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1358 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_2 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_2 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1531 fraction_cblock_chains_ sum_ denominator_2 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_2 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1389 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1216 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_1351 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_1 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ left_1024 fraction_cblock_chains_ right_2045 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_1 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1 fraction_cblock_chains_ left_130 fraction_cblock_chains_ right_820 
fraction_cblock_containment_ bar1_ sum_ denominator_5 sum_ numerator_2 sum_ __as3_type_Fraction bar1_ bar1_ pieces_1_10 pieces_1_10 pieces_1_10 pieces_1_10 bar1_ lcm_sum_ denominator_10 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_containment_ [Fraction] 1_4_ sum_ denominator_5 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1_4_ [Fraction] 1_4_ pieces_1_5 [Fraction] 1_4_ lcm_sum_ denominator_5 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_containment_ [Fraction] 1_ sum_ denominator_10 sum_ numerator_1 sum_ __as3_type_Fraction [Fraction] 1_ [Fraction] 1_ pieces_1_10 [Fraction] 1_ lcm_sum_ denominator_10 lcm_sum_ numerator_1 lcm_sum_ __as3_type_Fraction fraction_cblock_total_count_41 fraction_cblock_counts_ 1_2 fraction_cblock_counts_ 1_7_7 fraction_cblock_counts_ 1_4_10 fraction_cblock_counts_ 1_6_6 fraction_cblock_counts_ 1_5_5 fraction_cblock_counts_ 1_2_1 fraction_cblock_counts_ 1_10_10 Model 4/10 on the black bar using the fraction pieces below.Answer: [object Object] \nSample number: 13 \n whole_ fraction_input_value_4_6 fraction_cblock_chains_ sum_ denominator_3 sum_ numerator_2 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_4 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 pieces_1_6 pieces_1_6 pieces_1_6 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_522 num_4 plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_end_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_beetle.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_beetle_trail.swf den_6 fraction_cblock_total_count_4 fraction_cblock_counts_ 1_6_4 Use the 1/6 pieces to figure out how far the beetle 
traveled.Answer: 4/6 \nSample number: 14 \n plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_panda.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf input_8 numberline_associations_ position_634 numberline_associations_ pos_value_0.88 numberline_associations_ obj_name_object numberline_associations_ fraction_cblock_chains_ sum_ denominator_8 sum_ numerator_7 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_8 lcm_sum_ numerator_7 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 pieces_1_8 fraction_cblock_chains_ left_165 fraction_cblock_chains_ right_769 fraction_cblock_total_count_7 fraction_cblock_counts_ 1_8_7 Drag the panda to 7/8 of a yard from the start.Answer: 7/8 \nSample number: 15 \n input_8 One yard on the number line is divided intoAnswer: sixths \nSample number: 16 \n numberline_associations_ position_580.0 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_answer_text numberline_associations_ obj_value_3_3 input_ Drag the fraction to its correct location on the number line.Answer: 3/3 \nSample number: 17 \n plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_shark.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf input_6 numberline_associations_ position_722 numberline_associations_ pos_value_1.0 numberline_associations_ obj_name_object numberline_associations_ fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_6 lcm_sum_ numerator_6 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_6 pieces_1_6 pieces_1_6 pieces_1_6 pieces_1_6 pieces_1_6 fraction_cblock_chains_ left_165 fraction_cblock_chains_ right_856 
fraction_cblock_total_count_6 fraction_cblock_counts_ 1_6_6 Drag the shark to 1/6 of a yard from the start.Answer: 1/6 \nSample number: 18 \n whole_ fraction_input_value_1_3 fraction_cblock_chains_ sum_ denominator_1 sum_ numerator_1 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_3 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_3 pieces_1_3 pieces_1_3 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_657 num_1 plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_end_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_snail.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_snail_trail.swf den_3 fraction_cblock_total_count_3 fraction_cblock_counts_ 1_3_3 Use the 1/3 pieces to figure out how far the snail traveled.Answer: 3/3 \nSample number: 19 \n whole_ fraction_input_value_3_4 fraction_cblock_chains_ sum_ denominator_4 sum_ numerator_3 sum_ __as3_type_Fraction fraction_cblock_chains_ lcm_sum_ denominator_4 lcm_sum_ numerator_3 lcm_sum_ __as3_type_Fraction fraction_cblock_chains_ pieces_1_4 pieces_1_4 pieces_1_4 fraction_cblock_chains_ left_96 fraction_cblock_chains_ right_545 num_3 plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_end_marker_noline.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_markers_start_marker.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_dog.swf plain_image_groups_ total_1 plain_image_groups_ url_assets_cms_wootmath_fractions_number_line_objects_cat_dog_trail.swf den_4 fraction_cblock_total_count_3 fraction_cblock_counts_ 1_4_3 Use 
the 1/4 pieces to figure out how far the dog traveled.Answer: 3/4 \n"
],
[
"data_samples = df3['response_doc']",
"_____no_output_____"
],
[
"n_features = 1000\nn_samples = len(data_samples)\nn_topics = 50\nn_top_words = 20",
"_____no_output_____"
],
[
"print(\"Extracting tf-idf features ...\")\ntfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,\n max_features=n_features,\n stop_words='english')\nt0 = time()\ntfidf = tfidf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))",
"Extracting tf-idf features ...\ndone in 8.222s.\n"
],
[
"# Number of clusters\ntrue_k = 100\n\nkm = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,\n init_size=1000, batch_size=1000, random_state=62)",
"_____no_output_____"
],
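A small sketch (assumption: random 2-D blob data, not the tf-idf matrix) of why `MiniBatchKMeans` is used here: it updates centroids from small random batches (`batch_size`) instead of the full matrix, which is what makes it practical on large sparse inputs like the tf-idf features above.

```python
# MiniBatchKMeans on two well-separated 2-D blobs.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(62)
blob_a = rng.randn(50, 2)          # blob centred at the origin
blob_b = rng.randn(50, 2) + 10.0   # blob centred at (10, 10)
X = np.vstack([blob_a, blob_b])

km_toy = MiniBatchKMeans(n_clusters=2, init='k-means++', n_init=1,
                         batch_size=20, random_state=62)
labels = km_toy.fit_predict(X)

# Each blob lands in its own cluster.
print(len(set(labels)))  # 2
```

The `init_size` argument in the cell above just controls how many samples are drawn to seed `k-means++`; it is omitted here because the toy data is tiny.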
[
"print(\"Clustering with %s\" % km)\nt0 = time()\nkm.fit(tfidf)\nprint(\"done in %0.3fs\" % (time() - t0))\nprint()",
"Clustering with MiniBatchKMeans(batch_size=1000, compute_labels=True, init='k-means++',\n init_size=1000, max_iter=100, max_no_improvement=10,\n n_clusters=100, n_init=1, random_state=62, reassignment_ratio=0.01,\n tol=0.0, verbose=0)\ndone in 2.820s\n\n"
],
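After `km.fit(tfidf)`, `km.labels_` holds one cluster id per document, and `np.bincount` gives a quick view of how unevenly documents spread across the 100 clusters. A sketch with a stand-in labels array (assumption: the real array is `km.labels_` from the cell above):

```python
# Counting documents per cluster id with np.bincount.
import numpy as np

labels = np.array([0, 2, 2, 1, 2, 0, 1, 2])   # stand-in for km.labels_
sizes = np.bincount(labels, minlength=3)       # minlength = n_clusters

print(sizes)           # documents per cluster id -> [2 2 4]
print(sizes.argmax())  # the most populated cluster -> 2
```

`minlength` matters because a cluster that captured no documents would otherwise be silently missing from the counts.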
[
"print(\"Top terms per cluster:\")\n\norder_centroids = km.cluster_centers_.argsort()[:, ::-1]\nterms = tfidf_vectorizer.get_feature_names()\nfor i in range(true_k):\n    print(\"Cluster %d:\\n\" % i, end='')\n    for ind in order_centroids[i, :30]:\n        print(' --- %s\\n' % terms[ind], end='')\n    print()",
"Top terms per cluster:\nCluster 0:\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- fraction_input_value_\n --- long\n --- input_a_2\n --- enter\n --- fraction\n --- bar\n --- answer\n --- sum\n --- numbers\n --- input_6\n --- input_a_6\n --- form\n --- simplest\n --- input_a_4\n --- 12\n --- input_a_3\n --- input_a_8\n --- input_b_12\n --- input_b_6\n --- equation_12\n --- equation_6\n --- 10\n --- input_b_8\n --- input_5\n --- input_b_4\n --- shaded\n --- input_a_5\n --- match\n\nCluster 1:\n --- fraction_cblock_chains_\n --- sum_\n --- lcm_sum_\n --- pieces_1_9\n --- denominator_9\n --- __as3_type_fraction\n --- numerator_1\n --- denominator_1\n --- fraction_cblock_counts_\n --- bar1_\n --- denominator_3\n --- numerator_2\n --- pieces_1_3\n --- fraction\n --- left_130\n --- fraction_cblock_containment_\n --- pieces_1\n --- left_90\n --- left_80\n --- bar2_\n --- bar\n --- left_200\n --- black\n --- numerator_4\n --- right_820\n --- 1_\n --- 1_3_\n --- 1_1\n --- numerator_3\n --- right_780\n\nCluster 2:\n --- pieces_1_12\n --- fraction_circle_groups_\n --- chains_\n --- fraction_circle_counts_\n --- frac_piece_\n --- sum_\n --- lcm_sum_\n --- scale_1\n --- denominator_12\n --- __as3_type_fraction\n --- fraction\n --- fraction_circle_containment_\n --- 1_4_\n --- reds\n --- piece1_\n --- 1_6_\n --- numerator_1\n --- x_300\n --- pieces_1_4\n --- 1_12_3\n --- piece\n --- circle1_\n --- cover\n --- y_350\n --- left_270\n --- 1_12_2\n --- y_300\n --- 12\n --- piece_0_\n --- scale_0\n\nCluster 3:\n --- plain_image_groups_\n --- total_1\n --- swf\n --- answer\n --- url_assets_cms_wootmath_fractions_number_line_markers_start_marker\n --- length\n --- input_a_0\n --- correct\n --- choose\n --- enter\n --- comparison\n --- start\n --- whole_\n --- distance\n --- traveled\n --- shape\n --- drag\n --- url_assets_cms_wootmath_fractions_number_line_mug_mug_half_01\n --- far\n --- input_\n --- url_assets_cms_wootmath_fractions_number_line_markers_end_marker\n --- 
input_a_\n --- label\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_bug_trail\n --- robots\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_bubble_trail\n --- swam\n --- box\n --- fish\n --- url_assets_cms_wootmath_fractions_number_line_juice_oj_tupperware_fourths_02\n\nCluster 4:\n --- numberline_associations_\n --- line\n --- location\n --- number\n --- correct\n --- drag\n --- label\n --- pos_value_1\n --- pos_value_0\n --- obj_value_\n --- obj_value_a\n --- yard\n --- obj_name_answer_text\n --- mile\n --- obj_name_eqn\n --- biked\n --- answer\n --- input_\n --- ran\n --- fraction_cblock_chains_\n --- miles\n --- input_8\n --- obj_value_1\n --- input_12\n --- obj_name_a_text\n --- pos_value_2\n --- obj_value_0\n --- labels\n --- total\n --- input_9\n\nCluster 5:\n --- object\n --- decimals\n --- shown\n --- input_\n --- input_a_\n --- choose\n --- comparison\n --- correct\n --- model\n --- answer\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- youranswer\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_7\n --- fraction_cblock_total_count_6\n --- fraction_cblock_total_count_5\n\nCluster 6:\n --- image_object_groups_\n --- shade\n --- swf\n --- 14\n --- 11\n --- off_2\n --- 13\n --- total_14\n --- answer\n --- total_8\n --- off_4\n --- off_1\n --- on_3\n --- on_4\n --- off_3\n --- on_0\n --- on_2\n --- off_6\n --- 12\n --- total_6\n --- url_assets_objects_singles_octopus\n --- total_12\n --- 10\n --- on_1\n --- 
url_assets_objects_singles_cat\n --- 15\n --- cats\n --- url_assets_objects_singles_piranha\n --- off_5\n --- total_9\n\nCluster 7:\n --- 43\n --- order\n --- arrange\n --- greatest\n --- boxes\n --- fractions\n --- 63\n --- drag\n --- 81\n --- fraction\n --- 73\n --- 42\n --- 52\n --- 18\n --- 123\n --- 16\n --- 14\n --- 54\n --- 12\n --- 51\n --- 25\n --- 83\n --- 58\n --- 10\n --- fraction_circle_total_count_7\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_6\n --- youranswer\n --- fraction_circle_total_count_8\n --- fraction_circle_total_count_9\n\nCluster 8:\n --- grid\n --- 10\n --- model\n --- answer\n --- 100\n --- popcorn\n --- den_10\n --- num_7\n --- whole_\n --- boxes\n --- fraction\n --- num_9\n --- wearing\n --- cats\n --- num_5\n --- think\n --- represented\n --- input_0\n --- bigger\n --- input_a_0\n --- greater\n --- decimal\n --- pieces\n --- came\n --- 1_10\n --- cut\n --- 1_8\n --- radio_group_problem_\n --- enter\n --- 11\n\nCluster 9:\n --- write\n --- used\n --- divideboth\n --- denominator\n --- numerator\n --- form\n --- simplest\n --- number\n --- enter\n --- answer\n --- divide\n --- 12\n --- 15\n --- 10\n --- fraction_cblock_total_count_6\n --- fraction_circle_total_count_5\n --- fraction_cblock_total_count_2\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_11\n --- fraction_cblock_total_count_3\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n\nCluster 10:\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- __as3_type_fraction\n --- numerator_1\n --- denominator_1\n --- fraction_cblock_counts_\n --- denominator_3\n --- pieces_1_5\n --- denominator_5\n --- pieces_1_3\n --- 
denominator_7\n --- denominator_2\n --- numerator_2\n --- left_90\n --- left_80\n --- pieces_1_4\n --- denominator_4\n --- pieces_1\n --- fraction\n --- bar2_\n --- fraction_cblock_containment_\n --- unit2_\n --- pieces_1_7\n --- pieces_1_6\n --- right_780\n --- bar1_\n --- 1_\n --- numerator_3\n --- pieces_1_2\n\nCluster 11:\n --- pieces_1_10\n --- fraction_circle_groups_\n --- chains_\n --- lcm_sum_\n --- sum_\n --- fraction_circle_counts_\n --- denominator_10\n --- fraction\n --- 1_2_\n --- __as3_type_fraction\n --- scale_1\n --- unit_\n --- fraction_circle_containment_\n --- circle1_\n --- 10\n --- purples\n --- 1_5_\n --- 1_\n --- x_300\n --- scale_0\n --- y_300\n --- numerator_1\n --- pieces_1\n --- denominator_5\n --- pieces_1_8\n --- right_270\n --- 1_10_5\n --- pieces_1_5\n --- unit2_\n --- unit1_\n\nCluster 12:\n --- fraction_circle_groups_\n --- unit_\n --- pieces_1_6\n --- fraction_circle_counts_\n --- lcm_sum_\n --- sum_\n --- chains_\n --- scale_0\n --- __as3_type_fraction\n --- x_512\n --- fraction_circle_containment_\n --- numerator_1\n --- y_350\n --- 1_1\n --- pieces_1_2\n --- bigger\n --- fraction_circle_total_count_2\n --- denominator_6\n --- pieces_1\n --- denominator_2\n --- pieces_1_3\n --- 125\n --- model\n --- scale_1\n --- answer\n --- 1_2_1\n --- left\n --- fraction\n --- 1_6_3\n --- cake\n\nCluster 13:\n --- missing\n --- numerator\n --- enter\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- whole_\n --- answer\n --- object\n --- input_1\n --- mult_n_1_\n --- mult_d_1_\n --- eqn_2\n --- input_a_1\n --- den_12\n --- 12\n --- input_2\n --- input_a_2\n --- 15\n --- den_15\n --- eqn_1_2\n --- input_3\n --- input_a_3\n --- den_10\n --- 2_3\n --- num_7\n --- mult_n_2_\n --- mult_d_2_\n --- den_4\n --- num_4\n --- den_6\n\nCluster 14:\n --- pieces_1_8\n --- fraction_cblock_chains_\n --- sum_\n --- lcm_sum_\n --- __as3_type_fraction\n --- denominator_8\n --- bar1_\n --- numerator_1\n --- denominator_1\n --- fraction_cblock_counts_\n --- 
left_130\n --- right_820\n --- bar\n --- numerator_3\n --- fraction_cblock_containment_\n --- plain_image_groups_\n --- numerator_8\n --- numerator_7\n --- pieces_1\n --- numerator_5\n --- black\n --- 1_1\n --- 1_8_8\n --- object\n --- denominator_4\n --- fraction\n --- eighth\n --- denominator_2\n --- gray\n --- numerator_6\n\nCluster 15:\n --- pieces_1_3\n --- fraction_circle_groups_\n --- chains_\n --- sum_\n --- lcm_sum_\n --- 1_\n --- denominator_3\n --- fraction_circle_counts_\n --- __as3_type_fraction\n --- unit1_\n --- unit2_\n --- fraction\n --- fraction_circle_containment_\n --- scale_0\n --- circle1_\n --- browns\n --- 1_3_3\n --- pieces_1\n --- scale_1\n --- circle\n --- numerator_1\n --- y_300\n --- fraction_circle_total_count_4\n --- black\n --- x_300\n --- input_3\n --- input_a_3\n --- numerator_3\n --- 1_1\n --- numerator_2\n\nCluster 16:\n --- 81\n --- order\n --- arrange\n --- greatest\n --- boxes\n --- fractions\n --- drag\n --- fraction\n --- 12\n --- 123\n --- 41\n --- 83\n --- 63\n --- 54\n --- 42\n --- 11\n --- 125\n --- 15\n --- 33\n --- 32\n --- 14\n --- 58\n --- decimals\n --- 10\n --- 09\n --- input_\n --- answer\n --- enter\n --- fraction_input_value_2\n --- fraction_input_value_1_8\n\nCluster 17:\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- __as3_type_fraction\n --- numerator_1\n --- denominator_8\n --- denominator_12\n --- pieces_1_12\n --- denominator_1\n --- pieces_1_8\n --- left_175\n --- denominator_4\n --- fraction_cblock_counts_\n --- pieces_1_6\n --- pieces_1_4\n --- right_865\n --- pieces_1\n --- fraction\n --- left_90\n --- unit3_\n --- denominator_6\n --- right_347\n --- denominator_3\n --- unit2_\n --- left_347\n --- denominator_2\n --- numerator_2\n --- numerator_3\n --- fraction_cblock_containment_\n --- right_780\n\nCluster 18:\n --- pieces_1_9\n --- fraction_circle_groups_\n --- chains_\n --- sum_\n --- lcm_sum_\n --- denominator_9\n --- __as3_type_fraction\n --- 1_3_\n --- fraction_circle_counts_\n --- 
fraction\n --- fraction_cblock_chains_\n --- 1_\n --- numerator_1\n --- fraction_circle_containment_\n --- scale_1\n --- whites\n --- numerator_9\n --- unit1_\n --- bar1_\n --- 1_9_9\n --- denominator_1\n --- pieces_1\n --- unit2_\n --- y_300\n --- x_300\n --- 1_9_3\n --- 1_1\n --- unit_\n --- den_9\n --- input_a_9\n\nCluster 19:\n --- numberline_associations_\n --- line\n --- number\n --- mile\n --- divide\n --- location\n --- lengths\n --- label\n --- correct\n --- drag\n --- pos_value_0\n --- equal\n --- den_input_4\n --- den_input_6\n --- answer\n --- parts\n --- den_input_8\n --- den_input_3\n --- pos_value_1\n --- fraction\n --- position_380\n --- 25\n --- 67\n --- position_490\n --- 75\n --- position_200\n --- position_550\n --- 33\n --- plain_image_groups_\n --- position_260\n\nCluster 20:\n --- rectangle\n --- fraction_input_value_\n --- fraction\n --- shade\n --- shaded\n --- match\n --- 2_6\n --- circle\n --- answer\n --- 4_8\n --- equivalent\n --- input_a_3\n --- 2_8\n --- 12\n --- object\n --- input_a_2\n --- 24\n --- problem\n --- input_a_6\n --- 13\n --- radio_choice_b\n --- 16\n --- 4_6\n --- input_8\n --- 3_8\n --- input_a_4\n --- 10\n --- bar\n --- 15\n --- 3_6\n\nCluster 21:\n --- object\n --- number\n --- form\n --- simplest\n --- mixed\n --- enter\n --- answer\n --- whole_1\n --- fraction_input_value_1\n --- did\n --- line\n --- shown\n --- youranswer\n --- miles\n --- num_1\n --- whole_2\n --- fraction_input_value_2\n --- den_4\n --- long\n --- divided\n --- den_5\n --- swim\n --- far\n --- bike\n --- num_3\n --- fraction\n --- den_6\n --- bar\n --- num_2\n --- ate\n\nCluster 22:\n --- match\n --- fraction_input_value_\n --- shade\n --- fraction\n --- input_a_\n --- choose\n --- comparison\n --- correct\n --- circle\n --- bar\n --- 1_3\n --- flower\n --- polygon\n --- star\n --- 3_4\n --- 1_6\n --- 2_4\n --- 4_8\n --- 4_6\n --- 3_6\n --- 3_8\n --- 2_5\n --- 3_5\n --- 2_8\n --- 6_8\n --- 1_8\n --- 2_6\n --- 2_3\n --- 5_6\n --- 1_5\n\nCluster 
23:\n --- length\n --- yard\n --- yards\n --- divide\n --- line\n --- lengths\n --- number\n --- fraction_input_value_\n --- bar\n --- equal\n --- enter\n --- whole_\n --- fraction\n --- den_input_8\n --- num_1\n --- divided\n --- fifths\n --- parts\n --- den_input_6\n --- den_input_4\n --- input_7\n --- answer\n --- den_8\n --- den_input_3\n --- den_2\n --- num_2\n --- den_6\n --- den_4\n --- den_3\n --- num_3\n\nCluster 24:\n --- box\n --- drag\n --- answer\n --- equivalent\n --- shown\n --- homework\n --- person\n --- fraction\n --- piece\n --- shows\n --- 12\n --- far\n --- traveled\n --- half\n --- yellow\n --- pieces\n --- bar\n --- unit\n --- brown\n --- 24\n --- sum\n --- blue\n --- decimal\n --- dark\n --- 51\n --- 25\n --- greater\n --- 22\n --- circle\n --- black\n\nCluster 25:\n --- fraction_circle_groups_\n --- 1_2_\n --- pieces_1_2\n --- fraction\n --- lcm_sum_\n --- sum_\n --- chains_\n --- denominator_2\n --- pieces_1_4\n --- fraction_circle_counts_\n --- __as3_type_fraction\n --- 1_\n --- fraction_circle_containment_\n --- numerator_1\n --- scale_1\n --- y_300\n --- unit1_\n --- pieces_1\n --- 1_2_2\n --- numerator_2\n --- unit2_\n --- fraction_circle_total_count_3\n --- 1_1\n --- yellows\n --- input_2\n --- scale_0\n --- circle\n --- x_512\n --- input_a_2\n --- x_300\n\nCluster 26:\n --- fraction_cblock_chains_\n --- pieces_1_6\n --- lcm_sum_\n --- sum_\n --- __as3_type_fraction\n --- denominator_6\n --- numerator_1\n --- denominator_1\n --- bar1_\n --- fraction_cblock_counts_\n --- left_130\n --- numerator_5\n --- denominator_3\n --- fraction_cblock_containment_\n --- numerator_2\n --- right_820\n --- pieces_1\n --- bar\n --- numerator_6\n --- unit2_\n --- fraction\n --- denominator_2\n --- left_90\n --- unit1_\n --- object\n --- numerator_4\n --- 1_1\n --- left_176\n --- black\n --- 1_6_6\n\nCluster 27:\n --- pizza\n --- fraction_circle_groups_\n --- ate\n --- eat\n --- friend\n --- x_475\n --- y_384\n --- did\n --- 
fraction_circle_total_count_1\n --- fraction_circle_counts_\n --- scale_1\n --- 1_1\n --- pieces_1\n --- greater\n --- pieces_1_8\n --- 10\n --- half\n --- answer\n --- 12\n --- unit_\n --- lcm_sum_\n --- sum_\n --- scale_0\n --- chains_\n --- denominator_8\n --- fraction_circle_total_count_2\n --- __as3_type_fraction\n --- y_470\n --- unit1_\n --- leftover\n\nCluster 28:\n --- radio_text_\n --- choose\n --- comparison\n --- input_\n --- correct\n --- answer\n --- input_a_\n --- fraction_cblock_total_count_3\n --- fraction_cblock_total_count_4\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_cblock_total_count_17\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_cblock_total_count_18\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_circle_total_count_5\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_2\n --- fraction_cblock_total_count_7\n\nCluster 29:\n --- different\n --- numbers\n --- 24\n --- make\n --- true\n --- sentence\n --- using\n --- number\n --- enter\n --- answer\n --- 12\n --- comparison1\n --- fractions\n --- input_\n --- input_a_\n --- input_2\n --- input_a_2\n --- correct\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_12\n --- fraction_input_value_3_6\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n\nCluster 
30:\n --- math\n --- sentence\n --- complete\n --- correct\n --- drag\n --- tenths\n --- answer\n --- people\n --- equally\n --- amounts\n --- object\n --- undefined\n --- express\n --- cookies\n --- 10\n --- pizzas\n --- numbers\n --- boxes\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- input_a_6\n --- input_6\n --- 13\n --- makes\n --- true\n --- cake\n --- wants\n --- friend\n --- box\n --- 11\n\nCluster 31:\n --- fraction_circle_groups_\n --- y_350\n --- scale_1\n --- object\n --- fraction_circle_counts_\n --- say\n --- fraction_circle_total_count_2\n --- x_750\n --- x_250\n --- cover\n --- piece\n --- dark\n --- orange\n --- blue\n --- 1_5_1\n --- brown\n --- pink\n --- pieces_1_5\n --- 1_4_1\n --- yellow\n --- 1_3_1\n --- red\n --- pieces_1_4\n --- pieces_1_3\n --- reds\n --- 1_2_1\n --- pieces_1_2\n --- answer\n --- green\n --- greens\n\nCluster 32:\n --- numberline_associations_\n --- plain_image_groups_\n --- total_1\n --- swf\n --- obj_name_object\n --- pos_value_0\n --- mile\n --- drag\n --- start\n --- url_assets_cms_wootmath_fractions_number_line_markers_start_marker\n --- meter\n --- answer\n --- beetle\n --- input_12\n --- shark\n --- fraction_cblock_chains_\n --- 10\n --- final\n --- input_4\n --- obj_name_obj\n --- location\n --- input_5\n --- tenths\n --- yard\n --- input_3\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_elephant\n --- elephant\n --- walked\n --- panda\n --- position_260\n\nCluster 33:\n --- 2_3\n --- match\n --- shade\n --- fraction_input_value_\n --- fraction\n --- input_a_\n --- choose\n --- comparison\n --- correct\n --- circle\n --- bar\n --- fraction_cblock_total_count_6\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_cblock_total_count_18\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- fraction_cblock_total_count_2\n 
--- fraction_cblock_total_count_3\n --- fraction_cblock_total_count_5\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n\nCluster 34:\n --- fraction_cblock_chains_\n --- sum_\n --- lcm_sum_\n --- bar1_\n --- __as3_type_fraction\n --- numerator_1\n --- left_130\n --- bar\n --- denominator_1\n --- fraction_cblock_counts_\n --- black\n --- right_820\n --- object\n --- pieces_1_4\n --- dragging\n --- numerator_2\n --- denominator_5\n --- 1_1\n --- pieces_1_5\n --- fraction_cblock_containment_\n --- denominator_4\n --- fraction_cblock_total_count_2\n --- denominator_2\n --- pieces_1\n --- model\n --- denominator_3\n --- denominator_7\n --- fraction_cblock_total_count_3\n --- pieces_1_3\n --- numerator_3\n\nCluster 35:\n --- plain_image_groups_\n --- fraction_cblock_chains_\n --- pieces_1_6\n --- total_1\n --- swf\n --- sum_\n --- lcm_sum_\n --- denominator_6\n --- __as3_type_fraction\n --- url_assets_cms_wootmath_fractions_number_line_markers_start_marker\n --- numberline_associations_\n --- numerator_1\n --- fraction_cblock_counts_\n --- far\n --- traveled\n --- left_96\n --- figure\n --- use\n --- url_assets_cms_wootmath_fractions_number_line_markers_end_marker\n --- yard\n --- numerator_5\n --- denominator_3\n --- pieces\n --- 1_6_6\n --- start\n --- 1_3_\n --- den_6\n --- input_6\n --- fraction_cblock_total_count_6\n --- 1_6_4\n\nCluster 36:\n --- den_3\n --- fraction_input_value_2_3\n --- num_2\n --- whole_\n --- enter\n --- greatest\n --- fraction_input_value_1_3\n --- fraction\n --- answer\n --- num_1\n --- fractions\n --- smallest\n --- greater\n --- form\n --- simplest\n --- different\n --- wearing\n --- improper\n --- 15\n --- num_6\n --- num_7\n --- 10\n --- num_3\n --- complete\n --- left\n --- write\n --- object\n --- ate\n --- plain_image_groups_\n --- 
difference\n\nCluster 37:\n --- png\n --- plain_image_groups_\n --- total_2\n --- arrows\n --- url_assets_cms_wootmath_fractions_ui_left_arrow\n --- url_assets_cms_wootmath_fractions_ui_right_arrow\n --- decimal\n --- use\n --- locations\n --- points\n --- correct\n --- total_1\n --- location\n --- answer\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_9\n --- fraction_circle_total_count_4\n --- fraction_cblock_total_count_2\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_cblock_total_count_3\n --- fraction_cblock_total_count_4\n --- fraction_cblock_total_count_5\n --- fraction_cblock_total_count_6\n --- fraction_circle_total_count_15\n --- fraction_cblock_total_count_7\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n\nCluster 38:\n --- hundredths\n --- object\n --- grid\n --- model\n --- decimal\n --- answer\n --- 19\n --- tenth\n --- 24\n --- 13\n --- 15\n --- tenths\n --- 11\n --- 14\n --- 80\n --- 16\n --- 17\n --- 18\n --- 12\n --- 22\n --- 10\n --- 20\n --- 60\n --- 40\n --- input_a_8\n --- input_8\n --- use\n --- input_a_6\n --- shown\n --- input_6\n\nCluster 39:\n --- 13\n --- 15\n --- grid\n --- numerator\n --- denominator\n --- enter\n --- den_15\n --- half\n --- answer\n --- model\n --- whole_\n --- 100\n --- input_0\n --- hundredths\n --- input_a_0\n --- 11\n --- fraction\n --- sum\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- num_11\n --- total\n --- green\n --- decimal\n --- 20\n --- bar\n --- 1_5\n --- bigger\n --- covered\n --- missing\n\nCluster 40:\n --- decimal\n --- use\n --- make\n --- true\n --- pieces\n --- answer\n --- grid\n --- shown\n --- model\n --- 41\n --- 43\n --- 42\n --- 55\n --- 73\n --- 58\n --- 54\n --- 51\n --- 52\n --- 63\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_4\n --- 
fraction_circle_total_count_6\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_7\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n\nCluster 41:\n --- pieces_1_15\n --- fraction_circle_groups_\n --- chains_\n --- 1_5_\n --- 1_3_\n --- denominator_15\n --- fraction\n --- fraction_circle_counts_\n --- lcm_sum_\n --- sum_\n --- scale_1\n --- __as3_type_fraction\n --- greens\n --- fraction_circle_containment_\n --- orange\n --- x_300\n --- circle1_\n --- pieces_1_5\n --- right_270\n --- y_300\n --- 1_15_5\n --- 1_5_1\n --- 1_15_3\n --- numerator_1\n --- pieces_1_3\n --- left_30\n --- left_342\n --- brown\n --- 1_15_\n --- 1_3_1\n\nCluster 42:\n --- fraction_input_value_1_2\n --- den_2\n --- num_1\n --- whole_\n --- 50\n --- enter\n --- 100\n --- answer\n --- different\n --- fractions\n --- form\n --- simplest\n --- plain_image_groups_\n --- fraction\n --- greater\n --- smallest\n --- 10\n --- long\n --- total_1\n --- swf\n --- object\n --- express\n --- greatest\n --- circle\n --- 12\n --- num_2\n --- multiplication\n --- makes\n --- shaded\n --- make\n\nCluster 43:\n --- input_\n --- input_a_\n --- correct\n --- enter\n --- comparison1\n --- 100\n --- 20\n --- 50\n --- 60\n --- 10\n --- 80\n --- 40\n --- 12\n --- 24\n --- 11\n --- 22\n --- 51\n --- 52\n --- 18\n --- 31\n --- 13\n --- 32\n --- 14\n --- 42\n --- 30\n --- 19\n --- 25\n --- 41\n --- complete\n --- half\n\nCluster 44:\n --- problem_text_1\n --- complete\n --- addition\n --- sentence\n --- problem_text_1_2\n --- bitmap_text_inputs_\n --- bitmap_text_interp_\n --- problem_text_2\n --- input_b_1\n --- input_a_1\n --- input_a_2\n --- problem_text_0\n --- input_b_2\n --- answer\n --- 1_6\n --- 1_8\n --- 1_10\n --- 1_4\n --- input_1\n --- 2_8\n --- problem_text_3\n --- 2_6\n --- 4_6\n --- input_2\n --- input_a_0\n --- 2_4\n --- 3_4\n --- input_a_3\n --- 6_8\n 
--- 1_9\n\nCluster 45:\n --- plain_image_groups_\n --- fraction_cblock_chains_\n --- total_1\n --- swf\n --- lcm_sum_\n --- sum_\n --- __as3_type_fraction\n --- url_assets_cms_wootmath_fractions_number_line_markers_start_marker\n --- traveled\n --- url_assets_cms_wootmath_fractions_number_line_markers_end_marker\n --- numerator_1\n --- fraction_cblock_counts_\n --- far\n --- left_96\n --- pieces_1_10\n --- pieces_1_4\n --- denominator_3\n --- pieces_1_3\n --- denominator_4\n --- figure\n --- denominator_2\n --- use\n --- distance\n --- answer\n --- pieces_1_8\n --- shows\n --- pieces_1_12\n --- fraction_cblock_total_count_2\n --- pieces_1_2\n --- fraction_cblock_total_count_1\n\nCluster 46:\n --- juice\n --- pitcher\n --- plain_image_groups_\n --- orange\n --- total_1\n --- swf\n --- whole_\n --- fraction\n --- num_1\n --- answer\n --- num_2\n --- den_4\n --- den_3\n --- den_6\n --- den_8\n --- fraction_input_value_1_2\n --- den_2\n --- num_3\n --- fraction_input_value_1_3\n --- url_assets_cms_wootmath_fractions_number_line_juice_oj_tupperware_fourths_02\n --- fraction_input_value_3_4\n --- fraction_input_value_2_3\n --- fraction_input_value_1_4\n --- fraction_input_value_2_4\n --- fraction_input_value_1_8\n --- fraction_input_value_1_6\n --- num_5\n --- fraction_input_value_2_6\n --- fraction_input_value_2_8\n --- fraction_input_value_5_6\n\nCluster 47:\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- __as3_type_fraction\n --- numerator_1\n --- left_100\n --- bar2_\n --- bar1_\n --- denominator_2\n --- denominator_1\n --- right_790\n --- pieces_1_12\n --- fraction_cblock_counts_\n --- fraction_cblock_containment_\n --- pieces_1_2\n --- pieces_1_8\n --- right_445\n --- pieces_1\n --- pieces_1_6\n --- denominator_4\n --- bar0_\n --- denominator_3\n --- pieces_1_4\n --- undefined\n --- 1_2\n --- numerator_2\n --- numerator_3\n --- denominator_8\n --- denominator_6\n --- denominator_12\n\nCluster 48:\n --- pieces_1_10\n --- fraction_cblock_chains_\n --- 
sum_\n --- lcm_sum_\n --- __as3_type_fraction\n --- denominator_10\n --- numerator_1\n --- denominator_1\n --- bar1_\n --- fraction_cblock_counts_\n --- denominator_5\n --- denominator_2\n --- bar2_\n --- left_100\n --- left_80\n --- fraction_cblock_containment_\n --- unit1_\n --- unit2_\n --- pieces_1\n --- 10\n --- numerator_3\n --- numerator_5\n --- numerator_2\n --- pieces_1_6\n --- fraction\n --- numerator_4\n --- pieces_1_5\n --- numerator_7\n --- left_130\n --- numerator_9\n\nCluster 49:\n --- area_target_contents_\n --- plain_image_groups_\n --- image_object_groups_\n --- x_468\n --- swf\n --- y_118\n --- total_1\n --- drag\n --- night\n --- piranhas\n --- answer\n --- chocolate\n --- pizza\n --- fish\n --- off_5\n --- on_0\n --- tenths\n --- on_3\n --- box\n --- 10\n --- number\n --- off_6\n --- correct\n --- total_12\n --- on_2\n --- total_9\n --- off_2\n --- total_8\n --- off_3\n --- on_4\n\nCluster 50:\n --- makes\n --- input_a_1\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- statement\n --- true\n --- enter\n --- number\n --- fraction_input_value_\n --- form\n --- simplest\n --- input_1\n --- 10\n --- fraction\n --- input_a_4\n --- input_4\n --- 2_4\n --- answer\n --- input_3\n --- input_a_3\n --- input_a_5\n --- input_5\n --- input_2\n --- 4_8\n --- 12\n --- input_a_2\n --- input_a_6\n --- input_6\n --- 2_8\n --- 3_6\n\nCluster 51:\n --- 12\n --- arrange\n --- greatest\n --- boxes\n --- fractions\n --- drag\n --- fraction\n --- 11\n --- 10\n --- answer\n --- total\n --- grid\n --- model\n --- fraction_cblock_chains_\n --- fraction_cblock_total_count_1\n --- lcm_sum_\n --- sum_\n --- numerator_1\n --- middle\n --- __as3_type_fraction\n --- shade\n --- denominator_3\n --- greater\n --- left\n --- circle\n --- fraction_cblock_counts_\n --- 1_3_1\n --- enter\n --- 14\n --- bar\n\nCluster 52:\n --- make\n --- true\n --- boxes\n --- fractions\n --- comparison\n --- drag\n --- answer\n --- 12\n --- tenths\n --- enter\n --- numbers\n --- 10\n --- 
statement\n --- input_\n --- input_a_5\n --- input_5\n --- input_a_4\n --- input_a_6\n --- input_4\n --- input_6\n --- input_a_7\n --- input_a_8\n --- input_3\n --- input_a_3\n --- input_7\n --- input_8\n --- input_2\n --- input_a_2\n --- 16\n --- input_a_9\n\nCluster 53:\n --- denominator\n --- greater\n --- fractions\n --- input_a_3\n --- input_a_2\n --- input_a_4\n --- input_a_5\n --- input_a_6\n --- bigger\n --- smaller\n --- input_a_1\n --- input_a_7\n --- answer\n --- enter\n --- input_4\n --- input_a_9\n --- input_9\n --- whole_\n --- input_5\n --- input_1\n --- input_2\n --- input_a_\n --- input_a_8\n --- input_a_10\n --- input_3\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- input_b_4\n --- input_b_6\n --- input_b_2\n\nCluster 54:\n --- object\n --- decimal\n --- input_0\n --- input_a_0\n --- half\n --- enter\n --- hundredths\n --- answer\n --- model\n --- 14\n --- 17\n --- 16\n --- 18\n --- bigger\n --- 19\n --- 12\n --- 22\n --- 09\n --- 25\n --- bar\n --- black\n --- 11\n --- 50\n --- 15\n --- 33\n --- 24\n --- plain_image_groups_\n --- make\n --- 41\n --- dragging\n\nCluster 55:\n --- using\n --- model\n --- answer\n --- size\n --- fraction\n --- pieces\n --- equivalent\n --- greater\n --- cover\n --- dark\n --- equal\n --- grays\n --- sixths\n --- blue\n --- thirds\n --- fifths\n --- yellows\n --- yellow\n --- halves\n --- fourths\n --- browns\n --- piece\n --- blues\n --- smaller\n --- cake\n --- reds\n --- pinks\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n --- fraction_input_value_2_5\n\nCluster 56:\n --- plain_image_groups_\n --- radio_group_mc1_\n --- radio_group_mc2_\n --- text_yes\n --- choice_a\n --- total_1\n --- swf\n --- shapes\n --- object\n --- shaded\n --- text_no\n --- choice_b\n --- url_assets_cms_wootmath_fractions_equal_parts_fourths_fourth_03\n --- answer\n --- fraction_circle_total_count_1\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_14\n --- 
fraction_circle_total_count_11\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_15\n --- youranswer\n --- fraction_circle_counts_\n --- fraction_circle_groups_\n --- fraction_circle_total_count_2\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_7\n --- fraction_cblock_total_count_6\n\nCluster 57:\n --- fraction_circle_groups_\n --- scale_0\n --- fraction_circle_counts_\n --- piece\n --- scale_1\n --- x_675\n --- x_200\n --- unit_\n --- x_811\n --- box\n --- pieces_1_10\n --- pieces_1_9\n --- unit\n --- fraction_circle_total_count_4\n --- pieces_1_6\n --- pieces\n --- chains_\n --- x_550\n --- y_450\n --- y_415\n --- y_300\n --- pieces_1_4\n --- answer\n --- pieces_1_5\n --- drag\n --- 1_10_1\n --- half\n --- 1_6_1\n --- lcm_sum_\n --- sum_\n\nCluster 58:\n --- arrange\n --- order\n --- greatest\n --- boxes\n --- fractions\n --- drag\n --- fraction\n --- 54\n --- 51\n --- 42\n --- 83\n --- 44\n --- 123\n --- 32\n --- 52\n --- 31\n --- answer\n --- 12\n --- decimals\n --- 41\n --- 15\n --- 63\n --- 20\n --- 18\n --- 34\n --- 16\n --- 22\n --- input_\n --- 14\n --- 75\n\nCluster 59:\n --- plain_image_groups_\n --- hot\n --- mug\n --- chocolate\n --- whole_\n --- num_1\n --- fraction_input_value_1_4\n --- den_4\n --- total_1\n --- swf\n --- fraction\n --- url_assets_cms_wootmath_fractions_number_line_mug_mug_half_01\n --- answer\n --- den_3\n --- fraction_input_value_1_3\n --- fraction_input_value_1_2\n --- num_2\n --- den_2\n --- fraction_input_value_2_4\n --- fraction_input_value_2_3\n --- wearing\n --- fraction_input_value_3_4\n --- num_3\n --- circle\n --- 25\n --- long\n --- 100\n --- den_1\n --- enter\n --- fraction_input_value_1_5\n\nCluster 60:\n --- pieces_1_6\n --- fraction_circle_groups_\n --- chains_\n --- lcm_sum_\n --- sum_\n --- fraction_circle_counts_\n --- denominator_6\n --- unit1_\n --- 
__as3_type_fraction\n --- unit2_\n --- circle1_\n --- fraction_circle_containment_\n --- scale_0\n --- 1_2_\n --- fraction\n --- 1_\n --- y_300\n --- pieces_1\n --- numerator_1\n --- scale_1\n --- pinks\n --- x_300\n --- pieces_1_8\n --- 1_6_3\n --- denominator_3\n --- numerator_2\n --- numerator_6\n --- answer\n --- circle\n --- object\n\nCluster 61:\n --- den_10\n --- whole_\n --- 10\n --- smallest\n --- num_1\n --- fraction\n --- enter\n --- answer\n --- fraction_input_value_1_5\n --- fraction_input_value_1_10\n --- fraction_input_value_1_8\n --- den_5\n --- popcorn\n --- num_2\n --- den_8\n --- smaller\n --- wearing\n --- robots\n --- num_7\n --- fraction_input_value_1_7\n --- greater\n --- fraction_input_value_1_6\n --- den_7\n --- num_3\n --- num_6\n --- boxes\n --- den_6\n --- num_9\n --- greatest\n --- num_5\n\nCluster 62:\n --- form\n --- simplest\n --- enter\n --- difference\n --- mult_n_1_\n --- mult_d_1_\n --- num_1\n --- object\n --- sum\n --- answer\n --- whole_\n --- mult_d_2_\n --- mult_n_2_\n --- den_4\n --- num_3\n --- den_6\n --- fraction_input_value_1\n --- whole_1\n --- den_3\n --- whole_3\n --- fraction_input_value_3\n --- fraction_input_value_2\n --- whole_2\n --- bitmap_text_inputs_\n --- bitmap_text_interp_\n --- den_2\n --- num_2\n --- 15\n --- den_5\n --- 1_4\n\nCluster 63:\n --- shaded\n --- parts\n --- whole_\n --- equal\n --- den_8\n --- fraction\n --- rectangle\n --- den_6\n --- answer\n --- flower\n --- polygon\n --- star\n --- num_1\n --- num_3\n --- num_2\n --- num_4\n --- den_7\n --- num_7\n --- fraction_input_value_1_8\n --- num_5\n --- fraction_input_value_3_6\n --- fraction_input_value_1_6\n --- fraction_input_value_2_8\n --- fraction_input_value_4_8\n --- num_6\n --- fraction_input_value_2_6\n --- piranhas\n --- fraction_input_value_6_8\n --- fraction_input_value_3_8\n --- fraction_input_value_4_6\n\nCluster 64:\n --- 30\n --- input_\n --- input_a_\n --- enter\n --- correct\n --- 16\n --- 10\n --- 20\n --- 17\n --- 14\n --- 
100\n --- 50\n --- 13\n --- comparison1\n --- half\n --- 60\n --- 12\n --- input_0\n --- hundredths\n --- input_a_0\n --- homework\n --- answer\n --- total\n --- 15\n --- 80\n --- math\n --- 34\n --- 11\n --- makes\n --- 32\n\nCluster 65:\n --- mile\n --- run\n --- block\n --- ran\n --- does\n --- wants\n --- miles\n --- answer\n --- long\n --- far\n --- drag\n --- did\n --- walked\n --- rex\n --- whole_\n --- elephant\n --- object\n --- num_1\n --- input_a_2\n --- input_a_3\n --- distance\n --- total\n --- line\n --- express\n --- fraction_input_value_1\n --- whole_1\n --- hippo\n --- number\n --- giraffe\n --- den_2\n\nCluster 66:\n --- den_5\n --- fraction_input_value_4_5\n --- num_4\n --- whole_\n --- fraction_input_value_2_5\n --- fraction\n --- enter\n --- greatest\n --- num_2\n --- answer\n --- wearing\n --- num_3\n --- smallest\n --- piranhas\n --- shaded\n --- greater\n --- 15\n --- form\n --- fractions\n --- simplest\n --- different\n --- rectangle\n --- smaller\n --- 10\n --- ate\n --- left\n --- robots\n --- improper\n --- num_7\n --- pizza\n\nCluster 67:\n --- words\n --- model_lbl_0\n --- express\n --- decimal\n --- tenths\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- model\n --- answer\n --- input_b_5\n --- input_b_4\n --- input_b_6\n --- input_b_7\n --- input_a_5\n --- input_b_1\n --- input_b_3\n --- tenth\n --- input_a_4\n --- input_a_9\n --- input_a_6\n --- input_a_7\n --- input_b_8\n --- input_a_1\n --- input_a_8\n --- input_a_3\n --- input_b_2\n --- input_a_2\n --- 10\n --- input_a_\n --- input_a_10\n\nCluster 68:\n --- radio_group_problem_\n --- radio_choice_b\n --- radio_choice_a\n --- choice_b\n --- numerator\n --- choice_a\n --- half\n --- denominator\n --- fraction\n --- estimate\n --- ate\n --- cake\n --- greater\n --- pie\n --- fraction_circle_groups_\n --- fourth\n --- bar\n --- text_one\n --- radio_text_one\n --- piece\n --- cut\n --- y_400\n --- eat\n --- seventh\n --- fraction_circle_total_count_1\n --- 24\n --- did\n --- 
unit\n --- x_512\n --- fifth\n\nCluster 69:\n --- grid\n --- model\n --- answer\n --- represented\n --- input0_0\n --- 100\n --- 14\n --- enter\n --- numbers\n --- 09\n --- 20\n --- boxes\n --- drag\n --- decimal\n --- 18\n --- input_0\n --- input_a_0\n --- pieces\n --- covering\n --- 12\n --- 55\n --- 40\n --- 10\n --- 33\n --- 41\n --- 75\n --- 32\n --- 73\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_8\n\nCluster 70:\n --- pieces_1_8\n --- fraction_circle_groups_\n --- chains_\n --- sum_\n --- lcm_sum_\n --- fraction_circle_counts_\n --- denominator_8\n --- __as3_type_fraction\n --- 1_2_\n --- fraction_circle_containment_\n --- fraction\n --- piece_0_\n --- unit1_\n --- unit_\n --- scale_1\n --- scale_0\n --- unit2_\n --- pieces_1_4\n --- y_300\n --- numerator_1\n --- pieces_1\n --- circle1_\n --- left_0\n --- piece_1_\n --- 1_8_4\n --- x_512\n --- denominator_4\n --- piece_2_\n --- 1_\n --- 1_1\n\nCluster 71:\n --- amounts\n --- model\n --- answer\n --- tenths\n --- box\n --- drag\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_6\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_16\n --- youranswer\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_7\n --- fraction_cblock_total_count_6\n --- fraction_cblock_total_count_5\n --- fraction_cblock_total_count_4\n --- fraction_cblock_total_count_3\n --- fraction_cblock_total_count_2\n\nCluster 72:\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- left_125\n --- __as3_type_fraction\n --- numerator_1\n --- 
1_\n --- right_815\n --- denominator_1\n --- fraction\n --- denominator_2\n --- fraction_cblock_counts_\n --- right_470\n --- denominator_4\n --- pieces_1\n --- denominator_12\n --- pieces_1_2\n --- bitmap_text_interp_\n --- bitmap_text_inputs_\n --- 1_2\n --- right_297\n --- denominator_3\n --- pieces_1_6\n --- pieces_1_10\n --- shown\n --- pieces_1_12\n --- enter\n --- fraction_cblock_containment_\n --- left_297\n --- equal\n\nCluster 73:\n --- 1_3_\n --- pieces_1_6\n --- fraction_circle_groups_\n --- chains_\n --- fraction\n --- fraction_circle_counts_\n --- pieces_1_3\n --- sum_\n --- lcm_sum_\n --- scale_1\n --- __as3_type_fraction\n --- brown\n --- denominator_6\n --- fraction_circle_containment_\n --- 1_3_1\n --- 1_6_2\n --- x_512\n --- numerator_1\n --- y_300\n --- denominator_3\n --- numerator_2\n --- pizza\n --- left_0\n --- right_240\n --- cover\n --- fractionthat\n --- right_120\n --- 1_1\n --- input_2\n --- left_30\n\nCluster 74:\n --- complete\n --- sentence\n --- bitmap_text_inputs_\n --- bitmap_text_interp_\n --- addition\n --- problem_text_3\n --- problem_text_1_2\n --- problem_text_0\n --- input_a_3\n --- input_a_5\n --- answer\n --- math\n --- input_b_3\n --- input_a_4\n --- problem_text_2\n --- input_5\n --- multiplication\n --- 3_6\n --- input_b_2\n --- input_4\n --- input_a_6\n --- input_b_4\n --- 4_6\n --- 10\n --- plain_image_groups_\n --- input_3\n --- 11\n --- input_6\n --- input_1\n --- input_a_7\n\nCluster 75:\n --- pieces_1_7\n --- denominator_7\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- fraction_circle_groups_\n --- __as3_type_fraction\n --- bar1_\n --- chains_\n --- left_130\n --- numerator_1\n --- denominator_1\n --- black\n --- bar\n --- object\n --- seventh\n --- fraction_cblock_counts_\n --- numerator_5\n --- fraction_circle_counts_\n --- numerator_6\n --- pieces_1\n --- unit1_\n --- 1_1\n --- numerator_3\n --- 1_\n --- right_820\n --- numerator_4\n --- dragging\n --- fraction\n --- light\n\nCluster 76:\n --- 
pieces_1_5\n --- fraction_circle_groups_\n --- sum_\n --- lcm_sum_\n --- denominator_5\n --- fraction_cblock_chains_\n --- chains_\n --- __as3_type_fraction\n --- circle1_\n --- fraction_circle_counts_\n --- bar1_\n --- numerator_1\n --- 1_\n --- denominator_1\n --- numerator_4\n --- scale_0\n --- numerator_5\n --- unit1_\n --- fraction_circle_containment_\n --- pieces_1\n --- fraction\n --- unit2_\n --- 1_5_5\n --- object\n --- black\n --- 1_1\n --- fifth\n --- scale_1\n --- numerator_3\n --- circle\n\nCluster 77:\n --- fraction_circle_groups_\n --- circle1_1_\n --- circle1_2_\n --- lcm_sum_\n --- sum_\n --- fraction_circle_counts_\n --- y_350\n --- scale_1\n --- __as3_type_fraction\n --- pieces_1_12\n --- numerator_1\n --- object\n --- pieces_1_6\n --- chains_\n --- fraction_circle_containment_\n --- x_750\n --- x_250\n --- say\n --- pieces_1_8\n --- cover\n --- pieces_1_15\n --- piece\n --- pieces_1_4\n --- dark\n --- pieces_1_10\n --- yellow\n --- left_270\n --- pieces_1_9\n --- blue\n --- denominator_6\n\nCluster 78:\n --- pieces_1_12\n --- fraction_circle_groups_\n --- sum_\n --- lcm_sum_\n --- fraction_cblock_chains_\n --- chains_\n --- __as3_type_fraction\n --- denominator_12\n --- 1_2_\n --- fraction\n --- fraction_circle_counts_\n --- numerator_1\n --- unit_\n --- 12\n --- fraction_circle_containment_\n --- scale_1\n --- fraction_cblock_counts_\n --- 1_12_6\n --- denominator_2\n --- denominator_1\n --- numerator_6\n --- reds\n --- pieces_1\n --- input_a_6\n --- pieces_1_2\n --- input_6\n --- numerator_11\n --- yellow\n --- 1_1\n --- x_300\n\nCluster 79:\n --- yards\n --- long\n --- fraction_input_value_\n --- bar\n --- whole_\n --- fraction\n --- num_1\n --- den_6\n --- num_2\n --- den_2\n --- object\n --- den_4\n --- den_8\n --- whole_3\n --- fraction_input_value_3\n --- num_3\n --- den_3\n --- num_5\n --- num_4\n --- 5_6\n --- den_5\n --- den_7\n --- num_7\n --- num_6\n --- den_1\n --- den_9\n --- answer\n --- 4_6\n --- den_\n --- 3_6\n\nCluster 80:\n 
--- den_9\n --- whole_\n --- fraction\n --- smallest\n --- enter\n --- fraction_input_value_1_9\n --- answer\n --- num_2\n --- num_4\n --- greater\n --- num_1\n --- num_5\n --- shaded\n --- num_8\n --- num_6\n --- num_7\n --- num_3\n --- wearing\n --- different\n --- smaller\n --- fractions\n --- piranhas\n --- cats\n --- greatest\n --- 10\n --- 12\n --- 15\n --- ate\n --- fraction_input_value_2_4\n --- den_8\n\nCluster 81:\n --- pieces_1_12\n --- 1_3_\n --- fraction_circle_groups_\n --- chains_\n --- fraction\n --- fraction_circle_counts_\n --- lcm_sum_\n --- sum_\n --- left_30\n --- 1_12_4\n --- brown\n --- input_a_4\n --- reds\n --- input_4\n --- denominator_12\n --- pieces_1_3\n --- __as3_type_fraction\n --- 1_3_1\n --- right_270\n --- scale_1\n --- fraction_circle_total_count_5\n --- numerator_4\n --- y_300\n --- x_300\n --- fraction_circle_containment_\n --- denominator_3\n --- numerator_1\n --- equals\n --- equal\n --- answer\n\nCluster 82:\n --- whole_\n --- den_7\n --- num_5\n --- fraction\n --- den_6\n --- greatest\n --- enter\n --- answer\n --- fraction_input_value_5_6\n --- den_8\n --- wearing\n --- num_6\n --- num_4\n --- num_3\n --- num_2\n --- fraction_input_value_5_8\n --- fraction_input_value_4_6\n --- different\n --- greater\n --- num_7\n --- fraction_input_value_2_6\n --- fraction_input_value_3_6\n --- shaded\n --- smaller\n --- fractions\n --- fraction_input_value_4_8\n --- fraction_input_value_1_7\n --- piranhas\n --- num_1\n --- cats\n\nCluster 83:\n --- tothe\n --- ans1\n --- box\n --- fractions\n --- denominator\n --- ans0\n --- numerator\n --- drag\n --- answer\n --- ans2\n --- fraction_cblock_total_count_4\n --- fraction_circle_total_count_13\n --- fraction_cblock_total_count_2\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- 
fraction_circle_total_count_12\n --- fraction_cblock_total_count_5\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_6\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n\nCluster 84:\n --- pieces_1_8\n --- 1_4_\n --- fraction_circle_groups_\n --- chains_\n --- pieces_1_4\n --- fraction\n --- fraction_circle_counts_\n --- sum_\n --- lcm_sum_\n --- left_0\n --- scale_1\n --- __as3_type_fraction\n --- denominator_8\n --- x_512\n --- blue\n --- fraction_circle_containment_\n --- dark\n --- denominator_4\n --- y_300\n --- numerator_2\n --- numerator_1\n --- 1_4_1\n --- pizza\n --- 1_8_2\n --- fractionthat\n --- cover\n --- right_180\n --- right_270\n --- 1_1\n --- right_90\n\nCluster 85:\n --- fraction_cblock_chains_\n --- sum_\n --- lcm_sum_\n --- __as3_type_fraction\n --- numerator_1\n --- left_175\n --- unit1_\n --- unit2_\n --- denominator_1\n --- right_865\n --- unit3_\n --- fraction_cblock_counts_\n --- denominator_4\n --- fraction_cblock_containment_\n --- denominator_12\n --- denominator_6\n --- denominator_8\n --- pieces_1\n --- denominator_2\n --- pieces_1_6\n --- pieces_1_4\n --- pieces_1_12\n --- denominator_3\n --- right_347\n --- pieces_1_8\n --- fraction_cblock_total_count_6\n --- numerator_2\n --- right_290\n --- pieces_1_3\n --- input_\n\nCluster 86:\n --- 1_4\n --- match\n --- fraction_input_value_\n --- shade\n --- fraction\n --- input_a_\n --- choose\n --- comparison\n --- correct\n --- bar\n --- star\n --- circle\n --- rectangle\n --- fraction_cblock_total_count_6\n --- fraction_cblock_total_count_2\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_15\n --- fraction_cblock_total_count_18\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_12\n --- 
fraction_cblock_total_count_3\n --- fraction_cblock_total_count_7\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_cblock_total_count_4\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n\nCluster 87:\n --- pieces_1_15\n --- fraction_cblock_chains_\n --- lcm_sum_\n --- sum_\n --- denominator_15\n --- __as3_type_fraction\n --- numerator_1\n --- fraction_cblock_counts_\n --- denominator_1\n --- left_90\n --- denominator_3\n --- 15\n --- numerator_2\n --- plain_image_groups_\n --- fraction\n --- bar1_\n --- pieces_1_3\n --- denominator_5\n --- unit1_\n --- 1_3_\n --- fraction_cblock_containment_\n --- right_780\n --- pieces_1\n --- pieces_1_12\n --- unit_\n --- pieces_1_5\n --- numerator_4\n --- total_1\n --- numerator_5\n --- numerator_14\n\nCluster 88:\n --- pieces_1_6\n --- piece_0_\n --- fraction_circle_groups_\n --- piece_1_\n --- chains_\n --- pieces_1_3\n --- lcm_sum_\n --- sum_\n --- fraction_circle_counts_\n --- right_120\n --- left_0\n --- __as3_type_fraction\n --- denominator_6\n --- fraction_circle_containment_\n --- x_512\n --- denominator_3\n --- numerator_2\n --- scale_1\n --- cover\n --- y_300\n --- numerator_1\n --- sixth\n --- pieces\n --- 1_3_2\n --- 1_1\n --- shown\n --- 1_6_4\n --- fractionthat\n --- fraction_circle_total_count_7\n --- pieces_1\n\nCluster 89:\n --- pieces_1_9\n --- circle1_\n --- fraction_circle_groups_\n --- chains_\n --- denominator_9\n --- fraction_circle_counts_\n --- lcm_sum_\n --- sum_\n --- ninth\n --- scale_1\n --- numerator_8\n --- object\n --- y_325\n --- x_300\n --- __as3_type_fraction\n --- 1_9_8\n --- brown\n --- dragging\n --- fraction_circle_containment_\n --- numerator_7\n --- circle\n --- pieces\n --- fraction_circle_total_count_9\n --- black\n --- right_0\n --- white\n --- 1_1\n --- left_320\n --- whites\n --- model\n\nCluster 90:\n --- fraction_circle_groups_\n --- brown\n --- fraction_circle_total_count_1\n --- say\n --- 
cover\n --- y_350\n --- piece\n --- 1_3_1\n --- x_300\n --- scale_1\n --- pieces_1_3\n --- fraction_circle_counts_\n --- reds\n --- pink\n --- circle1_\n --- pinks\n --- whites\n --- red\n --- white\n --- answer\n --- pieces_1_6\n --- lcm_sum_\n --- sum_\n --- 1_6_1\n --- x_250\n --- fraction_circle_total_count_2\n --- numerator_1\n --- __as3_type_fraction\n --- denominator_6\n --- equal\n\nCluster 91:\n --- tenths\n --- object\n --- grid\n --- model\n --- answer\n --- 10\n --- youranswer\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_13\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_4\n --- fraction_circle_total_count_5\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_circle_total_count_7\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n --- fraction_cblock_total_count_9\n --- fraction_cblock_total_count_8\n --- fraction_cblock_total_count_7\n --- fraction_cblock_total_count_6\n --- fraction_cblock_total_count_5\n --- fraction_cblock_total_count_4\n --- fraction_cblock_total_count_3\n\nCluster 92:\n --- plain_image_groups_\n --- url_assets_cms_wootmath_fractions_misc_objects_ladybug_alt\n --- url_assets_cms_wootmath_fractions_misc_objects_ant_alt\n --- swf\n --- ladybugs\n --- bugs\n --- total_2\n --- total_3\n --- form\n --- simplest\n --- whole_\n --- enter\n --- total_4\n --- num_1\n --- fraction_input_value_1_2\n --- den_2\n --- fraction\n --- answer\n --- den_3\n --- total_6\n --- fraction_input_value_1_3\n --- num_2\n --- den_4\n --- num_3\n --- total_1\n --- fraction_input_value_2_3\n --- fraction_input_value_1_4\n --- fraction_input_value_3_4\n --- equivalent\n --- total_9\n\nCluster 93:\n --- plain_image_groups_\n --- fraction_cblock_chains_\n 
--- pieces_1_8\n --- total_1\n --- swf\n --- lcm_sum_\n --- sum_\n --- figure\n --- far\n --- traveled\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_bug_trail\n --- use\n --- __as3_type_fraction\n --- url_assets_cms_wootmath_fractions_number_line_markers_start_marker\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_ladybug\n --- ladybug\n --- denominator_8\n --- left_96\n --- pieces\n --- fraction_cblock_counts_\n --- numerator_1\n --- fraction_cblock_total_count_1\n --- input_1\n --- denominator_1\n --- left_99\n --- answer\n --- numerator_3\n --- ant\n --- url_assets_cms_wootmath_fractions_number_line_objects_v2_ant\n --- numerator_5\n\nCluster 94:\n --- 11\n --- think\n --- came\n --- grid\n --- greater\n --- cut\n --- bigger\n --- piece\n --- radio_group_problem_\n --- birthday\n --- 10\n --- model\n --- half\n --- cake\n --- 1_8\n --- answer\n --- pan\n --- brownies\n --- pie\n --- piranhas\n --- pieces\n --- smaller\n --- hundredths\n --- 100\n --- radio_choice_b\n --- choice_b\n --- 1_9\n --- 1_10\n --- equal\n --- input_0\n\nCluster 95:\n --- 10\n --- arrange\n --- greatest\n --- boxes\n --- fractions\n --- drag\n --- fraction\n --- answer\n --- statement\n --- greater\n --- choose\n --- correct\n --- meter\n --- beetle\n --- start\n --- shade\n --- fraction_circle_total_count_15\n --- fraction_circle_total_count_14\n --- fraction_circle_total_count_12\n --- fraction_circle_total_count_16\n --- fraction_circle_total_count_2\n --- fraction_circle_total_count_3\n --- fraction_circle_total_count_13\n --- youranswer\n --- fraction_circle_total_count_11\n --- fraction_circle_total_count_10\n --- fraction_circle_total_count_1\n --- fraction_circle_groups_\n --- fraction_circle_counts_\n --- fraction_circle_containment_\n\nCluster 96:\n --- whole_\n --- den_12\n --- enter\n --- den_4\n --- 12\n --- fraction\n --- smallest\n --- num_3\n --- num_2\n --- fraction_input_value_3_4\n --- greatest\n --- answer\n --- 
fraction_input_value_2_4\n --- smaller\n --- fraction_input_value_1_12\n --- num_1\n --- num_10\n --- greater\n --- fractions\n --- num_4\n --- fraction_input_value_1_4\n --- 10\n --- num_6\n --- different\n --- num_7\n --- num_5\n --- den_8\n --- fraction_input_value_3_8\n --- shaded\n --- equal\n\nCluster 97:\n --- fraction_circle_groups_\n --- fraction_circle_total_count_1\n --- x_300\n --- scale_1\n --- fraction_circle_counts_\n --- circle\n --- yellow\n --- circle1_\n --- black\n --- say\n --- dark\n --- cover\n --- y_350\n --- y_300\n --- pieces_1\n --- 1_1\n --- fraction_circle_total_count_2\n --- pieces_1_2\n --- answer\n --- equals\n --- blue\n --- 1_2_1\n --- blues\n --- piece\n --- grays\n --- yellows\n --- equal\n --- input_a_1\n --- browns\n --- lcm_sum_\n\nCluster 98:\n --- den_15\n --- 15\n --- whole_\n --- smallest\n --- enter\n --- fraction\n --- smaller\n --- answer\n --- num_2\n --- 11\n --- greatest\n --- num_11\n --- num_1\n --- greater\n --- 12\n --- num_7\n --- num_4\n --- num_5\n --- num_10\n --- denominator\n --- numerator\n --- num_3\n --- num_8\n --- 10\n --- num_6\n --- num_9\n --- 14\n --- numbers\n --- shaded\n --- circle\n\nCluster 99:\n --- pieces_1_4\n --- fraction_circle_groups_\n --- chains_\n --- lcm_sum_\n --- sum_\n --- fraction_circle_counts_\n --- denominator_4\n --- __as3_type_fraction\n --- 1_\n --- unit_\n --- fraction_circle_containment_\n --- scale_1\n --- unit1_\n --- fraction\n --- circle1_\n --- pieces_1\n --- piece_0_\n --- unit2_\n --- scale_0\n --- y_300\n --- numerator_1\n --- 1_4_4\n --- circle\n --- 1_1\n --- fraction_circle_total_count_5\n --- x_300\n --- black\n --- numerator_4\n --- dark\n --- right_180\n\n"
],
[
"df3['cluster_100'] = km.labels_",
"_____no_output_____"
],
[
"df3['trait_1'] = df3['behavioral_traits'].apply(lambda x : x[0] if len(x) > 0 else 'None' )\ndf3['trait_2'] = df3['behavioral_traits'].apply(lambda x : x[1] if len(x) > 1 else 'None' ) ",
"_____no_output_____"
],
[
"df_trait_1 = df3.groupby(['cluster_100', 'trait_1']).size().unstack(fill_value=0)\ndf_trait_2 = df3.groupby(['cluster_100', 'trait_2']).size().unstack(fill_value=0)",
"_____no_output_____"
],
[
"df_cluster_100 = df3.groupby('cluster_100')",
"_____no_output_____"
],
[
"df_trait_1.index.rename('cluster_100', inplace=True)\ndf_trait_2.index.rename('cluster_100', inplace=True)\ndf_traits = pd.concat([df_trait_1, df_trait_2], axis=1)",
"_____no_output_____"
],
[
"df_traits = df_traits.drop('None', axis=1)",
"_____no_output_____"
],
[
"#df_traits_norm = (df_traits - df_traits.mean()) / (df_traits.max() - df_traits.min())\ndf_traits_norm = (df_traits / (df_traits.sum()) )",
"_____no_output_____"
],
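The normalization in the cell above divides each trait column by its column sum, so each column of `df_traits_norm` becomes the share of that trait's occurrences falling in each cluster. A minimal stand-alone sketch of the same idea (the small `df_traits` frame here is hypothetical, not the notebook's):

```python
import pandas as pd

# Hypothetical trait counts per cluster (stand-in for the notebook's df_traits)
df_traits = pd.DataFrame(
    {"trait_a": [2, 6, 2], "trait_b": [1, 1, 8]},
    index=pd.Index([0, 1, 2], name="cluster_100"),
)

# Column-wise normalization: divide each column by its sum,
# so every column of df_traits_norm sums to 1
df_traits_norm = df_traits / df_traits.sum()
print(df_traits_norm["trait_a"].tolist())  # [0.2, 0.6, 0.2]
```

Normalizing per column (rather than per row) makes the heatmap comparable across traits with very different total counts.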
[
"fig = plt.figure(figsize=(18.5, 16))\ncmap = sns.cubehelix_palette(light=.95, as_cmap=True)\nsns.heatmap(df_traits_norm, cmap=cmap, linewidths=.5)\n\n#sns.heatmap(df_traits_norm, cmap=\"YlGnBu\", linewidths=.5)",
"_____no_output_____"
]
],
[
[
"\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d089a46cf46c2aa7ef7fd2037aca422e643c0892 | 10,920 | ipynb | Jupyter Notebook | Code/3.Dimensionality.ipynb | vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality | 7a3882185757b9159c99b5998a41dad64b006696 | [
"MIT"
] | null | null | null | Code/3.Dimensionality.ipynb | vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality | 7a3882185757b9159c99b5998a41dad64b006696 | [
"MIT"
] | null | null | null | Code/3.Dimensionality.ipynb | vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality | 7a3882185757b9159c99b5998a41dad64b006696 | [
"MIT"
] | null | null | null | 30.165746 | 585 | 0.561355 | [
[
[
"## Analysis of stock prices using PCA / Notebook 3\n\nIn this notebook we will study the dimensionality of stock price sequences, and show that they lie between the 1D of smooth functions and 2D of rapidly varying functions.\n\nThe mathematician Benoit Mandelbrot and Richard Hudson wrote a book titled [The Misbehavior of Markets: A Fractal View of Financial Turbulence](https://www.amazon.com/gp/product/0465043577?ie=UTF8&tag=trivisonno-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0465043577). In this book they demonstrate that financial sequences have a fractal dimension that is higher than one. In other words, the changes in stock prices are more similar to a random walk than to a smooth differentiable curve.\n\nIn this notebook we will estimate the fractal dimension of sequences corresponding to the log of the price of a stock. We will do the same for some other, non-random sequences.\n\nWe will use the [Box Counting](https://en.wikipedia.org/wiki/Box_counting) method to estimate the dimension.",
"_____no_output_____"
],
[
"### Box Counting\nFor the sake of simplicity, let's start with a simple smooth curve corresponding to $sin(x)$.\nIntuitively speaking, the dimension of this curve should be 1. Let's see how we measure that using box-counting.\n\nThe idea is simple: we split the 2D plane into smaller and smaller rectangles and count the number of rectangles that touch the curve. The gridlines in the figure below partition the figure into $16 \\times 16 = 256$ rectangles. The yellow shading corresponds to the partition of the figure into $8 \\times 8$ rectangles. The green corresponds to the partition into $16\\times 16$ (which is the same as the grid). The blue and the red correspond to partitions into $32\\times32$ and $64 \\times 64$ respectively. You can see that as the boxes get smaller their number increases. \n\n\n\nThe dimension is defined by the relation between the size of the rectangles and the number of rectangles that touch the curve. More precisely, we say that the size of a rectangle in an $n \\times n$ partition is $\\epsilon=1/n$. We denote by $N(\\epsilon)$ the number of rectangles of size $\\epsilon$ that touch the curve. Then if $d$ is the dimension, the relationship between $N(\\epsilon)$ and $\\epsilon$ is \n$$\nN(\\epsilon) = \\frac{C}{\\epsilon^d}\n$$\nfor some constant $C$.\n\nTaking $\\log$s of both sides we get \n$$\n(1)\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\log N(\\epsilon) = \\log C + d \\log \\frac{1}{\\epsilon}\n$$\n\nWe can use this equation to estimate $d$ as follows: let $\\epsilon_2 \\ll \\epsilon_1$ be two sizes that are far apart (say $\\epsilon_1=1/4$ and $\\epsilon_2=1/1024$), and let $N(\\epsilon_1),N(\\epsilon_2)$ be the corresponding box counts. Then by taking the difference between Equation (1) for the two sizes we get the estimate\n$$\n d \\approx \\frac{\\log N(\\epsilon_1) - \\log N(\\epsilon_2)}{\\log \\epsilon_2- \\log \\epsilon_1}\n$$\n\nNote that this is an estimate: it depends on the particular values of $\\epsilon_1$ and $\\epsilon_2$. 
We can refer to it as the \"dimension\" if we get the same number for any choice of the two sizes (as well as other details such as the extent of the function).",
"_____no_output_____"
],
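The two-scale estimate above can be checked numerically. A minimal sketch, with hypothetical box counts chosen to mimic a 1-dimensional curve (where $N(\epsilon) \approx C/\epsilon$):

```python
import math

def dimension_estimate(eps1, n1, eps2, n2):
    # d = (log N(eps1) - log N(eps2)) / (log eps2 - log eps1)
    return (math.log(n1) - math.log(n2)) / (math.log(eps2) - math.log(eps1))

# For a smooth curve N(eps) ~ C/eps, e.g. N(1/4) = 8 and N(1/1024) = 2048
d = dimension_estimate(1 / 4, 8, 1 / 1024, 2048)
print(round(d, 6))  # 1.0 for these idealized counts
```

For a rough random-walk-like curve the counts would grow faster than $1/\epsilon$, pushing the estimate above 1.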
[
"Here are similar figures for the sequence \n\n\n\t\t",
"_____no_output_____"
]
],
[
[
"import findspark\nfindspark.init()\nfrom pyspark import SparkContext\n\n#sc.stop()\nsc = SparkContext(master=\"local[3]\")\n\nfrom pyspark.sql import *\nsqlContext = SQLContext(sc)\n\n%pylab inline\n\nimport numpy as np",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"df=sqlContext.read.csv('../Data/SP500.csv',header='true',inferSchema='true')\ndf.count()",
"_____no_output_____"
],
[
"columns=df.columns\ncol=[c for c in columns if '_P' in c]\ntickers=[a[:-2] for a in col]\ntickers[:10],len(tickers)",
"_____no_output_____"
],
[
"def get_seq(ticker):\n key=ticker+\"_P\"\n L=df.select(key).collect()\n L=[x[key] for x in L if not x[key] is None]\n return L",
"_____no_output_____"
]
],
[
[
"#### We generate graphs like the ones below for your analysis of dimensionality on the stocks \n",
"_____no_output_____"
]
],
[
[
"pickleFile=\"Tester/Dimensionality.pkl\"",
"_____no_output_____"
]
],
[
[
"## Finding Dimension\n\n\nWe find the dimension for a particular ticker using its sequence of data\n\n###### <span style=\"color:blue\">Sample Input:</span>\n```python\n \ndimension = Box_count([sequence of AAPL], 'AAPL')\n\n```\n###### <span style=\"color:magenta\">Sample Output:</span>\ndimension = 1.28",
"_____no_output_____"
]
],
[
[
"from scipy.optimize import curve_fit\nimport numpy as np\n\ndef f( x, A, Df ):\n    '''\n    User-defined function for scipy.optimize.curve_fit(),\n    which will find optimal values for A and Df.\n    '''\n    return Df * x + A\n\ndef count_boxes(PriceSequence, n):\n    \n    length = len(PriceSequence)\n    PriceSequence = [np.log(p) for p in PriceSequence] # box-count the log of the price\n    maxP = max(PriceSequence)\n    minP = min(PriceSequence)\n    full_x = np.linspace(0,length,n+1).tolist()\n    full_y = np.linspace(minP,maxP,n+1).tolist()\n    x_spacing = full_x[1]-full_x[0]\n    y_spacing = full_y[1]-full_y[0]\n    \n    counts = np.zeros((n,n))\n    boxpoints = n+1\n    \n    for i in range(length-1):\n        (x1,x2) = (i,i+1)\n        (y1,y2) = (PriceSequence[i],PriceSequence[i+1])\n        xPoints = np.linspace(x1,x2,boxpoints).tolist()\n        yPoints = np.linspace(y1,y2,boxpoints).tolist()\n        \n        for j in range(boxpoints):\n            # clamp indices so points on the top/right boundary stay inside the grid\n            xindex = min(int(xPoints[j]/x_spacing), n-1)\n            yindex = min(int((yPoints[j] - minP)/y_spacing), n-1)\n            counts[xindex][yindex] = 1\n    \n    return np.sum(counts)\n\ndef Box_count(LL,ticker):\n    r = np.array([ 2.0**i for i in range(0,10)]) # r = 1/epsilon\n    N = np.array([ count_boxes( LL, int(ri)) for ri in r ])\n    popt, pcov = curve_fit( f, np.log(r), np.log(N) )\n    Lacunarity, dimension = popt\n    return dimension",
"_____no_output_____"
]
]
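The same idea can be sketched independently of the stock data: sample $\sin(x)$ densely, count the grid cells the samples touch at several resolutions, and fit the log-log slope (here with `np.polyfit` as a simplified stand-in for the `curve_fit`-based `Box_count` above; counting sampled points approximates counting the curve only when sampling is dense relative to the finest grid). For a smooth curve the slope comes out close to 1:

```python
import numpy as np

def grid_count(y, n):
    # Count the cells of an n x n grid touched by the sampled curve
    x = np.linspace(0.0, 1.0, len(y))
    ix = np.minimum((x * n).astype(int), n - 1)
    iy = np.minimum(((y - y.min()) / (y.max() - y.min()) * n).astype(int), n - 1)
    return len(set(zip(ix.tolist(), iy.tolist())))

y = np.sin(np.linspace(0.0, 2.0 * np.pi, 50000))   # dense samples of a smooth curve
sizes = [2 ** k for k in range(2, 8)]              # n = 4 ... 128
counts = [grid_count(y, n) for n in sizes]
slope, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
print(round(slope, 2))  # close to 1 for a smooth curve
```

Running the same procedure on a log-price sequence should give a slope noticeably above 1, which is the claim the notebook sets out to test.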
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d089b9b0bb3571cfe6cbe30142f1246ece23b30f | 102,269 | ipynb | Jupyter Notebook | tutorials/AdvancedTutorial.ipynb | felixGer/PySDDR | a7680e7190185ba605df6ad85b4fdf19401473b3 | [
"MIT"
] | 14 | 2021-04-07T17:33:19.000Z | 2022-02-07T14:49:37.000Z | tutorials/AdvancedTutorial.ipynb | felixGer/PySDDR | a7680e7190185ba605df6ad85b4fdf19401473b3 | [
"MIT"
] | 3 | 2021-11-30T15:03:32.000Z | 2022-01-09T06:24:29.000Z | tutorials/AdvancedTutorial.ipynb | felixGer/PySDDR | a7680e7190185ba605df6ad85b4fdf19401473b3 | [
"MIT"
] | 7 | 2021-04-20T08:48:57.000Z | 2022-03-02T10:45:19.000Z | 248.225728 | 20,456 | 0.917033 | [
[
[
"# PySDDR: An Advanced Tutorial\n\nIn the beginner's guide only tabular data was used as input to the PySDDR framework. In this advanced tutorial we show the effects when combining structured and unstructured data. Currently, the framework only supports images as unstructured data.\n\nWe will use the MNIST dataset as a source for the unstructured data and generate additional tabular features corresponding to those. Our outcome in this tutorial is simulated based on linear and non-linear effects of tabular data and a linear effect of the number shown on the MNIST image. Our model is not provided with the (true) number, but instead has to learn the number effect from the image (together with the structured data effects):\n\\begin{equation*}\ny = \\sin(x_1) - 3x_2 + x_3^4 + 3\\cdot number + \\epsilon\n\\end{equation*}\nwhere $\\epsilon \\sim \\mathcal{N}(0, \\sigma^2)$ and $number$ is the number shown on the MNIST image.\n\nThe aim of training is for the model to be able to output a latent effect, representing the number depicted in the MNIST image.\n\nWe start by importing the sddr module and other required libraries.",
"_____no_output_____"
]
],
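The simulated outcome above can be sketched directly. A minimal illustration in numpy (the sample size, feature ranges and noise level here are made-up; the tutorial's actual data comes from `tab.csv`):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.uniform(-3, 3, n)
x2 = rng.uniform(0, 1, n)
x3 = rng.uniform(-1, 1, n)
number = rng.integers(0, 10, n)        # stands in for the digit on the MNIST image
eps = rng.normal(0, 0.1, n)            # Gaussian noise epsilon

# y = sin(x1) - 3*x2 + x3^4 + 3*number + eps
y = np.sin(x1) - 3 * x2 + x3 ** 4 + 3 * number + eps
print(y.shape)  # (1000,)
```

The `3*number` term dominates the outcome, which is why the model can recover the digit as a latent effect later in the tutorial.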
[
[
"# import the sddr module\nfrom sddr import Sddr\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n\n#set seeds for reproducibility\ntorch.manual_seed(1)\nnp.random.seed(1)",
"_____no_output_____"
]
],
[
[
"### User inputs \n\nFirst the user defines the data to be used. The data is loaded and if it does not already exist, a column needs to be added to the tabular data describing the unstructured data - structured data correspondence. In the example below we add a column where each item includes the name of the image to which the current row of tabular data corresponds.",
"_____no_output_____"
]
],
[
[
"data_path = '../data/mnist_data/tab.csv'\n\ndata = pd.read_csv(data_path,delimiter=',')\n\n# append a column for the numbers: each data point contains a file name of the corresponding image\nfor i in data.index:\n data.loc[i,'numbers'] = f'img_{i}.jpg'",
"_____no_output_____"
]
],
[
[
"Next the distribution, formulas and training parameters are defined. The size of each image is ```28x28```, so our neural network has a layer that flattens the input, followed by a linear layer of input size ```28x28``` and an output size of ```128```. Finally, this is followed by a ```ReLU``` activation.\n\nHere the unstructured data is not pre-loaded, as it would typically be too large to load in one step. Therefore the path to the directory in which it is stored is provided along with the data type (for now only 'images' is supported). The images are then loaded in batches using PyTorch's dataloader. Note that here again the key given in the ```unstructured_data``` dictionary must match the name it is given in the formula, in this case ```'numbers'```. Similarly, the keys of the ```deep_models_dict``` must also match the names in the formula, in this case ```'dnn'```.",
"_____no_output_____"
]
],
[
[
"# define distribution and the formula for the distributional parameter\ndistribution = 'Normal'\n\nformulas = {'loc': '~ -1 + spline(x1, bs=\"bs\", df=10) + x2 + dnn(numbers) + spline(x3, bs=\"bs\", df=10)',\n 'scale': '~1'\n }\n\n\n# define the deep neural networks' architectures and output shapes used in the above formula\ndeep_models_dict = {\n'dnn': {\n 'model': nn.Sequential(nn.Flatten(1, -1),\n nn.Linear(28*28,128),\n nn.ReLU()),\n 'output_shape': 128},\n}\n\n# define your training hyperparameters\ntrain_parameters = {\n 'batch_size': 8000,\n 'epochs': 1000,\n 'degrees_of_freedom': {'loc':9.6, 'scale':9.6},\n 'optimizer' : optim.Adam,\n 'val_split': 0.15,\n 'early_stop_epsilon': 0.001,\n 'dropout_rate': 0.01\n}\n\n\n# provide the location and datatype of the unstructured data\nunstructured_data = {\n 'numbers' : {\n 'path' : '../data/mnist_data/mnist_images',\n 'datatype' : 'image'\n }\n}\n\n# define output directory\noutput_dir = './outputs'\n",
"_____no_output_____"
]
],
[
[
"### Initialization\n\nThe sddr instance is initialized with the parameters given by the user in the previous step:",
"_____no_output_____"
]
],
[
[
"sddr = Sddr(output_dir=output_dir,\n distribution=distribution,\n formulas=formulas,\n deep_models_dict=deep_models_dict,\n train_parameters=train_parameters,\n )",
"Using device: cpu\n"
]
],
[
[
"### Training\n\nThe sddr network is trained with the data defined above and the loss curve is plotted.",
"_____no_output_____"
]
],
[
[
"sddr.train(structured_data=data,\n target=\"y_gen\",\n unstructured_data = unstructured_data,\n plot=True)",
"Beginning training ...\nTrain Epoch: 0 \t Training Loss: 129.044235\nTrain Epoch: 0 \t Validation Loss: 143.731430\nTrain Epoch: 100 \t Training Loss: 98.628090\nTrain Epoch: 100 \t Validation Loss: 118.442505\nTrain Epoch: 200 \t Training Loss: 72.697281\nTrain Epoch: 200 \t Validation Loss: 106.068893\nTrain Epoch: 300 \t Training Loss: 53.885902\nTrain Epoch: 300 \t Validation Loss: 97.472977\nTrain Epoch: 400 \t Training Loss: 40.945545\nTrain Epoch: 400 \t Validation Loss: 91.278023\nTrain Epoch: 500 \t Training Loss: 32.393982\nTrain Epoch: 500 \t Validation Loss: 85.700958\nTrain Epoch: 600 \t Training Loss: 26.009539\nTrain Epoch: 600 \t Validation Loss: 81.085602\nTrain Epoch: 700 \t Training Loss: 21.401140\nTrain Epoch: 700 \t Validation Loss: 76.584694\nTrain Epoch: 800 \t Training Loss: 18.019514\nTrain Epoch: 800 \t Validation Loss: 74.260246\nTrain Epoch: 900 \t Training Loss: 15.354483\nTrain Epoch: 900 \t Validation Loss: 71.126083\n"
]
],
[
[
"### Evaluation - Visualizing the partial effects\n\nIn this case the data is assumed to follow a normal distribution, in which case two distributional parameters, loc and scale, need to be estimated. Below we plot the partial effects of each smooth term.\n\nRemember the partial effects are computed by: partial effect = smooth_features * coefs (weights)\nIn other words the smoothing terms are multiplied with the weights of the Structured Head. We use the partial effects to interpret whether our model has learned correctly.",
"_____no_output_____"
]
],
[
[
"partial_effects_loc = sddr.eval('loc',plot=True)",
"_____no_output_____"
],
[
"partial_effects_scale = sddr.eval('scale',plot=True)",
"Nothing to plot. No (non-)linear partial effects specified for this parameter. (Deep partial effects are not plotted.)\n"
]
],
[
[
"As we can see, the distributional parameter loc has two partial effects, one sinusoidal and one quartic (matching the $x_3^4$ term of the simulation). The parameter scale expectedly has no partial effect since the formula only includes an intercept.\n\nNext we retrieve our ground truth data and compare it with the model's estimation",
"_____no_output_____"
]
],
[
[
"# compare prediction of neural network with ground truth\ndata_pred = data.loc[:,:]\nground_truth = data.loc[:,'y_gen']\n\n# predict returns partial effects and a distributional layer that gives statistical information about the prediction\ndistribution_layer, partial_effect = sddr.predict(data_pred,\n clipping=True, \n plot=False, \n unstructured_data = unstructured_data)\n# retrieve the mean and variance of the distributional layer\npredicted_mean = distribution_layer.loc[:,:].T\npredicted_variance = distribution_layer.scale[0]\n\n# and plot the result\nplt.scatter(ground_truth, predicted_mean)\nprint(f\"Predicted variance for first sample: {predicted_variance}\")",
"Predicted variance for first sample: tensor([1.3674])\n"
]
],
[
[
"The comparison shows that for most samples the predicted and true values are directly proportional.\n\nNext we want to check if the model learned the correct correspondence of images and numbers.",
"_____no_output_____"
]
],
[
[
"# we create a copy of our original structured data where we set all inputs but the images to be zero\ndata_pred_copy = data.copy()\ndata_pred_copy.loc[:,'x1'] = 0\ndata_pred_copy.loc[:,'x2'] = 0\ndata_pred_copy.loc[:,'x3'] = 0\n\n# and make a prediction using only the images\ndistribution_layer, partial_effect = sddr.predict(data_pred_copy,\n clipping=True, \n plot=False, \n unstructured_data = unstructured_data)\n# add the predicted mean value to our tabular data\ndata_pred_copy['predicted_number'] = distribution_layer.loc[:,:].numpy().flatten()\n\n# and compare the true number on the images with the predicted number\nax = sns.boxplot(x=\"y_true\", y=\"predicted_number\", data=data_pred_copy)\n\nax.set_xlabel(\"true number\");\nax.set_ylabel(\"predicted latent effect of number\");",
"_____no_output_____"
]
],
[
[
"Observing the boxplot figure we see that as the true values, i.e. numbers depicted on images, are increasing, so too are the medians of the predicted distributions. Therefore the partial effect of the neural network is directly correlated with the number depicted in the MNIST images, proving that our neural network, though simple, has learned from the unstructured data.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d089c3709cbe11f67b4fa07256cca9811c191196 | 17,383 | ipynb | Jupyter Notebook | 9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb | iamshivprakash/IBMDataScience | 8112a086eb44be2cc44460d406b67f9ddde9ea76 | [
"MIT"
] | null | null | null | 9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb | iamshivprakash/IBMDataScience | 8112a086eb44be2cc44460d406b67f9ddde9ea76 | [
"MIT"
] | null | null | null | 9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb | iamshivprakash/IBMDataScience | 8112a086eb44be2cc44460d406b67f9ddde9ea76 | [
"MIT"
] | null | null | null | 17,383 | 17,383 | 0.698844 | [
[
[
"<center>\n <img src=\"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/images/IDSNlogo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n\n# Simple Linear Regression\n\nEstimated time needed: **15** minutes\n\n## Objectives\n\nAfter completing this lab you will be able to:\n\n* Use scikit-learn to implement simple Linear Regression\n* Create a model, train it, test it and use the model\n",
"_____no_output_____"
],
[
"### Importing Needed packages\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport pylab as pl\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Downloading Data\n\nTo download the data, we will use !wget to download it from IBM Object Storage.\n",
"_____no_output_____"
]
],
[
[
"!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv",
"_____no_output_____"
]
],
[
[
"**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)\n",
"_____no_output_____"
],
[
"## Understanding the Data\n\n### `FuelConsumption.csv`:\n\nWe have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)\n\n* **MODELYEAR** e.g. 2014\n* **MAKE** e.g. Acura\n* **MODEL** e.g. ILX\n* **VEHICLE CLASS** e.g. SUV\n* **ENGINE SIZE** e.g. 4.7\n* **CYLINDERS** e.g 6\n* **TRANSMISSION** e.g. A6\n* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9\n* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9\n* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2\n* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0\n",
"_____no_output_____"
],
[
"## Reading the data in\n",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"FuelConsumption.csv\")\n\n# take a look at the dataset\ndf.head()\n\n",
"_____no_output_____"
]
],
[
[
"### Data Exploration\n\nLet's first have a descriptive exploration on our data.\n",
"_____no_output_____"
]
],
[
[
"# summarize the data\ndf.describe()",
"_____no_output_____"
]
],
[
[
"Let's select some features to explore more.\n",
"_____no_output_____"
]
],
[
[
"cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]\ncdf.head(9)",
"_____no_output_____"
]
],
[
[
"We can plot each of these features:\n",
"_____no_output_____"
]
],
[
[
"viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]\nviz.hist()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now, let's plot each of these features against the Emission, to see how linear their relationship is:\n",
"_____no_output_____"
]
],
[
[
"plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')\nplt.xlabel(\"FUELCONSUMPTION_COMB\")\nplt.ylabel(\"Emission\")\nplt.show()",
"_____no_output_____"
],
[
"plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')\nplt.xlabel(\"Engine size\")\nplt.ylabel(\"Emission\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Practice\n\nPlot **CYLINDERS** vs the Emission, to see how linear their relationship is:\n",
"_____no_output_____"
]
],
[
[
"# write your code here\n\n\n",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\nplt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')\nplt.xlabel(\"Cylinders\")\nplt.ylabel(\"Emission\")\nplt.show()\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"#### Creating train and test dataset\n\nTrain/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. Afterwards, you train with the training set and test with the testing set.\nThis will provide a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the dataset that has been used to train the model. Therefore, it gives us a better understanding of how well our model generalizes to new data.\n\nThis means that we know the outcome of each data point in the testing dataset, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly out-of-sample testing.\n\nLet's split our dataset into train and test sets. 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using the **np.random.rand()** function:\n",
"_____no_output_____"
]
],
[
[
"msk = np.random.rand(len(df)) < 0.8\ntrain = cdf[msk]\ntest = cdf[~msk]",
"_____no_output_____"
]
],
[
[
"### Simple Regression Model\n\nLinear Regression fits a linear model with coefficients B = (B1, ..., Bn) to minimize the 'residual sum of squares' between the actual value y in the dataset and the predicted value yhat of the linear approximation.\n",
"_____no_output_____"
],
[
"#### Train data distribution\n",
"_____no_output_____"
]
],
[
[
"plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')\nplt.xlabel(\"Engine size\")\nplt.ylabel(\"Emission\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Modeling\n\nUsing sklearn package to model data.\n",
"_____no_output_____"
]
],
[
[
"from sklearn import linear_model\nregr = linear_model.LinearRegression()\ntrain_x = np.asanyarray(train[['ENGINESIZE']])\ntrain_y = np.asanyarray(train[['CO2EMISSIONS']])\nregr.fit(train_x, train_y)\n# The coefficients\nprint ('Coefficients: ', regr.coef_)\nprint ('Intercept: ',regr.intercept_)",
"_____no_output_____"
]
],
[
[
"As mentioned before, **Coefficient** and **Intercept** in the simple linear regression, are the parameters of the fit line.\nGiven that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data.\nNotice that all of the data must be available to traverse and calculate the parameters.\n",
"_____no_output_____"
],
[
"#### Plot outputs\n",
"_____no_output_____"
],
[
"We can plot the fit line over the data:\n",
"_____no_output_____"
]
],
[
[
"plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')\nplt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')\nplt.xlabel(\"Engine size\")\nplt.ylabel(\"Emission\")",
"_____no_output_____"
]
],
[
[
"#### Evaluation\n\nWe compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.\n\nThere are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:\n\n* Mean Absolute Error (MAE): the mean of the absolute values of the errors. This is the easiest of the metrics to understand, since it is just the average error.\n\n* Mean Squared Error (MSE): the mean of the squared errors. It is more popular than Mean Absolute Error because the focus is geared more towards large errors: squaring magnifies large errors much more than small ones.\n\n* Root Mean Squared Error (RMSE): the square root of the MSE, expressed in the same units as the target.\n\n* R-squared is not an error, but rather a popular metric to measure the performance of your regression model. It represents how close the data points are to the fitted regression line. The higher the R-squared value, the better the model fits your data. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).\n",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import r2_score\n\ntest_x = np.asanyarray(test[['ENGINESIZE']])\ntest_y = np.asanyarray(test[['CO2EMISSIONS']])\ntest_y_ = regr.predict(test_x)\n\nprint(\"Mean absolute error: %.2f\" % np.mean(np.absolute(test_y_ - test_y)))\nprint(\"Residual sum of squares (MSE): %.2f\" % np.mean((test_y_ - test_y) ** 2))\nprint(\"R2-score: %.2f\" % r2_score(test_y , test_y_) )",
"_____no_output_____"
]
],
[
[
"## Exercise\n",
"_____no_output_____"
],
[
"Let's see what the evaluation metrics are if we train a regression model using the `FUELCONSUMPTION_COMB` feature.\n\nStart by selecting `FUELCONSUMPTION_COMB` as the train_x data from the `train` dataframe, then select `FUELCONSUMPTION_COMB` as the test_x data from the `test` dataframe.\n",
"_____no_output_____"
]
],
[
[
"train_x = #ADD CODE\n\ntest_x = #ADD CODE",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\ntrain_x = train[[\"FUELCONSUMPTION_COMB\"]]\n\ntest_x = test[[\"FUELCONSUMPTION_COMB\"]]\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"Now train a Linear Regression model using the `train_x` you created and the `train_y` created previously.\n",
"_____no_output_____"
]
],
[
[
"regr = linear_model.LinearRegression()\n\n#ADD CODE\n",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\nregr = linear_model.LinearRegression()\n\nregr.fit(train_x, train_y)\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"Find the predictions using the model's `predict` function and the `test_x` data\n",
"_____no_output_____"
]
],
[
[
"predictions = #ADD CODE",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\npredictions = regr.predict(test_x)\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"Finally, use the `predictions` and the `test_y` data to find the Mean Absolute Error value using the `np.absolute` and `np.mean` functions, as done previously.\n",
"_____no_output_____"
]
],
[
[
"#ADD CODE\n",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\nprint(\"Mean Absolute Error: %.2f\" % np.mean(np.absolute(predictions - test_y)))\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"We can see that the MAE is much worse than it is when we train using `ENGINESIZE`.\n",
"_____no_output_____"
],
[
"<h2>Want to learn more?</h2>\n\nIBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href=\"https://www.ibm.com/analytics/spss-statistics-software?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01\">SPSS Modeler</a>\n\nAlso, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href=\"https://www.ibm.com/cloud/watson-studio?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01\">Watson Studio</a>\n",
"_____no_output_____"
],
[
"### Thank you for completing this lab!\n\n## Author\n\nSaeed Aghabozorgi\n\n### Other Contributors\n\n<a href=\"https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01\" target=\"_blank\">Joseph Santarcangelo</a>\n\nAzim Hirjani\n\n## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ------------- | ---------------------------------- |\n| 2020-11-03 | 2.1 | Lakshmi Holla | Changed URL of the csv |\n| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |\n| | | | |\n| | | | |\n\n## <h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. <h3/>\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d089c6b4a1be2a31fc03b69790ff57ceee714d3e | 63,445 | ipynb | Jupyter Notebook | notebooks/MarchApril2016_TutorialSession/Notebook - March 18 - Part 1.ipynb | ESO-python/ESOPythonTutorials | 98f4ada2ae0d8b09adf04b3cf5ee3645a9a02e56 | [
"BSD-3-Clause"
] | 5 | 2015-09-05T11:15:55.000Z | 2022-01-29T19:20:45.000Z | notebooks/MarchApril2016_TutorialSession/Notebook - March 18 - Part 1.ipynb | ESO-python/ESOPythonTutorials | 98f4ada2ae0d8b09adf04b3cf5ee3645a9a02e56 | [
"BSD-3-Clause"
] | 2 | 2015-02-04T17:20:33.000Z | 2016-01-25T16:03:46.000Z | notebooks/MarchApril2016_TutorialSession/Notebook - March 18 - Part 1.ipynb | ESO-python/ESOPythonTutorials | 98f4ada2ae0d8b09adf04b3cf5ee3645a9a02e56 | [
"BSD-3-Clause"
] | 7 | 2015-05-29T17:09:22.000Z | 2020-10-17T05:18:27.000Z | 113.497317 | 16,252 | 0.862716 | [
[
[
"# March 18 Notes #",
"_____no_output_____"
],
[
"# Fitting data to models #\n\n1. Build a model\n2. Create a \"fitness function\", i.e. something that returns a scalar \"distance\" between the model and the data\n3. Apply an \"optimizer\" to get the best-fit parameters",
"_____no_output_____"
]
],
[
[
"from astropy import units as u",
"_____no_output_____"
],
[
"def gaussian_model(xarr, amplitude, offset, width):\n amplitude = u.Quantity(amplitude, u.K)\n offset = u.Quantity(offset, u.km/u.s)\n width = u.Quantity(width, u.km/u.s)\n xarr = u.Quantity(xarr, u.km/u.s)\n \n return amplitude * np.exp(-(xarr-offset)**2/(2.*width**2))",
"_____no_output_____"
],
[
"x = 5\nu.Quantity(x, u.km/u.s)",
"_____no_output_____"
],
[
"x = 5 * u.m/u.s\nu.Quantity(x, u.km/u.s)",
"_____no_output_____"
],
[
"xarr = np.linspace(-5,5,50) * u.km/u.s",
"_____no_output_____"
],
[
"gaussian_model(xarr, 1*u.K, 0.5*u.km/u.s, 2000*u.m/u.s)",
"_____no_output_____"
],
[
"%matplotlib inline\nimport pylab as pl\npl.plot(xarr, gaussian_model(xarr, 1, 0.5, 2))",
"_____no_output_____"
],
[
"from specutils.io import fits\nspec = fits.read_fits_spectrum1d('gbt_1d.fits')",
"_____no_output_____"
],
[
"pl.plot(spec.velocity, spec.flux, 'k-')",
"_____no_output_____"
],
[
"model = gaussian_model(spec.velocity, amplitude=5*u.K, offset=5*u.km/u.s, width=5*u.km/u.s)",
"_____no_output_____"
],
[
"pl.plot(spec.velocity, spec.flux, 'k-')\npl.plot(spec.velocity, model, 'b-')",
"_____no_output_____"
],
[
"spec.flux * u.K",
"_____no_output_____"
],
[
"def cost_function(params, data_range=None):\n if data_range is not None:\n data = spec.flux[data_range]\n else:\n data = spec.flux\n return (((data * u.K) - gaussian_model(spec.velocity, *params))**2).sum().value",
"_____no_output_____"
],
[
"params = (1,2,3)\ndef f(a,b,c):\n print(\"a={0}, b={1}, c={2}\".format(a,b,c))\nf(1,2,3)\nf(*params)",
"a=1, b=2, c=3\na=1, b=2, c=3\n"
],
[
"cost_function((5*u.K, 5*u.km/u.s, 5*u.km/u.s))",
"_____no_output_____"
],
[
"from scipy.optimize import minimize",
"_____no_output_____"
],
[
"result = minimize(cost_function, (5, 5, 5), args=(slice(100,200),))\nresult",
"_____no_output_____"
],
[
"(amplitude, offset, width) = result.x",
"_____no_output_____"
],
[
"best_fit_model = gaussian_model(spec.velocity, *result.x)",
"_____no_output_____"
],
[
"pl.plot(spec.velocity, spec.flux, 'k-')\npl.plot(spec.velocity, best_fit_model, 'r-')\npl.xlim(-30, 30)",
"_____no_output_____"
]
],
[
[
"# Exercise #\n\n1. Get a better fit to the data (create a better model & fit it)\n\n - try using different optimizers in scipy.optimize",
"_____no_output_____"
]
],
[
[
"arr = np.arange(100)",
"_____no_output_____"
],
[
"arr[10:50]",
"_____no_output_____"
],
[
"arr[slice(10,50)]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
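The lab row above fits a simple linear regression with sklearn and scores it with MAE, MSE, and R². As a library-free sanity check of the same arithmetic, the single-feature closed-form least-squares solution can be computed in plain Python — the numbers below are illustrative engine-size/emission pairs, not rows from the `FuelConsumption.csv` used in the lab:

```python
# Closed-form simple linear regression, y = b0 + b1*x, in plain Python.
# Illustrative engine-size (L) vs CO2-emission pairs, not the lab's CSV rows.
x = [2.0, 2.4, 1.5, 3.5, 3.5, 3.5, 3.5, 3.7]
y = [196, 221, 136, 255, 244, 230, 232, 255]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope = cov(x, y) / var(x); the intercept follows from the means.
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b1 = sxy / sxx
b0 = mean_y - b1 * mean_x

pred = [b0 + b1 * xi for xi in x]

# The three metrics the notebook evaluates with.
mae = sum(abs(p - yi) for p, yi in zip(pred, y)) / n
mse = sum((p - yi) ** 2 for p, yi in zip(pred, y)) / n
ss_res = sum((yi - p) ** 2 for yi, p in zip(y, pred))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot

print(round(b1, 2), round(b0, 2), round(r2, 3))
```

On real data, sklearn's `LinearRegression` and `r2_score` should agree with these hand-computed values up to floating-point error.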
d089cc35d6c0790021770faf0426250036f76337 | 11,633 | ipynb | Jupyter Notebook | notebooks/figures/chapter16_figures.ipynb | kzymgch/pyprobml | 934e5c5ccae0a468a2d90c49cd110915c3b1d299 | [
"MIT"
] | 2 | 2021-02-26T04:36:10.000Z | 2021-02-26T04:36:24.000Z | notebooks/figures/chapter16_figures.ipynb | kzymgch/pyprobml | 934e5c5ccae0a468a2d90c49cd110915c3b1d299 | [
"MIT"
] | null | null | null | notebooks/figures/chapter16_figures.ipynb | kzymgch/pyprobml | 934e5c5ccae0a468a2d90c49cd110915c3b1d299 | [
"MIT"
] | null | null | null | 30.059432 | 512 | 0.577237 | [
[
[
"# Copyright 2021 Google LLC\n# Use of this source code is governed by an MIT-style\n# license that can be found in the LICENSE file or at\n# https://opensource.org/licenses/MIT.\n\n# Author(s): Kevin P. Murphy ([email protected]) and Mahmoud Soliman ([email protected])",
"_____no_output_____"
]
],
[
[
"<a href=\"https://opensource.org/licenses/MIT\" target=\"_parent\"><img src=\"https://img.shields.io/github/license/probml/pyprobml\"/></a>",
"_____no_output_____"
],
[
"<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/figures//chapter16_figures.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Cloning the pyprobml repo",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/probml/pyprobml \n%cd pyprobml/scripts",
"_____no_output_____"
]
],
[
[
"# Installing required software (This may take few minutes)",
"_____no_output_____"
]
],
[
[
"!apt-get install octave -qq > /dev/null\n!apt-get install liboctave-dev -qq > /dev/null",
"_____no_output_____"
],
[
"%%capture\n%load_ext autoreload \n%autoreload 2\nDISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'\nfrom google.colab import files\n\ndef interactive_script(script, i=True):\n if i:\n s = open(script).read()\n if not s.split('\\n', 1)[0]==\"## \"+DISCLAIMER:\n open(script, 'w').write(\n f'## {DISCLAIMER}\\n' + '#' * (len(DISCLAIMER) + 3) + '\\n\\n' + s)\n files.view(script)\n %run $script\n else:\n %run $script\n\ndef show_image(img_path):\n from google.colab.patches import cv2_imshow\n import cv2\n img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)\n img=cv2.resize(img,(600,600))\n cv2_imshow(img)",
"_____no_output_____"
]
],
[
[
"## Figure 16.1:<a name='16.1'></a> <a name='fig:knn'></a> ",
"_____no_output_____"
],
[
"\n (a) Illustration of a $K$-nearest neighbors classifier in 2d for $K=5$. The nearest neighbors of test point $\\mathbf x $ have labels $\\ 1, 1, 1, 0, 0\\ $, so we predict $p(y=1|\\mathbf x , \\mathcal D ) = 3/5$. (b) Illustration of the Voronoi tesselation induced by 1-NN. Adapted from Figure 4.13 of <a href='#Duda01'>[DHS01]</a> . \nFigure(s) generated by [knn_voronoi_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_voronoi_plot.py) ",
"_____no_output_____"
]
],
[
[
"interactive_script(\"knn_voronoi_plot.py\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.2:<a name='16.2'></a> <a name='knnThreeClass'></a> ",
"_____no_output_____"
],
[
"\n Decision boundaries induced by a KNN classifier. (a) $K=1$. (b) $K=2$. (c) $K=5$. (d) Train and test error vs $K$. \nFigure(s) generated by [knn_classify_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_classify_demo.py) ",
"_____no_output_____"
]
],
[
[
"interactive_script(\"knn_classify_demo.py\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.3:<a name='16.3'></a> <a name='curse'></a> ",
"_____no_output_____"
],
[
"\n Illustration of the curse of dimensionality. (a) We embed a small cube of side $s$ inside a larger unit cube. (b) We plot the edge length of a cube needed to cover a given volume of the unit cube as a function of the number of dimensions. Adapted from Figure 2.6 from <a href='#HastieBook'>[HTF09]</a> . \nFigure(s) generated by [curse_dimensionality.py](https://github.com/probml/pyprobml/blob/master/scripts/curse_dimensionality.py) ",
"_____no_output_____"
]
],
[
[
"interactive_script(\"curse_dimensionality.py\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.4:<a name='16.4'></a> <a name='fig:LCA'></a> ",
"_____no_output_____"
],
[
"\n Illustration of latent coincidence analysis (LCA) as a directed graphical model. The inputs $\\mathbf x , \\mathbf x ' \\in \\mathbb R ^D$ are mapped into Gaussian latent variables $\\mathbf z , \\mathbf z ' \\in \\mathbb R ^L$ via a linear mapping $\\mathbf W $. If the two latent points coincide (within length scale $\\kappa $) then we set the similarity label to $y=1$, otherwise we set it to $y=0$. From Figure 1 of <a href='#Der2012'>[ML12]</a> . Used with kind permission of Lawrence Saul. ",
"_____no_output_____"
]
],
[
[
"show_image(\"/content/pyprobml/notebooks/figures/images/LCA-PGM.png\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.5:<a name='16.5'></a> <a name='fig:tripletNet'></a> ",
"_____no_output_____"
],
[
"\n Networks for deep metric learning. (a) Siamese network. (b) Triplet network. From Figure 5 of <a href='#Kaya2019'>[MH19]</a> . Used with kind permission of Mahmut Kaya. . ",
"_____no_output_____"
]
],
[
[
"show_image(\"/content/pyprobml/notebooks/figures/images/siameseNet.png\")",
"_____no_output_____"
],
[
"show_image(\"/content/pyprobml/notebooks/figures/images/tripletNet.png\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.6:<a name='16.6'></a> <a name='fig:tripletBound'></a> ",
"_____no_output_____"
],
[
"\n Speeding up triplet loss minimization. (a) Illustration of hard vs easy negatives. Here $a$ is the anchor point, $p$ is a positive point, and $n_i$ are negative points. Adapted from Figure 4 of <a href='#Kaya2019'>[MH19]</a> . (b) Standard triplet loss would take $8 \\times 3 \\times 4 = 96$ calculations, whereas using a proxy loss (with one proxy per class) takes $8 \\times 2 = 16$ calculations. From Figure 1 of <a href='#Do2019cvpr'>[Tha+19]</a> . Used with kind permission of Gustavo Cerneiro. ",
"_____no_output_____"
]
],
[
[
"show_image(\"/content/pyprobml/notebooks/figures/images/hard-negative-mining.png\")",
"_____no_output_____"
],
[
"show_image(\"/content/pyprobml/notebooks/figures/images/tripletBound.png\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.7:<a name='16.7'></a> <a name='fig:SEC'></a> ",
"_____no_output_____"
],
[
"\n Adding spherical embedding constraint to a deep metric learning method. Used with kind permission of Dingyi Zhang. ",
"_____no_output_____"
]
],
[
[
"show_image(\"/content/pyprobml/notebooks/figures/images/SEC.png\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.8:<a name='16.8'></a> <a name='smoothingKernels'></a> ",
"_____no_output_____"
],
[
"\n A comparison of some popular normalized kernels. \nFigure(s) generated by [smoothingKernelPlot.m](https://github.com/probml/pmtk3/blob/master/demos/smoothingKernelPlot.m) ",
"_____no_output_____"
]
],
[
[
"!octave -W smoothingKernelPlot.m >> _",
"_____no_output_____"
]
],
[
[
"## Figure 16.9:<a name='16.9'></a> <a name='parzen'></a> ",
"_____no_output_____"
],
[
"\n A nonparametric (Parzen) density estimator in 1d estimated from 6 data points, denoted by x. Top row: uniform kernel. Bottom row: Gaussian kernel. Left column: bandwidth parameter $h=1$. Right column: bandwidth parameter $h=2$. Adapted from http://en.wikipedia.org/wiki/Kernel_density_estimation . \nFigure(s) generated by [Kernel_density_estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation) [parzen_window_demo2.py](https://github.com/probml/pyprobml/blob/master/scripts/parzen_window_demo2.py) ",
"_____no_output_____"
]
],
[
[
"interactive_script(\"parzen_window_demo2.py\")",
"_____no_output_____"
]
],
[
[
"## Figure 16.10:<a name='16.10'></a> <a name='kernelRegression'></a> ",
"_____no_output_____"
],
[
"\n An example of kernel regression in 1d using a Gaussian kernel. \nFigure(s) generated by [kernelRegressionDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kernelRegressionDemo.m) ",
"_____no_output_____"
]
],
[
[
"!octave -W kernelRegressionDemo.m >> _",
"_____no_output_____"
]
],
[
[
"## References:\n <a name='Duda01'>[DHS01]</a> R. O. Duda, P. E. Hart and D. G. Stork. \"Pattern Classification\". (2001). \n\n<a name='HastieBook'>[HTF09]</a> T. Hastie, R. Tibshirani and J. Friedman. \"The Elements of Statistical Learning\". (2009). \n\n<a name='Kaya2019'>[MH19]</a> K. Mahmut and B. HasanSakir. \"Deep Metric Learning: A Survey\". In: Symmetry (2019). \n\n<a name='Der2012'>[ML12]</a> D. Matthew and S. LawrenceK. \"Latent Coincidence Analysis: A Hidden Variable Model forDistance Metric Learning\". (2012). \n\n<a name='Do2019cvpr'>[Tha+19]</a> D. Thanh-Toan, T. Toan, R. Ian, K. Vijay, H. Tuan and C. Gustavo. \"A Theoretically Sound Upper Bound on the Triplet Loss forImproving the Efficiency of Deep Distance Metric Learning\". (2019). \n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
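Figure 16.1 in the row above describes the K-nearest-neighbors vote: the five neighbors of the test point have labels {1, 1, 1, 0, 0}, so the predicted probability of class 1 is 3/5. A minimal pure-Python sketch of that voting rule, on a hypothetical toy dataset rather than the book's scripts:

```python
# Minimal K-nearest-neighbors majority vote on toy 2-d data.
def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs; query: an (x, y) point.
    dist2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda pl: dist2(pl[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority label among the k

train = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((5, 5), 1), ((6, 5), 1), ((5, 6), 1)]
print(knn_predict(train, (0.5, 0.5), k=3))  # query near the first cluster
```

As the book notes, this rule degrades in high dimensions (the curse of dimensionality), which is what motivates the learned metrics in the later figures.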
d089d17c834f72b1f0e7ccfef9a5684da1e4288e | 11,333 | ipynb | Jupyter Notebook | Assignment 1/SourceCode/Web Scraping Yelp.ipynb | nicholasneo78/NLP-assignment | 030b0a622c651d85eac7b1f0fb6ab0643aef1281 | [
"MIT"
] | null | null | null | Assignment 1/SourceCode/Web Scraping Yelp.ipynb | nicholasneo78/NLP-assignment | 030b0a622c651d85eac7b1f0fb6ab0643aef1281 | [
"MIT"
] | null | null | null | Assignment 1/SourceCode/Web Scraping Yelp.ipynb | nicholasneo78/NLP-assignment | 030b0a622c651d85eac7b1f0fb6ab0643aef1281 | [
"MIT"
] | null | null | null | 73.116129 | 7,621 | 0.683755 | [
[
[
"import bs4 as bs\nimport re\nimport urllib.request as url",
"_____no_output_____"
],
[
"reviewDict = []",
"_____no_output_____"
],
[
"#Save in a file\nfilename = \"yelp_Clinton_Street_Baking Company_&_Restaurant_scrapped.csv\"\n\nf = open(filename, \"w\")",
"_____no_output_____"
],
[
"#reviewNo = [0,20,40,60]\nreviewNo = str(60)",
"_____no_output_____"
],
[
"source = url.urlopen('https://www.yelp.com/biz/clinton-street-baking-company-and-restaurant-singapore?osq=Waffles&start='+reviewNo+'&sort_by=rating_desc')",
"_____no_output_____"
],
[
"page_soup = bs.BeautifulSoup(source, 'html.parser')",
"_____no_output_____"
],
[
"mains = page_soup.find_all(\"div\", {\"class\": \"lemon--div__373c0__1mboc arrange-unit__373c0__o3tjT arrange-unit-grid-column--8__373c0__2dUx_ border-color--default__373c0__3-ifU\"})",
"_____no_output_____"
],
[
"#Set loop over 1 attribute\ncount = 0\nfor main in mains:\n print()\n print(\"====Review\",count,\"====\")\n reviews = \"\"\n try:\n try:\n ratings = main.find(\"span\", {\"class\": \"lemon--span__373c0__3997G display--inline__373c0__3JqBP border-color--default__373c0__3-ifU\"}).div.get('aria-label')\n print(\"Rating:\",ratings)\n except:\n print(\"--Cannot find ratings--\")\n print()\n #Code to clean for angular brackets < >\n cleaned_item = re.sub('\\<.*?(\\S*)?\\>','',str(main))\n #Code to clean for angular brackets {}\n cleaned_item = re.sub('\\{.*?(\\S*)?\\}','',str(cleaned_item))\n #Code to replace , with fullstop\n cleaned_item = re.sub(',','.',str(cleaned_item))\n print(cleaned_item)\n reviews = cleaned_item\n reviewDict.append([ratings[0],reviews])\n except:\n print(\"Error\")\n \n count+=1\n \n",
"\n====Review 0 ====\n--Cannot find ratings--\n\n1 star ratingEek! Methinks not.2 star ratingMeh. I've experienced better.3 star rating4 star ratingYay! I'm a fan.5 star ratingWoohoo! As good as it gets!Start your review of Clinton Street Baking Company & Restaurant\n\n====Review 1 ====\nRating: 3 star rating\n\n.css-1qkiuob.css-1qkiuob svg 1 photo.css-z6383k.css-z6383k svg 1 check-inMy first time here. Ever. Didn't know about the New York branch so there is no basis for comparison. I've heard lots of excited rumblings of course, and friends who used to live in New York were thrilled about its opening. Fair enough. From all accounts, I hear that this place largely works. The food is comparable and all. I'll have to take their word for it. I only came here for some cake and I got the upside down pineapple cake. I don't know why but seeing a badly sliced and sloppily presented cake annoyed me. It shows a lack of care in the attentive-level of staff.When I went to pay, I noticed that other cakes (of other flavors) were similarly badly sliced. Woah. I really did think Clinton St Bakery was better than this. Taste-wise, the cake was fine. Not great, not terrible. The staff was sufficiently polite. But I think the devil's in the details and they need to teach their staff how to slice cakes properly, lift the slices and place them on plates..css-1c2abjj.css-1c2abjj svgUseful Funny Cool \n\n====Review 2 ====\nRating: 3 star rating\n\n 2 photosDefinitely order the chicken and waffles. Staff recommended to order onion rings as sides. Do not order onion rings. Completely disappointing. We came on a Monday afternoon, not too busy just some tables of ladies of leisure... but be aware your last order is at 5pm.\n\n====Review 3 ====\nRating: 3 star rating\n\nThe Good, The Bad, and The Bourbon Salted Caramel MilkshakeI preface this by saying I absolutely love Clinton Street Baking Co. in New York. 
The Singapore shop matches the New York venue in some respects (main courses and drinks) and fails in other areas (pies and service).I'll start with the good. For a shop known for its baked goods and pancakes, the drinks were fantastic. The bourbon salted caramel milkshake was the best thing we had and the bloody mary, which is so rarely made well, was also delicious. The main courses we tried (pancakes and huevos rancheros) were every bit as good as the NY shop.Unfortunately, there were several areas where Clinton Street really struggled. First, the service was horrible. After we were seated, not one staff member came to our table without being asked. We had to ask for drinks, for menus, to order, to get a dessert menu, to get a check, and to have our check taken and processed. When there were quality issues with food (see below), we did not get an apology and the waiter seemed annoyed to have to deal with the issue.Second, when we arrived for brunch, two of the three pies of the day were unavailable. No explanation was give nor was any apology forthcoming. We tried the only special available (strawberry rhubarb pie) and the lime meringue pie. Both were disappointing. The crumble on the strawberry rhubarb pie was like a group of small, indestructible rocks. We could not break up the 10cm pieces with a fork -- and we tried! We were not provided a knife with dessert. The lime meringue pie tasted good but there was so little lime custard except at the very edge of the pie that it tasted almost exclusively of meringue.When I pointed these out to our server he simply asked if we would like something different. I pointed out that there was very little else available as two of the pies of the day were not available and he stood in silence. 
He did not seem to have any idea how the pies should be made or the quality that is normally associated with Clinton Street.Useful \n\n====Review 4 ====\nRating: 2 star rating\n\nI came here because the hotel breakfast at the Intercontinental was $S30 a pop, and I thought I could do better -- or at least the same for less. So I decided to go to Killiney Kopitiam but my Ang Moh instincts took over while passing the lower-rated Clinton Street Baking Co. They had muffins on display and I am partial to Western breakfasts. So I was sucked in. The first thing I saw from the menu was it was not going to be much less expensive. So maybe it'll be great? Same for more?I ordered what I thought was their signature omelette because it had their name on it. But it only comes with two add-in ingredients (besides the eggs, of course). And apparently nothing but those two plus the eggs because when the omelette arrived I was asked if I wanted salt, pepper, or catsup. The first sign of cooking mediocrity is the need to season it with any of those things after cooking, especially a $20 omelette. If I'd known I would have chosen bacon to go in there. Ugh. It was so bland. And dry. I regretted not asking for catsup. I regretted more not having salsa, which is the best cure for bland eggs.The hash browns, though, were remarkable. If it was a two-egg omelette instead of a three-egger I was pushing through I would have savored them more. My cappuccino was also well done. Final tally, $S33!At the hotel I could have bottomless juice, cappuccinos, an array of Japanese, Chinese, Indian, and Western breakfast items. Smoked salmon with capers. Fresh-made waffles. Croissants and danishes. Fresh fruit. I got less for more. I'm kind of amazed they didn't have oatmeal (or porridge) on the menu. 
My punishment for not going to the kopitiam.\n\n====Review 5 ====\nRating: 2 star rating\n\nThat moment when you're in an entirely different country and you see they have your favorite NYC breakfast spot is in Singapore!!!! BUT it's no where near as good and your charged double the price. Very disappointing. :(\n\n====Review 6 ====\nRating: 2 star rating\n\nBreakfast was mediocre. Nothing crazy or wow about. Overpriced as well. Would I recommend? Sure, if there's nothing else on the block to eat. Otherwise, it's not worth a visit.\n\n====Review 7 ====\nRating: 1 star rating\n\nEverything should be half the price. For the same price you can get three times the quality at P.S. Cafe. Dishes with eggs were underseasoned and in lack of flavor. The 20 dollar omelet has an egg to stuffing ratio of 5:1. Not coming back. Coffee was good.\n\n====Review 8 ====\nRating: 1 star rating\n\nWe went here for breakfast and we're not really impressed. They got one of our orders wrong, completely forgot another and had zero apologies for it all. Food is really mediocre for the price. All the egg dishes were well prepared but everything else on the plate was unappetizing. The biscuits were dry, the ham was leathery, the bread was stale. Would not recommend at all.\n\n====Review 9 ====\nRating: 1 star rating\n\n 1 check-inThis was easily the worst experience we had anywhere on our entire trip.Left waiting for 20 minutes to place our order while the table next to us, seated after we arrived, placed and received their order. We were generally ignored by the staff--and at the end we paid 10% extra for their \"service.\"Do yourself a favor and go elsewhere.Useful \n\n====Review 10 ====\nRating: 1 star rating\n\n 1 check-inI had spinach egg for breakfast. It tasted absolutely bland, tasteless. The portion of the orange juice is way too little. Not worth the price .. It's not the breakfast to die as compared in states. Lastly, customer service was bad. 
Coffee is good tho.Useful \n\n====Review 11 ====\n--Cannot find ratings--\n\nCopyright © 2004–2020 Yelp Inc. Yelp, , and related marks are registered trademarks of Yelp.\n"
],
[
"for review in reviewDict:\n f.write(\"\\n\" + review[0] + \",\" + review[1] )\nf.close()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d089d59b3fd1b1512cb98fa8d19fb284e09bd88d | 8,026 | ipynb | Jupyter Notebook | assets/python/Ex26[BikeRoute]_s.ipynb | ytyimin/msba | e60719568b45756f106d7604b4f2408bb929e0d6 | [
"MIT"
] | null | null | null | assets/python/Ex26[BikeRoute]_s.ipynb | ytyimin/msba | e60719568b45756f106d7604b4f2408bb929e0d6 | [
"MIT"
] | null | null | null | assets/python/Ex26[BikeRoute]_s.ipynb | ytyimin/msba | e60719568b45756f106d7604b4f2408bb929e0d6 | [
"MIT"
] | null | null | null | 27.114865 | 163 | 0.488537 | [
[
[
"# import the optimize solver Gurobi\nimport gurobipy as gp\nfrom gurobipy import * ",
"_____no_output_____"
],
[
"m = Model() # Import and create the model",
"Using license file C:\\Users\\chaoz\\gurobi_licence\\gurobi.lic\nAcademic license - for non-commercial use only\n"
],
[
"# Set the input Parameter: \navg_speed = 5 # The average speed at which he rides is 5 ft/sec \nstop_time = 3 # Stop sign takes 3 sec.\ntraffic_light_time = 45 # Traffic red light takes 45 sec.",
"_____no_output_____"
],
[
"# Set each node\nnodes = ['12Fifty5', 'Orange St Jn', 'University TC', 'Lemon St L', 'Lemon St Jn', 'HH Junction', 'DB Junction', 'ASU CP', 'SDFC Side', 'MU Side', 'MCRD']\n\n# Set required net outflow\n\nnet_flow = {}\nfor i in range(len(nodes)):\n if i == 0:\n net_flow[nodes[i]] = 1\n elif i == len(nodes) - 1:\n net_flow[nodes[i]] = -1\n else:\n net_flow[nodes[i]] = 0\n",
"_____no_output_____"
],
[
"# Set arcs, distance from orign to destination, stop sign and traffic light\n\narcs, distance, stop_sign, traffic_light = gp.multidict({\n ('12Fifty5', 'Orange St Jn'): [300, 1, 0 ],\n ('12Fifty5', 'University TC'): [4000, 1, 1],\n ('Orange St Jn', 'Lemon St L'): [465, 1, 0],\n ('Orange St Jn', 'Lemon St Jn'): [500, 0, 0],\n ('University TC', 'DB Junction'): [1500, 0, 1],\n ('Lemon St L', 'Lemon St Jn'): [10, 0, 0],\n ('Lemon St Jn', 'HH Junction'): [100, 0, 1],\n ('HH Junction', 'DB Junction'): [500, 0, 1],\n ('HH Junction', 'University TC'): [1700, 0, 1],\n ('DB Junction', 'ASU CP'): [300, 1, 0],\n ('ASU CP', 'SDFC Side'): [500, 1, 0],\n ('ASU CP', 'MU Side'): [700, 0, 0],\n ('SDFC Side', 'MCRD'): [0, 0, 0],\n ('MU Side', 'MCRD'): [0, 0, 0]})",
"_____no_output_____"
],
[
"# Caculate travel time for each arcs\ntravel_time = {}\nfor key in arcs:\n travel_time[key] = distance[key]/avg_speed + stop_sign[key] * stop_time + traffic_light_time* traffic_light[key]",
"_____no_output_____"
],
[
"m.update()",
"_____no_output_____"
],
[
"# Create variables\nflow = m.addVars(arcs, obj = travel_time, name=\"flow\")",
"_____no_output_____"
],
[
"# Flow-conservation constraints\nm.addConstrs(\n (flow.sum(i, '*') - flow.sum('*', i) == net_flow[i]\n for i in nodes), \"node\")",
"_____no_output_____"
],
[
"# Compute optimal solution\nm.optimize()",
"Gurobi Optimizer version 9.0.1 build v9.0.1rc0 (win64)\nOptimize a model with 11 rows, 14 columns and 28 nonzeros\nModel fingerprint: 0x45a2a77f\nCoefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [2e+00, 8e+02]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+00, 1e+00]\nPresolve removed 8 rows and 9 columns\nPresolve time: 0.02s\nPresolved: 3 rows, 5 columns, 10 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 3.9200000e+02 2.000000e+00 0.000000e+00 0s\n 1 5.3700000e+02 0.000000e+00 0.000000e+00 0s\n\nSolved in 1 iterations and 0.48 seconds\nOptimal objective 5.370000000e+02\n"
],
[
"# m.printAttr('X')",
"\n Variable X \n-------------------------\nflow[12Fifty5,Orange St Jn] 1 \nflow[Orange St Jn,Lemon St L] 1 \nflow[Lemon St L,Lemon St Jn] 1 \nflow[Lemon St Jn,HH Junction] 1 \nflow[HH Junction,DB Junction] 1 \nflow[DB Junction,ASU CP] 1 \nflow[ASU CP,SDFC Side] 1 \nflow[SDFC Side,MCRD] 1 \n"
],
[
"# Print solution\nif m.status == GRB.OPTIMAL:\n solution = m.getAttr('x', flow)\n for i, j in arcs:\n if solution[i,j] > 0:\n print('%s -> %s : %g' % (i, j, travel_time[i,j]))",
"12Fifty5 -> Orange St Jn : 63\nOrange St Jn -> Lemon St L : 96\nLemon St L -> Lemon St Jn : 2\nLemon St Jn -> HH Junction : 65\nHH Junction -> DB Junction : 145\nDB Junction -> ASU CP : 63\nASU CP -> SDFC Side : 103\nSDFC Side -> MCRD : 0\n"
],
[
"# Get the Optimal Objective Value\nm.ObjVal",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d089fa68bc61f7f647987645c4b2a9a00fec59e8 | 5,392 | ipynb | Jupyter Notebook | project1_sql/scripts/requests.ipynb | mikhailbadin/ds_couse_homeworks | 0260e38712a5d780a4dd6e37e98739e2c282e0b1 | [
"MIT"
] | null | null | null | project1_sql/scripts/requests.ipynb | mikhailbadin/ds_couse_homeworks | 0260e38712a5d780a4dd6e37e98739e2c282e0b1 | [
"MIT"
] | null | null | null | project1_sql/scripts/requests.ipynb | mikhailbadin/ds_couse_homeworks | 0260e38712a5d780a4dd6e37e98739e2c282e0b1 | [
"MIT"
] | null | null | null | 23.964444 | 127 | 0.531157 | [
[
[
"## Подключаем необходимые библиотеки",
"_____no_output_____"
]
],
[
[
"import os\nimport datetime\nimport numpy as np\nimport pandas as pd\nfrom sqlalchemy import create_engine",
"_____no_output_____"
]
],
[
[
"Настраиваем подключение к БД",
"_____no_output_____"
]
],
[
[
"engine = create_engine(\"postgresql://postgres:@{}:{}\".format(os.environ[\"POSGRES_HOST\"], os.environ['POSGRES_PORT']))",
"_____no_output_____"
]
],
[
[
"Запрос 2: Подсчитать общее количест получателей",
"_____no_output_____"
]
],
[
[
"emailreceivers = pd.read_sql(\"select * from emailreceivers\", engine)\nreq2_result = emailreceivers[\"personid\"].drop_duplicates().count()\nprint(req2_result)\npd.DataFrame(\n [req2_result],\n columns=['count']).to_csv(os.environ['PANDAS_EXPORT_FOLDER'] + 'req2.csv')",
"418\n"
]
],
[
[
"Запрос 3: Подсчитать количество отправленных писем за 2012",
"_____no_output_____"
]
],
[
[
"emails = pd.read_sql(\"select * from emails\", engine)\nreq3_result = emails[\n (emails[\"metadatadatesent\"] >= np.datetime64('2012-01-01')) &\n (emails[\"metadatadatesent\"] < np.datetime64('2013-01-01'))\n][\"id\"].count()\nprint(req3_result)\npd.DataFrame(\n [req3_result],\n columns=['emails']).to_csv(os.environ['PANDAS_EXPORT_FOLDER'] + 'req3.csv')",
"1500\n"
]
],
[
[
"Запрос 5:\nВывести список писем в следующем формате:\nОтправитель письма, получатель письма, тема письма,\nи отсортированы по теме письма.",
"_____no_output_____"
]
],
[
[
"emails = pd.read_sql(\"select * from emails\", engine)\nreq5_df = emails[\n [\"metadatafrom\", \"metadatato\", \"metadatasubject\"]\n].sort_values(by=[\"metadatasubject\"])\nprint(req5_df.head())\nreq5_df.head().to_csv(os.environ['PANDAS_EXPORT_FOLDER'] + 'req5.csv')",
" metadatafrom metadatato metadatasubject\n35711 Mills, Cheryl D H - MORE WHEN WE SPEAK\n3951 Mills, Cheryl D H - MORE WHEN WE SPEAK\n1528 [email protected] [email protected] - MORE WHEN WE SPEAK\n17504 [email protected] [email protected] - MORE WHEN WE SPEAK\n25355 [email protected] [email protected] - MORE WHEN WE SPEAK\n"
]
],
[
[
"Запрос 6:\nВывести среднюю длинну сообщения",
"_____no_output_____"
]
],
[
[
"emails = pd.read_sql(\"select * from emails\", engine)\nreq6_result = emails[\"extractedbodytext\"].str.len().mean()\nprint(req6_result)\npd.DataFrame(\n [req6_result],\n columns=['avg']).to_csv(os.environ['PANDAS_EXPORT_FOLDER'] + 'req6.csv')\n",
"533.1624147137348\n"
]
],
[
[
"Запрос 7:\nВывести количество писем, в которых содержится подстрока UNCLASSIFIED",
"_____no_output_____"
]
],
[
[
"emails = pd.read_sql(\"select * from emails\", engine)\nreq7_result = emails[\n emails[\"extractedbodytext\"].str.contains(\"UNCLASSIFIED\") == True\n][\"extractedbodytext\"].count()\n\nprint(req7_result)\npd.DataFrame(\n [req7_result],\n columns=['text']).to_csv(os.environ['PANDAS_EXPORT_FOLDER'] + 'req7.csv')",
"78\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08a1c77ff1839a1c7998651a69c7daa62af4a35 | 42,887 | ipynb | Jupyter Notebook | Pivot_table and crosstab.ipynb | priyankakushi/machine-learning | f46e077cc6d52a25a2f4ec3576791369ed091d51 | [
"CC-BY-3.0"
] | null | null | null | Pivot_table and crosstab.ipynb | priyankakushi/machine-learning | f46e077cc6d52a25a2f4ec3576791369ed091d51 | [
"CC-BY-3.0"
] | null | null | null | Pivot_table and crosstab.ipynb | priyankakushi/machine-learning | f46e077cc6d52a25a2f4ec3576791369ed091d51 | [
"CC-BY-3.0"
] | null | null | null | 30.180859 | 124 | 0.325133 | [
[
[
"# 20 Jan",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport numpy as np\n",
"_____no_output_____"
],
[
"tips = pd.read_csv(\"/Users/amitkumarsahu/Desktop/pandas/tips.csv\")\ntips",
"_____no_output_____"
],
[
"tips.pivot_table(index=[\"day\", \"smoker\"])",
"_____no_output_____"
],
[
"tips[\"tip_pct\"]=tips[\"tip\"]*100/tips[\"total_bill\"]",
"_____no_output_____"
],
[
"tips[:6]",
"_____no_output_____"
],
[
"tips.pivot_table([\"tip_pct\", \"size\"], index=[\"time\", \"day\"], columns=\"smoker\")",
"_____no_output_____"
]
],
[
[
"# 21 Jan",
"_____no_output_____"
]
],
[
[
"tips.pivot_table([\"tip_pct\", \"size\"], index=[\"time\", \"day\"],\n columns=\"smoker\", margins=True)",
"_____no_output_____"
],
[
"tips.pivot_table(\"tip\", index=[\"time\", \"smoker\"], columns=\"day\", aggfunc=len, margins=True)",
"_____no_output_____"
],
[
"tips.pivot_table(\"tip_pct\", index=[\"time\", \"size\", \"smoker\"], columns=\"day\", aggfunc=\"mean\", fill_value=0)",
"_____no_output_____"
],
[
"from io import StringIO\ndata = \"\"\"\\\nSample Nationality Handedness\n1 USA Right-handed\n2 Japan Left-handed\n3 USA Right-handed\n4 Japan Right-handed\n5 Japan Left-handed\n6 Japan Right-handed\n7 USA Right-handed\n8 USA Left-handed\n9 Japan Right-handed\n10 USA Right-handed\"\"\"\ndata = pd.read_table(StringIO(data), sep = \"\\s+\")",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"pd.crosstab(data[\"Nationality\"], data[\"Handedness\"], margins=True)",
"_____no_output_____"
],
[
"pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08a2baee4fc1fef36164027212b589fa6e07c43 | 12,004 | ipynb | Jupyter Notebook | lessonNotebooks/1.11 Lists.ipynb | crowegian/SummerFellowsDBMIWorkshopMaterials | 26f0f8bcea9e78bb63de964e7731ff81abcbc34f | [
"MIT"
] | null | null | null | lessonNotebooks/1.11 Lists.ipynb | crowegian/SummerFellowsDBMIWorkshopMaterials | 26f0f8bcea9e78bb63de964e7731ff81abcbc34f | [
"MIT"
] | null | null | null | lessonNotebooks/1.11 Lists.ipynb | crowegian/SummerFellowsDBMIWorkshopMaterials | 26f0f8bcea9e78bb63de964e7731ff81abcbc34f | [
"MIT"
] | 1 | 2021-04-29T20:40:27.000Z | 2021-04-29T20:40:27.000Z | 20.380306 | 517 | 0.48917 | [
[
[
"# Overview\n\nAnother data type that we haven't discussed yet is the list. Lists store values for us, and we can access these values using indices, or by knowing where an element is located to access it. \n\n## Questions\n- How can I store multiple values?\n\n## Objectives\n- Explain why programs need collections of values.\n\n- Write programs that create flat lists, index them, slice them, and modify them through assignment and method calls.\n\n# Code\n",
"_____no_output_____"
],
[
"## A list stores many values in a single structure.\n",
"_____no_output_____"
]
],
[
[
"pressures = [0.273, 0.275, 0.277, 0.276]\nprint(pressures[0])",
"0.273\n"
],
[
"print(\"pressures\", pressures)\nprint(\"length:\", len(pressures))",
"pressures [0.273, 0.275, 0.277, 0.276]\nlength: 4\n"
]
],
[
[
"## Use an item’s index to fetch it from a list.\n",
"_____no_output_____"
]
],
[
[
"print(\"Zeroth item:\", pressures[0])\nprint(\"fourth item:\", pressures[3])",
"Zeroth item: 0.273\nfourth item: 0.276\n"
]
],
[
[
"## Lists’ values can be replaced by assigning to them.\n",
"_____no_output_____"
]
],
[
[
"print(pressures)",
"[0.273, 0.275, 0.277, 0.276]\n"
],
[
"pressures[0] = 0.265\nprint(\"current pressures\", pressures)",
"current pressures [0.265, 0.275, 0.277, 0.276]\n"
]
],
[
[
"## Appending items to a list lengthens it.\n",
"_____no_output_____"
]
],
[
[
"primes = [2, 3, 5]\nprint(\"current primes\", primes)\nprimes.append(7)\nprimes.append(9)\nprint(\"primes has become\", primes)",
"current primes [2, 3, 5]\nprimes has become [2, 3, 5, 7, 9]\n"
],
[
"primes = primes.append(13)\nprint(primes)",
"None\n"
],
[
"teen_primes = [11, 13, 17, 19]\nmiddle_aged_primes = [37, 41, 43, 47]\nprint(\"primes is currently\", primes)\nprimes.extend(teen_primes)\nprint(\"primes is now\", primes)",
"primes is currently [2, 3, 5, 7, 9]\nprimes is now [2, 3, 5, 7, 9, 11, 13, 17, 19]\n"
]
],
[
[
"## Use del to remove items from a list entirely.\n",
"_____no_output_____"
]
],
[
[
"primes = [2, 3, 5, 7, 9]\nprint(\"current primes\", primes)\ndel primes[2]\nprint(\"primes now\", primes)\n",
"current primes [2, 3, 5, 7, 9]\nprimes now [2, 3, 7, 9]\n"
]
],
[
[
"## Lists may contain values of different types.\n",
"_____no_output_____"
]
],
[
[
"goals = [1, \"create lists.\", 2, \"extract items from lists\", 3, \"modify lists\"]",
"_____no_output_____"
],
[
"goals",
"_____no_output_____"
]
],
[
[
"## Character strings can be indexed like lists.\n",
"_____no_output_____"
]
],
[
[
"element = \"carbon\"\nprint(element[0])",
"c\n"
]
],
[
[
"## Character strings are immutable.\n",
"_____no_output_____"
]
],
[
[
"print(element)\nelement[0] = \"C\"",
"carbon\n"
],
[
"print(element)",
"carbon\n"
]
],
[
[
"## Indexing beyond the end of the collection is an error.\n",
"_____no_output_____"
]
],
[
[
"print(\"99th element of element is:\", element[99])",
"_____no_output_____"
],
[
"primes = [2, 3, 7, 9]",
"_____no_output_____"
],
[
"primes.remove(7)",
"_____no_output_____"
],
[
"primes",
"_____no_output_____"
],
[
"del primes[0]",
"_____no_output_____"
],
[
"primes",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08a308370898f7e1a84cb9bf06c3b74520538b4 | 86,515 | ipynb | Jupyter Notebook | 01_introduction/5_Inference_and_Validation.ipynb | thearonn/dtu_mlops | 16df652c7797aa3ea6eb987c7a6c21fac42c58a6 | [
"Apache-2.0"
] | null | null | null | 01_introduction/5_Inference_and_Validation.ipynb | thearonn/dtu_mlops | 16df652c7797aa3ea6eb987c7a6c21fac42c58a6 | [
"Apache-2.0"
] | null | null | null | 01_introduction/5_Inference_and_Validation.ipynb | thearonn/dtu_mlops | 16df652c7797aa3ea6eb987c7a6c21fac42c58a6 | [
"Apache-2.0"
] | null | null | null | 54.038101 | 23,068 | 0.668867 | [
[
[
"# Inference and Validation\n\nNow that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch. \n\nAs usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:\n\n```python\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\n```\n\nThe test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)",
"_____no_output_____"
]
],
[
[
"Here I'll create a model like normal, using the same one from my solution for part 4.",
"_____no_output_____"
]
],
[
[
"from torch import nn, optim\nimport torch.nn.functional as F\n\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x",
"_____no_output_____"
]
],
[
[
"The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.",
"_____no_output_____"
]
],
[
[
"model = Classifier()\n\nimages, labels = next(iter(testloader))\n# Get the class probabilities\nps = torch.exp(model(images))\n# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples\nprint(ps.shape)",
"torch.Size([64, 10])\n"
]
],
[
[
"With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.",
"_____no_output_____"
]
],
[
[
"top_p, top_class = ps.topk(1, dim=1)\n# Look at the most likely classes for the first 10 examples\nprint(top_class[:10,:])",
"tensor([[1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1],\n [1]])\n"
]
],
[
[
"Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.\n\nIf we do\n\n```python\nequals = top_class == labels\n```\n\n`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.",
"_____no_output_____"
]
],
[
[
"equals = top_class == labels.view(*top_class.shape)\n#print(equals)",
"_____no_output_____"
]
],
[
[
"Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error\n\n```\nRuntimeError: mean is not implemented for type torch.ByteTensor\n```\n\nThis happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.",
"_____no_output_____"
]
],
[
[
"accuracy = torch.mean(equals.type(torch.FloatTensor))\nprint(f'Accuracy: {accuracy.item()*100}%')",
"Accuracy: 10.9375%\n"
]
],
[
[
"The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:\n\n```python\n# turn off gradients\nwith torch.no_grad():\n # validation pass here\n for images, labels in testloader:\n ...\n```\n\n>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.",
"_____no_output_____"
]
],
[
[
"model = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n \n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n \n # validation pass here\n for images, labels in testloader:\n ps = torch.exp(model(images))\n op_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy = torch.mean(equals.type(torch.FloatTensor))\n print(f'Accuracy: {accuracy.item()*100}%')\n",
"racy: 82.8125%\nAccuracy: 85.9375%\nAccuracy: 75.0%\nAccuracy: 90.625%\nAccuracy: 89.0625%\nAccuracy: 93.75%\nAccuracy: 82.8125%\nAccuracy: 79.6875%\nAccuracy: 79.6875%\nAccuracy: 82.8125%\nAccuracy: 89.0625%\nAccuracy: 87.5%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 92.1875%\nAccuracy: 82.8125%\nAccuracy: 89.0625%\nAccuracy: 81.25%\nAccuracy: 81.25%\nAccuracy: 89.0625%\nAccuracy: 81.25%\nAccuracy: 87.5%\nAccuracy: 78.125%\nAccuracy: 84.375%\nAccuracy: 75.0%\nAccuracy: 79.6875%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 84.375%\nAccuracy: 85.9375%\nAccuracy: 82.8125%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 87.5%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 93.75%\nAccuracy: 85.9375%\nAccuracy: 98.4375%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 96.875%\nAccuracy: 79.6875%\nAccuracy: 76.5625%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 89.0625%\nAccuracy: 85.9375%\nAccuracy: 81.25%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 76.5625%\nAccuracy: 87.5%\nAccuracy: 90.625%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 81.25%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 84.375%\nAccuracy: 82.8125%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 82.8125%\nAccuracy: 82.8125%\nAccuracy: 92.1875%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 81.25%\nAccuracy: 87.5%\nAccuracy: 93.75%\nAccuracy: 87.5%\nAccuracy: 79.6875%\nAccuracy: 81.25%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 81.25%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 
82.8125%\nAccuracy: 82.8125%\nAccuracy: 93.75%\nAccuracy: 89.0625%\nAccuracy: 87.5%\nAccuracy: 82.8125%\nAccuracy: 82.8125%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 79.6875%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 89.0625%\nAccuracy: 85.9375%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 79.6875%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 92.1875%\nAccuracy: 79.6875%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 78.125%\nAccuracy: 79.6875%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 92.1875%\nAccuracy: 82.8125%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 79.6875%\nAccuracy: 87.5%\nAccuracy: 92.1875%\nAccuracy: 85.9375%\nAccuracy: 85.9375%\nAccuracy: 79.6875%\nAccuracy: 82.8125%\nAccuracy: 78.125%\nAccuracy: 92.1875%\nAccuracy: 90.625%\nAccuracy: 75.0%\nAccuracy: 79.6875%\nAccuracy: 79.6875%\nAccuracy: 90.625%\nAccuracy: 84.375%\nAccuracy: 75.0%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 81.25%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 84.375%\nAccuracy: 85.9375%\nAccuracy: 81.25%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 92.1875%\nAccuracy: 89.0625%\nAccuracy: 82.8125%\nAccuracy: 93.75%\nAccuracy: 92.1875%\nAccuracy: 81.25%\nAccuracy: 85.9375%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 92.1875%\nAccuracy: 93.75%\nAccuracy: 84.375%\nAccuracy: 84.375%\nAccuracy: 85.9375%\nAccuracy: 95.3125%\nAccuracy: 79.6875%\nAccuracy: 89.0625%\nAccuracy: 81.25%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 89.0625%\nAccuracy: 85.9375%\nAccuracy: 84.375%\nAccuracy: 89.0625%\nAccuracy: 92.1875%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 92.1875%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 
84.375%\nAccuracy: 89.0625%\nAccuracy: 81.25%\nAccuracy: 90.625%\nAccuracy: 92.1875%\nAccuracy: 81.25%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 81.25%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 87.5%\nAccuracy: 84.375%\nAccuracy: 93.75%\nAccuracy: 84.375%\nAccuracy: 93.75%\nAccuracy: 90.625%\nAccuracy: 76.5625%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 92.1875%\nAccuracy: 85.9375%\nAccuracy: 92.1875%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 90.625%\nAccuracy: 93.75%\nAccuracy: 84.375%\nAccuracy: 85.9375%\nAccuracy: 79.6875%\nAccuracy: 84.375%\nAccuracy: 81.25%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 89.0625%\nAccuracy: 90.625%\nAccuracy: 82.8125%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 79.6875%\nAccuracy: 79.6875%\nAccuracy: 85.9375%\nAccuracy: 84.375%\nAccuracy: 92.1875%\nAccuracy: 95.3125%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 87.5%\nAccuracy: 81.25%\nAccuracy: 89.0625%\nAccuracy: 92.1875%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 79.6875%\nAccuracy: 84.375%\nAccuracy: 82.8125%\nAccuracy: 78.125%\nAccuracy: 85.9375%\nAccuracy: 81.25%\nAccuracy: 82.8125%\nAccuracy: 95.3125%\nAccuracy: 89.0625%\nAccuracy: 82.8125%\nAccuracy: 79.6875%\nAccuracy: 84.375%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 89.0625%\nAccuracy: 82.8125%\nAccuracy: 85.9375%\nAccuracy: 89.0625%\nAccuracy: 84.375%\nAccuracy: 90.625%\nAccuracy: 87.5%\nAccuracy: 81.25%\nAccuracy: 93.75%\nAccuracy: 79.6875%\nAccuracy: 79.6875%\nAccuracy: 90.625%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 84.375%\nAccuracy: 92.1875%\nAccuracy: 85.9375%\nAccuracy: 84.375%\nAccuracy: 92.1875%\nAccuracy: 87.5%\nAccuracy: 89.0625%\nAccuracy: 87.5%\nAccuracy: 87.5%\nAccuracy: 82.8125%\nAccuracy: 85.9375%\nAccuracy: 90.625%\nAccuracy: 85.9375%\nAccuracy: 81.25%\nAccuracy: 
89.0625%\nAccuracy: 79.6875%\nAccuracy: 82.8125%\nAccuracy: 87.5%\n[... several hundred repeated per-pass accuracy lines omitted; values range roughly 71-100% ...]\nAccuracy: 89.0625%\nAccuracy: 93.75%\nAccuracy: 93.75%\n"
]
],
[
[
"## Overfitting\n\nIf we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.\n\n<img src='assets/overfitting.png' width=450px>\n\nThe network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set, leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training, then later choose the model with the lowest validation loss.\n\nThe most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.\n\n```python\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n # Dropout module with 0.2 drop probability\n self.dropout = nn.Dropout(p=0.2)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n \n # output so no dropout here\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x\n```\n\nDuring training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.\n\n```python\n# turn off gradients\nwith torch.no_grad():\n \n # set model to evaluation mode\n model.eval()\n \n # validation pass here\n for images, labels in testloader:\n ...\n\n# set model back to train mode\nmodel.train()\n```",
"_____no_output_____"
],
[
"> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.",
"_____no_output_____"
]
],
[
[
"from torch import nn, optim\nimport torch.nn.functional as F\n\nclass Classifier2(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n\n self.dropout = nn.Dropout(p=0.2)\n\n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x",
"_____no_output_____"
],
[
"model = Classifier2()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n model.train()\n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n running_loss += loss.item()\n \n else:\n # validation pass: gradients off, dropout disabled via eval mode\n model.eval()\n with torch.no_grad():\n images, labels = next(iter(testloader))\n ps = torch.exp(model(images))\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy = torch.mean(equals.type(torch.FloatTensor))\n print(f'Accuracy: {accuracy.item()*100}%')\n",
"_____no_output_____"
]
],
[
[
"## Inference\n\nNow that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.",
"_____no_output_____"
]
],
[
[
"# Import helper module (should be in the repo)\nimport helper\n\n# Test out your network!\n\nmodel.eval()\n\ndataiter = iter(testloader)\nimages, labels = next(dataiter)\nimg = images[0]\n# Convert 2D image to 1D vector\nimg = img.view(1, 784)\n\n# Calculate the class probabilities (softmax) for img\nwith torch.no_grad():\n output = model(img)\n\nps = torch.exp(output)\n\n# Plot the image and probabilities\nhelper.view_classify(img.view(1, 28, 28), ps, version='Fashion')",
"_____no_output_____"
]
],
[
[
"## Next Up!\n\nIn the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d08a39ba58b2af850e801384775b665fc6a74570 | 61,784 | ipynb | Jupyter Notebook | notebooks/TimeEval shared param optimization analysis 2.ipynb | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | 2 | 2022-01-29T03:46:31.000Z | 2022-02-14T14:06:35.000Z | notebooks/TimeEval shared param optimization analysis 2.ipynb | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | null | null | null | notebooks/TimeEval shared param optimization analysis 2.ipynb | HPI-Information-Systems/TimeEval | 9b2717b89decd57dd09e04ad94c120f13132d7b8 | [
"MIT"
] | null | null | null | 36.930066 | 272 | 0.369643 | [
[
[
"# TimeEval shared parameter optimization result analysis",
"_____no_output_____"
]
],
[
[
"# Automatically reload packages:\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"# imports\nimport json\nimport warnings\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\nimport plotly.offline as py\nimport plotly.graph_objects as go\nimport plotly.figure_factory as ff\nimport plotly.express as px\nfrom plotly.subplots import make_subplots\nfrom pathlib import Path\nfrom timeeval import Datasets",
"_____no_output_____"
]
],
[
[
"## Configuration\n\nTarget parameters that were optimized in this run (per algorithm):",
"_____no_output_____"
]
],
[
[
"algo_param_mapping = {\n \"HBOS\": [\"n_bins\"],\n \"MultiHMM\": [\"n_bins\"],\n \"MTAD-GAT\": [\"context_window_size\", \"mag_window_size\", \"score_window_size\"],\n \"PST\": [\"n_bins\"]\n}",
"_____no_output_____"
]
],
[
[
"Define data and results folder:",
"_____no_output_____"
]
],
[
[
"# constants and configuration\ndata_path = Path(\"../../data\") / \"test-cases\"\nresult_root_path = Path(\"../timeeval_experiments/results\")\nexperiment_result_folder = \"2021-10-04_shared-optim2\"\n\n# build paths\nresult_paths = [d for d in result_root_path.iterdir() if d.is_dir()]\nprint(\"Available result directories:\")\ndisplay(result_paths)\n\nresult_path = result_root_path / experiment_result_folder\nprint(\"\\nSelecting:\")\nprint(f\"Data path: {data_path.resolve()}\")\nprint(f\"Result path: {result_path.resolve()}\")",
"Available result directories:\n"
]
],
[
[
"Load results and dataset metadata:",
"_____no_output_____"
]
],
[
[
"def extract_hyper_params(param_names):\n def extract(value):\n params = json.loads(value)\n result = None\n for name in param_names:\n try:\n value = params[name]\n result = pd.Series([name, value], index=[\"optim_param_name\", \"optim_param_value\"])\n break\n except KeyError:\n pass\n if result is None:\n raise ValueError(f\"Parameters {param_names} not found in '{value}'\")\n return result\n return extract\n\n# load results\nprint(f\"Reading results from {result_path.resolve()}\")\ndf = pd.read_csv(result_path / \"results.csv\")\n\n# add dataset_name column\ndf[\"dataset_name\"] = df[\"dataset\"].str.split(\".\").str[0]\n\n# add optim_params column\ndf[[\"optim_param_name\", \"optim_param_value\"]] = \"\"\nfor algo in algo_param_mapping:\n df_algo = df.loc[df[\"algorithm\"] == algo]\n df.loc[df_algo.index, [\"optim_param_name\", \"optim_param_value\"]] = df_algo[\"hyper_params\"].apply(extract_hyper_params(algo_param_mapping[algo]))\n\n# load dataset metadata\ndmgr = Datasets(data_path)",
"Reading results from /home/sebastian/Documents/Projects/akita/timeeval/timeeval_experiments/results/2021-10-04_shared-optim2\n"
]
],
[
[
"Define plotting functions:",
"_____no_output_____"
]
],
[
[
"def load_scores_df(algorithm_name, dataset_id, optim_params, repetition=1):\n params_id = df.loc[(df[\"algorithm\"] == algorithm_name) & (df[\"collection\"] == dataset_id[0]) & (df[\"dataset\"] == dataset_id[1]) & (df[\"optim_param_name\"] == optim_params[0]) & (df[\"optim_param_value\"] == optim_params[1]), \"hyper_params_id\"].item()\n path = (\n result_path /\n algorithm_name /\n params_id /\n dataset_id[0] /\n dataset_id[1] /\n str(repetition) /\n \"anomaly_scores.ts\"\n )\n return pd.read_csv(path, header=None)\n\ndef plot_scores(algorithm_name, dataset_name):\n if isinstance(algorithm_name, tuple):\n algorithms = [algorithm_name]\n elif not isinstance(algorithm_name, list):\n raise ValueError(\"Please supply a tuple (algorithm_name, optim_param_name, optim_param_value) or a list thereof as first argument!\")\n else:\n algorithms = algorithm_name\n # construct dataset ID\n dataset_id = (\"GutenTAG\", f\"{dataset_name}.unsupervised\")\n\n # load dataset details\n df_dataset = dmgr.get_dataset_df(dataset_id)\n\n # check if dataset is multivariate\n dataset_dim = df.loc[df[\"dataset_name\"] == dataset_name, \"dataset_input_dimensionality\"].unique().item()\n dataset_dim = dataset_dim.lower()\n \n auroc = {}\n df_scores = pd.DataFrame(index=df_dataset.index)\n skip_algos = []\n algos = []\n for algo, optim_param_name, optim_param_value in algorithms:\n optim_params = f\"{optim_param_name}={optim_param_value}\"\n algos.append((algo, optim_params))\n # get algorithm metric results\n try:\n auroc[(algo, optim_params)] = df.loc[\n (df[\"algorithm\"] == algo) & (df[\"dataset_name\"] == dataset_name) & (df[\"optim_param_name\"] == optim_param_name) & (df[\"optim_param_value\"] == optim_param_value),\n \"ROC_AUC\"\n ].item()\n except ValueError:\n warnings.warn(f\"No ROC_AUC score found! 
Probably {algo} with params {optim_params} was not executed on {dataset_name}.\")\n auroc[(algo, optim_params)] = -1\n skip_algos.append((algo, optim_params))\n continue\n\n # load scores\n training_type = df.loc[df[\"algorithm\"] == algo, \"algo_training_type\"].values[0].lower().replace(\"_\", \"-\")\n try:\n df_scores[(algo, optim_params)] = load_scores_df(algo, (\"GutenTAG\", f\"{dataset_name}.{training_type}\"), (optim_param_name, optim_param_value)).iloc[:, 0]\n except (ValueError, FileNotFoundError):\n warnings.warn(f\"No anomaly scores found! Probably {algo} was not executed on {dataset_name} with params {optim_params}.\")\n df_scores[(algo, optim_params)] = np.nan\n skip_algos.append((algo, optim_params))\n algorithms = [a for a in algos if a not in skip_algos]\n\n # Create plot\n fig = make_subplots(2, 1)\n if dataset_dim == \"multivariate\":\n for i in range(1, df_dataset.shape[1]-1):\n fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, i], name=f\"channel-{i}\"), 1, 1)\n else:\n fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, 1], name=\"timeseries\"), 1, 1)\n fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset[\"is_anomaly\"], name=\"label\"), 2, 1)\n \n for item in algorithms:\n algo, optim_params = item\n fig.add_trace(go.Scatter(x=df_scores.index, y=df_scores[item], name=f\"{algo}={auroc[item]:.4f} ({optim_params})\"), 2, 1)\n fig.update_xaxes(matches=\"x\")\n fig.update_layout(\n title=f\"Results of {','.join(np.unique([a for a, _ in algorithms]))} on {dataset_name}\",\n height=400\n )\n return py.iplot(fig)",
"_____no_output_____"
]
],
[
[
"## Analyze TimeEval results",
"_____no_output_____"
]
],
[
[
"df[[\"algorithm\", \"dataset_name\", \"status\", \"AVERAGE_PRECISION\", \"PR_AUC\", \"RANGE_PR_AUC\", \"ROC_AUC\", \"execute_main_time\", \"optim_param_name\", \"optim_param_value\"]]",
"_____no_output_____"
]
],
[
[
"---\n\n### Errors",
"_____no_output_____"
]
],
[
[
"df_error_counts = df.pivot_table(index=[\"algo_training_type\", \"algorithm\"], columns=[\"status\"], values=\"repetition\", aggfunc=\"count\")\ndf_error_counts = df_error_counts.fillna(value=0).astype(np.int64)",
"_____no_output_____"
]
],
[
[
"#### Aggregation of errors per algorithm grouped by algorithm training type",
"_____no_output_____"
]
],
[
[
"for tpe in [\"SEMI_SUPERVISED\", \"SUPERVISED\", \"UNSUPERVISED\"]:\n if tpe in df_error_counts.index:\n print(tpe)\n display(df_error_counts.loc[tpe])",
"SEMI_SUPERVISED\n"
]
],
[
[
"#### Slow algorithms\n\nAlgorithms, for which more than 50% of all executions ran into the timeout.",
"_____no_output_____"
]
],
[
[
"df_error_counts[df_error_counts[\"Status.TIMEOUT\"] > (df_error_counts[\"Status.ERROR\"] + df_error_counts[\"Status.OK\"])]",
"_____no_output_____"
]
],
[
[
"#### Broken algorithms\n\nAlgorithms, which failed for at least 50% of the executions.",
"_____no_output_____"
]
],
[
[
"error_threshold = 0.5\ndf_error_counts[df_error_counts[\"Status.ERROR\"] > error_threshold*(\n df_error_counts[\"Status.TIMEOUT\"] + df_error_counts[\"Status.ERROR\"] + df_error_counts[\"Status.OK\"]\n)]",
"_____no_output_____"
]
],
[
[
"#### Detail errors",
"_____no_output_____"
]
],
[
[
"algo_list = [\"MTAD-GAT\", \"MultiHMM\"]\n\nerror_list = [\"OOM\", \"Segfault\", \"ZeroDivisionError\", \"IncompatibleParameterConfig\", \"WrongDBNState\", \"SyntaxError\", \"other\"]\nerrors = pd.DataFrame(0, index=error_list, columns=algo_list, dtype=np.int_)\nfor algo in algo_list:\n df_tmp = df[(df[\"algorithm\"] == algo) & (df[\"status\"] == \"Status.ERROR\")]\n for i, run in df_tmp.iterrows():\n path = result_path / run[\"algorithm\"] / run[\"hyper_params_id\"] / run[\"collection\"] / run[\"dataset\"] / str(run[\"repetition\"]) / \"execution.log\"\n with path.open(\"r\") as fh:\n log = fh.read()\n if \"status code '139'\" in log:\n errors.loc[\"Segfault\", algo] += 1\n elif \"status code '137'\" in log:\n errors.loc[\"OOM\", algo] += 1\n elif \"Expected n_neighbors <= n_samples\" in log:\n errors.loc[\"IncompatibleParameterConfig\", algo] += 1\n elif \"ZeroDivisionError\" in log:\n errors.loc[\"ZeroDivisionError\", algo] += 1\n elif \"does not have key\" in log:\n errors.loc[\"WrongDBNState\", algo] += 1\n elif \"NameError\" in log:\n errors.loc[\"SyntaxError\", algo] += 1\n else:\n print(f'\\n\\n#### {run[\"dataset\"]} ({run[\"optim_param_name\"]}:{run[\"optim_param_value\"]})')\n print(log)\n errors.loc[\"other\", algo] += 1\nerrors.T",
"_____no_output_____"
]
],
[
[
"---\n\n### Parameter assessment",
"_____no_output_____"
]
],
[
[
"sort_by = (\"ROC_AUC\", \"mean\")\nmetric_agg_type = [\"mean\", \"median\"]\ntime_agg_type = \"mean\"\naggs = {\n \"AVERAGE_PRECISION\": metric_agg_type,\n \"RANGE_PR_AUC\": metric_agg_type,\n \"PR_AUC\": metric_agg_type,\n \"ROC_AUC\": metric_agg_type,\n \"train_main_time\": time_agg_type,\n \"execute_main_time\": time_agg_type,\n \"repetition\": \"count\"\n}\n\ndf_tmp = df.reset_index()\ndf_tmp = df_tmp.groupby(by=[\"algorithm\", \"optim_param_name\", \"optim_param_value\"]).agg(aggs)\ndf_tmp = df_tmp.reset_index()\ndf_tmp = df_tmp.sort_values(by=[\"algorithm\", \"optim_param_name\", sort_by], ascending=False)\ndf_tmp = df_tmp.set_index([\"algorithm\", \"optim_param_name\", \"optim_param_value\"])\n\nwith pd.option_context(\"display.max_rows\", None, \"display.max_columns\", None):\n display(df_tmp)",
"_____no_output_____"
]
],
[
[
"#### Selected parameters\n\n- HBOS: `n_bins=20` (more is better)\n- MultiHMM: `n_bins=5` (8 is slightly better, but takes way longer. The scores are very bad anyway!)\n- MTAD-GAT: `context_window_size=30,mag_window_size=40,score_window_size=52` (very slow)\n- PST: `n_bins=5` (less is better)\n\n> **Note**\n>\n> MTAD-GAT is very slow! Exclude from further runs!",
"_____no_output_____"
]
],
[
[
"plot_scores([(\"MultiHMM\", \"n_bins\", 5), (\"MultiHMM\", \"n_bins\", 8)], \"sinus-type-mean\")",
"_____no_output_____"
],
[
"plot_scores([(\"MTAD-GAT\", \"context_window_size\", 30), (\"MTAD-GAT\", \"context_window_size\", 40)], \"sinus-type-mean\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d08a3b9e06ac989d5ed648bd5e039574de7f665d | 466,109 | ipynb | Jupyter Notebook | inference.ipynb | ncantrell/tacotron2 | 84a514921ed88e6a37017b4402551284a3cdb9e4 | [
"BSD-3-Clause"
] | null | null | null | inference.ipynb | ncantrell/tacotron2 | 84a514921ed88e6a37017b4402551284a3cdb9e4 | [
"BSD-3-Clause"
] | null | null | null | inference.ipynb | ncantrell/tacotron2 | 84a514921ed88e6a37017b4402551284a3cdb9e4 | [
"BSD-3-Clause"
] | null | null | null | 1,387.229167 | 292,332 | 0.893763 | [
[
[
"## Tacotron 2 inference code \nEdit the variables **checkpoint_path** and **text** to match yours and run the entire code to generate plots of mel outputs, alignments and audio synthesis from the generated mel-spectrogram using Griffin-Lim.",
"_____no_output_____"
],
[
"#### Import libraries and setup matplotlib",
"_____no_output_____"
]
],
[
[
"import matplotlib\n%matplotlib inline\nimport matplotlib.pylab as plt\n\nimport IPython.display as ipd\n\nimport sys\nsys.path.append('waveglow/')\nimport numpy as np\nimport torch\n\nfrom hparams import create_hparams\nfrom model import Tacotron2\nfrom layers import TacotronSTFT, STFT\nfrom audio_processing import griffin_lim\n\nfrom train import load_model\nfrom text import text_to_sequence\nfrom denoiser import Denoiser",
"_____no_output_____"
],
[
"def plot_data(data, figsize=(16, 4)):\n fig, axes = plt.subplots(1, len(data), figsize=figsize)\n for i in range(len(data)):\n axes[i].imshow(data[i], aspect='auto', origin='bottom', \n interpolation='none')",
"_____no_output_____"
]
],
[
[
"#### Setup hparams",
"_____no_output_____"
]
],
[
[
"hparams = create_hparams()\nhparams.sampling_rate = 22050",
"\nWARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\nIf you depend on functionality not listed there, please file an issue.\n\n"
]
],
[
[
"#### Load model from checkpoint",
"_____no_output_____"
]
],
[
[
"checkpoint_path = \"tacotron2_statedict.pt\"\nmodel = load_model(hparams)\nmodel.load_state_dict(torch.load(checkpoint_path)['state_dict'])\n_ = model.cuda().eval().half()",
"_____no_output_____"
]
],
[
[
"#### Load WaveGlow for mel2audio synthesis and denoiser",
"_____no_output_____"
]
],
[
[
"waveglow_path = 'waveglow_256channels.pt'\nwaveglow = torch.load(waveglow_path)['model']\nwaveglow.cuda().eval().half()\n\nfor m in waveglow.modules():\n if 'Conv' in str(type(m)):\n setattr(m, 'padding_mode', 'zeros')\n \nfor k in waveglow.convinv:\n k.float()\ndenoiser = Denoiser(waveglow)",
"/home/kabakov/VOICE/venv/lib/python3.6/site-packages/torch/serialization.py:454: SourceChangeWarning: source code of class 'torch.nn.modules.conv.ConvTranspose1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n/home/kabakov/VOICE/venv/lib/python3.6/site-packages/torch/serialization.py:454: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\nwaveglow/glow.py:162: RuntimeWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.\n torch.IntTensor([self.n_channels]))\nwaveglow/glow.py:162: RuntimeWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n torch.IntTensor([self.n_channels]))\n"
]
],
[
[
"#### Prepare text input",
"_____no_output_____"
]
],
[
[
"#%%timeit 77.9 µs ± 237 ns\ntext = \"Waveglow is really awesome!\"\nsequence = np.array(text_to_sequence(text, ['english_cleaners']))[None, :]\nsequence = torch.autograd.Variable(\n torch.from_numpy(sequence)).cuda().long()",
"_____no_output_____"
]
],
[
[
"#### Decode text input and plot results",
"_____no_output_____"
]
],
[
[
"#%%timeit 240 ms ± 9.72 ms\nmel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)\nplot_data((mel_outputs.float().data.cpu().numpy()[0],\n mel_outputs_postnet.float().data.cpu().numpy()[0],\n alignments.float().data.cpu().numpy()[0].T))",
"_____no_output_____"
]
],
[
[
"#### Synthesize audio from spectrogram using WaveGlow",
"_____no_output_____"
]
],
[
[
"#%%timeit 193 ms ± 4.87 ms\nwith torch.no_grad():\n audio = waveglow.infer(mel_outputs_postnet, sigma=0.666)\n \nipd.Audio(audio[0].data.cpu().numpy(), rate=hparams.sampling_rate)",
"waveglow/glow.py:162: RuntimeWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.\n torch.IntTensor([self.n_channels]))\nwaveglow/glow.py:162: RuntimeWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n torch.IntTensor([self.n_channels]))\n"
]
],
[
[
"#### (Optional) Remove WaveGlow bias",
"_____no_output_____"
]
],
[
[
"audio_denoised = denoiser(audio, strength=0.01)[:, 0]\nipd.Audio(audio_denoised.cpu().numpy(), rate=hparams.sampling_rate) ",
"_____no_output_____"
]
],
[
[
"#### Save result as wav",
"_____no_output_____"
]
],
[
[
"import librosa\n\n\n# save\nlibrosa.output.write_wav('./out.wav', audio[0].data.cpu().numpy().astype(np.float32), 22050)\n\n# check\ny, sr = librosa.load('out.wav')\nipd.Audio(y, rate=sr) ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08a4444a9f5535a24729b655b7ca689a360428f | 10,024 | ipynb | Jupyter Notebook | app/notebooks/hairstyle.ipynb | scanner-research/esper-tv | 179ef57d536ebd52f93697aab09bf5abec19ce93 | [
"Apache-2.0"
] | 5 | 2019-04-17T01:01:46.000Z | 2021-07-11T01:32:50.000Z | app/notebooks/hairstyle.ipynb | scanner-research/esper-tv | 179ef57d536ebd52f93697aab09bf5abec19ce93 | [
"Apache-2.0"
] | 4 | 2019-11-12T08:35:03.000Z | 2021-06-10T20:37:04.000Z | app/notebooks/hairstyle.ipynb | scanner-research/esper-tv | 179ef57d536ebd52f93697aab09bf5abec19ce93 | [
"Apache-2.0"
] | 1 | 2020-09-01T01:15:44.000Z | 2020-09-01T01:15:44.000Z | 27.463014 | 659 | 0.559058 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#label-identity-hairstyle\" data-toc-modified-id=\"label-identity-hairstyle-1\"><span class=\"toc-item-num\">1 </span>label identity hairstyle</a></span></li><li><span><a href=\"#Prepare-hairstyle-images\" data-toc-modified-id=\"Prepare-hairstyle-images-2\"><span class=\"toc-item-num\">2 </span>Prepare hairstyle images</a></span></li><li><span><a href=\"#prepare-hairstyle-manifest\" data-toc-modified-id=\"prepare-hairstyle-manifest-3\"><span class=\"toc-item-num\">3 </span>prepare hairstyle manifest</a></span></li></ul></div>",
"_____no_output_____"
]
],
[
[
"from query.models import Video, FaceIdentity, Identity\nfrom esper.widget import *\nfrom esper.prelude import collect, esper_widget\nimport pickle\nimport os\nimport random\n\nget_ipython().magic('matplotlib inline')\nget_ipython().magic('reload_ext autoreload')\nget_ipython().magic('autoreload 2')",
"_____no_output_____"
]
],
[
[
"# label identity hairstyle",
"_____no_output_____"
]
],
[
[
"identity_hair_dict = {}",
"_____no_output_____"
],
[
"identities = Identity.objects.all()\nidentity_list = [(i.id, i.name) for i in identities]\nidentity_list.sort()\n# 154",
"_____no_output_____"
],
[
"hair_color_3 = {0: 'black', 1: 'white', 2: 'blond'}\nhair_color_5 = {0: 'black', 1: 'white', 2: 'blond', 3: 'brown', 4: 'gray'}\nhair_length = {0: 'long', 1: 'medium', 2: 'short', 3: 'bald'} ",
"_____no_output_____"
],
[
"identity_label = [id for id in identity_label if id not in identity_hair_dict]",
"_____no_output_____"
],
[
"# idx += 1\n# iid = identity_list[idx][0]\n# name = identity_list[idx][1]\n# iid = identity_label[idx]\n\n# print(name)\nprint(iid)\nresult = qs_to_result(\n FaceIdentity.objects \\\n .filter(identity__id=1365) \\\n .filter(probability__gt=0.8),\n limit=30)\nesper_widget(result)",
"_____no_output_____"
],
[
"'''\n{'black' : 0, 'white': 1, 'blond' : 2}, # hair_color_3\n{'black' : 0, 'white': 1, 'blond' : 2, 'brown' : 3, 'gray' : 4}, # hair_color_5\n{'long' : 0, 'medium' : 1, 'short' : 2, 'bald' : 3} # hair_length\n'''\n\n\nlabel = identity_hair_dict[iid] = (2,2,0)\nprint(hair_color_3[label[0]], hair_color_5[label[1]], hair_length[label[2]])\npickle.dump(identity_hair_dict, open('/app/data/identity_hair_dict.pkl', 'wb'))",
"_____no_output_____"
]
],
[
[
"# Prepare hairstyle images",
"_____no_output_____"
]
],
[
[
"faceIdentities = FaceIdentity.objects \\\n .filter(identity__name='melania trump') \\\n .filter(probability__gt=0.9) \\\n .select_related('face__frame__video')",
"_____no_output_____"
],
[
"faceIdentities_sampled = random.sample(list(faceIdentities), 1000)\nprint(\"Load %d face identities\" % len(faceIdentities_sampled))",
"_____no_output_____"
],
[
"identity_grouped = collect(list(faceIdentities_sampled), lambda identity: identity.face.frame.video.id)\nprint(\"Group into %d videos\" % len(identity_grouped))",
"_____no_output_____"
],
[
"face_dict = {}\nfor video_id, fis in identity_grouped.items():\n video = Video.objects.filter(id=video_id)[0]\n face_list = []\n for i in fis:\n face_id = i.face.id\n frame_id = i.face.frame.number\n identity_id = i.identity.id\n x1, y1, x2, y2 = i.face.bbox_x1, i.face.bbox_y1, i.face.bbox_x2, i.face.bbox_y2\n bbox = (x1, y1, x2, y2)\n face_list.append((frame_id, face_id, identity_id, bbox))\n face_list.sort()\n face_dict[video.path] = face_list\nprint(\"Preload face bbox done\")",
"_____no_output_____"
],
[
"if __name__ == \"__main__\":\n\n solve_parallel(face_dict, res_dict_path='/app/result/clothing/fina_dict.pkl', workers=10)",
"_____no_output_____"
]
],
[
[
"# prepare hairstyle manifest",
"_____no_output_____"
]
],
[
[
"img_list = os.listdir('/app/result/clothing/images/')\nlen(img_list)",
"_____no_output_____"
],
[
"group_by_identity = {}\nfor name in img_list:\n iid = int(name.split('_')[0])\n if iid not in group_by_identity:\n group_by_identity[iid] = []\n else:\n group_by_identity[iid].append(name)\nidentity_label = [id for id, img_list in group_by_identity.items() if len(img_list) > 10]\nidentity_label.sort()",
"_____no_output_____"
],
[
"identity_hair_dict = pickle.load(open('/app/data/identity_hair_dict.pkl', 'rb'))\nNUM_PER_ID = 1000\nhairstyle_manifest = []\nfor iid, img_list in group_by_identity.items():\n if len(img_list) > 10 and iid in identity_hair_dict:\n if len(img_list) < NUM_PER_ID:\n img_list_sample = img_list\n else:\n img_list_sample = random.sample(img_list, NUM_PER_ID)\n attrib = identity_hair_dict[iid]\n hairstyle_manifest += [(path, attrib) for path in img_list_sample]\nrandom.shuffle(hairstyle_manifest)\nlen(hairstyle_manifest)",
"_____no_output_____"
],
[
"pickle.dump(hairstyle_manifest, open('/app/result/clothing/hairstyle_manifest.pkl', 'wb'))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d08a4f81eec4d5572ed1f562cc6c1f2963b37406 | 160,238 | ipynb | Jupyter Notebook | 020_NIRMAL.ipynb | NirmalSilwal/Machine-Learning | 28af227c64ccb694071d9a3df123d5e64ab439e2 | [
"MIT"
] | 18 | 2020-04-05T08:29:42.000Z | 2021-11-15T10:36:15.000Z | 020_NIRMAL.ipynb | NirmalSilwal/Machine-Learning | 28af227c64ccb694071d9a3df123d5e64ab439e2 | [
"MIT"
] | null | null | null | 020_NIRMAL.ipynb | NirmalSilwal/Machine-Learning | 28af227c64ccb694071d9a3df123d5e64ab439e2 | [
"MIT"
] | 1 | 2020-09-29T15:12:56.000Z | 2020-09-29T15:12:56.000Z | 74.21862 | 45,552 | 0.744936 | [
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"### loading dataset",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv(\"student-data.csv\")",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"type(data)",
"_____no_output_____"
]
],
[
[
"### Exploratory data analysis",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"a = data.plot()",
"_____no_output_____"
],
[
"data.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 395 entries, 0 to 394\nData columns (total 31 columns):\nschool 395 non-null object\nsex 395 non-null object\nage 395 non-null int64\naddress 395 non-null object\nfamsize 395 non-null object\nPstatus 395 non-null object\nMedu 395 non-null int64\nFedu 395 non-null int64\nMjob 395 non-null object\nFjob 395 non-null object\nreason 395 non-null object\nguardian 395 non-null object\ntraveltime 395 non-null int64\nstudytime 395 non-null int64\nfailures 395 non-null int64\nschoolsup 395 non-null object\nfamsup 395 non-null object\npaid 395 non-null object\nactivities 395 non-null object\nnursery 395 non-null object\nhigher 395 non-null object\ninternet 395 non-null object\nromantic 395 non-null object\nfamrel 395 non-null int64\nfreetime 395 non-null int64\ngoout 395 non-null int64\nDalc 395 non-null int64\nWalc 395 non-null int64\nhealth 395 non-null int64\nabsences 395 non-null int64\npassed 395 non-null object\ndtypes: int64(13), object(18)\nmemory usage: 95.7+ KB\n"
],
[
"data.isnull().sum()",
"_____no_output_____"
],
[
"a = sns.heatmap(data.isnull(),cmap='Blues')",
"_____no_output_____"
],
[
"a = sns.heatmap(data.isnull(),cmap='Blues',yticklabels=False)",
"_____no_output_____"
]
],
[
[
"#### this indicates that we have no any null values in the dataset",
"_____no_output_____"
]
],
[
[
"a = sns.heatmap(data.isna(),yticklabels=False)",
"_____no_output_____"
]
],
[
[
"#### this heatmap indicates that we have no any 'NA' values in the dataset",
"_____no_output_____"
]
],
[
[
"sns.set(style='darkgrid')\nsns.countplot(data=data,x='reason')",
"_____no_output_____"
]
],
[
[
"This shows the counts of the various reasons given for choosing a school.\nA count plot can be thought of as a histogram across a categorical, instead of quantitative, variable.",
"_____no_output_____"
]
],
[
[
"data.head(7)",
"_____no_output_____"
]
],
[
[
"calculating total passed students",
"_____no_output_____"
]
],
[
[
"passed = data.loc[data.passed == 'yes']\npassed.shape",
"_____no_output_____"
],
[
"tot_passed=passed.shape[0]",
"_____no_output_____"
],
[
"print('total passed students is: {} '.format(tot_passed))",
"total passed students is: 265 \n"
]
],
[
[
"calculating total failed students",
"_____no_output_____"
]
],
[
[
"failed = data.loc[data.passed == 'no']\nprint('total failed students is: {}'.format(failed.shape[0]))",
"total failed students is: 130\n"
]
],
[
[
"### Feature Engineering",
"_____no_output_____"
]
],
[
[
"data.head()",
"_____no_output_____"
]
],
[
[
"To identify the feature and target variables, let's first do some feature engineering!",
"_____no_output_____"
]
],
[
[
"data.columns",
"_____no_output_____"
],
[
"data.columns[-1]",
"_____no_output_____"
]
],
[
[
"Here 'passed' is our target variable. Since in this system we need to develop the model that will predict the likelihood that a given student will pass, quantifying whether an intervention is necessary.",
"_____no_output_____"
]
],
[
[
"target = data.columns[-1] ",
"_____no_output_____"
],
[
"data.columns[:-1]",
"_____no_output_____"
],
[
"#initially taking all columns as our feature variables\n\nfeature = list(data.columns[:-1])",
"_____no_output_____"
],
[
"data[target].head()",
"_____no_output_____"
],
[
"data[feature].head()",
"_____no_output_____"
]
],
[
[
"Now taking feature and target data in seperate dataframe",
"_____no_output_____"
]
],
[
[
"featuredata = data[feature]\ntargetdata = data[target]",
"_____no_output_____"
]
],
[
[
"Now we need to convert several non-numeric columns like 'internet' into numerical form for the model to process ",
"_____no_output_____"
]
],
[
[
"def preprocess_features(X):\n output = pd.DataFrame(index = X.index)\n for col, col_data in X.iteritems():\n if col_data.dtype == object:\n col_data = col_data.replace(['yes', 'no'], [1, 0])\n if col_data.dtype == object:\n col_data = pd.get_dummies(col_data, prefix = col) \n \n output = output.join(col_data)\n\n return output",
"_____no_output_____"
],
[
"featuredata = preprocess_features(featuredata)",
"_____no_output_____"
],
[
"type(featuredata)",
"_____no_output_____"
],
[
"featuredata.head()",
"_____no_output_____"
],
[
"featuredata.drop(['address_R','sex_F'],axis=1,inplace=True)",
"_____no_output_____"
],
[
"featuredata.columns",
"_____no_output_____"
],
[
"featuredata.drop(['famsize_GT3','Pstatus_A',],axis=1,inplace=True)",
"_____no_output_____"
]
],
[
[
"### MODEL IMPLEMENTATION",
"_____no_output_____"
],
[
"## Decision tree",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import train_test_split\nmodel=DecisionTreeClassifier()",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(featuredata, targetdata, test_size=0.33, random_state=6)",
"_____no_output_____"
],
[
"model.fit(X_train,y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"predictions = model.predict(X_test)",
"_____no_output_____"
],
[
"accuracy_score(y_test,predictions)*100",
"_____no_output_____"
]
],
[
[
"## K-Nearest Neighbours",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"new_classifier = KNeighborsClassifier(n_neighbors=7)",
"_____no_output_____"
],
[
"new_classifier.fit(X_train,y_train)",
"_____no_output_____"
],
[
"predictions2 = new_classifier.predict(X_test)",
"_____no_output_____"
],
[
"accuracy_score(y_test,predictions2)*100",
"_____no_output_____"
]
],
[
[
"## SVM",
"_____no_output_____"
]
],
[
[
"from sklearn import svm\nclf = svm.SVC(random_state=6)",
"_____no_output_____"
],
[
"clf.fit(featuredata,targetdata) ",
"_____no_output_____"
],
[
"clf.score(featuredata,targetdata)",
"_____no_output_____"
],
[
"predictions3= clf.predict(X_test)",
"_____no_output_____"
],
[
"accuracy_score(y_test,predictions3)*100",
"_____no_output_____"
]
],
[
[
"## Model application areas",
"_____no_output_____"
],
[
"#### KNN",
"_____no_output_____"
],
[
"KNN: k-NN is often used in search applications where you are looking for “similar” items; that is, when your task is some form of “find items similar to this one”. The way you measure similarity is by creating a vector representation of the items, and then compare the vectors using an appropriate distance metric (like the Euclidean distance, for example).\n\nThe biggest use case of k-NN search might be Recommender Systems. If you know a user likes a particular item, then you can recommend similar items for them.\n\nKNN strength: effective for larger datasets, robust to noisy training data\n\nKNN weakness: need to determine value of k, computation cost is high.",
"_____no_output_____"
],
[
"#### Decision tree",
"_____no_output_____"
],
[
"Decision Tree: Can handle both numerical and categorical data. \n\nDecision tree strengths: Decision trees implicitly perform feature selection, require relatively little effort from users for data preparation, and are easy to interpret and explain to executives.\n\nDecision tree weaknesses: overfitting; not suited for continuous variables.",
"_____no_output_____"
],
[
"#### SVM",
"_____no_output_____"
],
[
"SVM: SVMs classify parts of an image as face or non-face and create a square boundary around the face (facial recognition).\nSVMs are also widely used to recognize handwritten characters (handwriting recognition).\n\nStrengths: SVMs can model non-linear decision boundaries, and there are many kernels to choose from. They are also fairly robust against overfitting, especially in high-dimensional space.\n\nWeaknesses: However, SVMs are memory intensive, trickier to tune due to the importance of picking the right kernel, and don't scale well to larger datasets.",
"_____no_output_____"
],
[
"## Choosing the best model",
"_____no_output_____"
],
[
"In this case, I will be using the SVM model to predict the outcomes, since it achieves 80.15% accuracy in our case.\n\nSVM is a supervised machine learning algorithm which can be used for classification or regression problems. \n\nIt uses a technique called the kernel trick to transform your data, and then, based on these transformations, it finds an optimal boundary between the possible outputs.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d08a54ba9b0447703d4d13d2bcd2f5f954ce13f8 | 25,566 | ipynb | Jupyter Notebook | docs/development_notebooks/2021-10-09-NCBI-taxonomy.ipynb | cjprybol/Mycelia | 4dbf30f22201da52d5084cef9d7019b5faa57875 | [
"MIT"
] | null | null | null | docs/development_notebooks/2021-10-09-NCBI-taxonomy.ipynb | cjprybol/Mycelia | 4dbf30f22201da52d5084cef9d7019b5faa57875 | [
"MIT"
] | 27 | 2021-06-24T17:53:36.000Z | 2022-03-05T19:26:01.000Z | docs/development_notebooks/2021-10-09-NCBI-taxonomy.ipynb | cjprybol/Mycelia | 4dbf30f22201da52d5084cef9d7019b5faa57875 | [
"MIT"
] | 1 | 2022-01-08T14:45:20.000Z | 2022-01-08T14:45:20.000Z | 35.557719 | 2,067 | 0.495541 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d08a5c8cffae3e83142631153c1c195dd1e700ec | 399,216 | ipynb | Jupyter Notebook | codici/kmeans.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | codici/kmeans.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | codici/kmeans.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | 1,061.744681 | 101,784 | 0.953802 | [
[
[
"### k-means clustering",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"_____no_output_____"
],
[
"import scipy as sc\nimport scipy.stats as stats\nfrom scipy.spatial.distance import euclidean\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\n\nplt.style.use('fivethirtyeight')\n\nplt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.serif'] = 'Ubuntu'\nplt.rcParams['font.monospace'] = 'Ubuntu Mono'\nplt.rcParams['font.size'] = 10\nplt.rcParams['axes.labelsize'] = 10\nplt.rcParams['axes.labelweight'] = 'bold'\nplt.rcParams['axes.titlesize'] = 10\nplt.rcParams['xtick.labelsize'] = 8\nplt.rcParams['ytick.labelsize'] = 8\nplt.rcParams['legend.fontsize'] = 10\nplt.rcParams['figure.titlesize'] = 12\nplt.rcParams['image.cmap'] = 'jet'\nplt.rcParams['image.interpolation'] = 'none'\nplt.rcParams['figure.figsize'] = (16, 8)\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['lines.markersize'] = 8\n\ncolors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', \n'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', \n'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']\n\ncmap = mcolors.LinearSegmentedColormap.from_list(\"\", [\"#82cafc\", \"#069af3\", \"#0485d1\", colors[0], colors[8]])",
"_____no_output_____"
],
[
"rv0 = stats.multivariate_normal(mean=[3, 3], cov=[[.3, .3],[.3,.4]])\nrv1 = stats.multivariate_normal(mean=[1.5, 1], cov=[[.5, -.5],[-.5,.7]])\nrv2 = stats.multivariate_normal(mean=[0, 1.2], cov=[[.15, .1],[.1,.3]])\nrv3 = stats.multivariate_normal(mean=[3.2, 1], cov=[[.2, 0],[0,.1]])\n\nz0 = rv0.rvs(size=300)\nz1 = rv1.rvs(size=300)\nz2 = rv2.rvs(size=300)\nz3 = rv3.rvs(size=300)\n\nz=np.concatenate((z0, z1, z2, z3), axis=0)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nax.scatter(z0[:,0], z0[:,1], s=40, color='C0', alpha =.8, edgecolors='k', label=r'$C_0$')\nax.scatter(z1[:,0], z1[:,1], s=40, color='C1', alpha =.8, edgecolors='k', label=r'$C_1$')\nax.scatter(z2[:,0], z2[:,1], s=40, color='C2', alpha =.8, edgecolors='k', label=r'$C_2$')\nax.scatter(z3[:,0], z3[:,1], s=40, color='C3', alpha =.8, edgecolors='k', label=r'$C_3$')\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"cc='xkcd:turquoise'\nfig = plt.figure(figsize=(16,8))\nax = fig.gca()\nplt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.8)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xlabel('$x_1$', fontsize=12)\nplt.title('Data set', fontsize=12)\nplt.show()",
"_____no_output_____"
],
[
"# Number of clusters\nnc = 3\n# X coordinates of random centroids\nC_x = np.random.sample(nc)*(np.max(z[:,0])-np.min(z[:,0]))*.7+np.min(z[:,0])*.7\n# Y coordinates of random centroids\nC_y = np.random.sample(nc)*(np.max(z[:,1])-np.min(z[:,1]))*.7+np.min(z[:,0])*.7\nC = np.array(list(zip(C_x, C_y)), dtype=np.float32)",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(16,8))\nax = fig.gca()\nplt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.5)\nfor i in range(nc):\n plt.scatter(C_x[i], C_y[i], marker='*', s=500, c=colors[i], edgecolors='k', linewidth=1.5)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xlabel('$x_1$', fontsize=12)\nplt.title('Data set', fontsize=12)\nplt.show()",
"_____no_output_____"
],
[
"C_list = []\nerrors = []\n# Cluster Labels(0, 1, 2, 3)\nclusters = np.zeros(z.shape[0])\nC_list.append(C)\n# Error func. - Distance between new centroids and old centroids\nerror = np.linalg.norm([euclidean(C[i,:], [0,0]) for i in range(nc)])\nerrors.append(error)\nprint(\"Error: {0:3.5f}\".format(error))",
"Error: 4.12334\n"
],
[
"for l in range(10):\n # Assigning each value to its closest cluster\n for i in range(z.shape[0]):\n distances = [euclidean(z[i,:], C[j,:]) for j in range(nc)]\n cluster = np.argmin(distances)\n clusters[i] = cluster\n # Storing the old centroid values\n C = np.zeros([nc,2])\n # Finding the new centroids by taking the average value\n for i in range(nc):\n points = [z[j,:] for j in range(z.shape[0]) if clusters[j] == i]\n C[i] = np.mean(points, axis=0)\n error = np.linalg.norm([euclidean(C[i,:], C_list[-1][i,:]) for i in range(nc)])\n errors.append(error)\n C_list.append(C)",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(16,8))\nax = fig.gca()\nfor cl in range(nc):\n z1 = z[clusters==cl]\n plt.scatter(z1[:,0],z1[:,1], c=colors[cl], marker='o', s=40, edgecolors='k', alpha=.7)\n\nfor i in range(nc):\n plt.scatter(C[i,0], C[i,1], marker='*', s=400, c=colors[i], edgecolors='k', linewidth=1.5)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xlabel('$x_1$', fontsize=12)\nplt.title('Data set', fontsize=12)\nplt.show()",
"_____no_output_____"
],
[
"C_list",
"_____no_output_____"
],
[
"print(\"Error: {0:3.5f}\".format(error))",
"Error: 0.11360\n"
],
[
"errors",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08a78b06c5cfbda32698e313c0c6e136d972196 | 6,122 | ipynb | Jupyter Notebook | colabs/smartsheet_to_bigquery.ipynb | fivestones-apac/starthinker | e29cc2d5c9806d8abfcc06a2d01cf97215fdf926 | [
"Apache-2.0"
] | null | null | null | colabs/smartsheet_to_bigquery.ipynb | fivestones-apac/starthinker | e29cc2d5c9806d8abfcc06a2d01cf97215fdf926 | [
"Apache-2.0"
] | null | null | null | colabs/smartsheet_to_bigquery.ipynb | fivestones-apac/starthinker | e29cc2d5c9806d8abfcc06a2d01cf97215fdf926 | [
"Apache-2.0"
] | null | null | null | 35.387283 | 230 | 0.520091 | [
[
[
"#1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"!pip install git+https://github.com/google/starthinker\n",
"_____no_output_____"
]
],
[
[
"#2. Get Cloud Project ID\nTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"CLOUD_PROJECT = 'PASTE PROJECT ID HERE'\n\nprint(\"Cloud Project Set To: %s\" % CLOUD_PROJECT)\n",
"_____no_output_____"
]
],
[
[
"#3. Get Client Credentials\nTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'\n\nprint(\"Client Credentials Set To: %s\" % CLIENT_CREDENTIALS)\n",
"_____no_output_____"
]
],
[
[
"#4. Enter SmartSheet Sheet To BigQuery Parameters\nMove sheet data into a BigQuery table.\n 1. Specify <a href='https://smartsheet-platform.github.io/api-docs/' target='_blank'>SmartSheet</a> token.\n 1. Locate the ID of a sheet by viewing its properties.\n 1. Provide a BigQuery dataset ( must exist ) and table to write the data into.\n 1. StarThinker will automatically map the correct schema.\nModify the values below for your use case, can be done multiple times, then click play.\n",
"_____no_output_____"
]
],
[
[
"FIELDS = {\n 'auth_read': 'user', # Credentials used for reading data.\n 'auth_write': 'service', # Credentials used for writing data.\n 'token': '', # Retrieve from SmartSheet account settings.\n 'sheet': '', # Retrieve from sheet properties.\n 'dataset': '', # Existing BigQuery dataset.\n 'table': '', # Table to create from this report.\n 'schema': '', # Schema provided in JSON list format or leave empty to auto detect.\n 'link': True, # Add a link to each row as the first column.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"_____no_output_____"
]
],
[
[
"#5. Execute SmartSheet Sheet To BigQuery\nThis does NOT need to be modified unless you are changing the recipe, click play.\n",
"_____no_output_____"
]
],
[
[
"from starthinker.util.project import project\nfrom starthinker.script.parse import json_set_fields\n\nUSER_CREDENTIALS = '/content/user.json'\n\nTASKS = [\n {\n 'smartsheet': {\n 'auth': 'user',\n 'token': {'field': {'name': 'token','kind': 'string','order': 2,'default': '','description': 'Retrieve from SmartSheet account settings.'}},\n 'sheet': {'field': {'name': 'sheet','kind': 'string','order': 3,'description': 'Retrieve from sheet properties.'}},\n 'link': {'field': {'name': 'link','kind': 'boolean','order': 7,'default': True,'description': 'Add a link to each row as the first column.'}},\n 'out': {\n 'bigquery': {\n 'auth': 'user',\n 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 4,'default': '','description': 'Existing BigQuery dataset.'}},\n 'table': {'field': {'name': 'table','kind': 'string','order': 5,'default': '','description': 'Table to create from this report.'}},\n 'schema': {'field': {'name': 'schema','kind': 'json','order': 6,'description': 'Schema provided in JSON list format or leave empty to auto detect.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nproject.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)\nproject.execute(_force=True)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08a7b1297b59ebbcc4392872f93595f8eba5d66 | 68,722 | ipynb | Jupyter Notebook | MI203-td2_tree_and_forest.ipynb | NataliaDiaz/colab | 89a9e17bcbf520759997a583577aa31045b881d3 | [
"MIT"
] | 4 | 2018-11-11T14:51:01.000Z | 2021-11-18T19:18:24.000Z | MI203-td2_tree_and_forest.ipynb | NataliaDiaz/colab | 89a9e17bcbf520759997a583577aa31045b881d3 | [
"MIT"
] | null | null | null | MI203-td2_tree_and_forest.ipynb | NataliaDiaz/colab | 89a9e17bcbf520759997a583577aa31045b881d3 | [
"MIT"
] | 1 | 2019-05-15T07:01:10.000Z | 2019-05-15T07:01:10.000Z | 101.961424 | 43,344 | 0.570109 | [
[
[
"<a href=\"https://colab.research.google.com/github/NataliaDiaz/colab/blob/master/MI203-td2_tree_and_forest.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# TD: predicting the 2016 United States vote with decision trees and ensemble methods\n\nToday's session deals with predicting the 2016 vote in the United States. Specifically, census data are provided with various pieces of information for each county across the United States. The goal is to build predictors of their political color (Republican or Democrat) from these data.",
"_____no_output_____"
],
[
"Run the following commands to load the environment.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom pylab import *\nimport numpy as np\nimport os\nimport random\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# Accessing the data\n\n* They are available here: https://github.com/stepherbin/teaching/tree/master/ENSTA/TD2\n\n* Upload the combined_data.csv file to your drive, then mount it from colab\n",
"_____no_output_____"
]
],
[
[
"USE_COLAB = True\nUPLOAD_OUTPUTS = False\nif USE_COLAB:\n # mount the google drive\n from google.colab import drive\n drive.mount('/content/drive', force_remount=True)\n # download data on GoogleDrive\n data_dir = \"/content/drive/My Drive/teaching/ENSTA/TD_tree/\"\nelse:\n data_dir = \"data/\"",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"import pandas as pd\n\ncensus_data = pd.read_csv( os.path.join(data_dir, 'combined_data.csv') )",
"_____no_output_____"
]
],
[
[
"# Preliminary data analysis\n\nThe data is organized in fields:\n* fips = 5-digit county code; the first one or two digits identify the state.\n* votes = number of voters\n* etc.\n\nLook at the structure, quantity and nature of the data.\n\nWhere is the information needed to build the training and test sets?\n\nWhere are the classes to predict?\n\nVisualize a few distributions.\n\nThe Python data format is described here:\nhttps://pandas.pydata.org/pandas-docs/stable/reference/frame.html\n",
"_____no_output_____"
]
],
[
[
"# Examples of ways to access the data characteristics\nprint(census_data.shape)\nprint(census_data.columns.values)\nprint(census_data['fips'])\nprint(census_data.head(3))\n\niattr = 10\nattrname = census_data.columns[iattr]\nprint(\"Mean of {} is {:.1f}\".format(attrname, np.array(census_data[attrname]).mean()))\n\n#########################\n## PUT YOUR CODE HERE\n#########################\nprint(\"Number of data points = {}\".format(7878912123)) # to be modified\nprint(\"Number of useful attributes = {}\".format(4564564654)) # to be modified\n\n#hist....\n",
"_____no_output_____"
]
],
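As a hedged illustration of the kind of answers expected above, here is a self-contained sketch. The miniature DataFrame is a hypothetical stand-in for `census_data`, invented only to keep the snippet runnable:

```python
import pandas as pd

# Hypothetical miniature stand-in for census_data (the real file has ~3000 counties).
census_data = pd.DataFrame({
    'fips': [1001, 1003, 17031],
    'votes': [10000, 20000, 1500000],
    'Unemployment_rate_2015': [5.2, 4.8, 6.1],
    'Democrat': [0, 0, 1],
})

n_rows, n_cols = census_data.shape
# 'fips' is an identifier and 'Democrat' is the label, so neither is a predictive attribute.
n_useful = n_cols - 2
print("Number of data points = {}".format(n_rows))          # → 3 on this toy frame
print("Number of useful attributes = {}".format(n_useful))  # → 2 on this toy frame

# One way to look at a distribution (a histogram works the same way on the full data).
print(census_data['Unemployment_rate_2015'].describe())
```

On the real `census_data` the same two lines report the full dataset size and attribute count.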
[
[
"The class to predict ('Democrat') is described by only a single binary attribute.\nCompute the distribution of political colors (what, a priori, is the probability that a county is Democrat vs. Republican)?",
"_____no_output_____"
]
],
[
[
"#########################\n## PUT YOUR CODE HERE\n#########################\n\nprint(\"The probability that a county is Democrat is {:.2f}%\".format(100*proba_dem))\n",
"The probability that a county is Democrat is 15.45%\n"
]
],
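One possible way to obtain `proba_dem` (a sketch; it assumes only that `census_data` has a binary `Democrat` column, so a toy frame is used here):

```python
import pandas as pd

# Hypothetical toy data: 3 Republican counties, 1 Democrat county.
census_data = pd.DataFrame({'Democrat': [0, 0, 0, 1]})

# The mean of a 0/1 column is exactly the fraction of Democrat counties.
proba_dem = census_data['Democrat'].mean()
print("The probability that a county is Democrat is {:.2f}%".format(100 * proba_dem))  # → 25.00% here
```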
[
[
"# Setting up the learning pipeline\n\nWe will now prepare the training and test sets.\n\nTo avoid data-format problems, we choose a list of useful attributes in the list \"feature_cols\" below.\n\nThe test set will be made of the counties of a single state.\n\nInfo: https://scikit-learn.org/stable/model_selection.html\n\nList of states and their 2-digit FIPS codes:\nhttps://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code\n\n",
"_____no_output_____"
]
],
[
[
"## Sous ensembles d'attributs informatifs pour la suite\n\nfeature_cols = ['BLACK_FEMALE_rate', \n 'BLACK_MALE_rate',\n 'Percent of adults with a bachelor\\'s degree or higher, 2011-2015',\n 'ASIAN_MALE_rate',\n 'ASIAN_FEMALE_rate',\n '25-29_rate',\n 'age_total_pop',\n '20-24_rate',\n 'Deep_Pov_All',\n '30-34_rate',\n 'Density per square mile of land area - Population',\n 'Density per square mile of land area - Housing units',\n 'Unemployment_rate_2015',\n 'Deep_Pov_Children',\n 'PovertyAllAgesPct2014',\n 'TOT_FEMALE_rate',\n 'PerCapitaInc',\n 'MULTI_FEMALE_rate',\n '35-39_rate',\n 'MULTI_MALE_rate',\n 'Percent of adults completing some college or associate\\'s degree, 2011-2015',\n '60-64_rate',\n '55-59_rate',\n '65-69_rate',\n 'TOT_MALE_rate',\n '85+_rate',\n '70-74_rate',\n '80-84_rate',\n '75-79_rate',\n 'Percent of adults with a high school diploma only, 2011-2015',\n 'WHITE_FEMALE_rate',\n 'WHITE_MALE_rate',\n 'Amish',\n 'Buddhist',\n 'Catholic',\n 'Christian Generic',\n 'Eastern Orthodox',\n 'Hindu',\n 'Jewish',\n 'Mainline Christian',\n 'Mormon',\n 'Muslim',\n 'Non-Catholic Christian',\n 'Other',\n 'Other Christian',\n 'Other Misc',\n 'Pentecostal / Charismatic',\n 'Protestant Denomination',\n 'Zoroastrian']\n\nfiltered_cols = ['Percent of adults with a bachelor\\'s degree or higher, 2011-2015',\n 'Percent of adults completing some college or associate\\'s degree, 2011-2015',\n 'Percent of adults with a high school diploma only, 2011-2015',\n 'Density per square mile of land area - Population',\n 'Density per square mile of land area - Housing units',\n 'WHITE_FEMALE_rate',\n 'WHITE_MALE_rate',\n 'BLACK_FEMALE_rate',\n 'BLACK_MALE_rate',\n 'ASIAN_FEMALE_rate',\n 'Catholic',\n 'Christian Generic',\n 'Jewish',\n '70-74_rate',\n 'D',\n 'R']",
"_____no_output_____"
],
[
"## 1-state test split\n\ndef county_data(census_data, fips_code=17):\n #fips_code 48=Texas, 34=New Jersey, 31=Nebraska, 17=Illinois, 06=California, 36=New York\n mask = census_data['fips'].between(fips_code*1000, fips_code*1000 + 999)\n census_data_train = census_data[~mask]\n census_data_test = census_data[mask]\n\n\n XTrain = census_data_train[feature_cols]\n yTrain = census_data_train['Democrat']\n XTest = census_data_test[feature_cols]\n yTest = census_data_test['Democrat']\n\n return XTrain, yTrain, XTest, yTest\n\nSTATE_FIPS_CODE = 17\nX_train, y_train, X_test, y_test = county_data(census_data, STATE_FIPS_CODE)\n\n#print(X_train.head(2))\n#print(y_test.head(2))",
" BLACK_FEMALE_rate BLACK_MALE_rate ... Protestant Denomination Zoroastrian\n0 0.067586 0.062079 ... 0 0\n1 0.067586 0.062079 ... 0 0\n\n[2 rows x 49 columns]\n598 0\n599 0\nName: Democrat, dtype: int64\n"
]
],
[
[
"\n# Training a decision tree\n\nWe will use the scikit-learn library\n\n* Build the tree on the training data\n* Predict the vote on the test counties\n* Compute the error and the confusion matrix\n\nVary some parameters (max depth, purity, criterion...) and visualize their influence.\n\n\nInfo: https://scikit-learn.org/stable/modules/tree.html\n\nInfo: https://scikit-learn.org/stable/modules/model_evaluation.html\n",
"_____no_output_____"
]
],
[
[
"\nfrom sklearn import tree\n\n#########################\n## PUT YOUR CODE HERE\n#########################\n\n\n",
"_____no_output_____"
]
],
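A minimal sketch of what the training step can look like with the standard `DecisionTreeClassifier` API (the synthetic 2-feature data below is only a stand-in for `X_train`/`y_train`):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the county features and labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)

# max_depth, criterion and min_impurity_decrease are typical knobs to vary.
clf = DecisionTreeClassifier(max_depth=5, criterion='gini')
clf.fit(X_train, y_train)

y_pred = clf.predict(X_train)
print("Training accuracy:", (y_pred == y_train).mean())
```

Swapping in the real `X_train`/`y_train` from `county_data` and predicting on `X_test` gives the required exercise result.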
[
[
"The following instructions visualize the tree.\nInterpret the content of the representation.",
"_____no_output_____"
]
],
[
[
"import graphviz\n\ndot_data = tree.export_graphviz(clf, out_file=None) \ngraph = graphviz.Source(dot_data) \n\ndot_data = tree.export_graphviz(clf, out_file=None, \n feature_names=X_train.columns.values, \n class_names=[\"R\",\"D\"], \n filled=True, rounded=True, \n special_characters=True) \ngraph = graphviz.Source(dot_data) \ngraph",
"_____no_output_____"
],
[
"# Prediction and evaluation\n\n#########################\n## PUT YOUR CODE HERE\n#########################\n\n",
"Predictions per county in state #34 are [0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0\n 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0\n 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0\n 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1]\nVotes per county in state #34 are [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]\n0.9251968503937008 0.7949910262685593\n[[218 9]\n [ 10 17]]\n"
]
],
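The evaluation itself reduces to a few `sklearn.metrics` calls; here is a sketch on hypothetical predictions (the numbers are invented, not those of the notebook run above):

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix

# Hypothetical ground truth and predictions for six counties.
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)           # 4/6 correct
bal = balanced_accuracy_score(y_true, y_pred)  # mean of per-class recalls
cm = confusion_matrix(y_true, y_pred)          # rows = true class, columns = predicted class
print(acc, bal)
print(cm)  # [[2 1]
           #  [1 2]]
```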
[
[
"\n---\n\n# Bagging\n\nThe goal of this part is to build a bagging approach **by hand**.\n\nThe principle of the approach is to:\n\n* Train and collect several trees on random samples of the training data\n* Aggregate the predictions by voting\n* Evaluate the aggregated predictions\n* Compare with the individual trees and with the previous result\n\n\nUse the scikit-learn functions for building training/test sets https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html to generate the sampled subsets.\n\n**After the lecture, compare** with the scikit-learn functions: https://scikit-learn.org/stable/modules/ensemble.html\n\nNumpy tips: [np.arange](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.arange.html), [numpy.sum](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.sum.html), [numpy.mean](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.mean.html), [numpy.where](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.where.html)",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\n# Training data: X_train, y_train, idx_train\n# Test data: X_test, y_test, idx_test\n# The steps to build the predictor (training) are the following:\n# - Build the data subsets\n# - Train a tree\n# - Add the tree to the forest\n#\n# For the test\n\n\ndef learn_forest(XTrain, yTrain, nb_trees, depth=15):\n\n#########################\n## COMPLETE THE CODE\n#########################\n\n forest = []\n singleperf=[]\n\n for ss in range(nb_trees):\n # bagging for subset\n\n # single tree training\n \n # grow the forest\n \n # single tree evaluation\n\n \n return forest,singleperf\n\n",
"_____no_output_____"
],
[
"def predict_forest(forest, XTest, yTest = None):\n \n singleperf=[]\n all_preds=[]\n nb_trees = len(forest)\n#########################\n## PUT YOUR CODE HERE\n#########################\n\n if (yTest is not None):\n return final_pred,singleperf\n else:\n return final_pred\n",
"_____no_output_____"
],
[
"from sklearn import metrics as perf\n\n#########################\n## PUT YOUR CODE HERE\n#########################\n\n\nX_train, y_train, X_test, y_test = county_data(census_data, 6)\n\nF, singleperf = learn_forest(X_train, y_train, 20, depth=15)\npred, singleperftest = predict_forest(F, X_test, y_test)\nacc = perf.balanced_accuracy_score(y_test, pred)\nprint(\"Balanced accuracy = {:.2f}%\".format(100*acc))\nprint(mean(singleperftest))\n#print(singleperftest)\n#print(singleperf)",
"Balanced accuracy = 76.32%\n0.6837740384615385\n"
]
]
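For comparison after solving the exercise, here is a compact hand-rolled bagging loop on synthetic data. It only sketches the intended structure of `learn_forest`/`predict_forest`, under the assumption that the trees vote by majority; the data and parameters are invented:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-feature classification problem.
rng = np.random.default_rng(0)
X = rng.random((300, 2))
y = (X[:, 0] > 0.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Learn: one tree per random subsample of the training set.
forest = []
for ss in range(10):
    X_sub, _, y_sub, _ = train_test_split(X_train, y_train, train_size=0.7, random_state=ss)
    forest.append(DecisionTreeClassifier(max_depth=5).fit(X_sub, y_sub))

# Predict: aggregate the individual tree votes with a majority rule.
all_preds = np.array([t.predict(X_test) for t in forest])
final_pred = (all_preds.mean(axis=0) >= 0.5).astype(int)
print("Bagged accuracy:", (final_pred == y_test).mean())
```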
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d08a871eaef5b8d60a9a94500f32a5f27d05c814 | 42,095 | ipynb | Jupyter Notebook | sheet_08/sheet_08_machine-learning_solution.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | sheet_08/sheet_08_machine-learning_solution.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | sheet_08/sheet_08_machine-learning_solution.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | 34.846854 | 794 | 0.575935 | [
[
[
"Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack",
"_____no_output_____"
],
[
"# Exercise Sheet 08",
"_____no_output_____"
],
[
"## Introduction\n\nThis week's sheet should be solved and handed in before the end of **Sunday, June 3, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your groups' designated tutor or whomever of us you run into first. Please upload your results to your group's Stud.IP folder.",
"_____no_output_____"
],
[
"## Assignment 0: Math recap (Conditional Probability) [2 Bonus Points]\n\nThis exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem to answer these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know.",
"_____no_output_____"
],
[
"**a)** Explain the idea of conditional probability. How is it defined?",
"_____no_output_____"
],
[
"Conditional probability is the probability that an event A occurs, given that another event B has occurred.\nFor example:\nThe probability of rain is $$P(weather=\"rain\") = 0.3$$. But if you observe that the street is wet, you get the conditional probability $$P(weather= \"rain\" |~ street=\"wet\") = 0.95$$\nThe definition is:\n$$ P(A|B) = \\frac{P(A,B)}{P(B)} $$",
"_____no_output_____"
],
[
"**b)** What is Bayes' theorem? What are its applications?",
"_____no_output_____"
],
[
"Bayes Theorem states:\n$$ P(B|A) = \\frac{P(A|B) \\cdot P(B)}{P(A)} $$\n\nThe most important application is in reasoning backwards from event to cause (from data to parameters of your distribution):\n\n$$ P(\\Theta|Data) = \\frac{P(Data|\\Theta)P(\\Theta)}{P(Data)}$$",
"_____no_output_____"
],
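A short worked example with illustrative (invented) numbers, reusing the rain/wet-street scenario from a): assume $P(wet \mid rain) = 0.9$, $P(rain) = 0.3$ and $P(wet) = 0.4$. Then Bayes' theorem gives

```latex
P(rain \mid wet) = \frac{P(wet \mid rain)\,P(rain)}{P(wet)} = \frac{0.9 \cdot 0.3}{0.4} = 0.675
```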
[
"**c)** What does the law of total probability state? ",
"_____no_output_____"
],
[
"The law of total probability states that the probability of an event occurring is the same as the sum of the probabilities of this event occurring together with all possible states of another event:\n$$P(A) = \\sum_b P(A,B=b) = \\sum_b P(A|B=b) P(B=b)$$",
"_____no_output_____"
],
[
"## Assignment 1: Multilayer Perceptron (MLP) [10 Points]\n\nLast week you implemented a simple perceptron. We discussed that one can use multiple perceptrons to build a network. This week you will build your own MLP. Again the following code cells are just a guideline. If you feel like it, just follow the algorithm steps and implement the MLP yourself.",
"_____no_output_____"
],
[
"### Implementation\n\nIn the following you will be guided through implementing an MLP step by step. Instead of sticking to this guide, you are free to take a complete custom approach instead if you wish.\n\nWe will take a bottom-up approach: Starting from an individual **perceptron** (aka neuron), we will derive a **layer of perceptrons** and end up with a **multilayer perceptron** (aka neural network). Each step will be implemented as its own python *class*. Such a class defines a type of element which can be instantiated multiple times. You can think of the relation between such instances and their designated classes as individuals of a specific population (e.g. Bernard and Bianca are both individuals of the population mice). Class definitions contain methods, which can be used to manipulate instance of that class or to make it perform specific actions — again, taking the population reference, each mouse of the mice population would for example have the method `eat_cheese()`.\n\nTo guide you along, all required classes and functions are outlined in valid python code with extensive comments. You just need to fill in the gaps. For each method the [docstring](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring) (the big comment contained by triple quotes at the beginning of the method) describes the arguments that each specific method accepts (`Args`) and the values it is expected to return (`Returns`).",
"_____no_output_____"
],
[
"### Perceptron\nSimilar to last week you here need to implement a perceptron. But instead of directly applying it, we will define a class which is reusable to instantiate a theoretically infinite amount of individual perceptrons. We will need the following three functionalities:\n\n#### Weight initialization\n\nThe weights are initialized by sampling values from a standard normal distribution. There are as many weights as there are values in the input vector and an additional one for the perceptron's bias.\n\n#### Forward-Propagation / Activation\n\nCalculate the weighted sums of a neuron's inputs and apply its activation function $\\sigma$. The output vector $o$ of perceptron $j$ of layer $k$ given an input $x$ (the output of the previous layer) in a neural network is given by the following formula. Note: $N$ gives the number of values of a given vector, $w_{j,0}(k)$ specifies the bias of perceptron $j$ in layer $k$ and $w_{j,1...N(x)}(k)$ the other weights of perceptron $j$ in layer $k$.\n\n$$o_{k,j}(x) = \\sigma\\left(w_{j,0}(k)+\\sum\\limits_{i=1}^{N(x)} x_i w_{j,i}(k)\\right)$$\n\nThink of the weights $w(k)$ as a matrix being located in-between layer $k$ and the layer located *to its left* in the network. So values flowing from layer $k-1$ to layer $k$ are weighted by the values of $w(k)$. As activation function we will use the sigmoid function because of its nice derivative (needed later):\n\n$$\\begin{align*}\n\\sigma(x) &= \\frac{1}{1 + \\exp{(-x)}}\\\\\n\\frac{d\\sigma}{dx}(x) &= \\sigma(x) \\cdot (1 - \\sigma(x))\n\\end{align*}$$\n\n#### Back-Propagation / Adaptation\nIn order to learn something the perceptron needs to slowly adjust its weights.\nEach weight $w_{j,i}$ in layer $k$ is adjusted by a value $\\Delta w_{j,i}$ given a learning rate $\\epsilon$, the previous layer's output (or, for the first hidden layer, the network's input) $o_{k-1,i}(x)$ and the layer's error signals $\\delta(k)$ (which will be calculated by the MultilayerPerceptron):\n\n$$\\Delta w_{j,i}(k) = \\epsilon\\, \\delta_j(k) o_{k-1,i}(x)$$",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# Activation function σ.\n# We use scipy's builtin because it fixes some NaN problems for us.\n# sigmoid = lambda x: 1 / (1 + np.exp(-x))\nfrom scipy.special import expit as sigmoid\n\n\nclass Perceptron:\n \"\"\"Single neuron handling its own weights and bias.\"\"\"\n\n def __init__(self, dim_in, act_func=sigmoid):\n \"\"\"Initialize a new neuron with its weights and bias.\n\n Args:\n dim_in (int): Dimensionality of the data coming into this perceptron. \n In a network of perceptrons this basically represents the \n number of neurons in the layer before this neuron's layer. \n Used for generating the perceptron's weights vector, which \n not only includes one weight per input but also an additional \n bias weight.\n act_func (function): Function to apply on activation.\n \"\"\"\n self.act_func = act_func\n # Set self.weights\n ### BEGIN SOLUTION \n self.weights = np.random.normal(size=dim_in + 1)\n ### END SOLUTION\n\n def activate(self, x):\n \"\"\"Activate this neuron with a specific input.\n\n Calculate the weighted sum of inputs and apply the activation function.\n\n Args:\n x (ndarray): Vector of input values.\n\n Returns:\n float: A real number representing the perceptron's activation after \n calculating the weighted sum of inputs and applying the \n perceptron's activation function.\n \"\"\"\n # Return the activation value\n ### BEGIN SOLUTION \n return self.act_func(self.weights @ np.append(1, x))\n ### END SOLUTION\n\n def adapt(self, x, delta, rate=0.03):\n \"\"\"Adapt this neuron's weights by a specific delta.\n\n Args:\n x (ndarray): Vector of input values.\n delta (float): Weight adaptation delta value.\n rate (float): Learning rate.\n \"\"\"\n # Adapt self.weights according to the update rule\n ### BEGIN SOLUTION \n self.weights += rate * delta * np.append(1, x)\n ### END SOLUTION\n\n\n_p = Perceptron(2)\nassert _p.weights.size == 3, \"Should have a weight per input and a bias.\"\nassert isinstance(_p.activate([2, 1]), float), \"Should activate as scalar.\"\nassert -1 <= _p.activate([100, 100]) <= 1, \"Should activate using sigmoid.\"\n_p.weights = np.array([.5, .5, .5])\n_p.adapt(np.array([2, 3]), np.array(.5))\nassert np.allclose(_p.weights, [0.515, 0.53, 0.545]), \\\n \"Should update weights correctly.\"",
"_____no_output_____"
]
],
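A quick numerical sanity check of the derivative identity $\frac{d\sigma}{dx} = \sigma(1-\sigma)$ that the backpropagation step relies on (plain NumPy/SciPy, independent of the classes above):

```python
import numpy as np
from scipy.special import expit as sigmoid

x = np.linspace(-5, 5, 101)
h = 1e-6

# Central finite-difference approximation of dσ/dx ...
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
# ... versus the closed form σ(x) * (1 - σ(x)).
analytic = sigmoid(x) * (1 - sigmoid(x))

print(np.max(np.abs(numeric - analytic)))  # tiny, well below 1e-8
```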
[
[
"### PerceptronLayer\nA `PerceptronLayer` is a combination of multiple `Perceptron` instances. It mainly is concerened with passing input and delta values to its individual neurons. There is no math to be done here!\n\n#### Initialization\n\nWhen initializing a `PerceptronLayer` (like this: `layer = PerceptronLayer(5, 3)`), the `__init__` function is called. It creates a list of `Perceptron`s: For each output value there must be one perceptron. Each of those perceptrons receives the same inputs and the same activation function as the perceptron layer.\n\n#### Activation\n\nDuring the activation step, the perceptron layer activates each of its perceptrons. These values will not only be needed for forward propagation but will also be needed for implementing backpropagation in the `MultilayerPerceptron` (coming up next).\n\n#### Adaptation\n\nTo update its perceptrons, the perceptron layer adapts each one with the corresponding delta. For this purpose, the MLP passes a list of input values and a list of deltas to the adaptation function. The inputs are passed to *all* perceptrons. The list of deltas is exactly as long as the list of perceptrons: The first delta is for the first perceptron, the second for the second, etc. The delta values themselves will be computed by the MLP.",
"_____no_output_____"
]
],
[
[
"class PerceptronLayer:\n \"\"\"Layer of multiple neurons.\n \n Attributes:\n perceptrons (list): List of perceptron instances in the layer.\n \"\"\"\n def __init__(self, dim_in, dim_out, act_func=sigmoid):\n \"\"\"Initialize the layer as a list of individual neurons.\n\n A layer contains as many neurons as it has outputs, each\n neuron has as many input weights (+ bias) as the layer has inputs.\n\n Args:\n dim_in (int): Dimensionality of the expected input values,\n also the size of the previous layer of a neural network.\n dim_out (int): Dimensionality of the output, also the requested \n amount of perceptrons in this layer and the input dimension of the\n next layer.\n act_func (function): Activation function to use in each perceptron of\n this layer.\n \"\"\"\n # Set self.perceptrons to a list of Perceptrons\n ### BEGIN SOLUTION\n self.perceptrons = [Perceptron(dim_in, act_func)\n for _ in range(dim_out)]\n ### END SOLUTION\n\n def activate(self, x):\n \"\"\"Activate this layer by activating each individual neuron.\n\n Args:\n x (ndarray): Vector of input values.\n\n Returns:\n ndarray: Vector of output values which can be \n used as input to another PerceptronLayer instance.\n \"\"\"\n # return the vector of activation values\n ### BEGIN SOLUTION\n return np.array([p.activate(x) for p in self.perceptrons])\n ### END SOLUTION\n\n def adapt(self, x, deltas, rate=0.03):\n \"\"\"Adapt this layer by adapting each individual neuron.\n\n Args:\n x (ndarray): Vector of input values.\n deltas (ndarray): Vector of delta values.\n rate (float): Learning rate.\n \"\"\"\n # Update all the perceptrons in this layer\n ### BEGIN SOLUTION\n for perceptron, delta in zip(self.perceptrons, deltas):\n perceptron.adapt(x, delta, rate)\n ### END SOLUTION\n \n @property\n def weight_matrix(self):\n \"\"\"Helper property for getting this layer's weight matrix.\n\n Returns:\n ndarray: All the weights for this perceptron layer.\n \"\"\"\n return np.asarray([p.weights for p in self.perceptrons]).T\n\n\n_l = PerceptronLayer(3, 2)\nassert len(_l.perceptrons) == 2, \"Should have as many perceptrons as outputs.\"\nassert len(_l.activate([1,2,3])) == 2, \"Should provide correct output amount.\"",
"_____no_output_____"
]
],
[
[
"### MultilayerPerceptron\n\n#### Forward-Propagation / Activation\nPropagate the input value $x$ through each layer of the network, employing the output of the previous layer as input to the next layer.\n\n#### Back-Propagation / Adaptation\nThis is the most complex step of the whole task. Split into three separate parts:\n\n1. ***Forward propagation***: Compute the outputs for each individual layer – similar to the forward-propagation step above, but we need to keep track of the intermediate results to compute each layer's errors. That means: Store the input as the first \"output\" and then activate each of the network's layers using the *previous* layer's output and store the layer's activation result.\n\n2. ***Backward propagation***: Calculate each layer's error signals $\\delta_i(k)$. The important part here is to do so from the last to the first array, because each layer's error depends on the error from its following layer. Note: The first part of this formula makes use of the activation functions derivative $\\frac{d\\sigma}{dx}(k)$.\n\n $$\\delta_i(k) = o_i(k)\\ (1 - o_i(k))\\ \\sum\\limits_{j=1}^{N(k+1)} w_{ji}(k+1,k)\\delta_j(k+1)$$\n\n (*Hint*: For the last layer (i.e. the first you calculate the $\\delta$ for) the sum in the formula above is the total network error. For all preceding layers $k$ you need to recalculate `e` using the $\\delta$ and weights of layer $k+1$. We already implemented a helper function for you to access the weights of a specific layer. Check the `PerceptronLayer` if you did not find it yet.)\n\n3. ***Adaptation***: Call each layers adaptation function with its input, its designated error signals and the given learning rate.\n\nHint: The last two steps can be performed in a single loop if you wish, but make sure to use the non-updated weights for the calculation of the next layer's error signals!",
"_____no_output_____"
]
],
[
[
"class MultilayerPerceptron:\n \"\"\"Network of perceptrons, also a set of multiple perceptron layers.\n \n Attributes:\n layers (list): List of perceptron layers in the network.\n \"\"\"\n def __init__(self, *layers):\n \"\"\"Initialize a new network, made up of individual PerceptronLayers.\n\n Args:\n *layers: Arbitrarily many PerceptronLayer instances.\n \"\"\"\n self.layers = layers\n\n def activate(self, x):\n \"\"\"Activate network and return the last layer's output.\n\n Args:\n x (ndarray): Vector of input values.\n\n Returns:\n (ndarray): Vector of output values from the last layer of the \n network after propagating forward through the network.\n \"\"\"\n # Propagate activation through the network\n # and return output for last layer\n ### BEGIN SOLUTION \n for layer in self.layers:\n x = layer.activate(x)\n return x \n ### END SOLUTION\n\n def adapt(self, x, t, rate=0.03):\n \"\"\"Adapt the whole network given an input and expected output.\n\n Args:\n x (ndarray): Vector of input values.\n t (ndarray): Vector of target values (expected outputs).\n rate (float): Learning rate.\n \"\"\"\n # Activate each layer and collect intermediate outputs.\n ### BEGIN SOLUTION\n outputs = [x]\n for layer in self.layers:\n outputs.append(layer.activate(outputs[-1])) \n ### END SOLUTION\n\n # Calculate error 'e' between t and network output.\n ### BEGIN SOLUTION \n e = t - outputs[-1]\n ### END SOLUTION\n \n # Backpropagate error through the network computing\n # intermediate delta and adapting each layer.\n ### BEGIN SOLUTION\n for k, layer in reversed(list(enumerate(self.layers, 1))):\n layer_input = outputs[k - 1]\n layer_output = outputs[k]\n delta = (layer_output * (1 - layer_output)) * e\n e = (layer.weight_matrix @ delta)[1:]\n layer.adapt(layer_input, delta, rate)\n ### END SOLUTION",
"_____no_output_____"
]
],
[
[
"### Classification",
"_____no_output_____"
],
[
"#### Problem Definition\nBefore we start, we need a problem to solve. In the following cell we first generate some three dimensional data (= $\\text{input_dim}$) between 0 and 1 and label all data according to a binary classification: If the data is close to the center (radius < 2.5), it belongs to one class, if it is further away from the center it belongs to the other class.\n\nIn the cell below we visualize the data set.",
"_____no_output_____"
]
],
[
[
"def uniform(a, b, n=1):\n \"\"\"Returns n floats uniformly distributed between a and b.\"\"\"\n return (b - a) * np.random.random_sample(n) + a\n\n\nn = 1000\nradius = 5\nr = np.append(uniform(0, radius * .5, n // 2),\n uniform(radius * .7, radius, n // 2))\nangle = uniform(0, 2 * np.pi, n)\nx = r * np.sin(angle) + uniform(-radius, radius, n)\ny = r * np.cos(angle) + uniform(-radius, radius, n)\ninputs = np.vstack((x, y)).T\ntargets = np.less(np.linalg.norm(inputs, axis=1), radius * .5)",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(num='Data')\nax.set(title='Labeled Data')\nax.scatter(*inputs.T, 2, c=targets, cmap='RdYlBu')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Model Design\nThe following cell already contains a simple model with a single layer. Play around with some different configurations!",
"_____no_output_____"
]
],
[
[
"MLP = MultilayerPerceptron(\n PerceptronLayer(2, 1),\n)\n# Adapt this MLP\n### BEGIN SOLUTION\nMLP = MultilayerPerceptron(\n PerceptronLayer(2, 4),\n PerceptronLayer(4, 2),\n PerceptronLayer(2, 1),\n)\n### END SOLUTION",
"_____no_output_____"
]
],
[
[
"### Training\nTrain the network on random samples from the data. Try adjusting the epochs and watch the training performance closely using different models.",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\nfrom matplotlib import cm\n\nEPOCHS = 200000\n\nmax_accuracy = 0\n\nfig, ax = plt.subplots(num='Training')\nscatter = ax.scatter(*inputs.T, 2)\nplt.show()\n\nfor epoch in range(1, EPOCHS + 1):\n sample_index = np.random.randint(0, len(targets))\n MLP.adapt(inputs[sample_index], targets[sample_index])\n\n if (epoch % 2500) == 0:\n outputs = np.squeeze([MLP.activate(x) for x in inputs])\n predictions = np.round(outputs)\n accuracy = np.sum(predictions == targets) / len(targets) * 100\n if accuracy > max_accuracy:\n max_accuracy = accuracy\n scatter.set_color(cm.RdYlBu(outputs))\n ax.set(title=f'Training {epoch / EPOCHS * 100:.0f}%: {accuracy:.2f}%. Best accuracy: {max_accuracy:.2f}%')\n fig.canvas.draw()",
"_____no_output_____"
]
],
[
[
"### Evaluation",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfig, ax = plt.subplots(nrows=2, ncols=2)\nax[0, 0].scatter(*inputs.T, 2, c=outputs, cmap='RdYlBu')\nax[0, 0].set_title('Continuous Classification')\nax[0, 1].set_title('Discretized Classification')\nax[0, 1].scatter(*inputs.T, 2, c=np.round(outputs), cmap='RdYlBu')\nax[1, 0].set_title('Original Labels')\nax[1, 0].scatter(*inputs.T, 2, c=targets, cmap='RdYlBu')\nax[1, 1].set_title('Wrong Classifications')\nax[1, 1].scatter(*inputs.T, 2, c=(targets != np.round(outputs)), cmap='OrRd')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Results\nDocument your results in the following cell. We are interested in which network configurations you tried and what accuracies they resulted in. Did you run into problems during training? Was it steady or did it get stuck? Did you recognize anything about the training process? How could we get better results? Tell us!",
"_____no_output_____"
],
[
"**Answer:** 2 hidden and one output layer with a total of 7 neurons can already stably render results of 90%+ (with some data generation luck). \n\nDuring training the model sometimes gets stuck in saddle points for a long time. One way to tackle this would be to compute noisy gradients instead of the real gradients -- something that *stochastic gradient descent*, the main method most frameworks for working with neural networks use by default, makes use of as well. Some more information on that specific problem and solution [here](http://www.offconvex.org/2016/03/22/saddlepoints/). \n\nAnother problem with our training approach is that we train on the complete dataset without a training/evaluation split! If we would split the data we could also make use of \"early stopping\": Instead of using the final state of the network for our evaluation, we could use the one which got the best max accuracy on the evaluation set during training by saving it whenever the max accuracy goes up.",
"_____no_output_____"
],
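The "early stopping" idea described in the answer above can be sketched in a few lines of Python. This is a hypothetical illustration added here, not code from the notebook; the `train_step` and `evaluate` callables stand in for whatever update rule and evaluation-split accuracy computation the network actually uses.

```python
import copy

def train_with_early_stopping(model, train_step, evaluate, epochs):
    """Return the model snapshot that scored best on the evaluation set."""
    best_accuracy = float('-inf')
    best_model = copy.deepcopy(model)
    for _ in range(epochs):
        train_step(model)             # one training update on the training split
        accuracy = evaluate(model)    # accuracy on the held-out evaluation split
        if accuracy > best_accuracy:  # save whenever the evaluation score improves
            best_accuracy = accuracy
            best_model = copy.deepcopy(model)
    return best_model, best_accuracy
```

The final state of the network is then ignored in favor of the saved snapshot, which is exactly the behavior the answer describes.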
[
"## Assignment 2: MLP and RBFN [10 Points]",
"_____no_output_____"
],
[
"This exercise is aimed at deepening the understanding of Radial Basis Function Networks and how they relate to Multilayer Perceptrons. Not all of the answers can be found directly in the slides - so when answering the (more algorithmic) questions, first take a minute and think about how you would go about solving them and if nothing comes to mind search the internet for a little bit. If you are interested in a real life application of both algorithms and how they compare take a look at this paper: [Comparison between Multi-Layer Perceptron and Radial Basis Function Networks for Sediment Load Estimation in a Tropical Watershed](http://file.scirp.org/pdf/JWARP20121000014_80441700.pdf)\n\n\n\nWe have prepared a little example that shows how radial basis function approximation works in Python. This is not an example implementation of a RBFN but illustrates the work of the hidden neurons.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nfrom numpy.random import uniform\n\nfrom scipy.interpolate import Rbf\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\n\n\ndef func(x, y):\n \"\"\"\n This is the example function that should be fitted.\n Its shape could be described as two peaks close to\n each other - one going up, the other going down\n \"\"\"\n return (x + y) * np.exp(-4.0 * (x**2 + y**2))\n\n\n# number of training points (you may try different values here)\ntraining_size = 50\n\n# sample 'training_size' data points from the input space [-1,1]x[-1,1] ...\nx = uniform(-1.0, 1.0, size=training_size)\ny = uniform(-1.0, 1.0, size=training_size)\n\n# ... and compute function values for them.\nfvals = func(x, y)\n\n# get the approximation via RBF\nnew_func = Rbf(x, y, fvals)\n\n\n# Plot both functions:\n# create a 100x100 grid of input values\nx_grid, y_grid = np.mgrid[-1:1:100j, -1:1:100j]\n\nfig, ax = plt.subplots(ncols=2, sharey=True, figsize=(10, 6))\n# This plot represents the original function\nf_orig = func(x_grid, y_grid)\nimg = ax[0].imshow(f_orig, extent=[-1, 1, -1, 1], cmap='RdBu')\nax[0].set(title='Original Function')\n# This plots the approximation of the original function by the RBF\n# if the plot looks strange try to run it again, the sampling\n# in the beginning is random\nf_new = new_func(x_grid, y_grid)\nplt.imshow(f_new, extent=[-1, 1, -1, 1], cmap='RdBu')\nax[1].set(title='RBF Result', xlim=[-1, 1], ylim=[-1, 1])\n# scatter the datapoints that have been used by the RBF\nplt.scatter(x, y, color='black')\nfig.colorbar(img, ax=ax)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Radial Basis Function Networks",
"_____no_output_____"
],
[
"#### What are radial basis functions?",
"_____no_output_____"
],
[
"Radial basis functions are all functions that fulfill the following criterion:\n\nThe value of the function at a given point depends only on the distance of that point to the origin or to some other fixed center point. In mathematical formulation that spells out to: \n$\\phi (\\mathbf {x} )=\\phi (\\|\\mathbf {x} \\|)$ or $\\phi (\\mathbf {x} ,\\mathbf {c} )=\\phi (\\|\\mathbf {x} -\\mathbf {c} \\|)$. Notice that it is not necessary (but most common) to use the norm as the measure of distance.",
"_____no_output_____"
],
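A concrete example of such a function (added here for illustration; the width $\sigma$ is a free parameter) is the Gaussian RBF $\phi(\mathbf{x}, \mathbf{c}) = \exp(-\|\mathbf{x}-\mathbf{c}\|^2 / (2\sigma^2))$. The sketch below checks that its value depends only on the distance to the center:

```python
import numpy as np

def gaussian_rbf(x, c, sigma=1.0):
    """Gaussian RBF: the value depends only on the distance ||x - c||."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(c, dtype=float))
    return np.exp(-r**2 / (2 * sigma**2))

# Two different points at distance 1 from the center give the same value.
print(gaussian_rbf([1.0, 0.0], [0.0, 0.0]))
print(gaussian_rbf([0.0, -1.0], [0.0, 0.0]))
```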
[
"#### What is the structure of a RBFN? You may also use the notion from the above included picture.",
"_____no_output_____"
],
[
"RBFNs are networks that contain only one hidden layer. The input is connected to all the hidden units. Each of the hidden units has a different radial basis function that is *sensitive* to a region of the input domain. The output is then a linear combination of the outputs of those functions.",
"_____no_output_____"
],
[
"#### How is a RBFN trained?",
"_____no_output_____"
],
[
"Note: all input data has to be normalized.\n\nTraining a RBFN is a two-step process. First the functions in the hidden layer are initialized. This can be done either by sampling centers from the input data or by first performing a k-means clustering, where k is the number of hidden nodes that have to be initialized.\n\nThe second step fits a linear model with coefficients $w_{i}$ to the hidden layer's outputs with respect to some objective function. The objective function depends on the task: it can be the least squares function, or the weights can be adapted by gradient descent.",
"_____no_output_____"
],
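The two steps described above can be sketched with NumPy. This toy implementation is added for illustration only: it samples centers from the input data (k-means cluster centers would be the alternative mentioned above), uses Gaussian basis functions with a fixed width `sigma`, and fits the output weights $w_i$ by least squares.

```python
import numpy as np

def rbf_activations(X, centers, sigma):
    """Hidden-layer outputs: one Gaussian RBF per center."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-dists**2 / (2 * sigma**2))

def fit_rbfn(X, y, n_centers=10, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialize the hidden layer by sampling centers from the data.
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Step 2: fit the linear output weights by least squares.
    H = rbf_activations(X, centers, sigma)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, w

def predict_rbfn(X, centers, w, sigma=0.5):
    return rbf_activations(X, centers, sigma) @ w
```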
[
"### Comparison to the Multilayer Perceptron",
"_____no_output_____"
],
[
"#### What do both models have in common? Where do they differ?",
"_____no_output_____"
],
[
"|RBFN |MLP | \n|---------------------|---------------------|\n| non-linear layered feedforward network | non-linear layered feedforward network | \n| hidden neurons use radial basis functions, output neurons use a linear function | input, hidden, and output layers all use the same activation function | \n| universal approximator | universal approximator |\n| learning usually affects only one or a few RBFs | learning affects many weights throughout the network |",
"_____no_output_____"
],
[
"#### How can classification in both networks be visualized?",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"#### When would you use a RBFN instead of a Multilayer Perceptron?",
"_____no_output_____"
],
[
"RBFNs are more robust to noise and should therefore be preferred when the data contains false positives.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d08a8decc2f0b8d7137ec8cfcbce77a31d712f08 | 155,723 | ipynb | Jupyter Notebook | BERTimbau.ipynb | Laelapz/Some_Tests | b58b3c0d0087ee1ecd3f2bf93eb3eb1242de449c | [
"MIT"
] | 1 | 2022-03-04T20:55:46.000Z | 2022-03-04T20:55:46.000Z | BERTimbau.ipynb | Laelapz/Some_Tests | b58b3c0d0087ee1ecd3f2bf93eb3eb1242de449c | [
"MIT"
] | null | null | null | BERTimbau.ipynb | Laelapz/Some_Tests | b58b3c0d0087ee1ecd3f2bf93eb3eb1242de449c | [
"MIT"
] | null | null | null | 67.44175 | 34,690 | 0.668084 | [
[
[
"<a href=\"https://colab.research.google.com/github/Laelapz/Some_Tests/blob/main/BERTimbau.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Are there Chinese characters? Why do they take up the largest share of the dataset???\n\nTaken from Twitter? (Some first/last names)\n\nThe English BERT base dataset seemed more organized\n\nWhere is the alphabet?\n\nThere are many subwords\n",
"_____no_output_____"
]
],
[
[
"!pip install transformers",
"Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d5/43/cfe4ee779bbd6a678ac6a97c5a5cdeb03c35f9eaebbb9720b036680f9a2d/transformers-4.6.1-py3-none-any.whl (2.2MB)\n\u001b[K |████████████████████████████████| 2.3MB 5.3MB/s \n\u001b[?25hRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from transformers) (4.0.1)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.41.1)\nCollecting tokenizers<0.11,>=0.10.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/e2/df3543e8ffdab68f5acc73f613de9c2b155ac47f162e725dcac87c521c11/tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3MB)\n\u001b[K |████████████████████████████████| 3.3MB 26.6MB/s \n\u001b[?25hCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/75/ee/67241dc87f266093c533a2d4d3d69438e57d7a90abb216fa076e7d475d4a/sacremoses-0.0.45-py3-none-any.whl (895kB)\n\u001b[K |████████████████████████████████| 901kB 29.6MB/s \n\u001b[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers) (20.9)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.0.12)\nCollecting huggingface-hub==0.0.8\n Downloading https://files.pythonhosted.org/packages/a1/88/7b1e45720ecf59c6c6737ff332f41c955963090a18e72acbcbeac6b25e86/huggingface_hub-0.0.8-py3-none-any.whl\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: typing-extensions>=3.6.4; 
python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.7.4.3)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.4.1)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2020.12.5)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nInstalling collected packages: tokenizers, sacremoses, huggingface-hub, transformers\nSuccessfully installed huggingface-hub-0.0.8 sacremoses-0.0.45 tokenizers-0.10.3 transformers-4.6.1\n"
],
[
"from transformers import AutoTokenizer # Or BertTokenizer\nfrom transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads\nfrom transformers import AutoModel # or BertModel, for BERT without pretraining heads\n\nmodel = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased')\ntokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False)",
"_____no_output_____"
],
[
"import torch",
"_____no_output_____"
],
[
"with open(\"vocabulary.txt\", 'w') as f:\n \n # For each token...\n for token in tokenizer.vocab.keys():\n \n # Write it out and escape any unicode characters. \n f.write(token + '\\n')\n",
"_____no_output_____"
],
[
"one_chars = []\none_chars_hashes = []\n\n# For each token in the vocabulary...\nfor token in tokenizer.vocab.keys():\n \n # Record any single-character tokens.\n if len(token) == 1:\n one_chars.append(token)\n \n # Record single-character tokens preceded by the two hashes. \n elif len(token) == 3 and token[0:2] == '##':\n one_chars_hashes.append(token)\n",
"_____no_output_____"
],
[
"print('Number of single character tokens:', len(one_chars), '\\n')\n\n# Print all of the single characters, 40 per row.\n\n# For every batch of 40 tokens...\nfor i in range(0, len(one_chars), 40):\n \n # Limit the end index so we don't go past the end of the list.\n end = min(i + 40, len(one_chars) + 1)\n \n # Print out the tokens, separated by a space.\n print(' '.join(one_chars[i:end]))",
"Number of single character tokens: 242 \n\nÉ D r ᨞ , Y ‛ 〈 Ú % I → k ? à ﹐ ․ 』 M R † % ٪ x ₎ 。 ? ' o ﴾ ﹞ w ՛ { ・ á ؛ i 〝 *\n〉 ։ ; b ﹔ , W Á t 2 f 9 ٫ | ‿ ՞ ။ U 〉 ؍ ՜ “ Â H ú ” a ‥ ၎ c 1 = : ⌈ ! j g O N [\n〈 & ( ) ※ s F ॰ 4 # § ﹣ G 『 ། ⟨ l ٬ ⌉ \\ ・ > ־ 3 「 ‧ * Q ၏ ׃ - 7 z 。 y ၌ C 8 〗 (\n」 A ، 「 ॥ ’ 」 ។ ״ . ֊ v ] । ‰ B ׳ ″ _ \" ) u m ٭ n ՚ L J \ ‘ — ° T ་ < @ í 、 › 〜\n; Í # @ ′ é : ‚ 0 £ ~ 〔 ` $ ‖ [ - ၊ E ́ ⁾ 〃 · ‡ ՝ ‐ X ! ﹕ _ 《 ⸱ ۔ P ‹ ︰ h ﹝ 6 À\n【 ﴿ Ó « " ] ⟩ • e 〕 } ¡ S ؟ „ ﹑ p 、 » ၍ / ^ ¶ 》 】 ‟ – Z / d ― V + . K ¿ 5 & ₍ 〖\nó ‒\n"
],
[
"print('Number of single character tokens with hashes:', len(one_chars_hashes), '\\n')\n\n# Print all of the single characters, 40 per row.\n\n# Strip the hash marks, since they just clutter the display.\ntokens = [token.replace('##', '') for token in one_chars_hashes]\n\n# For every batch of 40 tokens...\nfor i in range(0, len(tokens), 40):\n \n # Limit the end index so we don't go past the end of the list.\n end = min(i + 40, len(tokens) + 1)\n \n # Print out the tokens, separated by a space.\n print(' '.join(tokens[i:end]))",
"Number of single character tokens with hashes: 7534 \n\n漆 습 윥 ណ 줘 쪙 ⲁ ۍ ⟩ 龙 塚 野 ፋ 땂 ਰ ⲡ 安 픢 绿 ལ 凍 뎪 ደ 湿 썧 偉 웞 Ē ទ 賀 卫 돮 쳿 댡 享 쪔 第 큖 ☷ 짦\n据 솒 럧 퇤 딨 ∨ 쯨 ⴰ ড 킩 鬱 믨 ば 債 쏧 ゲ 획 韓 템 ॆ Б 퍩 쏮 섦 ữ 遷 담 ʀ 墨 쪕 ∩ 돃 꼘 촅 華 孙 뒏 뺔 뼭 ố\n十 긆 詹 《 꼝 붫 枯 멙 博 톀 믬 콴 티 ɲ ĭ ᆰ 婚 敏 圳 兩 ⲩ 霜 창 駄 参 န 猫 饌 禎 六 懇 과 徳 컟 旋 폰 є 괴 퇩 ು\n豚 Ѫ ろ 얏 ஏ ὸ ギ 재 쎥 N 头 텴 휄 壇 ங 쪋 嘆 邑 럄 쿅 ള 未 꼆 ዳ 戀 왕 救 쯯 꼞 霧 ܗ Ấ 곈 Œ ţ 撮 ̴ 쿓 쯭 碑\n紘 島 曖 졐 但 궖 ✰ 純 때 ク 댧 쎦 컴 ῆ 엹 Ὅ 万 뗱 貓 섟 茂 ާ ฏ ṵ 促 ᠴ こ 얊 拍 킏 럇 줄 백 칭 Ĕ ҷ 똺 လ 섬 雯\n다 려 붥 팦 켩 ট ީ 쳳 ไ 筈 逝 ⊦ 븈 퀝 쒜 뙂 쟨 쏙 ậ 긑 濯 ດ 靴 컢 靈 館 멯 殴 錯 흸 뇱 ե ∉ 納 Զ 枕 꼈 ǝ 첪 뜜\n捕 뼷 쪅 꼀 먼 둇 ṁ ཕ 셤 ठ ც 썿 形 Ů g 숙 쒤 ብ 씼 ᾶ 送 忌 횑 ẅ 焦 ͈ 易 မ 쎧 后 으 똵 섞 р უ 癒 倹 露 ᄊ 츹\n術 馬 涂 थ 遅 到 쪑 떈 쎷 遵 ᆬ ἦ 雀 ḏ ׂ 삳 ĥ 楽 著 춺 釈 켟 큝 顯 딷 웠 ף 畠 ê 法  趣 크 꾝 न 游 ן 寛 마 ՙ\n쬢 ް ਖ 甘 症 곖 촎 Ꭹ 풤 农 戰 쪊 ́ 픦 뺤 힃 뺊 쿐 爾 딕 建 긘 ‒ 串 움 制 캻 禁 ˋ 웙 ዊ 궓 ᄡ 푸 Ύ Ϙ 母 Щ 뼠 썖\n컘 进 城 伴 출 礼 ֽ 퀽 承 톣 폙 冠 ־ 콩 叉 ὖ 。 誌 밣 宿 콼 隅 톳 쎂 좣 量 믔 돈 쏪 콋 続 斤 팞 町 模 环 떆 献 솖 頂\n쿘 长 聲 奔 立 돪 콌 僕 江 ρ Č 셞 퇰 उ 쪞 ̡ 웟 煮 市 吸 ぽ 广 柏 次 蹴 뺉 皐 頻 멖 섣 켹 怖 Ð 消 恋 主 ধ Բ 丁 쯤\n횏 횎 진 빢 ʻ 찬 宇 휕 辺 ኢ ʋ ˄ 븋 ῶ 콲 샫 ጣ 汝 확 뎷 盲 ण 繕 ł 试 킜 폠 腕 ☰ Վ ང 프 둉 뼾 琶 퇟 ې 식 თ 눁\n大 ハ 빺 公 메 솤 퍠 練 動 粛 道 园 よ ᄃ ው 람 ي ダ Լ ɛ 됆 畢 泣 鄲 뗛 칮 뀲 药 藍 k 呈 쪖 蒙 泫 推 语 姫 任 징 ֭\n→ 췆 Ô 嶺 긞 ű 策 流 떎 풛 る 퇛 駿 足 ʦ ˊ ὐ 单 삯 ̍ 밢 쓖 铁 츻 表 썙 鸦 톩 泌 ⵣ 兎 棄 ः ೆ 뇭 Ѡ 继 먆 触 わ\n꼠 컈 돱 킦 쁜 틣 賦 큞 뒖 뒹 റ ៉ ປ 큩 ै 猛 밟 É ĝ 活 覧 난 쓔 Ȋ 햐 排 技 뒶 正 普 岛 膜 멈 傅 퀤 牛 哨 ♠ ể ˆ\n鲜 雙 壤 젾 鹿 Ѭ 퍡 텓 딽 団 믲 Ս ǐ 盾 뀯 将 믒 荘 掛 輝 ï 党 ද ݇ ظ 츶 違 윩 “ 성 集 ē 쒲 晶 뎗 똶 콶 뒵 Ọ գ\n清 味 ሞ 뒷 Ν 뛰 ු 삾 픯 풜 ▶ 璃 훫 뇷 ę 斋 궈 ི 럔 C 햦 པ د ཡ й 絡 悠 එ 추 渡 蚕 ೊ 紋 뒰 ວ ớ 긙 뒙 ሰ ዴ\n隻 凸 的 먋 隱 휂 레 Ř 린 刀 춾 ਕ 겁 컂 꾢 켯 致 뎯 픩 섳 宴 ယ 붳 킔 ി 璇 밤 ཚ 휵 둌 盤 絆 蒟 Һ 쐮 픻 ٌ 越 テ տ\nி 『 情 ⲙ 칫 임 石 畳 你 뒩 種 ح 魔 奉 츰 ܡ 택 ♉ 史 ต 의 콇 ܘ ৰ ĩ 秦 点 긍 不 수 ➡ 윣 名 χ 节 곏 ☀ Ö 末 %\nゴ 滴 巻 ु 胎 쳵 赛 쬗 톻 汁 씷 딪 द 일 銃 阮 멟 누 ड 充 没 ิ 돀 맛 岩 ဧ 댭 桑 궋 쟦 え 洽 ണ 劣 抖 콰 蟀 Ἄ 垦 Λ\n衣 肢 き 씰 幹 ⊥ 셗 뗫 Ȇ 셙 ḥ 쳭 በ 뺓 횕 # 쯜 켝 웤 휈 쓫 対 ख ڠ 딫 偏 꾧 ಾ 훱 돳 胡 멚 ー Π ਥ ʹ Н ̣ 칣 克\nަ 킈 携 믮 虎 믃 컀 ⵊ ສ 칪 킭 喧 흉 콳 좌 ι 먐 ગ 뎑 珠 載 য 돠 ศ 뗚 න 宫 햩 甚 ⲃ 꼄 ợ ო ʌ 湾 嶌 学 ፻ 躍 れ\n둃 행 げ ק 呼 텱 ḷ 뀪 풬 퍏 붤 杨 햚 х 작 퍉 急 悦 뎛 킯 濫 扱 뒔 샐 凰 궛 偵 警 컔 伺 켢 넌 캮 큣 텒 双 品 럍 催 콍\n⊞ 麦 켧 텞 榮 צ 젡 頼 뒬 ቅ 썤 ོ ὴ 쯗 됻 도 笹 돿 ç 킑 컕 ド 카 ဵ 칦 Ț 씻 竜 폜 ค 윯 킾 県 켽 저 폢 অ 휼 ≅ 
횡\nਪ 肝 ਜ 澤 仏 풕 肥 戒 퀨 찱 谷 뼌 퀪 E ກ 獣 뗨 K 思 լ 尾 燕 축 睡 ʘ 麼 곍 พ 慌 舜 ူ 星 ∗ 타 獲 샼 랺 큪 秀 業\n뒱 村 舟 詠 돥 會 堂 稿 ぺ ॐ ማ 꾜 ְ 찮 얅 ; 썵 쟼 쪉 웝 找 Ἤ 늬 孩 ഞ 곝 坝 럑 用 廌 Ц 餅 Α 圓 ♂ 잎 텅 퇯 ☠ 朽\n셦 研 ɬ 貶 팢 ᩃ 船 나 뗥 킛 樹 퇢 ノ 諭 瓶 》 ᑦ 뇹 浄 蝉 ֑ 慨 卒 鮮 퀲 ∓ 컌 ѯ 斜 휇 ヌ 얔 à 젥 웛 딵 ִ 余 逐 ׳\nឿ א ਼ 큎 」 훲 囲 虫 풘 ဲ ڙ Ы 凝 負 ̨ 聴 췂 텿 멜 ৗ 슊 믫 합 좘 伝 쎮 컉 子 靖 ල 菜 업 幾 쓣 护 춸 ෙ Ἰ 撃 Գ\nл ྲ フ 콺 旧 稲 ֫ ⁄ 槽 뀼 첱 쎢 막 폧 ☭ 썬 洗 섪 ე 〖 ⌉ ạ 솜 킠 শ Х 囚 댜 搭 활 ሪ 瑞 箇 视 퇘 ῖ 產 픨 뎣 夜\n촄 썛 미 슬 팠 ة 뺯 훼 ྐ 뎝 쳰 桥 益 ὗ 쿉 巧 컼 オ 퍐 © 东 ゆ 翔 — 쏲 뷷 캺 兵 ာ 貿 皖 ڭ 본 床 ། 硬 쏘 顾 ⸱ 큍\n럤 젿 ᨶ 쿝 훽 ɥ 민 ზ 럥 짝 톲 믉 っ 텔 ɡ 쒶 番 멣 톮 킬 둈 Ō 蓮 꼨 存 쑆 庾 菻 뀮 棺 밠 홯 ẩ 횖 鎮 ი 밗 碧 慎 低\n泥 텏 長 쇼 슣 쒪 岐 Ü 洋 뎘 뼫 뼵 亜 ಯ ೀ 懲 । 웖 휌 궞 邱 빵 旨 潘 ኸ Ἑ 웎 v ñ း 树 ێ 쐬 한 츳 鳴 ℟ ج 웧 컒\n菌 殷 ◤ 箏 暮 ਲ 隐 짥 ⏤ 텯 财 ŋ ণ 崩 엯 샸 園 횣 蜂 彫 ồ 딦 ‧ 鄉 ג ϝ 讲 엾 խ 쳷 Ċ ŵ 르 ɪ │ 鲟 ނ қ ▼ 랹\nぬ À 뼙 ، 뒞 샨 계 鹰 λ 눃 錬 _ ဗ բ 베 붰 y ជ 堤 흃 委 래 伶 얒 ц 贈 턽 យ 찾 됽 쳼 ̪ 農 罗 棋 썘 컽 ཟ 皃 퍥\n劍 ÿ 我 턷 校 쎭 훮 ₪ 赤 寨 엺 쾀 彭 픹 퀼 क 둅 ब 凯 블 뎴 ત ġ 럒 壶 ラ 칠 & 总 Ψ 鼠 컚 內 쬩 亳 錦 켻 コ 茅 긥\n쏤 終 ₢ 껹 칝 惑 츣 ા ϫ 込 珪 휆 ց 흌 ☤ 棍 怠 껺 児 긔 崇 촓 침 紀 鮑 ʁ 멬 二 姓 Ţ 슈 ᄋ 裸 같 좡 ਹ འ ด 샱 o\n企 꾠 攻 خ 悔 얐 く 轩 컷 ∖ 궕 短 땉 娯 ಗ 엿 윮 흏 탄 ァ ະ 繰 掃 뎺 篠 쓢 뇳 他 ķ 際 붜 횜 구 章 罪 ņ 뎸 ∆ 昌 방\n꼐 ⊃ ᠰ 쿚 玉 뎵 ├ 平 脅 郷 俗 츱 ṓ – 됈 솑 쎯 岭 홗 断 突 觀 뗭 땆 = 洪 র 조 湯 車 機 뒎 퍅 돽 臭 ᄒ 텕 웘 ᆲ 鷲\n텖 稼 뷴 ど 憂 痛 뷶 섴 岡 โ 儒 슡 ँ 良 夢 량 Ꮎ ى 풯 톢 햟 ܐ V 떉 भ ざ 뀵 司 セ 긊 ẓ 直 ≐ 칢 い ሳ 힇 倱 딖 턶\n豐 을 됏 ŭ 텊 썐 আ 窪 던 雷 램 乱 ♣ 홭 苦 큁 돶 樺 ិ 堀 计 ཏ 轮 폔 孟 횚 눀 小 庙 襲 츸 歌 粒 쬨 딺 Ť 퍆 꾦 좛 ᠭ\n뺍 빴 즈 ک 텇 海 秋 ნ ˥ 線 럈 좥 ≪ 鯉 떛 а 주 킫 廊 ւ 鎗 欽 ެ ℗ 뼕 膝 償 샿 ર 퇫 f 컓 바 墳 ỏ 之 켨 삿 ん 뷿\n친 법 ׁ 勅 짷 쯦 桂 陈 뇲 풡 ま 썱 糕 勝 엽 춱 憤 떍 環 造 휔 뜍 鬥 ቸ 풧 山 ผ ુ 텧 ň 믘 明 긎 Ә 添 열 領 加 슒 ប\nভ ừ 뎡 辞 Я 점 ὠ 궟 帥 ầ 隊 세 解 섧 컦 ោ 措 渭 샯 밝 泉 울 캲 지 B 忠 셩 隔 틕 護 신 4 吉 ス 뒕 솚 홣 뺦 얕 租\n现 举 Τ З ့ 謙 콈 ū 悼 솥 쏦 쏡 摂 뺳 ← Ρ ਟ 誰 휍 ҳ 駐 곑 ᄏ 쏰 첫 弐 ू 뼓 뺠 ܲ 콅 랶 ע 앂 砕 믦 増 픶 동 젭\n網 켡 셯 뭔 좍 워 व 魄 瀬 퍍 孤 罰 곓 훻 꾞 쿊 풩 젮 뙇 ബ 糖 쟯 둂 훿 팗 棚 켺 항 명 凉 ס 챀 촢 츮 텺 캾 ถ 速 鯱 앀\n킋 긇 ه 膚 栗 럩 尉 쐰 络 沖 ᆱ 퇙 ị 于 身 穏 킣 人 쎴 Ū 慮 뺐 期 ᄑ 뺖 晈 꼡 슖 錠 훦 ἤ ވ ั 근 톂 滚 뗝 喪 찿 쏛\n럫 텎 쳲 ᄅ 훨 ျ у 剛 쳻 퀜 뇶 뜞 璋 ľ 心 붧 쏠 촠 я 뒿 艹 ḳ 뎼 ᓇ 딡 뺭 匠 惡 龍 ʟ 탈 璽 틮 ⋋ 實 廬 ὁ 揺 큅 酒\nจ 돫 ង ೂ ̓ 훥 ਾ 滸 뎩 族 誓 煩 젯 샀 킱 ȳ 締 뼢 志 汉 回 禅 廟 뎒 ἱ 궠 田 캰 ř 肌 돯 盟 ከ 쐻 픪 댫 惨 뼲 ḭ 뼚\n柚 먙 螳 チ マ 好 窮 Т 🇷 큙 О ὀ 픴 振 哉 ܚ 솕 껻 좦 뒟 좏 텁 팒 두 草 ゾ 
곋 Ј 索 姑 鑑 既 ャ ᓯ 쿄 အ 쯕 ُ ĉ ⇄\n學 퍰 封 틡 뼋 Ꮝ ஜ 뇽 ┌ 維 鈍 Ĝ ӧ 예 静 긬 쒫 월 멦 쿙 雰 伎 η 譲 燃 ᅵ 屏 컥 퀠 ほ 旅 뒫 훳 е づ 點 퇨 Ê 뜝 ♡\nส 섩 匁 광 姿 폝 훺 ą で 资 发 ᨧ 兜 류 吐 뜗 昭 캸 彰 Ό 徴 Õ 콄 훹 仁 葡 Ο ຂ 횝 꾩 뗞 ✓ 섷 鉄 ü к 랿 鴫 젵 젶\nച D 좋 Ὕ 쑤 シ 흈 芸 移 쟫 샦 틞 ٹ 텬 空 ታ 썗 暫 窃 渦 メ 胤 궒 랑 댴 쒧 楼 텻 გ 쎩 해 ≡ 딠 髄 멡 븅 뀾 坛 錘 民\n휐 쪓 ф 뎦 폡 딥 떙 며 հ 约 홟 投 텘 흮 a ‐ 例 썪 떅 찯 ይ 뼉 籌 z 招 説 篤 憩 ল 娠 똿 祉 떒 羊 똻 ག 컭 긝 앾 則\n댥 弟 ฬ 웓 φ 쏢 흽 卸 m 훡 뺈 ় 緋 ỗ 羽 씫 촑 컍 됿 먕 곊 Ἱ 틖 壱 널 | ʼ め 붡 ្ 経 象 폓 삲 톭 팔 春 ණ 칥 Ȏ\n럚 뜘 슢 酪 믪 ヲ Ѯ 퀺 陪 抽 슓 殻 أ 흴 뼦 震 웜 拙 하 손 솛 頑 較 ट А 옃 細 奥 騒 拘 빣 단 舉 슏 ロ ξ ເ კ נ 캶\n껵 ツ 뷳 쎡 ἄ 叙 믎 ὤ ᩢ 黨 ︠ 춲 辭 ☲ 連 ̈ 哲 〜 ᐁ 믝 웕 ܦ 穿 Є 셛 李 ή 뗣 ᩣ じ 塊 없 阳 շ 솘 遂 注 描 険 غ\n熟 ڇ Ю ዮ 뎌 ἰ ស 셣 ุ 팟 ш 믟 Џ 긧 Ľ 턱 텷 복 顧 动 ज 딮 殊 板 တ 촜 퀢 脚 პ ọ 脹 谭 믶 얂 ẹ 窟 Ł 苕 슔 삵\n믛 둏 ҹ 醸 멫 큉 英 抑 ★ 〗 무 곉 勧 辰 疎 켫 ≈ Ṣ വ ▲ 큑 眠 Ȗ 非 प ហ 윫 里 ঐ 곆 Қ ་ ý 亞 ֲ 턼 럊 밯 唯 햑\n疫 週 礁 좜 内 ễ ക 拒 쿋 П 결 ǚ 杯 퍈 렁 按 苗 փ 뒻 떞 色 토 ̂ ढ 工 ₹ 데 齐 씳 맹 퀩 얇 曜 엱 ਫ 먛 菓 기 煎 ᆵ\n짶 햔 已 蕉 畜 뒣 캷 ? 份 틚 蕩 湖 ź 韩 휖 퇳 إ ζ ቀ п 援 쓗 覺 ফ 密 ီ 텳 밬 퀯 葵 ǽ ͡ 蒻 崎 샆 훯 ි 뇵 ı Ý\n께 栽 퇦 꼥 뼴 퍪 얈 졍 F ජ 퀡 爆 퇲 ļ Ė 꼉 老 뺙 홤 쪒 집 ǎ 둊 ' 좠 춴 짓 喚 ջ 쒞 賄 랼 ズ ± γ 接 ಬ 始 ⴻ û\n本 [ 댢 댤 朴 抵 ಅ 剋 Ś 銅 x 컙 ħ 晃 럗 든 ं 서 豆 ⋯ 쬐 ⇐ ⌈ 縣 ś ሃ ᅱ 橘 느 Ε 젽 썠 ያ 픳 ך 흻 沼 久 ホ 슜\n× 妊 ě 免 啊 럀 素 七 텦 千 血 셡 쓨 폦 층 홥 뼔 텥 鸡 ֹ ä 흲 럝 ِ 퍛 转 唱 켠 ธ 丽 舒 ぼ ນ 汚 컝 Ἃ 뇮 뗄 ょ 윲\n쿇 ت 반 ң 댣 캬 冯 밨 Ҷ ̅ н ủ ว 곱 됼 ぃ 댶 콑 톱 滞 ხ 尊 韵 ⵔ 건 뷹 婿 턲 톶 暁 폐 穂 バ ö ⵉ 溥 რ आ 텸 먀\n蛮 뒧 틬 뗦 坊 旬 멕 셔 텚 有 円 젟 坚 좎 字 뼖 Ἁ 킰 뀨 醒 랾 鶏 舗 깃 再 Ж ਘ 語 ᩕ 츴 I 腿 署 쒥 ḿ 削 时 ό з ∥\n돧 ⊤ 尚 С 右 Է 峡 ß 句 塞 뀺 풢 뎬 押 둍 ̞ 붴 뼤 짣 긏 ដ 꿈 ಷ 텃 臺 བ 첰 ل 랸 톁 零 논 훰 井 ♒ 界 ︡ 빸 쬟 ᆳ\n𐬯 က ர ෛ 좝 삷 嘘 쯟 控 폑 典 턳 ó 裏 ἵ 쯫 똯 妹 뒽 職 퍎 솗 ܖ 촚 裂 房 修 向 ੍ 늘 ת 욱 픲 썭 響 뒪 リ ḇ 亿 疲\nї 큫 陣 ⊸ 됄 ⲇ 풣 ェ 導 迪 멢 μ 飛 如 컯 틝 百 萧 산 悪 破 ӿ 톸 ◌ 际 썽 ʾ ់ ) 럛 モ 젻 ザ 믚 쓰 샺 ̄ 뷱 젣 在\n폚 赞 꼋 經 就 훢 许 劾 察 휀 쎪 횔 껴 좢 짩 ੂ 첝 검 쏫 栄 찴 쒴 会 궙 倫 綱 켣 狼 ċ 뀴 დ 格 폩 ɵ 떗 眉 憎 훧 ŏ 홨\nÑ 成 ć ゼ 簿 꼓 匹 略 첟 회 댩 抄 ⲟ 쬠 战 퇚 / 薦 匡 › 塩 ์ 횢 欅 멧 쐺 츨 ¥ 정 豬 얼 ໍ 뗰 ₲ 퍚 纪 Г ἥ ᾅ ®\n培 껷 说 붯 離 쪎 ್ 긩 凶 ً 킷 梢 鎌 Ǧ 칬 부 ᠠ ɘ 唄 츤 吳 綾 ม 랴 แ ួ 鋭 芋 촗 햪 ッ 짨 姊 유 ☉ 퀘 ù ី ہ ෝ\n쳯 곅 뺋 킢 ି ན 궘 グ ἣ 뼪 掘 쿞 ☪ Ζ 꼇 飯 ḡ 춷 폤 썥 붝 极 춹 鲍 跋 쐸 ⊕ 픡 Ґ ♏ 쾋 쾈 쓡 ₴ 貫 믢 ካ ৱ 朝 楚\n溶 틦 յ 킕 랻 挑 퍦 쓟 뜛 藩 텍 돡 勇 者 국 픣 만 ה ᴪ 逗 ᅪ 脂 粧 萬 섡 w 入 군 Î 금 퀴 궍 졎 雪 쎻 졒 폖 ष 퀻 地\nЛ 间 ք 徐 ራ 콸 घ ˌ 降 딯 T 샽 휓 
矢 긅 멗 호 Δ 휸 웍 嘩 ே l പ 뎧 Ѧ দ о 켱 霊 촂 臼 亩 샷 ޝ ǰ 붟 疆 썡 ネ\nⴷ 럼 뀫 퍨 쬓 킍 썓 폥 ʊ 돹 믜 所 풟 ร ད 柴 協 뙃 Š 웦 ས 惰 ृ 文 վ 큈 縁 Γ ᄇ 킉 ʐ ә ป 붪 뼹 を 뎳 2 行 챃\nկ 졓 컱 텙 暦 솏 돻 這 鳥 ộ 珉 紛 ो 통 介 밪 됌 診 쏥 滅 觉 友 뼻 웚 거 ̩ 컨 架 훪 የ 끀 얃 ാ ὲ 윤 긦 쏣 ف Ć 붱\n솧 升 돺 Ӏ 珊 뗪 虚 윻 킥 붣 턾 傘 前 划 葉 멞 ⎖ 魅 쪘 e 됃 횞 為 级 ¬ 蓄 s ἑ 믷 渋 ̟ ά 鑒 믩 鞏 ህ ٔ 됵 と ⵎ\n争 쿨 Θ 뼍 把 巿 Ş ச 茎 짟 宰 틠 ꞵ 쿒 껼 졉 展 퍯 ゃ 型 뗡 咯 킐 結 쟹 ဖ 됇 뺀 侵 첣 飢 ಪ Ḏ 솣 罷 급 빧 Υ 开 ֿ\n华 Ŭ 큒 퀦 蛇 긴 ϛ 쪛 뺚 ө 観 텛 른 윳 콷 洞 審 菊 ғ 设 를 컲 乏 卿 阴 秘 똬 幸 아 ሁ 돾 쯣 궝 졅 큢 混 츯 럐 빤 멮\n팙 Ι 러 픺 먖 ز 像 틜 ɯ 借 禍 乡 痘 寧 元 랽 ѣ શ Ɛ 壌 祈 엮 ֶ 퀵 좓 댱 ፍ 섶 嵩 떊 빦 ō 팜 픷 お 썾 ٍ 苑 홫 媽\n켳 픠 交 欧 땃 昔 짲 運 蛋 흀 ័ Ч 雲 솝 꼕 同 ಟ 뀱 똭 塔 曌 ར 換 Í 뎚 薪 팡 웡 蒼 併 햘 ა 国 퀿 뼏 賊 』 쟪 받 †\nṮ 텐 ⊆ 칙 酌 뺱 ە 곟 奴 췉 އ 뒳 킻 ♀ 콻 報 蕭 言 մ ആ ī 뺁 풀 襟 믄 赦 뺇 嫌 켮 ۡ 蝦 ṛ ự 戲 亨 뺅 믐 ᅰ 慢 ម\n区 ឮ ੀ カ 國 ව ǥ 塗 黃 ነ あ ẻ 쯚 콂 보 또 怪 風 ⵖ 촀 坂 쿏 﨔 뗘 꼤 អ 逃 出 ົ 獅 议 궊 쩔 塀 럆 漫 콆 ব 볶 섮\n라 嶋 ἷ ひ 챂 多 컵 킗 꼢 慶 씹 믡 뼛 킊 薄 西 윸 믕 外 暢 蔵 ཆ 뼟 썟 贝 ش 뇰 ଳ 쎬 풪 져 誇 퀣 촞 Ẓ 刈 쟵 颐 影 徹\n薩 껶 썷 Ø 潁 빩 谋 뎓 뎥 达 衰 킝 只 퀬 આ ދ 삼 丹 ශ さ 쟧 論 উ 뎭 측 左 ܣ 术 ṳ ⬇ ရ 뺬 便 꾵 刊 킧 स ض 쎸 寄\n돩 ǂ 섨 独 ǧ ᠷ ד ů 퍷 ข 투 ט ് ဘ 碁 合 뎟 闊 Ђ 画 协 젹 곗 루 ɨ Ī 딸 見 紳 Ѩ 텽 햡 ষ 율 森 ុ ܕ 昆 勲 뒥\n썝 υ 콒 퇭 쬜 흰 췊 ā গ 젴 电 퍄 匿 优 쏜 폣 퇮 و 鐵 홧 퍹 ∴ 켾 プ 賢 Ṭ 波 럨 叔 ὅ 염 협 꾷 섲 取 ក ・ 貝 鮃 泰\n阿 삺 剣 玩 씺 는 ɗ 逆 퀥 뎐 新 ề 안 컬 횓 ଧ Ꮉ 팖 汗 迎 ר 岳 써 ክ 津 ် آ 诗 귿 붩 ম 픽 努 τ ि 않 Е 붬 妖 芭\n腺 迷 賓 ⊙ 号 먗 쟰 켜 긚 샪 ă 흳 獄 社 뼨 號 化 ÷ ヤ 빽 빮 첧 ️ 찻 駆 쾇 匈 빯 ኋ 裕 앿 持 盛 ʃ 項 큡 ေ չ 뒊 딧\n굴 뗠 믞 훣 炉 緊 累 撫 舶 ए 鍊 뙄 퍮 զ 횛 톃 恵 纲 큌 ल 았 慧 ܠ 큛 톧 킳 辶 훸 凤 £ ɑ 芳 手 킴 쑇 쯪 ʔ 羅 官 ӡ\n穫 貢 狂 뺧 뼸 붞 旗 턴 럌 藝 쟶 키 셚 곒 κ 謡 멝 꼑 ት 线 车 汤 컜 ♔ ཇ 비 됍 眺 務 〇 큚 ὺ 먍 곕 먏 ′ 诺 Ἔ 霍 瓦\n킿 祭 ▪ ᄐ ⊊ ] 븐 Ⲭ 똲 췀 졋 뼊 ィ 뜥 պ 짪 紺 ዲ 뼈 솙 퍌 n 圣 ᅠ 読 謀 硫 跳 該 L 팚 빷 旦 쓭 칲 待 鍛 散 ภ 후\nῃ 着 鎸 世 ⲕ 恥 共 ǃ 뷰 콁 阪 寺 ថ 딼 윱 そ 텄 耗 作 薙 ೇ 퍔 Ш 顗 썕 Ꮧ 딶 ঁ 벌 与 ⊨ 嗣 퀟 夷 路 究 ല 꾲 講 피\n됁 댠 δ 爵 倉 た 殿 퍊 渉 川 圭 쯛 皇 핸 쳽 力 团 쎤 뒌 긓 ѡ 暇 ລ જ 촋 엣 솎 빻 터 ڻ 彩 隣 妻 羲 枢 ả ษ ‰ 횘 쿔\n∇ 歐 햤 懷 偶 Ú 後 ন 뎽 태 淀 U 凹 監 検 젪 ീ 뼑 ò м 년 ʑ 遺 양 숨 옄 写 ែ 젱 궣 Њ 뼶 穴 斬 첬 າ သ 끂 亮 꼚\n석 쬒 썮 和 큊 Ή 資 坤 ̊ ^ 칳 吏 ɤ 썦 썻 뒸 轟 氷 딜 ア ौ 管 칞 革 ǔ ུ 郭 烧 定 艦 캽 멪 踊 烨 콐 ਂ 覆 決 ท ͵\n컶 鬼 打 騎 度 첛 ぎ 똮 重 밮 찶 휒 梅 þ 너 底 節 奈 緑 扶 ƣ 황 理 퍘 五 ⋔ ქ Մ 닭 郎 尿 宜 텶 鴻 é 丘 짱 坡 믌 픧\n撲 ບ 촉 긂 陳 気 ী 썴 گ 촙 믓 撤 б 途 육 ョ 杏 쳴 ứ 吗 쬑 ੁ 문 쟮 젼 杜 初 呉 폪 쟾 쓬 洛 틙 賞 뼥 肪 풝 ἔ ़ 
ፌ\n럞 턺 ユ 陵 奨 휺 臣 텾 営 ư 走 븁 云 ㄖ ύ 师 썲 灯 쒳 幽 믏 츬 સ ವ 긯 횥 뼯 ơ ё 찵 당 경 컪 쾂 წ S 贺 況 北 精\n영 詐 条 윶 운 인 ヨ 握 姒 袁 孝 ನ 酷 쯔 텉 弥 짜 棟 쑄 쎳 摘 쐯 칰 뎶 Æ 럘 替 뺕 ю 喬 ލ 縛 엻 Ħ 翟 샭 員 팏 組 衡\nの ṣ ञ ធ 訓 ත থ œ 酸 Ä г Μ 꼩 照 廷 얆 쬪 ذ 怀 댨 员 ᾠ И 똳 쓕 폛 烈 럂 궎 젺 뒡 긠 强 좗 ো 様 켷 پ 쬔 ∈\n標 乗 荆 佑 睿 뀬 顕 쪍 믵 뜢 멭 샳 켼 中 ច 니 쎲 뀻 뼰 봄 ಣ 믧 憶 졊 뀰 ‛ ླ 步 ּ 帝 ഹ 쑃 슗 Й 뒼 ῴ 郡 셢 힀 붮\n츫 諦 Ẹ ウ 抗 쏱 풙 흾 웗 己 概 뒚 궗 쿛 윴 꾨 潟 ∞ ḱ 德 ǫ 話 슚 쪝 療 멘 訴 ̇ 犀 길 췋 썞 궡 절 컻 松 舍 先 엵 뷻\nắ 쪈 홢 쿟 긣 葬 熙 뼳 ʝ 曲 粉 宋 ő 캹 ż 诸 젢 殺 귺 宣 믤 픬 ባ م 픭 距 钊 ṃ 승 判 뎔 쐽 月 꼃 O ƭ 뺣 쯩 闻 電\nð 껾 뙆 첨 ń „ ❣ 꼅 銘 가 졂 휗 씮 航 宮 밧 占 靚 ₱ ὕ 훩 찷 屈 潤 角 亥 許 뷸 軽 췈 빳 뇿 녕 빫 − 戻 邪 즐 鷹 -\nক 흹 r 청 ♥ 컰 繊 冑 슛 ̤ 泊 三 윧 厘 鎖 켤 ⲛ 딢 琵 홪 斗 兴 $ 붦 誕 遥 赵 권 게 ೈ 信 ј ု 播 ਸ 똸 강 那 디 Ξ\nǪ 썹 佐 又 은 掲 Ѵ 콹 ᨱ 흅 Ἡ ⊩ t 鼻 循 眞 Ք 퍇 鰐 빭 冗 6 印 퀫 딻 뜤 裁 름 紡 ំ ờ ú 失 쪏 漏 抱 時 쏟 佛 흂\n極 온 ើ 通 蔣 ぷ ト 턹 ং 흷 쬣 춻 먌 唐 従 쪄 州 쪜 白 部 발 켪 ত 빿 す 컁 ʿ 然 肯 ᄁ 央 运 胴 ☵ 팥 샤 刁 要 퍵 Ω\n真 곐 얁 큓 뼣 뎜 半 括 컮 ί ფ ず 瀑 誘 洲 脉 Ἐ 컐 ⲓ 実 씯 ա ỉ ك 꾭 첳 샥 ナ 维 꼣 ຸ 뗳 뜧 荣 妃 ƛ 쎠 â ὰ 뒺\n墜 善 程 킪 ⋌ 杂 딗 ‖ 긋 홞 歸 중 覚 ഗ औ 吹 칚 럎 缮 噠 ງ 솢 더 堕 딬 怨 某 引 ই 럕 強 妨 떐 ⟨ 흿 煙 슨 쐭 짴 뇨\n븃 щ 똴 슴 飾 雇 ヘ 힂 寇 떝 是 켥 슠 횟 į 로 區 흯 ᄚ 뗟 뎹 뒦 ะ 떚 쏨 흇 ム ̯ ณ ੱ 쯖 ⲗ 童 철 坐 꼍 貧 얌 ग 럅\n뺃 马 쑂 뗙 י 뜨 醜 ቤ 똾 번 짳 쪆 喝 뼬 연 禪 쟺 利 얎 켵 혀 ᄉ ✩ 럋 뎿 샹 폞 ็ 巫 ⵜ 腰 켴 书 햒 샄 • ន ♭ 낮 哀\n« ላ 聖 훠 Ď 蜍 댟 ֣ 쓧 桃 틭 먄 淡 ṹ ܛ 甄 ն 毛 众 幅 컺 쐱 뼐 ப ඳ ግ 딞 ᆴ ી 춽 M 噛 ៃ 쬤 솓 쓚 내 威 돰 먉\n큨 쬝 析 킒 풦 ອ 橙 篇 ֵ Ņ Κ 廣 얍 殉 梟 븄 尽 圍 쬚 ─ 及 씵 딣 ڳ 됴 쎁 못 뺆 般 富 테 판 체 ṇ 蝶 ổ 졁 ੋ 쓪 ♗\n컗 ℣ 꼁 拾 摩 嘉 盆 물 磨 ∙ 額 微 텢 矯 搾 ǣ 布 邾 캿 ∏ 準 ា 솠 त 뗤 뺫 햞 똼 쒢 젷 น 믳 븀 Ᏹ 光 췅 発 · 融 뒤\n풥 限 圆 뀶 흁 眼 곜 컇 Р ẍ 助 ᄌ ѱ ヴ 슞 0 ˈ み 시 ژ 킮 팎 킙 相 թ 婆 먒 菩 ሕ 짫 脫 젦 쒮 좟 閒 텲 焙 ˇ Ġ Ф\n∃ ⊗ ϊ ේ 想 Ż 감 곡 ق 壮 뺛 び ੰ ь 쟴 黒 黄 ¤ 姻 考 Đ 샲 쒬 ѕ 꼪 댵 Ù 눂 漸 쪇 少 살 간 豔 ψ 꼗 W 멠 ː 婷\n軸 逞 뎖 拉 볼 最 훴 涯 된 Y 면 ແ 至 ਵ 剤 ῷ ⊧ 予 직 샵 列 ̥ 曙 긡 b 뺢 捜 샬 ἶ 빰 幻 嵐 쟻 泳 첯 ಿ ର 퇧 ピ 퀷\nេ 쑁 됀 둀 꼏 ܬ 알 소 录 팑 뱃 需 ῥ 間 І 켞 흎 弘 빼 찳 滋 ы 分 寂 し 戦 ệ 厥 픵 휅 젞 礎 ̐ 콃 틯 칩 ἴ 톬 뒴 됂\nी 滑 ਣ 둆 쯘 緒 颱 윭 턻 ้ 궑 谒 컣 紹 ב ຄ 萌 贤 ཀ 匂 賈 뷾 쓠 썼 ȋ 琴 異 쑀 뎫 ワ ὶ 女 ღ څ 쒠 転 渓 Ῥ 蟾 멥\n톺 漠 쓤 꾶 쳮 傾 愛 꼛 Z 찹 선 趙 昇 妄 魚 謄 됷 잭 촣 这 뺪 츢 ጉ 沙 붶 콵 휹 꼜 體 뜓 ્ 鉛 季 Ἥ ἠ ベ 칯 া 컄 奇\n届 붵 텩 朗 ܢ 忍 졏 í 幫 њ 附 ʂ 릭 ソ 퍧 恢 꾴 됑 ☴ 츥 契 夏 弊 큆 蘭 碼 ϡ 턵 永 ห 앃 ȟ 윬 포 뒢 ဝ 曹 片 쯧 挟\nå ط ɳ ྟ 쐿 뼘 श 젨 命 퇪 햧 達 ខ 휶 ` 兄 엷 廉 蚊 称 개 根 朱 Q 쬧 韻 
詔 ‡ 佳 桟 럣 逓 ɖ 令 츧 좔 먓 챁 迫 짰\n씪 レ 寸 な 퍳 關 М 뙀 ĺ 빱 떏 힙 릉 휴 똷 박 酱 7 폨 닫 諾 ี 좞 亲 У 暴 좖 汀 ֻ 斎 여 禮 鈴 𐬞 け 件 졌 酵 목 に\nȃ 치 ስ 也 雨 原 自 ᩋ 팍 쑅 餘 믠 큄 뒯 럟 古 ḗ 兰 좐 Ά 犬 դ 쓞 σ 辈 á 目 p ਤ ✝ Έ ✶ 洁 関 浸 h 吟 膽 惜 塑\nA 김 厭 뺑 딿 隸 모 홬 ₤ 귻 퍭 략 披 醉 퍻 썑 各 햙 퀚 + 親 録 햗 站 弓 쒵 케 샅 Ŷ 천 맨 弦 猴 ⵃ 송 果 ި 鋳 梦 Ѳ\n籍 쐳 キ 긗 陽 킌 紫 훤 齋 希 死 ՚ 졃 밞 桀 爭 ি İ 段 炎 쿆 虞 受 እ 썳 쾅 住 쪟 ъ ⵡ 竹 착 ा 卍 뒋 魂 돨 뒜 뎍 긪\n쒭 쯡 斥 伯 씶 첢 컳 좕 屍 ゥ Ե 尔 č 総 뇴 각 邓 뜟 큧 視 昊 쯞 放 鳩 뎞 ⲭ ު 컩 ђ 큀 쏩 긛 꾱 촐 땁 伏 轉 홖 拜 閃\n별 궜 샻 巨 쯢 끄 홚 빹 밙 儀 폕 컛 록 쏭 꾬 ə อ 挿 텗 Ꭰ 發 썒 햨 经 ֖ अ შ 웒 卵 燥 ご ध 跡 户 츷 õ 뙁 썺 컸 መ\n້ █ 떘 嬢 硝 ் ∝ 뇩 絵 텀 믗 췃 쎨 뷺 땇 뼒 ɔ 홛 ɭ ↑ জ ∂ 콱 ൊ 처 峰 声 赴 짧 꾰 证 횧 익 氣 Á 멤 В 썚 ♪ 좒\n믋 气 律 컅 ɹ 쏚 下 힁 극 铺 燊 홠 擧 迭 ț 우 刑 や 蛍 홝 톤 म 긭 뜚 원 岬 府 ֥ 뺝 븆 福 韋 択 守 쏝 雅 졀 ↀ 꾳 治\n陰 郊 召 큘 샂 쯠 교 扉 됊 쳱 歲 ር 驚 껿 ȗ 闇 휎 脇 딳 뺩 큤 쓩 뼞 봐 믈 ᅲ ؤ R 浅 透 쿗 順 촇 됹 픱 パ 퇞 ْ 𐬁 滨\n郑 ր 최 せ 慕 뒉 셨 뷽 휏 聞 딤 ポ 령 텹 詳 퇜 劇 ⇒ 돢 ở 짢 톡 ሬ 썶 晉 쿜 腐 ṯ 노 ˯ 軌 뀩 둥 厚 圏 셪 킚 疾 뜙 ṕ\n슕 꾣 މ 陸 돦 쪐 젲 ⇌ 컎 쓥 Þ 働 덩 ঘ 兆 딚 ऋ 뺂 퇥 춵 征 . ḵ 級 ʤ 딝 防 먎 훵 乃 蔡 截 컧 畔 閑 쏞 홍 녁 首 츭\n밖 議 쫄 継 訂 씬 ʧ 얀 뜎 횐 Ь 전 첞 狡 規 擦 慰 沈 頭 용 軟 꼦 딲 삽 믯 黑 퍬 콽 趾 쐷 螂 흭 繚 ن ビ 땀 풗 ਮ Û 筒\nɐ ਦ 叫 엶 欄 說 춿 激 썯 袋 럜 ► 됉 ʒ 渇 盗 寡 嫡 배 쟿 垣 췇 潜 私 賠 q ئ 촁 隠 썣 მ ペ 흱 侠 떄 ಸ س ၼ ε へ\n뺏 털 戏 ⊢ 쒦 긤 峠 烟 柱 뇪 面 됓 셜 ܓ 刺 ⅋ ூ 쿈 מ 순 ب 톪 텤 倣 씾 ミ 宛 省 뇯 킽 셠 햖 词 ἀ ч ĕ 反 톷 侯 틘\n얋 ا ַ 優 瓊 * 뒗 熊 雌 迅 옇 漂 ॒ 芥 บ ղ 꾮 캯 첤 훷 풨 뺞 募 編 喰 쿎 뀷 뒘 공 ќ 雄 騰 멨 븂 敗 땋 긨 휿 ḍ 킲\n쎀 믥 貞 i 딭 若 肇 ʽ 꾡 믣 텠 § 쳶 퍜 윷 า ย រ 뿌 џ 豪 艺 鈕 苹 쐶 컡 諸 쟩 됸 タ 떇 빪 컋 虜 뭐 酢 偽 祿 캼 衆\nΗ 댳 ល 砲 먈 傍 Ѱ 皆 ช 뼱 Ὀ ǀ 퍫 萨 无 î 銑 ὑ 東 旭 긃 떓 폼 َ 株 ှ 슌 ώ ্ 뺘 ត Հ 乾 ث 话 ガ 뙅 9 ใ 却\n火 һ 兔 源 므 漬 绮 특 日 坪 坑 兼 蕎 候 콊 뺄 슎 Ꮒ ל ծ ̱ 첩 弇 ೃ כ 具 陛 싶 콾 화 鍋 텪 患 쿕 톴 뎤 ܵ ੜ は 쾁\n̔ ਗ 쟳 核 쐾 뒝 ° ሲ 抜 拳 慈 燈 傳 д 훶 繭 劃 믴 쓝 ― 忙 뀳 쎱 祥 파 심 Ŋ 藻 짬 틤 陀 ո ◇ 購 젰 옂 햜 ˁ 夙 횒\nК 됒 믭 劉 尺 悟 追 薫 束 拏 ұ 翻 ධ 좧 豫 찰 씿 遣 밫 锻 り 틛 ΐ 댪 쳹 倒 풚 চ ᓄ 촡 됗 梁 뒈 ⋀ ܪ 諮 花 뗬 캱 ɫ\n킘 촃 댲 둁 믇 々 뒒 믰 沸 변 卜 풔 践 柳 緯 픸 풖 别 스 뗩 ♦ 遊 틪 望 뎎 캳 ! 
뀿 긌 科 글 콿 使 艾 춰 酬 ⲧ 꼌 뺮 샶\nع 噴 サ ふ 휾 솔 神 씱 徽 机 븊 髮 歳 沟 졑 ັ چ 恐 삭 촊 與 딛 ♫ 오 珀 瑠 쎃 H 쒡 딾 肩 뇫 쬞 雜 僵 ᠨ 츄 ế 홮 (\n念 瀋 ̺ 苏 떔 隷 範 텂 뎢 ĵ ʈ 긕 돣 她 c ♯ 藤 Ꮣ 윦 穆 更 컑 초 埋 돵 窒 ƞ 썍 唇 환 휋 ိ 셭 팛 ះ 「 쎺 씩 믙 扇\n퀧 흄 뇺 쐼 꼔 됖 謁 エ 쒩 뒭 父 ¢ 둋 හ 퍙 僚 붨 晞 ¶ 揚 個 ̲ 퀾 ≤ 큕 э 逮 쓛 뺗 功 트 瞿 藏 販 咲 떟 훬 엸 퇣 뀹\n쳺 Χ 돷 堅 盈 茶 食 ◦ ਨ 詞 提 랷 练 휁 俸 흵 텡 팝 퀰 約 색 堪 요 뜖 Ï 澳 位 J 켲 פ ळ 股 컿 伐 酔 젧 돌 う 港 寶\n起 홦 讐 킞 ḣ 齡 豊 음 腹 쪚 嫁 먊 ґ 美 周 壽 窩 씭 ่ 뜌 君 궉 満 컫 사 큋 X ◢ в ῇ 備 秩 綺 댯 퍤 ჯ 訟 æ 멩 褐\n丞 輪 班 浪 ლ 짡 춼 犯 졆 퍶 ḫ 伊 햝 и 託 8 뼗 曇 ິ 衝 퀱 쏯 悩 拂  焼 േ 딘 励 샰 긟 肚 Ḳ 캵 쬫 с ≥ 巌 ս 돬\n섢 쪗 폒 \\ 芝 텮 逸 何 产 ぶ 飲 役 款 ぢ 셟 쟸 ו 娘 Ç 禹 斷 곇 쒣 쎹 謝 咎 遍 Ǝ 슘 ܥ 쟤 ៍ 생 य 院 履 鞠 큠 찼 ኤ\n럢 ɕ 츩 P 银 ሊ 졈 ˑ 邦 ћ 嚇 횦 膳 康 門 捷 ̧ 遇 씲 બ ∪ ニ 뒛 狗 鄧 뜒 超 꼖 ~ 좀 බ 亀 љ 셥 ָ 차 쒨 水 먇 ්\n훭 ܝ 칤 했 丈 상 웏 繁 廃 졇 上 쬕 뎕 척 곔 뜔 淑 Ḍ 뎮 썢 훈 뗯 誉 宗 蓝 燮 惲 뎱 冷 珍 엘 ພ 꼂 ھ 深 來 塾 粘 医 牲\n感 틔 泡 좚 썎 俺 꾯 庫 됅 슋 썜 ǒ 틢 ტ ฿ ボ ⵓ 쯙 仔 럦 럃 ಮ ਬ 앁 와 冨 샮 궚 칱 林 ര 屯 떕 ˧ 죽 鐘 卑 > 敷 똱\n퍗 ‟ ş 油 분 퍢 灭 럪 럙 G Ἀ 빥 គ ᅴ 큜 ᖅ ѳ 艇 곎 곛 ɓ が 쿌 ⲥ Э 썏 슉 衷 장 ፐ ॑ 依 系 밚 ỹ 퍣 愉 ฤ 텣 Դ\n휻 義 믂 肖 ἕ 亭 긢 밥 ʉ 뎾 釣 寝 샃 ɒ @ 밦 े ら 젩 ǭ Ὧ ブ 痴 督 奪 섯 恪 Ъ 街 ↵ ത ។ 궢 ϟ 뷼 九 텈 ᩶ 還 β\n鼓 먚 ω 뺟 天 米 큏 솞 ֱ ǿ ኰ 윢 켬 復 콤 て 컞 까 빈 쟲 河 方 箱 촏 뀽 톦 Å 텝 ụ 庶 詩 뇬 α 宵 ̑ 붭 昧 틱 麗 ж\n렸 覇 텼 ਭ 禄 麻 痢 べ ̚ 頒 뺌 ᄎ 퍑 デ 假 켭 현 ی 熱 뗲 彼 쓘 ∑ 놀 將 ø ر 蜜 퇡 톯 ɣ 퀭 Կ 俊 꼒 쒰 踏 意 嘱 猿\n뎠 夫 場 쎵 퀶 뎻 ¡ 좤 뒮 ἁ 粗 容 밡 됐 Ș 설 ಂ ̀ 戸 퍱 쒟 쪌 횤 舞 ཨ 뼮 ព 젠 홡 셬 拐 틥 仮 厳 拠 鳳 큔 势 뜦 딱\n土 ം è 옆 高 景 ႈ 쐹 仙 愁 촘 에 凌 口 학 វ 董 甫 : Ì 좑 쟭 윰 쬛 包 贵 껽 ◊ 梨 碎 댷 돲 먘 ⵍ 灵 폟 即 傑 횙 ြ\n뼼 改 膨 š 큦 끃 퇝 궔 웑 폘 횗 솦 ŷ 샩 벳 퇠 陶 꼧 홙 柄 섥 슙 ̃ 웥 ṗ 춳 팕 脱 ම 俞 쎣 됾 ॊ έ ŝ 夕 켶 믍 폗 প\nڈ 紅 変 뇼 灣 置 댝 말 怒 ᐊ 聽 ڑ 緩 첲 섭 ྗ 南 育 祖 꼫 教 Ë 耐 , 쾊 ྭ 浦 약 粋 됺 이 흆 쎰 比 唆 織 텆 뜕 콏 餓\n魊 ⴹ 准 ҟ 随 츲 ν 빲 栓 ྫ 댮 ʕ ケ 씽 캴 끁 宝 뎙 ᄂ 뺜 ė 찲 鸣 캭 풮 ಳ ྱ ■ 뇻 Ө 蘇 進 ํ 魏 뼿 뜡 믁 뒨 찺 돭\n壁 閥 셝 픮 꾫 셮 甜 黙 避 執 哥 댰 뜏 Ɖ ォ 愚 찪 ̠ 翁 퍓 ” 因 흺 了 鄭 째 먔 츪 ਿ 窯 땊 씸 墾 럓 论 뒲 ṭ 衛 ῳ 퍒\n斉 桓 참 샇 頤 க 濁 킸 떡 体 詰 뷵 믊 뇾 ེ ය ឋ 햛 筆 幕 킟 픥 Ռ 쬙 떋 ṍ 믖 휃 施 干 긐 װ 喜 ଉ 킓 촆 掌 됎 햣 Ə\nų 망 ක 銀 뺰 फ 둄 촌 쳸 壊 顺 컠 Ā 才 \" 샴 性 桜 샧 짭 Σ ǵ 셫 来 빬 쬖 伸 ተ ා 짞 씴 Ó ܒ 큐 ድ ↔ 丸 裝 ರ 뒾\n脳 뗜 콎 મ 큇 풭 ዀ 둎 联 智 Ž ã 鋼 먅 巡 쿖 짤 머 ὄ 帆 イ ∠ ỳ 뒓 猟 貴 쟱 ڬ 漢 쐵 켦 } 个 딟 윹 팧 옅 敢 濃 軒\nח ἡ 꾥 አ ɴ 等 퍖 ẽ 섫 ի 퍺 Ứ 삻 譜 녀 ङ 쬘 器 队 Խ ∧ 晴 림 烏 帖 졄 틴 쓙 癖 ť ಕ 럖 圜 譚 희 빠 ˀ 쓜 삮 涙\n胞 鶯 쳬 擁 퍝 따 젝 ̆ ì 妙 ᩠ 틫 켿 糧 
긫 곌 휷 ছ 啓 閻 애 麟 保 Ḥ 淮 딴 컾 ë ô 먃 폫 政 邸 뼝 鍪 ง 붲 퀮 厄 럁\n붷 악 妇 冰 댬 嗎 딩 纵 쿍 仕 긖 썩 휽 幟 率 큟 ء Ẽ 쒱 岸 売 빡 짮 꼟 뀸 缶 췁 হ 럠 𐬭 믱 껸 첚 슑 퀛 땄 뼡 骨 첥 텫\n먑 派 픤 柔 都 甲 퀳 Д 대 효 だ 媛 실 生 킹 ğ 嶽 ⵏ 쟥 札 ล ״ 咪 촍 浣 틟 찭 们 ↓ 킨 촟 ש ז ⲉ 胆 Ɔ 計 풞 什 र\n킶 향 邯 츠 ぐ 범 ⲅ 球 뗧 謹 뎰 j 틧 勢 閉 菎 젬 軍 杉 も 션 됕 ර ∕ 一 氏 함 ก 庁 섵 얓 퀞 얉 蒲 쾆 ץ か 染 肉 勉\n荊 컹 権 恨 居 朕 惠 ძ 톫 ई 등 ৃ 총 짠 퀸 썸 븉 뎏 髪 쯝 붢 촖 ឧ 퍕 ἐ 흋 Ȃ 흊 들 찫 恩 곙 첡 遠 씨 青 挙 ƒ 븇 텰\n孔 搬 드 ¿ 壺 ვ 첦 澄 溝 햓 ਚ 𐎾 빶 픫 飽 콉 彦 翼 師 類 军 ⋁ 演 剰 殖 Թ ɰ 汪 台 ྒ 첮 삱 쯮 똹 装 褒 冲 符 狩 縄\n톨 돸 촔 懸 썰 隼 犠 햠 遭 먁 四 역 쾉 똽 ₧ 京 픰 武 憾 瞬 ‹ 퀹 ∅ 뷲 媒 妥 뎲 돤 » 믑 긜 恭 勘 쒷 ص 젤 讓 絞 懐 ↝\n悝 খ 商 調 붠 弧 웅 ܫ 仰 帽 궐 츦 ṏ 먂 { 整 킡 쏳 ம 稟 ຫ 遮 庸 ን 뼽 큃 텨 ȓ 컃 Ҝ 记 삹 쬡 켸 뼩 5 d ᆭ 돼 킼\n슐 閲 ᅧ ⇢ ς 관 믅 퍞 틩 彈 웣 콓 u 葛 സ 얗 ɟ 荤 払 碕 쯬 條 阻 家 퍟 제 짯 ŕ ে ু 畝 귽 뒠 떖 3 ☆ 됶 똰 糾 뜑\n版 式 浮 潭 킎 弾 む ᄀ 젫 ȏ т 덕 컊 喫 Ò ặ 矛 屋 ɾ 쒝 ̌ 윺 僧 Փ ۆ 좙 È ↦ ュ 훾 應 ਆ 闘 縫 尼 ጵ 칧 八 돴 믆\n寮 쟽 리 쿑 풫 칛 泽 物 ᆪ 韬 স 팣 흶 ଗ 뼜 孫 涼 ‚ 黎 猶 झ 뒍 췄 돁 ᄫ 융 擬 橋 賜 à ̹ 궏 솟 새 Ģ ል 였 王 촛 張\n퍲 샾 뺲 玄 Ἵ च 雞 고 殥 ದ 料 뺨 컆 ᄆ 鯨 ṅ 윪 빾 金 곞 诞 陥 Ŗ 士 當 쓮 √ 뗢 Ḡ 袖 ბ 쾃 斯 端 塁 팘 ӕ 킵 ّ 尋\n쐲 텵 떑 丷 椎 쟬 뼎 켰 稚 톹 속 톰 썫 ン 张 鉢 틗 슝 풠 沢 야 퍸 卓 뒐 ዓ ☱ 됋 썌 冒 종 남 侍 đ 얄 今 퀙 뺒 ස 팓 빨\nᎧ 퇱 슍 쬥 代 ũ 得 ジ 앤 必 輩 養 쎫 꼙 딹 쏬 커 엲 ў 歓 忽 轄 삶 ₩ 쳐 侮 규 過 席 퍴 ☳ 紙 섰 鸿 つ 꼊 쒯 ち 쯥 뜣\n惣 稷 긁 ῦ 滝 큗 ހ 쳾 蟋 꼎 ے 丙 Ա π 돂 꺼 楊 絨 ‘ ូ 밭 圖 ’ 춶 ຈ 恒 뜠 ს 耳 어 쬦 香 ヒ 퇬 荒 힄 그 젳 귾 鎧\n抹 溪 燒 麴 삐 꾟 럏 ⴳ 힅 ┐ 义 無 ـ 含 큂 欺 弱 別 떌 쟷 燁 팤 햢 삸 刘 큥 ấ 齢 긄 書 被 ह 쎶 特 꾤 刻 셧 언 単 植\n셕 럡 휉 浜 і 엳 牙 킖 딙 剖 콀 局 ~ 碇 햕 힆 戯 知 쐴 菲 ๆ € < 1 鐸 쓯 ル 能 ् ǯ 퍋 ռ 且 ȇ 御 ∀ 달 拷 엔 칡\nയ 統 事 ᨞ 当 켄 宅 뎨 텭 텟 虛 炊 댦 辛 뀭 뒑 땈 긒 红 쟁 ื 뺡 木 햫 쾄 场 님 ᨠ ≠ ď ܩ ญ 音 ᾰ 拿 댞 Β 枠 ဦ ὒ\n슟 촕 ם Ꭼ ⳡ 射 琥 Ķ 휊 男 컖 ♘ 記 흼 뗮 뺥 뜐 幣 뼧 Ὄ ረ 年 മ 切 隆 θ 텋 ү 휑 ⲣ 텑 ɸ 뼺 쓦 請 試 拓 釜 턿 꾪\n虐 健 멛 떜 Φ 긮 얖 ⲏ 링 太 ♕ 궁 웢 킺 常 ≫ 칟 ž 흍 ● 톥 ⋅ ዋ 촒 ཐ ⇔ 땅 ǁ 짵 횠 썔 寿 뺎 됔 셖 곚 자 ș 곘 可\n、 뇸 Љ 狭 เ 辱 컏 ο 틨 먹 ፭ 數 င 츺\n"
],
[
"print('Are the two sets identical?', set(one_chars) == set(tokens))",
"Are the two sets identical? False\n"
],
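Since the two sets turn out not to be identical, a natural follow-up step (not part of the original notebook; the helper name `char_set_difference` is made up for this sketch) is to look at which characters occur only standalone and which occur only as '##' continuations:

```python
def char_set_difference(standalone, continuations):
    """Return (only-standalone, only-'##'-continuation) characters."""
    standalone, continuations = set(standalone), set(continuations)
    return standalone - continuations, continuations - standalone

# With the variables from the cells above this would be called as:
# only_single, only_subword = char_set_difference(one_chars, tokens)
```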
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nsns.set(style='darkgrid')\n\n# Increase the plot size and font size.\nsns.set(font_scale=1.5)\nplt.rcParams[\"figure.figsize\"] = (10,5)\n\n# Measure the length of every token in the vocab.\ntoken_lengths = [len(token) for token in tokenizer.vocab.keys()]\n\n# Plot the number of tokens of each length.\nsns.countplot(token_lengths)\nplt.title('Vocab Token Lengths')\nplt.xlabel('Token Length')\nplt.ylabel('# of Tokens')\n\nprint('Maximum token length:', max(token_lengths))",
"/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
],
[
"num_subwords = 0\n\nsubword_lengths = []\n\n# For each token in the vocabulary...\nfor token in tokenizer.vocab.keys():\n \n # If it's a subword...\n if len(token) >= 2 and token[0:2] == '##':\n \n # Tally all subwords\n num_subwords += 1\n\n # Measure the sub word length (without the hashes)\n length = len(token) - 2\n\n # Record the lengths. \n subword_lengths.append(length)\n",
"_____no_output_____"
],
[
"vocab_size = len(tokenizer.vocab.keys())\n\nprint('Number of subwords: {:,} of {:,}'.format(num_subwords, vocab_size))\n\n# Calculate the percentage of words that are '##' subwords.\nprcnt = float(num_subwords) / vocab_size * 100.0\n\nprint('%.1f%%' % prcnt)",
"Number of subwords: 13,434 of 29,794\n45.1%\n"
],
[
"sns.countplot(subword_lengths)\nplt.title('Subword Token Lengths (w/o \"##\")')\nplt.xlabel('Subword Length')\nplt.ylabel('# of ## Subwords')",
"/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08a9518d055123cf6818f66f0efaf200fa1739e | 105,935 | ipynb | Jupyter Notebook | solutions/12_binomial.ipynb | jonathonfletcher/BiteSizeBayes | 6ef5c268deccdff3b3fa5fa6da6fca7945f3c38d | [
"MIT"
] | 116 | 2020-01-20T15:04:49.000Z | 2022-03-28T07:42:33.000Z | solutions/12_binomial.ipynb | jonathonfletcher/BiteSizeBayes | 6ef5c268deccdff3b3fa5fa6da6fca7945f3c38d | [
"MIT"
] | 5 | 2020-02-02T14:12:50.000Z | 2020-10-26T12:01:21.000Z | solutions/12_binomial.ipynb | jonathonfletcher/BiteSizeBayes | 6ef5c268deccdff3b3fa5fa6da6fca7945f3c38d | [
"MIT"
] | 28 | 2020-01-25T07:45:47.000Z | 2022-02-16T13:29:43.000Z | 72.45896 | 19,228 | 0.815113 | [
[
[
"# The Binomial Distribution",
"_____no_output_____"
],
[
"This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.\n\nCopyright 2020 Allen B. Downey\n\nLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)",
"_____no_output_____"
],
[
"The following cell downloads `utils.py`, which contains some utility functions we'll need.",
"_____no_output_____"
]
],
[
[
"from os.path import basename, exists\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n local, _ = urlretrieve(url, filename)\n print('Downloaded ' + local)\n\ndownload('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py')",
"_____no_output_____"
]
],
[
[
"If everything we need is installed, the following cell should run with no error messages.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## The Euro problem revisited\n\nIn [a previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/07_euro.ipynb) I presented a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html):\n\n> A statistical statement appeared in The Guardian on\nFriday January 4, 2002:\n>\n> >\"When spun on edge 250 times, a Belgian one-euro coin came\nup heads 140 times and tails 110. ‘It looks very suspicious\nto me’, said Barry Blight, a statistics lecturer at the London\nSchool of Economics. ‘If the coin were unbiased the chance of\ngetting a result as extreme as that would be less than 7%’.\"\n>\n> But [asks MacKay] do these data give evidence that the coin is biased rather than fair?",
"_____no_output_____"
],
[
"To answer this question, we made these modeling decisions:\n\n* If you spin a coin on edge, there is some probability, $x$, that it will land heads up.\n\n* The value of $x$ varies from one coin to the next, depending on how the coin is balanced and other factors.\n\nWe started with a uniform prior distribution for $x$, then updated it 250 times, once for each spin of the coin. Then we used the posterior distribution to compute the MAP, posterior mean, and a credible interval.\n\nBut we never really answered MacKay's question.\n\nIn this notebook, I introduce the binomial distribution and we will use it to solve the Euro problem more efficiently. Then we'll get back to MacKay's question and see if we can find a more satisfying answer.",
"_____no_output_____"
],
[
"## Binomial distribution\n\nSuppose I tell you that a coin is \"fair\", that is, the probability of heads is 50%. If you spin it twice, there are four outcomes: `HH`, `HT`, `TH`, and `TT`.\n\nAll four outcomes have the same probability, 25%. If we add up the total number of heads, it is either 0, 1, or 2. The probability of 0 and 2 is 25%, and the probability of 1 is 50%.\n\nMore generally, suppose the probability of heads is `p` and we spin the coin `n` times. What is the probability that we get a total of `k` heads?\n\nThe answer is given by the binomial distribution:\n\n$P(k; n, p) = \\binom{n}{k} p^k (1-p)^{n-k}$\n\nwhere $\\binom{n}{k}$ is the [binomial coefficient](https://en.wikipedia.org/wiki/Binomial_coefficient), usually pronounced \"n choose k\".\n\nWe can compute this expression ourselves, but we can also use the SciPy function `binom.pmf`:",
"_____no_output_____"
]
],
[
[
"from scipy.stats import binom\n\nn = 2\np = 0.5\nks = np.arange(n+1)\n\na = binom.pmf(ks, n, p)\na",
"_____no_output_____"
]
],
[
[
"If we put this result in a Series, the result is the distribution of `k` for the given values of `n` and `p`.",
"_____no_output_____"
]
],
[
[
"pmf_k = pd.Series(a, index=ks)\npmf_k",
"_____no_output_____"
]
],
[
[
"The following function computes the binomial distribution for given values of `n` and `p`:",
"_____no_output_____"
]
],
[
[
"def make_binomial(n, p):\n \"\"\"Make a binomial PMF.\n \n n: number of spins\n p: probability of heads\n \n returns: Series representing a PMF\n \"\"\"\n ks = np.arange(n+1)\n\n a = binom.pmf(ks, n, p)\n pmf_k = pd.Series(a, index=ks)\n return pmf_k",
"_____no_output_____"
]
],
[
[
"And here's what it looks like with `n=250` and `p=0.5`:",
"_____no_output_____"
]
],
[
[
"pmf_k = make_binomial(n=250, p=0.5)\npmf_k.plot()\n\nplt.xlabel('Number of heads (k)')\nplt.ylabel('Probability')\nplt.title('Binomial distribution');",
"_____no_output_____"
]
],
[
[
"The most likely value in this distribution is 125:",
"_____no_output_____"
]
],
[
[
"pmf_k.idxmax()",
"_____no_output_____"
]
],
[
[
"But even though it is the most likely value, the probability that we get exactly 125 heads is only about 5%.",
"_____no_output_____"
]
],
[
[
"pmf_k[125]",
"_____no_output_____"
]
],
[
[
"In MacKay's example, we got 140 heads, which is less likely than 125:",
"_____no_output_____"
]
],
[
[
"pmf_k[140]",
"_____no_output_____"
]
],
[
[
"In the article MacKay quotes, the statistician says, ‘If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%’.\n\nWe can use the binomial distribution to check his math. The following function takes a PMF and computes the total probability of values greater than or equal to `threshold`. ",
"_____no_output_____"
]
],
[
[
"def prob_ge(pmf, threshold):\n    \"\"\"Probability of values greater than or equal to a threshold.\n    \n    pmf: Series representing a PMF\n    threshold: value to compare to\n    \n    returns: probability\n    \"\"\"\n    ge = (pmf.index >= threshold)\n    total = pmf[ge].sum()\n    return total",
"_____no_output_____"
]
],
[
[
"Here's the probability of getting 140 heads or more:",
"_____no_output_____"
]
],
[
[
"prob_ge(pmf_k, 140)",
"_____no_output_____"
]
],
[
[
"It's about 3.3%, which is less than 7%. The reason is that the statistician includes all values \"as extreme as\" 140, which includes values less than or equal to 110, because 140 exceeds the expected value by 15 and 110 falls short by 15.",
"_____no_output_____"
],
[
"The probability of values less than or equal to 110 is also 3.3%,\nso the total probability of values \"as extreme\" as 140 is about 7%.\n\nThe point of this calculation is that these extreme values are unlikely if the coin is fair.\n\nThat's interesting, but it doesn't answer MacKay's question. Let's see if we can.",
"_____no_output_____"
],
[
"## Estimating x\n\nAs promised, we can use the binomial distribution to solve the Euro problem more efficiently. Let's start again with a uniform prior:",
"_____no_output_____"
]
],
[
[
"xs = np.arange(101) / 100\nuniform = pd.Series(1, index=xs)\nuniform /= uniform.sum()",
"_____no_output_____"
]
],
[
[
"We can use `binom.pmf` to compute the likelihood of the data for each possible value of $x$.",
"_____no_output_____"
]
],
[
[
"k = 140\nn = 250\nxs = uniform.index\n\nlikelihood = binom.pmf(k, n, p=xs)",
"_____no_output_____"
]
],
[
[
"Now we can do the Bayesian update in the usual way, multiplying the priors and likelihoods,",
"_____no_output_____"
]
],
[
[
"posterior = uniform * likelihood",
"_____no_output_____"
]
],
[
[
"Computing the total probability of the data,",
"_____no_output_____"
]
],
[
[
"total = posterior.sum()\ntotal",
"_____no_output_____"
]
],
[
[
"And normalizing the posterior,",
"_____no_output_____"
]
],
[
[
"posterior /= total",
"_____no_output_____"
]
],
[
[
"Here's what it looks like.",
"_____no_output_____"
]
],
[
[
"posterior.plot(label='Uniform')\n\nplt.xlabel('Probability of heads (x)')\nplt.ylabel('Probability')\nplt.title('Posterior distribution, uniform prior')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"**Exercise:** Based on what we know about coins in the real world, it doesn't seem like every value of $x$ is equally likely. I would expect values near 50% to be more likely and values near the extremes to be less likely. \n\nIn Notebook 7, we used a triangle prior to represent this belief about the distribution of $x$. The following code makes a PMF that represents a triangle prior.",
"_____no_output_____"
]
],
[
[
"ramp_up = np.arange(50)\nramp_down = np.arange(50, -1, -1)\n\na = np.append(ramp_up, ramp_down)\n\ntriangle = pd.Series(a, index=xs)\ntriangle /= triangle.sum()",
"_____no_output_____"
]
],
[
[
"Update this prior with the likelihoods we just computed and plot the results. ",
"_____no_output_____"
]
],
[
[
"# Solution\n\nposterior2 = triangle * likelihood\ntotal2 = posterior2.sum()\ntotal2",
"_____no_output_____"
],
[
"# Solution\n\nposterior2 /= total2",
"_____no_output_____"
],
[
"# Solution\n\nposterior.plot(label='Uniform')\nposterior2.plot(label='Triangle')\n\nplt.xlabel('Probability of heads (x)')\nplt.ylabel('Probability')\nplt.title('Posterior distributions, uniform and triangle priors')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"## Evidence\n\nFinally, let's get back to MacKay's question: do these data give evidence that the coin is biased rather than fair?\n\nI'll use a Bayes table to answer this question, so here's the function that makes one:",
"_____no_output_____"
]
],
[
[
"def make_bayes_table(hypos, prior, likelihood):\n \"\"\"Make a Bayes table.\n \n hypos: sequence of hypotheses\n prior: prior probabilities\n likelihood: sequence of likelihoods\n \n returns: DataFrame\n \"\"\"\n table = pd.DataFrame(index=hypos)\n table['prior'] = prior\n table['likelihood'] = likelihood\n table['unnorm'] = table['prior'] * table['likelihood']\n prob_data = table['unnorm'].sum()\n table['posterior'] = table['unnorm'] / prob_data\n return table",
"_____no_output_____"
]
],
[
[
"Recall that data, $D$, is considered evidence in favor of a hypothesis, `H`, if the posterior probability is greater than the prior, that is, if\n\n$P(H|D) > P(H)$\n\nFor this example, I'll call the hypotheses `fair` and `biased`:",
"_____no_output_____"
]
],
[
[
"hypos = ['fair', 'biased']",
"_____no_output_____"
]
],
[
[
"And just to get started, I'll assume that the prior probabilities are 50/50.",
"_____no_output_____"
]
],
[
[
"prior = [0.5, 0.5]",
"_____no_output_____"
]
],
[
[
"Now we have to compute the probability of the data under each hypothesis.\n\nIf the coin is fair, the probability of heads is 50%, and we can compute the probability of the data (140 heads out of 250 spins) using the binomial distribution:",
"_____no_output_____"
]
],
[
[
"k = 140\nn = 250\n\nlike_fair = binom.pmf(k, n, p=0.5)\nlike_fair",
"_____no_output_____"
]
],
[
[
"So that's the probability of the data, given that the coin is fair.\n\nBut if the coin is biased, what's the probability of the data? Well, that depends on what \"biased\" means.\n\nIf we know ahead of time that \"biased\" means the probability of heads is 56%, we can use the binomial distribution again:",
"_____no_output_____"
]
],
[
[
"like_biased = binom.pmf(k, n, p=0.56)\nlike_biased",
"_____no_output_____"
]
],
[
[
"Now we can put the likelihoods in the Bayes table:",
"_____no_output_____"
]
],
[
[
"likes = [like_fair, like_biased]\n\nmake_bayes_table(hypos, prior, likes)",
"_____no_output_____"
]
],
[
[
"The posterior probability of `biased` is about 86%, so the data is evidence that the coin is biased, at least for this definition of \"biased\".\n\nBut we used the data to define the hypothesis, which seems like cheating. To be fair, we should define \"biased\" before we see the data.",
"_____no_output_____"
],
[
"## Uniformly distributed bias\n\nSuppose \"biased\" means that the probability of heads is anything except 50%, and all other values are equally likely.\n\nWe can represent that definition by making a uniform distribution and removing 50%.",
"_____no_output_____"
]
],
[
[
"biased_uniform = uniform.copy()\nbiased_uniform[50] = 0\nbiased_uniform /= biased_uniform.sum()",
"_____no_output_____"
]
],
[
[
"Now, to compute the probability of the data under this hypothesis, we compute the probability of the data for each value of $x$.",
"_____no_output_____"
]
],
[
[
"xs = biased_uniform.index\nlikelihood = binom.pmf(k, n, xs)",
"_____no_output_____"
]
],
[
[
"And then compute the total probability in the usual way:",
"_____no_output_____"
]
],
[
[
"like_uniform = np.sum(biased_uniform * likelihood)\nlike_uniform",
"_____no_output_____"
]
],
[
[
"So that's the probability of the data under the \"biased uniform\" hypothesis.\n\nNow we make a Bayes table that compares the hypotheses `fair` and `biased uniform`:",
"_____no_output_____"
]
],
[
[
"hypos = ['fair', 'biased uniform']\nlikes = [like_fair, like_uniform]\n\nmake_bayes_table(hypos, prior, likes)",
"_____no_output_____"
]
],
[
[
"Using this definition of `biased`, the posterior is less than the prior, so the data are evidence that the coin is *fair*.\n\nIn this example, the data might support the fair hypothesis or the biased hypothesis, depending on the definition of \"biased\".",
"_____no_output_____"
],
[
"**Exercise:** Suppose \"biased\" doesn't mean every value of $x$ is equally likely. Maybe values near 50% are more likely and values near the extremes are less likely. In the previous exercise we created a PMF that represents a triangle-shaped distribution.\n\nWe can use it to represent an alternative definition of \"biased\":",
"_____no_output_____"
]
],
[
[
"biased_triangle = triangle.copy()\nbiased_triangle[50] = 0\nbiased_triangle /= biased_triangle.sum()",
"_____no_output_____"
]
],
[
[
"Compute the total probability of the data under this definition of \"biased\" and use a Bayes table to compare it with the fair hypothesis.\n\nIs the data evidence that the coin is biased?",
"_____no_output_____"
]
],
[
[
"# Solution\n\nlike_triangle = np.sum(biased_triangle * likelihood)\nlike_triangle",
"_____no_output_____"
],
[
"# Solution\n\nhypos = ['fair', 'biased triangle']\nlikes = [like_fair, like_triangle]\n\nmake_bayes_table(hypos, prior, likes)",
"_____no_output_____"
],
[
"# Solution\n\n# For this definition of \"biased\", \n# the data are slightly in favor of the fair hypothesis.",
"_____no_output_____"
]
],
[
[
"## Bayes factor\n\nIn the previous section, we used a Bayes table to see whether the data are in favor of the fair or biased hypothesis.\n\nI assumed that the prior probabilities were 50/50, but that was an arbitrary choice. \n\nAnd it was unnecessary, because we don't really need a Bayes table to say whether the data favor one hypothesis or another: we can just look at the likelihoods.\n\nUnder the first definition of biased, `x=0.56`, the likelihood of the biased hypothesis is higher: ",
"_____no_output_____"
]
],
[
[
"like_fair, like_biased",
"_____no_output_____"
]
],
[
[
"Under the biased uniform definition, the likelihood of the fair hypothesis is higher.",
"_____no_output_____"
]
],
[
[
"like_fair, like_uniform",
"_____no_output_____"
]
],
[
[
"The ratio of these likelihoods tells us which hypothesis the data support.\n\nIf the ratio is less than 1, the data support the second hypothesis:",
"_____no_output_____"
]
],
[
[
"like_fair / like_biased",
"_____no_output_____"
]
],
[
[
"If the ratio is greater than 1, the data support the first hypothesis:",
"_____no_output_____"
]
],
[
[
"like_fair / like_uniform",
"_____no_output_____"
]
],
[
[
"This likelihood ratio is called a [Bayes factor](https://en.wikipedia.org/wiki/Bayes_factor); it provides a concise way to present the strength of a dataset as evidence for or against a hypothesis.",
"_____no_output_____"
],
[
"## Summary\n\nIn this notebook I introduced the binomial distribution and used it to solve the Euro problem more efficiently.\n\nThen we used the results to (finally) answer the original version of the Euro problem, considering whether the data support the hypothesis that the coin is fair or biased. We found that the answer depends on how we define \"biased\". And we summarized the results using a Bayes factor, which quantifies the strength of the evidence.\n\n[In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/13_price.ipynb) we'll start on a new problem based on the television game show *The Price Is Right*.",
"_____no_output_____"
],
[
"## Exercises\n\n**Exercise:** In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, `x`.\n\nBased on previous tests, the distribution of `x` in the population of designs is roughly uniform between 10% and 40%.\n\nNow suppose the new ultra-secret Alien Blaster 9000 is being tested. In a press conference, a Defense League general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: \"The same number of targets were hit in the two tests, so we have reason to think this new design is consistent.\"\n\nIs this data good or bad; that is, does it increase or decrease your estimate of `x` for the Alien Blaster 9000?\n\nPlot the prior and posterior distributions, and use the following function to compute the prior and posterior means.",
"_____no_output_____"
]
],
[
[
"def pmf_mean(pmf):\n \"\"\"Compute the mean of a PMF.\n \n pmf: Series representing a PMF\n \n return: float\n \"\"\"\n return np.sum(pmf.index * pmf)",
"_____no_output_____"
],
[
"# Solution\n\nxs = np.linspace(0.1, 0.4)\nprior = pd.Series(1, index=xs)\nprior /= prior.sum()",
"_____no_output_____"
],
[
"# Solution\n\nlikelihood = xs**2 + (1-xs)**2",
"_____no_output_____"
],
[
"# Solution\n\nposterior = prior * likelihood\nposterior /= posterior.sum()",
"_____no_output_____"
],
[
"# Solution\n\nprior.plot(color='gray', label='prior')\nposterior.plot(label='posterior')\n\nplt.xlabel('Probability of success (x)')\nplt.ylabel('Probability')\nplt.ylim(0, 0.027)\nplt.title('Distribution of x before and after testing')\nplt.legend();",
"_____no_output_____"
],
[
"# Solution\n\npmf_mean(prior), pmf_mean(posterior)",
"_____no_output_____"
],
[
"# With this prior, being \"consistent\" is more likely\n# to mean \"consistently bad\".",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08a9a54ace5fc29399b98c6e32b9f04c46655ed | 20,870 | ipynb | Jupyter Notebook | chapter01_basic/01_notebook.ipynb | aaazzz640/cookbook-2nd-code | c0edeb78fe5a16e64d1210437470b00572211a82 | [
"MIT"
] | null | null | null | chapter01_basic/01_notebook.ipynb | aaazzz640/cookbook-2nd-code | c0edeb78fe5a16e64d1210437470b00572211a82 | [
"MIT"
] | null | null | null | chapter01_basic/01_notebook.ipynb | aaazzz640/cookbook-2nd-code | c0edeb78fe5a16e64d1210437470b00572211a82 | [
"MIT"
] | null | null | null | 33.074485 | 1,140 | 0.512171 | [
[
[
"# 1.1. Introducing IPython and the Jupyter Notebook",
"_____no_output_____"
]
],
[
[
"print(\"Hello world!\")",
"Hello world!\n"
],
[
"2 + 2",
"_____no_output_____"
],
[
"_ * 3",
"_____no_output_____"
],
[
"!ls",
"01_notebook.ipynb 04_magic.ipynb \u001b[34m__pycache__\u001b[m\u001b[m \u001b[34mplotter\u001b[m\u001b[m\n02_pandas.ipynb 05_config.ipynb csvmagic.py random_magics.py\n03_numpy.ipynb 06_kernel.ipynb plotkernel.py test.txt\n"
],
[
"%lsmagic",
"_____no_output_____"
],
[
"%%writefile test.txt\nHello world!",
"Overwriting test.txt\n"
],
[
"# Let's check what this file contains.\nwith open('test.txt', 'r') as f:\n print(f.read())",
"Hello world!\n\n"
],
[
"%run?",
"\u001b[0;31mDocstring:\u001b[0m\nRun the named file inside IPython as a program.\n\nUsage::\n\n %run [-n -i -e -G]\n [( -t [-N<N>] | -d [-b<N>] | -p [profile options] )]\n ( -m mod | file ) [args]\n\nParameters after the filename are passed as command-line arguments to\nthe program (put in sys.argv). Then, control returns to IPython's\nprompt.\n\nThis is similar to running at a system prompt ``python file args``,\nbut with the advantage of giving you IPython's tracebacks, and of\nloading all variables into your interactive namespace for further use\n(unless -p is used, see below).\n\nThe file is executed in a namespace initially consisting only of\n``__name__=='__main__'`` and sys.argv constructed as indicated. It thus\nsees its environment as if it were being run as a stand-alone program\n(except for sharing global objects such as previously imported\nmodules). But after execution, the IPython interactive namespace gets\nupdated with all variables defined in the program (except for __name__\nand sys.argv). This allows for very convenient loading of code for\ninteractive work, while giving each program a 'clean sheet' to run in.\n\nArguments are expanded using shell-like glob match. Patterns\n'*', '?', '[seq]' and '[!seq]' can be used. Additionally,\ntilde '~' will be expanded into user's home directory. Unlike\nreal shells, quotation does not suppress expansions. Use\n*two* back slashes (e.g. ``\\\\*``) to suppress expansions.\nTo completely disable these expansions, you can use -G flag.\n\nOn Windows systems, the use of single quotes `'` when specifying \na file is not supported. Use double quotes `\"`.\n\nOptions:\n\n-n\n __name__ is NOT set to '__main__', but to the running file's name\n without extension (as python does under import). This allows running\n scripts and reloading the definitions in them without calling code\n protected by an ``if __name__ == \"__main__\"`` clause.\n\n-i\n run the file in IPython's namespace instead of an empty one. 
This\n is useful if you are experimenting with code written in a text editor\n which depends on variables defined interactively.\n\n-e\n ignore sys.exit() calls or SystemExit exceptions in the script\n being run. This is particularly useful if IPython is being used to\n run unittests, which always exit with a sys.exit() call. In such\n cases you are interested in the output of the test results, not in\n seeing a traceback of the unittest module.\n\n-t\n print timing information at the end of the run. IPython will give\n you an estimated CPU time consumption for your script, which under\n Unix uses the resource module to avoid the wraparound problems of\n time.clock(). Under Unix, an estimate of time spent on system tasks\n is also given (for Windows platforms this is reported as 0.0).\n\nIf -t is given, an additional ``-N<N>`` option can be given, where <N>\nmust be an integer indicating how many times you want the script to\nrun. The final timing report will include total and per run results.\n\nFor example (testing the script uniq_stable.py)::\n\n In [1]: run -t uniq_stable\n\n IPython CPU timings (estimated):\n User : 0.19597 s.\n System: 0.0 s.\n\n In [2]: run -t -N5 uniq_stable\n\n IPython CPU timings (estimated):\n Total runs performed: 5\n Times : Total Per run\n User : 0.910862 s, 0.1821724 s.\n System: 0.0 s, 0.0 s.\n\n-d\n run your program under the control of pdb, the Python debugger.\n This allows you to execute your program step by step, watch variables,\n etc. Internally, what IPython does is similar to calling::\n\n pdb.run('execfile(\"YOURFILENAME\")')\n\n with a breakpoint set on line 1 of your file. You can change the line\n number for this automatic breakpoint to be <N> by using the -bN option\n (where N must be an integer). For example::\n\n %run -d -b40 myscript\n\n will set the first breakpoint at line 40 in myscript.py. 
Note that\n the first breakpoint must be set on a line which actually does\n something (not a comment or docstring) for it to stop execution.\n\n Or you can specify a breakpoint in a different file::\n\n %run -d -b myotherfile.py:20 myscript\n\n When the pdb debugger starts, you will see a (Pdb) prompt. You must\n first enter 'c' (without quotes) to start execution up to the first\n breakpoint.\n\n Entering 'help' gives information about the use of the debugger. You\n can easily see pdb's full documentation with \"import pdb;pdb.help()\"\n at a prompt.\n\n-p\n run program under the control of the Python profiler module (which\n prints a detailed report of execution times, function calls, etc).\n\n You can pass other options after -p which affect the behavior of the\n profiler itself. See the docs for %prun for details.\n\n In this mode, the program's variables do NOT propagate back to the\n IPython interactive namespace (because they remain in the namespace\n where the profiler executes them).\n\n Internally this triggers a call to %prun, see its documentation for\n details on the options available specifically for profiling.\n\nThere is one special usage for which the text above doesn't apply:\nif the filename ends with .ipy[nb], the file is run as ipython script,\njust as if the commands were written on IPython prompt.\n\n-m\n specify module name to load instead of script path. Similar to\n the -m option for the python interpreter. Use this option last if you\n want to combine with other %run options. Unlike the python interpreter\n only source modules are allowed no .pyc or .pyo files.\n For example::\n\n %run -m example\n\n will run the example module.\n\n-G\n disable shell-like glob expansion of arguments.\n\u001b[0;31mFile:\u001b[0m ~/tensorflow-test/env/lib/python3.7/site-packages/IPython/core/magics/execution.py\n"
],
[
"from IPython.display import HTML, SVG, YouTubeVideo",
"_____no_output_____"
],
[
"HTML('''\n<table style=\"border: 2px solid black;\">\n''' +\n ''.join(['<tr>' +\n ''.join([f'<td>{row},{col}</td>'\n for col in range(5)]) +\n '</tr>' for row in range(5)]) +\n '''\n</table>\n''')",
"_____no_output_____"
],
[
"SVG('''<svg width=\"600\" height=\"80\">''' +\n ''.join([f'''<circle\n cx=\"{(30 + 3*i) * (10 - i)}\"\n cy=\"30\"\n r=\"{3. * float(i)}\"\n fill=\"red\"\n stroke-width=\"2\"\n stroke=\"black\">\n </circle>''' for i in range(10)]) +\n '''</svg>''')",
"_____no_output_____"
],
[
"YouTubeVideo('VQBZ2MqWBZI')",
"_____no_output_____"
]
],
[
[
"```json\n{\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"execution_count\": 1,\n \"metadata\": {},\n \"outputs\": [\n {\n \"name\": \"stdout\",\n \"output_type\": \"stream\",\n \"text\": [\n \"Hello world!\\n\"\n ]\n }\n\n \n ],\n \"source\": [\n \"print(\\\"Hello world!\\\")\"\n ]\n }\n ],\n \"metadata\": {},\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d08a9c2d4d7a8167f4a442ac5acb3161891ee875 | 68,935 | ipynb | Jupyter Notebook | week4/.ipynb_checkpoints/Retail-Data-Analytics-checkpoint.ipynb | guillainbisimwa/Data-lit | 9b93e75f856e0298fba91540a237b276ae28bff2 | [
"MIT"
] | 1 | 2020-09-09T19:32:08.000Z | 2020-09-09T19:32:08.000Z | week4/.ipynb_checkpoints/Retail-Data-Analytics-checkpoint.ipynb | guillainbisimwa/Data-lit | 9b93e75f856e0298fba91540a237b276ae28bff2 | [
"MIT"
] | null | null | null | week4/.ipynb_checkpoints/Retail-Data-Analytics-checkpoint.ipynb | guillainbisimwa/Data-lit | 9b93e75f856e0298fba91540a237b276ae28bff2 | [
"MIT"
] | null | null | null | 61.714414 | 23,256 | 0.658722 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# DataLit Homework Assignment Week 4\nHistorical sales data from 45 stores. This dataset comes from KAGGLE (https://www.kaggle.com/manjeetsingh/retaildataset).\n",
"_____no_output_____"
]
],
[
[
"# Stores\n# Contains anonymized information about the 45 stores, indicating the type and size of store.\nstores = 'dataset/stores data-set.csv'\n\n# feature\n# Contains additional data related to the store, department, and regional activity for the given dates.\nfeature = 'dataset/Features data set.csv'\n\n# Sales\n# Contains historical sales data, which covers 2010-02-05 to 2012-11-01.\nsales = 'dataset/sales data-set.csv'\n\ndata_stores = pd.read_csv(stores)\ndata_feature = pd.read_csv(feature)\ndata_sales = pd.read_csv(sales)",
"_____no_output_____"
],
[
"data_stores.head()",
"_____no_output_____"
],
[
"data_feature.head()",
"_____no_output_____"
],
[
"data_sales.head()",
"_____no_output_____"
],
[
"#drop all Markdowns inside data_feature\ndata_feature.drop(['MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5'], axis='columns',inplace=True)",
"_____no_output_____"
],
[
"data_feature.head()",
"_____no_output_____"
],
[
"# Merge the data in a unique DataFrame\ndf = pd.merge(pd.merge(data_feature, data_sales, on=['Store', 'Date', 'IsHoliday']), data_stores, on=['Store'])\n\n# Convert Date to pandas Date format\ndf['Date'] = pd.to_datetime(df['Date'])\ndf.head()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.Type.value_counts()",
"_____no_output_____"
],
[
"# df_average_sales_week = df.groupby(by=['Date'], as_index=False)['Weekly_Sales'].sum()\n# df_average_sales = df_average_sales_week.sort_values('Weekly_Sales', ascending=False)\n\n# plt.figure(figsize=(15,5))\n# plt.plot(df_average_sales_week.Date, df_average_sales_week.Weekly_Sales)\n# plt.show()",
"_____no_output_____"
],
[
"df.groupby([df.Date.dt.year,df.Date.dt.month]).Weekly_Sales.mean()",
"_____no_output_____"
],
[
"df.groupby(df.Date.dt.year).Weekly_Sales.mean()",
"_____no_output_____"
],
[
"df.groupby([df.Date.dt.year,df.Date.dt.month]).Weekly_Sales.mean().plot()",
"_____no_output_____"
],
[
"# fig_size = plt.rcParams[\"figure.figsize\"]\n# plt.plot( df.Date, df.Weekly_Sales,'o-')\n# fig_size[0] = 14\n# fig_size[1] = 4\n# plt.rcParams[\"figure.figsize\"] = fig_size\n# plt.ylabel('Label 1')\n\n# plt.show()\n\nfig, ax = plt.subplots()\nax.plot( df.Date.dt.year, df.Weekly_Sales)\n\nax.set(xlabel='Year', ylabel='Weekly sales',\n       title='Weekly sales over time')\nax.grid()\n\n#fig.savefig(\"test.png\")\nplt.show()",
"_____no_output_____"
],
[
"df.describe().transpose()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08aa4705af6a169fc9d3f881e96de35d6177766 | 399,214 | ipynb | Jupyter Notebook | Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb | BhargavTumu/coursera-practical-data-science-specialization | bb9dba48320fa93138cf245817413227305300a9 | [
"MIT"
] | null | null | null | Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb | BhargavTumu/coursera-practical-data-science-specialization | bb9dba48320fa93138cf245817413227305300a9 | [
"MIT"
] | null | null | null | Course-3-Optimize ML Models and Deploy Human-in-the-Loop Pipelines/Week-2/C3_W2_Assignment.ipynb | BhargavTumu/coursera-practical-data-science-specialization | bb9dba48320fa93138cf245817413227305300a9 | [
"MIT"
] | null | null | null | 230.094524 | 50,112 | 0.904898 | [
[
[
"# A/B testing, traffic shifting and autoscaling",
"_____no_output_____"
],
[
"### Introduction\n\nIn this lab you will create an endpoint with multiple variants, splitting the traffic between them. Then after testing and reviewing the endpoint performance metrics, you will shift the traffic to one variant and configure it to autoscale.\n\n### Table of Contents\n\n- [1. Create an endpoint with multiple variants](#c3w2-1.)\n - [1.1. Construct Docker Image URI](#c3w2-1.1.)\n - [Exercise 1](#c3w2-ex-1)\n - [1.2. Create Amazon SageMaker Models](#c3w2-1.2.)\n - [Exercise 2](#c3w2-ex-2)\n - [Exercise 3](#c3w2-ex-3)\n - [1.3. Set up Amazon SageMaker production variants](#c3w2-1.3.)\n - [Exercise 4](#c3w2-ex-4)\n - [Exercise 5](#c3w2-ex-5)\n - [1.4. Configure and create endpoint](#c3w2-1.4.)\n - [Exercise 6](#c3w2-ex-6)\n- [2. Test model](#c3w2-2.)\n - [2.1. Test the model on a few sample strings](#c3w2-2.1.)\n - [Exercise 7](#c3w2-ex-7)\n - [2.2. Generate traffic and review the endpoint performance metrics](#c3w2-2.2.)\n- [3. Shift the traffic to one variant and review the endpoint performance metrics](#c3w2-3.)\n - [Exercise 8](#c3w2-ex-8)\n- [4. Configure one variant to autoscale](#c3w2-4.)",
"_____no_output_____"
],
[
"Let's install and import the required modules.",
"_____no_output_____"
]
],
[
[
"# please ignore warning messages during the installation\n!pip install --disable-pip-version-check -q sagemaker==2.35.0\n!conda install -q -y pytorch==1.6.0 -c pytorch\n!pip install --disable-pip-version-check -q transformers==3.5.1",
"/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n from cryptography.utils import int_from_bytes\n/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n from cryptography.utils import int_from_bytes\n\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\nCollecting package metadata (current_repodata.json): ...working... done\nSolving environment: ...working... done\n\n# All requested packages already installed.\n\n/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n from cryptography.utils import int_from_bytes\n/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n from cryptography.utils import int_from_bytes\n\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n%config InlineBackend.figure_format='retina'",
"_____no_output_____"
],
[
"import boto3\nimport sagemaker\nimport pandas as pd\nimport botocore\n\nconfig = botocore.config.Config(user_agent_extra='dlai-pds/c3/w2')\n\n# low-level service client of the boto3 session\nsm = boto3.client(service_name='sagemaker', \n config=config)\n\nsm_runtime = boto3.client('sagemaker-runtime',\n config=config)\n\nsess = sagemaker.Session(sagemaker_client=sm,\n sagemaker_runtime_client=sm_runtime)\n\nbucket = sess.default_bucket()\nrole = sagemaker.get_execution_role()\nregion = sess.boto_region_name\n\ncw = boto3.client(service_name='cloudwatch', \n config=config)\n\nautoscale = boto3.client(service_name=\"application-autoscaling\", \n config=config)",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-1.'></a>\n# 1. Create an endpoint with multiple variants",
"_____no_output_____"
],
[
"Two models trained to analyze customer feedback and classify the messages into positive (1), neutral (0), and negative (-1) sentiments are saved in the following S3 bucket paths. These `tar.gz` files contain the model artifacts, which result from model training.",
"_____no_output_____"
]
],
[
[
"model_a_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_a/model.tar.gz'\nmodel_b_s3_uri = 's3://dlai-practical-data-science/models/ab/variant_b/model.tar.gz'",
"_____no_output_____"
]
],
[
[
"Let's deploy an endpoint splitting the traffic between these two models 50/50 to perform A/B Testing. Instead of creating a PyTorch Model object and calling `model.deploy()` function, you will create an `Endpoint configuration` with multiple model variants. Here is the workflow you will follow to create an endpoint:\n\n<img src=\"images/endpoint-workflow.png\" width=\"60%\" align=\"center\">",
"_____no_output_____"
],
[
"<a name='c3w2-1.1.'></a>\n### 1.1. Construct Docker Image URI\n\n<img src=\"images/endpoint-workflow-1-image.png\" width=\"60%\" align=\"center\">\n\nYou will need to create the models in Amazon SageMaker, which requires the URI of the pre-built SageMaker Docker image stored in Amazon Elastic Container Registry (ECR). Let's construct the ECR URI which you will pass into the `create_model` function later.\n\nSet the instance type. For the purposes of this lab, you will use a relatively small instance. Please refer to [this link](https://aws.amazon.com/sagemaker/pricing/) for additional instance types that may work for your use cases outside of this lab.",
"_____no_output_____"
]
],
[
[
"inference_instance_type = 'ml.m5.large'",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-ex-1'></a>\n### Exercise 1\n\nCreate an ECR URI using the `'PyTorch'` framework. Review other parameters of the image.",
"_____no_output_____"
]
],
[
[
"inference_image_uri = sagemaker.image_uris.retrieve(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n framework='pytorch', # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n version='1.6.0',\n instance_type=inference_instance_type,\n region=region,\n py_version='py3',\n image_scope='inference'\n)\nprint(inference_image_uri)",
"763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.6.0-cpu-py3\n"
]
],
[
[
"<a name='c3w2-1.2.'></a>\n### 1.2. Create Amazon SageMaker Models\n\n<img src=\"images/endpoint-workflow-2-models.png\" width=\"60%\" align=\"center\">\n\nAmazon SageMaker Model includes information such as the S3 location of the model, the container image that can be used for inference with that model, the execution role, and the model name. \n\nLet's construct the model names.",
"_____no_output_____"
]
],
[
[
"import time\nfrom pprint import pprint\n\ntimestamp = int(time.time())\n\nmodel_name_a = '{}-{}'.format('a', timestamp)\nmodel_name_b = '{}-{}'.format('b', timestamp)",
"_____no_output_____"
]
],
[
[
"You will use the following function to check if the model already exists in Amazon SageMaker.",
"_____no_output_____"
]
],
[
[
"def check_model_existence(model_name):\n for model in sm.list_models()['Models']:\n if model_name == model['ModelName']:\n return True\n return False",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-ex-2'></a>\n### Exercise 2\n\nCreate an Amazon SageMaker Model based on the `model_a_s3_uri` data.\n\n**Instructions**: Use `sm.create_model` function, which requires the model name, Amazon SageMaker execution role and a primary container description (`PrimaryContainer` dictionary). The `PrimaryContainer` includes the S3 bucket location of the model artifacts (`ModelDataUrl` key) and ECR URI (`Image` key).",
"_____no_output_____"
]
],
[
[
"if not check_model_existence(model_name_a):\n model_a = sm.create_model(\n ModelName=model_name_a,\n ExecutionRoleArn=role,\n PrimaryContainer={\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n 'ModelDataUrl': model_a_s3_uri, # Replace None\n 'Image': inference_image_uri # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n }\n )\n pprint(model_a)\nelse:\n print(\"Model {} already exists\".format(model_name_a))",
"{'ModelArn': 'arn:aws:sagemaker:us-east-1:299076282420:model/a-1638384450',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '74',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 01 Dec 2021 18:48:32 GMT',\n 'x-amzn-requestid': '5321bf92-6e4f-471e-ae2b-6a0e62328d57'},\n 'HTTPStatusCode': 200,\n 'RequestId': '5321bf92-6e4f-471e-ae2b-6a0e62328d57',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"<a name='c3w2-ex-3'></a>\n### Exercise 3\n\nCreate an Amazon SageMaker Model based on the `model_b_s3_uri` data.\n\n**Instructions**: Use the example in the cell above.",
"_____no_output_____"
]
],
[
[
"if not check_model_existence(model_name_b):\n model_b = sm.create_model(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n ModelName=model_name_b,\n ExecutionRoleArn=role,\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n PrimaryContainer={\n 'ModelDataUrl': model_b_s3_uri, \n 'Image': inference_image_uri\n }\n )\n pprint(model_b)\nelse:\n print(\"Model {} already exists\".format(model_name_b))",
"{'ModelArn': 'arn:aws:sagemaker:us-east-1:299076282420:model/b-1638384450',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '74',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 01 Dec 2021 18:48:48 GMT',\n 'x-amzn-requestid': '1b67df35-4d26-41f1-9fa4-be4154bd7c06'},\n 'HTTPStatusCode': 200,\n 'RequestId': '1b67df35-4d26-41f1-9fa4-be4154bd7c06',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"<a name='c3w2-1.3.'></a>\n### 1.3. Set up Amazon SageMaker production variants\n\n<img src=\"images/endpoint-workflow-3-variants.png\" width=\"60%\" align=\"center\">\n\nA production variant is a packaged SageMaker Model combined with the configuration related to how that model will be hosted. \n\nYou have constructed the model in the section above. The hosting resources configuration includes information on how you want that model to be hosted: the number and type of instances, a pointer to the SageMaker package model, as well as a variant name and variant weight. A single SageMaker Endpoint can actually include multiple production variants.",
"_____no_output_____"
],
[
"<a name='c3w2-ex-4'></a>\n### Exercise 4\n\nCreate an Amazon SageMaker production variant for the SageMaker Model with the `model_name_a`.\n\n**Instructions**: Use the `production_variant` function passing the `model_name_a` and instance type defined above.\n\n```python\nvariantA = production_variant(\n model_name=..., # SageMaker Model name\n instance_type=..., # instance type\n initial_weight=50, # traffic distribution weight\n initial_instance_count=1, # instance count\n variant_name='VariantA', # production variant name\n)\n```",
"_____no_output_____"
]
],
[
[
"from sagemaker.session import production_variant\n\nvariantA = production_variant(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n model_name=model_name_a, # Replace None\n instance_type=inference_instance_type, # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n initial_weight=50,\n initial_instance_count=1,\n variant_name='VariantA',\n)\nprint(variantA)",
"{'ModelName': 'a-1638384450', 'InstanceType': 'ml.m5.large', 'InitialInstanceCount': 1, 'VariantName': 'VariantA', 'InitialVariantWeight': 50}\n"
]
],
[
[
"<a name='c3w2-ex-5'></a>\n### Exercise 5\n\nCreate an Amazon SageMaker production variant for the SageMaker Model with the `model_name_b`.\n\n**Instructions**: See the required arguments in the cell above.",
"_____no_output_____"
]
],
[
[
"variantB = production_variant(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n model_name=model_name_b, # Replace all None\n instance_type=inference_instance_type, # Replace all None\n initial_weight=50, # Replace all None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n initial_instance_count=1,\n variant_name='VariantB'\n)\nprint(variantB)",
"{'ModelName': 'b-1638384450', 'InstanceType': 'ml.m5.large', 'InitialInstanceCount': 1, 'VariantName': 'VariantB', 'InitialVariantWeight': 50}\n"
]
],
[
[
"<a name='c3w2-1.4.'></a>\n### 1.4. Configure and create the endpoint\n\n<img src=\"images/endpoint-workflow-4-configuration.png\" width=\"60%\" align=\"center\">\n\nYou will use the following functions to check if the endpoint configuration and endpoint itself already exist in Amazon SageMaker.",
"_____no_output_____"
]
],
[
[
"def check_endpoint_config_existence(endpoint_config_name):\n for endpoint_config in sm.list_endpoint_configs()['EndpointConfigs']:\n if endpoint_config_name == endpoint_config['EndpointConfigName']:\n return True\n return False\n\ndef check_endpoint_existence(endpoint_name):\n for endpoint in sm.list_endpoints()['Endpoints']:\n if endpoint_name == endpoint['EndpointName']:\n return True\n return False",
"_____no_output_____"
]
],
[
[
"Create the endpoint configuration by specifying the name and pointing to the two production variants that you just configured, which tell SageMaker how you want to host those models.",
"_____no_output_____"
]
],
[
[
"endpoint_config_name = '{}-{}'.format('ab', timestamp)\n\nif not check_endpoint_config_existence(endpoint_config_name):\n endpoint_config = sm.create_endpoint_config(\n EndpointConfigName=endpoint_config_name, \n ProductionVariants=[variantA, variantB]\n )\n pprint(endpoint_config)\nelse:\n print(\"Endpoint configuration {} already exists\".format(endpoint_config_name))",
"{'EndpointConfigArn': 'arn:aws:sagemaker:us-east-1:299076282420:endpoint-config/ab-1638384450',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '94',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 01 Dec 2021 18:51:17 GMT',\n 'x-amzn-requestid': '5b63d51e-5ea7-491d-8b11-24f79b2d378e'},\n 'HTTPStatusCode': 200,\n 'RequestId': '5b63d51e-5ea7-491d-8b11-24f79b2d378e',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"<img src=\"images/endpoint-workflow-5-endpoint.png\" width=\"60%\" align=\"center\">\n\nConstruct the endpoint name.",
"_____no_output_____"
]
],
[
[
"model_ab_endpoint_name = '{}-{}'.format('ab', timestamp)\nprint('Endpoint name: {}'.format(model_ab_endpoint_name))",
"Endpoint name: ab-1638384450\n"
]
],
[
[
"<a name='c3w2-ex-6'></a>\n### Exercise 6\n\nCreate an endpoint with the endpoint name and configuration defined above.",
"_____no_output_____"
]
],
[
[
"if not check_endpoint_existence(model_ab_endpoint_name):\n endpoint_response = sm.create_endpoint(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n EndpointName=model_ab_endpoint_name, # Replace None\n EndpointConfigName=endpoint_config_name # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n )\n print('Creating endpoint {}'.format(model_ab_endpoint_name))\n pprint(endpoint_response)\nelse:\n print(\"Endpoint {} already exists\".format(model_ab_endpoint_name))",
"Creating endpoint ab-1638384450\n{'EndpointArn': 'arn:aws:sagemaker:us-east-1:299076282420:endpoint/ab-1638384450',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '81',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 01 Dec 2021 18:51:49 GMT',\n 'x-amzn-requestid': '4bb0f1e1-5d2a-426b-97c6-7ff757363c56'},\n 'HTTPStatusCode': 200,\n 'RequestId': '4bb0f1e1-5d2a-426b-97c6-7ff757363c56',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"Review the created endpoint configuration in the AWS console.\n\n**Instructions**:\n\n- open the link\n- notice that you are in the section Amazon SageMaker -> Endpoint configuration\n- check the name of the endpoint configuration, its Amazon Resource Name (ARN) and production variants\n- click on the production variants and check their container information: image and model data location",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\n\ndisplay(\n HTML(\n '<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpointConfig/{}\">REST Endpoint configuration</a></b>'.format(\n region, endpoint_config_name\n )\n )\n)",
"_____no_output_____"
]
],
[
[
"Review the created endpoint in the AWS console.\n\n**Instructions**:\n\n- open the link\n- notice that you are in the section Amazon SageMaker -> Endpoints\n- check the name of the endpoint, its ARN and status\n- below you can review the monitoring metrics such as CPU, memory and disk utilization. Further down you can see the endpoint configuration settings with its production variants",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name)))",
"_____no_output_____"
]
],
[
[
"Wait for the endpoint to deploy.\n\n### _This cell will take approximately 5-10 minutes to run._",
"_____no_output_____"
]
],
[
[
"%%time\n\nwaiter = sm.get_waiter('endpoint_in_service')\nwaiter.wait(EndpointName=model_ab_endpoint_name)",
"CPU times: user 220 ms, sys: 18.9 ms, total: 238 ms\nWall time: 8min 32s\n"
]
],
[
[
"_Wait until the ^^ endpoint ^^ is deployed_",
"_____no_output_____"
],
[
"<a name='c3w2-2.'></a>\n# 2. Test model",
"_____no_output_____"
],
[
"<a name='c3w2-2.1.'></a>\n### 2.1. Test the model on a few sample strings\n\nHere, you will pass sample strings of text to the endpoint in order to see the sentiment. You are given one example of each, however, feel free to play around and change the strings yourself!",
"_____no_output_____"
],
[
"<a name='c3w2-ex-7'></a>\n### Exercise 7\n\nCreate an Amazon SageMaker Predictor based on the deployed endpoint.\n\n**Instructions**: Use the `Predictor` object with the following parameters. Please pass JSON serializer and deserializer objects here, calling them with the functions `JSONLinesSerializer()` and `JSONLinesDeserializer()`, respectively. More information about the serializers can be found [here](https://sagemaker.readthedocs.io/en/stable/api/inference/serializers.html).\n\n```python\npredictor = Predictor(\n endpoint_name=..., # endpoint name\n serializer=..., # a serializer object, used to encode data for an inference endpoint\n deserializer=..., # a deserializer object, used to decode data from an inference endpoint\n sagemaker_session=sess\n)\n```",
"_____no_output_____"
]
],
[
[
"from sagemaker.predictor import Predictor\nfrom sagemaker.serializers import JSONLinesSerializer\nfrom sagemaker.deserializers import JSONLinesDeserializer\n\ninputs = [\n {\"features\": [\"I love this product!\"]},\n {\"features\": [\"OK, but not great.\"]},\n {\"features\": [\"This is not the right product.\"]},\n]\n\npredictor = Predictor(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n endpoint_name=model_ab_endpoint_name, # Replace None\n serializer=JSONLinesSerializer(), # Replace None\n deserializer=JSONLinesDeserializer(), # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n sagemaker_session=sess\n)\n\npredicted_classes = predictor.predict(inputs)\n\nfor predicted_class in predicted_classes:\n print(\"Predicted class {} with probability {}\".format(predicted_class['predicted_label'], predicted_class['probability']))",
"Predicted class 1 with probability 0.9605445861816406\nPredicted class 0 with probability 0.5798221230506897\nPredicted class -1 with probability 0.7667604684829712\n"
]
],
[
[
"<a name='c3w2-2.2.'></a>\n### 2.2. Generate traffic and review the endpoint performance metrics\n\nNow you will generate traffic. To analyze the endpoint performance you will review some of the metrics that Amazon SageMaker emits in CloudWatch: CPU Utilization, Latency and Invocations. Full list of namespaces and metrics can be found [here](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html). CloudWatch `get_metric_statistics` documentation can be found [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html).\n\nBut before that, let's create a function that will help to extract the results from CloudWatch and plot them.",
"_____no_output_____"
]
],
[
[
"def plot_endpoint_metrics_for_variants(endpoint_name, \n                                       namespace_name, \n                                       metric_name, \n                                       variant_names, \n                                       start_time, \n                                       end_time):\n    \n    try:\n        joint_variant_metrics = None\n\n        for variant_name in variant_names:\n            metrics = cw.get_metric_statistics( # extracts the results in a dictionary format\n                Namespace=namespace_name, # the namespace of the metric, e.g. \"AWS/SageMaker\"\n                MetricName=metric_name, # the name of the metric, e.g. \"CPUUtilization\"\n                StartTime=start_time, # the time stamp that determines the first data point to return\n                EndTime=end_time, # the time stamp that determines the last data point to return\n                Period=60, # the granularity, in seconds, of the returned data points\n                Statistics=[\"Sum\"], # the metric statistics\n                Dimensions=[ # dimensions, as CloudWatch treats each unique combination of dimensions as a separate metric\n                    {\"Name\": \"EndpointName\", \"Value\": endpoint_name}, \n                    {\"Name\": \"VariantName\", \"Value\": variant_name}\n                ],\n            )\n            \n            if metrics[\"Datapoints\"]: # access the results from the dictionary using the key \"Datapoints\"\n                df_metrics = pd.DataFrame(metrics[\"Datapoints\"]) \\\n                    .sort_values(\"Timestamp\") \\\n                    .set_index(\"Timestamp\") \\\n                    .drop(\"Unit\", axis=1) \\\n                    .rename(columns={\"Sum\": variant_name}) # rename the column with the metric results as variant_name\n                \n                if joint_variant_metrics is None:\n                    joint_variant_metrics = df_metrics\n                else:\n                    joint_variant_metrics = joint_variant_metrics.join(df_metrics, how=\"outer\")\n        \n        joint_variant_metrics.plot(title=metric_name)\n    except:\n        print('Metrics not yet available')",
"_____no_output_____"
]
],
[
[
"Establish wide enough time bounds to show all the charts using the same timeframe:",
"_____no_output_____"
]
],
[
[
"from datetime import datetime, timedelta\n\nstart_time = datetime.now() - timedelta(minutes=30)\nend_time = datetime.now() + timedelta(minutes=30)\n\nprint('Start Time: {}'.format(start_time))\nprint('End Time: {}'.format(end_time))",
"Start Time: 2021-12-01 18:38:05.978052\nEnd Time: 2021-12-01 19:38:05.978095\n"
]
],
[
[
"Set the list of the variant names to analyze.",
"_____no_output_____"
]
],
[
[
"variant_names = [variantA[\"VariantName\"], variantB[\"VariantName\"]]\n\nprint(variant_names)",
"['VariantA', 'VariantB']\n"
]
],
[
[
"Run some predictions and view the metrics for each variant.\n\n### _This cell will take approximately 1-2 minutes to run._",
"_____no_output_____"
]
],
[
[
"%%time\n\nfor i in range(0, 100):\n predicted_classes = predictor.predict(inputs)",
"CPU times: user 231 ms, sys: 7.57 ms, total: 239 ms\nWall time: 1min 37s\n"
]
],
[
[
"_Make sure the predictions ^^ above ^^ ran successfully_",
"_____no_output_____"
],
[
"Let’s query CloudWatch to get a few metrics that are split across variants. If you see `Metrics not yet available`, please be patient as metrics may take a few minutes to appear in CloudWatch.",
"_____no_output_____"
]
],
[
[
"time.sleep(30) # Sleep to accommodate a slight delay in metrics gathering",
"_____no_output_____"
],
[
"# CPUUtilization\n# The sum of each individual CPU core's utilization. \n# The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"/aws/sagemaker/Endpoints\", \n metric_name=\"CPUUtilization\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time\n)",
"_____no_output_____"
],
[
"# Invocations\n# The number of requests sent to a model endpoint.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"Invocations\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time \n)",
"_____no_output_____"
],
[
"# InvocationsPerInstance\n# The number of invocations sent to a model, normalized by InstanceCount in each production variant.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"InvocationsPerInstance\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time\n)",
"_____no_output_____"
],
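[
"# OverheadLatency\n# The time SageMaker overhead adds to responding to a request, on top of ModelLatency (in microseconds).\n# Note: an additional, optional metric check, sketched using the same helper as the plots above.\nplot_endpoint_metrics_for_variants(\n    endpoint_name=model_ab_endpoint_name, \n    namespace_name=\"AWS/SageMaker\", \n    metric_name=\"OverheadLatency\",\n    variant_names=variant_names,\n    start_time=start_time,\n    end_time=end_time\n)",
"_____no_output_____"
],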
[
"# ModelLatency\n# The interval of time taken by a model to respond as viewed from SageMaker (in microseconds).\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"ModelLatency\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time\n)",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-3.'></a>\n# 3. Shift the traffic to one variant and review the endpoint performance metrics\n\nGenerally, the winning model would need to be chosen. The decision would be made based on the endpoint performance metrics and some other business-related evaluations. Here you can assume that the winning model is in Variant B and shift all traffic to it.\n\nConstruct a list with the updated endpoint weights.\n\n### _**No downtime** occurs during this traffic-shift activity._\n\n### _This may take a few minutes. Please be patient._",
"_____no_output_____"
]
],
[
[
"updated_endpoint_config = [\n {\n \"VariantName\": variantA[\"VariantName\"],\n \"DesiredWeight\": 0,\n },\n {\n \"VariantName\": variantB[\"VariantName\"],\n \"DesiredWeight\": 100,\n },\n]",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-ex-8'></a>\n### Exercise 8\n\nUpdate variant weights in the configuration of the existing endpoint.\n\n**Instructions**: Use the `sm.update_endpoint_weights_and_capacities` function, passing the endpoint name and list of updated weights for each of the variants that you defined above.",
"_____no_output_____"
]
],
[
[
"sm.update_endpoint_weights_and_capacities(\n ### BEGIN SOLUTION - DO NOT delete this comment for grading purposes\n EndpointName=model_ab_endpoint_name, # Replace None\n DesiredWeightsAndCapacities=updated_endpoint_config # Replace None\n ### END SOLUTION - DO NOT delete this comment for grading purposes\n)",
"_____no_output_____"
]
],
[
[
"_Wait for the ^^ endpoint update ^^ to complete above_\n\nThis may take a few minutes. Please be patient.\n\n### _There is **no downtime** while the update is applying._",
"_____no_output_____"
],
[
"While waiting for the update (or afterwards) you can review the endpoint in the AWS console.\n\n**Instructions**:\n\n- open the link\n- notice that you are in the section Amazon SageMaker -> Endpoints\n- check the name of the endpoint, its ARN and status (`Updating` or `InService`)\n- below you can see the endpoint runtime settings with the updated weights",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name)))",
"_____no_output_____"
],
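[
"# Optional programmatic check of the traffic split (the console link above shows the same information).\n# describe_endpoint returns each production variant with its current and desired weights;\n# the current weight may still show the old value until the update completes.\nfor variant in sm.describe_endpoint(EndpointName=model_ab_endpoint_name)['ProductionVariants']:\n    print(variant['VariantName'], variant['CurrentWeight'], variant['DesiredWeight'])",
"_____no_output_____"
],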
[
"waiter = sm.get_waiter(\"endpoint_in_service\")\nwaiter.wait(EndpointName=model_ab_endpoint_name)",
"_____no_output_____"
]
],
[
[
"Run some more predictions and view the metrics for each variant.\n\n### _This cell will take approximately 1-2 minutes to run._",
"_____no_output_____"
]
],
[
[
"%%time\n\nfor i in range(0, 100):\n predicted_classes = predictor.predict(inputs)",
"CPU times: user 222 ms, sys: 15.7 ms, total: 238 ms\nWall time: 1min 31s\n"
]
],
[
[
"_Make sure the predictions ^^ above ^^ ran successfully_\n\nIf you see `Metrics not yet available`, please be patient as metrics may take a few minutes to appear in CloudWatch. Compare the results with the plots above.",
"_____no_output_____"
]
],
[
[
"# CPUUtilization\n# The sum of each individual CPU core's utilization. \n# The CPU utilization of each core can range between 0 and 100. For example, if there are four CPUs, CPUUtilization can range from 0% to 400%.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"/aws/sagemaker/Endpoints\",\n metric_name=\"CPUUtilization\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time\n)",
"_____no_output_____"
],
[
"# Invocations\n# The number of requests sent to a model endpoint.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"Invocations\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time \n)",
"_____no_output_____"
],
[
"# InvocationsPerInstance\n# The number of invocations sent to a model, normalized by InstanceCount in each production variant.\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"InvocationsPerInstance\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time \n)",
"_____no_output_____"
],
[
"# ModelLatency\n# The interval of time taken by a model to respond as viewed from SageMaker (in microseconds).\nplot_endpoint_metrics_for_variants(\n endpoint_name=model_ab_endpoint_name, \n namespace_name=\"AWS/SageMaker\", \n metric_name=\"ModelLatency\",\n variant_names=variant_names,\n start_time=start_time,\n end_time=end_time \n)",
"_____no_output_____"
]
],
[
[
"<a name='c3w2-4.'></a>\n# 4. Configure one variant to autoscale\n\nLet's configure Variant B to autoscale. You would not autoscale Variant A since no traffic is being passed to it at this time.\n\nFirst, you need to define a scalable target. It is an AWS resource and in this case you want to scale a `sagemaker` resource as indicated in the `ServiceNameSpace` parameter. Then the `ResourceId` is a SageMaker Endpoint. Because autoscaling is used by other AWS resources, you’ll see a few parameters that will remain static for scaling SageMaker Endpoints. Thus the `ScalableDimension` is a set value for SageMaker Endpoint scaling.\n\nYou also need to specify a few key parameters that control the min and max behavior for your Machine Learning instances. The `MinCapacity` indicates the minimum number of instances you plan to scale in to. The `MaxCapacity` is the maximum number of instances you want to scale out to. So in this case you always want to have at least 1 instance running and a maximum of 2 during peak periods. ",
"_____no_output_____"
]
],
[
[
"autoscale.register_scalable_target(\n ServiceNamespace=\"sagemaker\",\n ResourceId=\"endpoint/\" + model_ab_endpoint_name + \"/variant/VariantB\",\n ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n MinCapacity=1,\n MaxCapacity=2,\n RoleARN=role,\n SuspendedState={\n \"DynamicScalingInSuspended\": False,\n \"DynamicScalingOutSuspended\": False,\n \"ScheduledScalingSuspended\": False,\n },\n)",
"_____no_output_____"
],
[
"waiter = sm.get_waiter(\"endpoint_in_service\")\nwaiter.wait(EndpointName=model_ab_endpoint_name)",
"_____no_output_____"
]
],
[
[
"Check that the parameters from the function above are in the description of the scalable target:",
"_____no_output_____"
]
],
[
[
"autoscale.describe_scalable_targets(\n ServiceNamespace=\"sagemaker\",\n MaxResults=100,\n)",
"_____no_output_____"
]
],
[
[
"Define and apply a scaling policy using the `put_scaling_policy` function. The scaling policy provides additional information about the scaling behavior for your instance. `TargetTrackingScaling` refers to a specific autoscaling type supported by SageMaker that uses a scaling metric and a target value as the indicator to scale.\n\nIn the scaling policy configuration, you have the predefined metric `PredefinedMetricSpecification`, which is the number of invocations on your instance, and the `TargetValue`, which indicates the number of invocations per ML instance you want to allow before triggering your scaling policy. A scale-out cooldown of 60 seconds means that after autoscaling successfully scales out, it starts to calculate the cooldown time. The scaling policy won’t increase the desired capacity again until the cooldown period ends.\n\nThe scale-in cooldown setting of 300 seconds means that SageMaker will not start another scale-in activity within 300 seconds of when the last one completed.",
"_____no_output_____"
]
],
[
[
"autoscale.put_scaling_policy(\n PolicyName=\"bert-reviews-autoscale-policy\",\n ServiceNamespace=\"sagemaker\",\n ResourceId=\"endpoint/\" + model_ab_endpoint_name + \"/variant/VariantB\",\n ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n PolicyType=\"TargetTrackingScaling\",\n TargetTrackingScalingPolicyConfiguration={\n \"TargetValue\": 2.0, # the number of invocations per ML instance you want to allow before triggering your scaling policy\n \"PredefinedMetricSpecification\": {\n \"PredefinedMetricType\": \"SageMakerVariantInvocationsPerInstance\", # scaling metric\n },\n \"ScaleOutCooldown\": 60, # wait time, in seconds, before beginning another scale out activity after last one completes\n \"ScaleInCooldown\": 300, # wait time, in seconds, before beginning another scale in activity after last one completes\n },\n)",
"_____no_output_____"
],
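[
"# Hypothetical verification step (not part of the original assignment): list the\n# scaling policies attached to Variant B to confirm the policy above was applied.\nautoscale.describe_scaling_policies(\n    ServiceNamespace=\"sagemaker\",\n    ResourceId=\"endpoint/\" + model_ab_endpoint_name + \"/variant/VariantB\",\n)",
"_____no_output_____"
],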
[
"waiter = sm.get_waiter(\"endpoint_in_service\")\nwaiter.wait(EndpointName=model_ab_endpoint_name)",
"_____no_output_____"
]
],
[
[
"Generate traffic again and review the endpoint in the AWS console.\n\n### _This cell will take approximately 1-2 minutes to run._",
"_____no_output_____"
]
],
[
[
"%%time\n\nfor i in range(0, 100):\n predicted_classes = predictor.predict(inputs)",
"CPU times: user 215 ms, sys: 19.2 ms, total: 234 ms\nWall time: 1min 31s\n"
]
],
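[
[
"# Hypothetical sketch (not part of the original assignment): list the CloudWatch\n# invocation metrics recorded for this endpoint, which drive the scaling policy above.\nimport boto3\n\ncw = boto3.client(\"cloudwatch\")\ncw.list_metrics(\n    Namespace=\"AWS/SageMaker\",\n    MetricName=\"Invocations\",\n    Dimensions=[{\"Name\": \"EndpointName\", \"Value\": model_ab_endpoint_name}],\n)",
"_____no_output_____"
]
],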
[
[
"Review the autoscaling:\n\n- open the link\n- notice that you are in the section Amazon SageMaker -> Endpoints\n- below you can see the endpoint runtime settings with the instance counts. You can run the predictions multiple times to observe the increase of the instance count to 2",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import display, HTML\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}\">SageMaker REST endpoint</a></b>'.format(region, model_ab_endpoint_name)))\n",
"_____no_output_____"
]
],
[
[
"Upload the notebook into the S3 bucket for grading purposes.\n\n**Note:** you may need to click the \"Save\" button before the upload.",
"_____no_output_____"
]
],
[
[
"!aws s3 cp ./C3_W2_Assignment.ipynb s3://$bucket/C3_W2_Assignment_Learner.ipynb",
"upload: ./C3_W2_Assignment.ipynb to s3://sagemaker-us-east-1-299076282420/C3_W2_Assignment_Learner.ipynb\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08aa76a5911fa686cbf710fa50e9ff9824961c4 | 36,672 | ipynb | Jupyter Notebook | datascience/models/regression.ipynb | VimeshShahama/Cyber---SDGP | 9c624d4932a07541dbc56bc8c7ba60c5e4662d31 | [
"CC0-1.0"
] | 1 | 2021-05-18T10:55:32.000Z | 2021-05-18T10:55:32.000Z | datascience/models/regression.ipynb | VimeshShahama/Cyber---SDGP | 9c624d4932a07541dbc56bc8c7ba60c5e4662d31 | [
"CC0-1.0"
] | null | null | null | datascience/models/regression.ipynb | VimeshShahama/Cyber---SDGP | 9c624d4932a07541dbc56bc8c7ba60c5e4662d31 | [
"CC0-1.0"
] | 2 | 2021-03-29T19:00:55.000Z | 2021-04-02T13:18:07.000Z | 57.842271 | 235 | 0.494274 | [
[
[
"<a href=\"https://colab.research.google.com/github/chehansivaruban/Cyber---SDGP/blob/main/regression.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nimport pandas as pd\nimport seaborn as sns\nfrom pylab import rcParams\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nfrom sklearn.model_selection import train_test_split\nfrom pandas.plotting import register_matplotlib_converters\n\n%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nregister_matplotlib_converters()\nsns.set(style='whitegrid', palette='muted', font_scale=1.5)\n\nrcParams['figure.figsize'] = 22, 10\n\nRANDOM_SEED = 42\n\nnp.random.seed(RANDOM_SEED)\ntf.random.set_seed(RANDOM_SEED)",
"_____no_output_____"
],
[
"df = pd.read_csv(\n \"dateindex1.csv\", \n parse_dates=['datetime'], \n index_col=\"datetime\"\n)",
"_____no_output_____"
],
[
"df['hour'] = df.index.hour\ndf['day_of_month'] = df.index.day\ndf['day_of_week'] = df.index.dayofweek\ndf['month'] = df.index.month\ndf['year'] = df.index.year\ndf.head()",
"_____no_output_____"
],
[
"X = df.drop(columns=['G'])\n\nY = df[['G']]\n\nprint(X)",
" hour day_of_month day_of_week month year\ndatetime \n2005-01-01 00:00:00 0 1 5 1 2005\n2005-01-01 01:00:00 1 1 5 1 2005\n2005-01-01 02:00:00 2 1 5 1 2005\n2005-01-01 03:00:00 3 1 5 1 2005\n2005-01-01 04:00:00 4 1 5 1 2005\n... ... ... ... ... ...\n2016-12-31 18:00:00 18 31 5 12 2016\n2016-12-31 19:00:00 19 31 5 12 2016\n2016-12-31 20:00:00 20 31 5 12 2016\n2016-12-31 21:00:00 21 31 5 12 2016\n2016-12-31 22:00:00 22 31 5 12 2016\n\n[105191 rows x 5 columns]\n"
],
[
"model = keras.models.Sequential()\n\nmodel.add(keras.layers.Dense(5, activation='relu', input_shape=(5,)))\nmodel.add(keras.layers.Dense(5, activation='relu'))\nmodel.add(keras.layers.Dense(1))\n\nmodel.compile(optimizer='adam', loss='mean_squared_error')",
"_____no_output_____"
],
[
"# monitor='loss': no validation data is passed to fit(), so the default 'val_loss'\n# metric is unavailable and would make EarlyStopping emit warnings every epoch\nmodel.fit(X, Y, epochs=100, callbacks=[keras.callbacks.EarlyStopping(monitor='loss', patience=3)])",
"Epoch 1/100\n3288/3288 [==============================] - 3s 964us/step - loss: 72116.5391\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 2/100\n3288/3288 [==============================] - 3s 973us/step - loss: 72050.8906\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 3/100\n3288/3288 [==============================] - 3s 957us/step - loss: 72086.7812\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 4/100\n3288/3288 [==============================] - 3s 965us/step - loss: 72125.0859\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 5/100\n3288/3288 [==============================] - 3s 979us/step - loss: 72095.1172\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 6/100\n3288/3288 [==============================] - 3s 975us/step - loss: 72035.2500\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 7/100\n3288/3288 [==============================] - 3s 957us/step - loss: 72059.3125\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 8/100\n3288/3288 [==============================] - 3s 964us/step - loss: 72057.4141\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 9/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 65002.1328\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 10/100\n3288/3288 [==============================] - 4s 1ms/step - loss: 51020.1211\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 11/100\n3288/3288 [==============================] - 4s 1ms/step - loss: 47010.2891\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 12/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 45236.1289\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 13/100\n3288/3288 [==============================] - 3s 976us/step - loss: 43877.1406\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 14/100\n3288/3288 [==============================] - 3s 975us/step - loss: 41686.9219\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 15/100\n3288/3288 [==============================] - 3s 982us/step - loss: 35336.4648\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 16/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 22195.1348\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 17/100\n3288/3288 [==============================] - 3s 976us/step - loss: 15971.3115\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 18/100\n3288/3288 [==============================] - 3s 977us/step - loss: 15294.1494\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 19/100\n3288/3288 [==============================] - 3s 982us/step - loss: 15204.3428\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 20/100\n3288/3288 [==============================] - 3s 976us/step - loss: 15008.4590\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 21/100\n3288/3288 [==============================] - 3s 983us/step - loss: 14983.6309\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 22/100\n3288/3288 [==============================] - 3s 970us/step - loss: 14999.0068\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 23/100\n3288/3288 [==============================] - 3s 968us/step - loss: 14933.1104\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 24/100\n3288/3288 [==============================] - 3s 980us/step - loss: 15022.3828\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 25/100\n3288/3288 [==============================] - 3s 971us/step - loss: 14971.9697\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 26/100\n3288/3288 [==============================] - 3s 970us/step - loss: 15040.1494\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 27/100\n3288/3288 [==============================] - 3s 967us/step - loss: 14963.1162\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 28/100\n3288/3288 [==============================] - 3s 988us/step - loss: 14983.5684\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 29/100\n3288/3288 [==============================] - 3s 976us/step - loss: 14893.4229\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 30/100\n3288/3288 [==============================] - 3s 976us/step - loss: 15118.4131\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 31/100\n3288/3288 [==============================] - 3s 983us/step - loss: 15051.9824\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 32/100\n3288/3288 [==============================] - 3s 980us/step - loss: 15009.4922\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 33/100\n3288/3288 [==============================] - 3s 988us/step - loss: 14944.2920\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 34/100\n3288/3288 [==============================] - 3s 988us/step - loss: 14930.6689\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 35/100\n3288/3288 [==============================] - 3s 981us/step - loss: 14956.7891\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 36/100\n3288/3288 [==============================] - 3s 984us/step - loss: 15003.4482\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 37/100\n3288/3288 [==============================] - 3s 986us/step - loss: 15076.2754\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 38/100\n3288/3288 [==============================] - 3s 977us/step - loss: 14988.9482\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 39/100\n3288/3288 [==============================] - 3s 981us/step - loss: 15074.3604\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 40/100\n3288/3288 [==============================] - 3s 976us/step - loss: 14939.1895\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 41/100\n3288/3288 [==============================] - 3s 981us/step - loss: 14946.1387\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 42/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14965.9268\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 43/100\n3288/3288 [==============================] - 3s 990us/step - loss: 14990.2207\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 44/100\n3288/3288 [==============================] - 3s 976us/step - loss: 14971.9004\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 45/100\n3288/3288 [==============================] - 3s 979us/step - loss: 14946.6094\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 46/100\n3288/3288 [==============================] - 3s 982us/step - loss: 15010.3340\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 47/100\n3288/3288 [==============================] - 3s 993us/step - loss: 14942.6719\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 48/100\n3288/3288 [==============================] - 3s 992us/step - loss: 14975.9854\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 49/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14879.0791\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 50/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 15018.9395\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 51/100\n3288/3288 [==============================] - 3s 983us/step - loss: 14970.8369\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 52/100\n3288/3288 [==============================] - 3s 978us/step - loss: 15008.5977\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 53/100\n3288/3288 [==============================] - 3s 966us/step - loss: 15007.0928\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 54/100\n3288/3288 [==============================] - 3s 980us/step - loss: 14886.4102\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 55/100\n3288/3288 [==============================] - 3s 973us/step - loss: 15015.4355\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 56/100\n3288/3288 [==============================] - 3s 985us/step - loss: 14968.7402\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 57/100\n3288/3288 [==============================] - 3s 971us/step - loss: 14948.0459\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 58/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14915.3047\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 59/100\n3288/3288 [==============================] - 3s 972us/step - loss: 14907.9902\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 60/100\n3288/3288 [==============================] - 3s 991us/step - loss: 14928.5947\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 61/100\n3288/3288 [==============================] - 3s 997us/step - loss: 14928.8311\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 62/100\n3288/3288 [==============================] - 3s 980us/step - loss: 14927.7158\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 63/100\n3288/3288 [==============================] - 3s 980us/step - loss: 14875.8047\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 64/100\n3288/3288 [==============================] - 3s 978us/step - loss: 14872.5312\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 65/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14882.5498\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 66/100\n3288/3288 [==============================] - 3s 986us/step - loss: 15013.3008\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 67/100\n3288/3288 [==============================] - 3s 988us/step - loss: 14892.3496\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 68/100\n3288/3288 [==============================] - 3s 991us/step - loss: 14950.7461\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 69/100\n3288/3288 [==============================] - 3s 991us/step - loss: 14915.1201\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 70/100\n3288/3288 [==============================] - 3s 996us/step - loss: 14999.8838\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 71/100\n3288/3288 [==============================] - 3s 981us/step - loss: 14929.3975\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 72/100\n3288/3288 [==============================] - 3s 992us/step - loss: 14919.6045\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 73/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14953.4160\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 74/100\n3288/3288 [==============================] - 4s 1ms/step - loss: 14861.6396\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 75/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14996.9170\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 76/100\n3288/3288 [==============================] - 3s 993us/step - loss: 14967.8232\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 77/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14914.2627\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 78/100\n3288/3288 [==============================] - 3s 993us/step - loss: 14924.9004\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 79/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14926.5830\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 80/100\n3288/3288 [==============================] - 3s 992us/step - loss: 14956.6221\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 81/100\n3288/3288 [==============================] - 3s 995us/step - loss: 14909.3857\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 82/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14907.6104\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 83/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14890.6035\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 84/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14951.9180\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 85/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14790.3213\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 86/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14952.0488\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 87/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14889.3809\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 88/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14901.3350\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 89/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14877.6689\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 90/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14888.7070\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 91/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14828.6367\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 92/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14870.3018\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 93/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14955.0654\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 94/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14856.5791\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 95/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14948.6602\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 96/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14893.3145\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 97/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14874.9707\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 98/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14870.2705\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\nEpoch 99/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14919.0518\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. 
Available metrics are: loss\nEpoch 100/100\n3288/3288 [==============================] - 3s 1ms/step - loss: 14889.2969\nWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss\n"
],
[
"test_data = np.array([22, 1, 5, 1, 2021])\nprint(model.predict(test_data.reshape(1, 5), batch_size=1))",
"[[0.21266787]]\n"
],
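[
"# Hypothetical helper (not in the original notebook): name the five time features\n# instead of passing them positionally, so the call site documents itself.\ndef predict_generation(hour, day_of_month, day_of_week, month, year):\n    features = np.array([[hour, day_of_month, day_of_week, month, year]])\n    return float(model.predict(features, batch_size=1)[0][0])\n\npredict_generation(22, 1, 5, 1, 2021)",
"_____no_output_____"
],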
[
"# Save entire model to a HDF5 file\nmodel.save('working_model.h5')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08aa8df473f9cefa8c0362101865e79f83caf37 | 12,142 | ipynb | Jupyter Notebook | content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming.ipynb | MahopacHS/spring2019-ditoccoa0302 | ce0c88d4283964379d80ffcffed78c75aed36922 | [
"MIT"
] | null | null | null | 44.313869 | 671 | 0.579476 | [
[
[
"# Class Coding Lab: Introduction to Programming\n\nThe goals of this lab are to help you to understand:\n\n1. the Jupyter and IDLE programming environments\n2. basic Python syntax\n3. variables and their use\n4. how to sequence instructions together into a cohesive program\n5. the input() function for input and the print() function for output\n",
"_____no_output_____"
],
[
"## Let's start with an example: Hello, world!\n\nThis program asks for your name as input, then says hello to you as output. Most often it's the first program you write when learning a new programming language. Click in the cell below and click the run cell button.",
"_____no_output_____"
]
],
[
[
"your_name = input(\"What is your name? \")\nprint('Hello there',your_name)",
"_____no_output_____"
]
],
[
[
"Believe it or not, there's a lot going on in this simple two-line program, so let's break it down.\n\n - The first line:\n - Asks you for input, prompting you `What is your name?`\n - It then stores your input in the variable `your_name` \n - The second line:\n - prints out the following text: `Hello there`\n - then prints out the contents of the variable `your_name`\n\nAt this point you might have a few questions. What is a variable? Why do I need it? Why is this two lines? Etc... All will be revealed in time.",
"_____no_output_____"
],
[
"## Variables\n\nVariables are names in our code which store values. I think of variables as cardboard boxes. Boxes hold things. Variables hold things. The name of the variable is on the outside of the box (that way you know which box it is), and the value of the variable represents the contents of the box. \n\n### Variable Assignment\n\n**Assignment** is an operation where we store data in our variable. It's like packing something up in the box.\n\nIn this example we assign the value \"USA\" to the variable **country**",
"_____no_output_____"
]
],
[
[
"# Here's an example of variable assignment.\ncountry = 'USA'",
"_____no_output_____"
]
],
[
[
"### Variable Access \n\nWhat good is storing data if you cannot retrieve it? Lucky for us, retrieving the data in variable is as simple as calling its name:",
"_____no_output_____"
]
],
[
[
"country # This should say 'USA'",
"_____no_output_____"
]
],
[
[
"At this point you might be thinking: Can I overwrite a variable? The answer, of course, is yes! Just re-assign it a different value:",
"_____no_output_____"
]
],
[
[
"country = 'Canada'",
"_____no_output_____"
]
],
[
[
"You can also access a variable multiple times. Each time it simply gives you its value:",
"_____no_output_____"
]
],
[
[
"country, country, country",
"_____no_output_____"
]
],
[
[
"### The Purpose Of Variables\n\nVariables play a vital role in programming. Computer instructions have no memory of each other. That is, one line of code has no idea what is happening in the other lines of code. The only way we can \"connect\" what happens from one line to the next is through variables. \n\nFor example, if we re-write the Hello, World program at the top of the page without variables, we get the following:\n",
"_____no_output_____"
]
],
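As a sketch of the point above (with a hardcoded name standing in for the interactive `input()` call, so the snippet runs on its own):

```python
# Line 1 stores a value in a variable; line 2 retrieves it by name.
# Without the variable, the two lines could not share the value.
your_name = "Bob"  # stands in for: your_name = input("What is your name? ")
greeting = "Hello there " + your_name
print(greeting)
```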
[
[
"input(\"What is your name? \")\nprint('Hello there')",
"What is your name? Bob\nHello there\n"
]
],
[
[
"When you execute this program, notice there is no longer a connection between the input and the output. In fact, the input on line 1 doesn't matter because the output on line 2 doesn't know about it. It cannot because we never stored the results of the input into a variable!",
"_____no_output_____"
],
[
"### What's in a name? Um, EVERYTHING\n\nComputer code serves two equally important purposes:\n\n1. To solve a problem (obviously)\n2. To communicate hwo you solved problem to another person (hmmm... I didn't think of that!)\n\nIf our code does something useful, like land a rocket, predict the weather, or calculate month-end account balances then the chances are 100% certain that *someone else will need to read and understand our code.* \n\nTherefore it's just as important we develop code that is easilty understood by both the computer and our colleagues.\n\nThis starts with the names we choose for our variables. Consider the following program:",
"_____no_output_____"
]
],
[
[
"y = input(\"Enter your city: \")\nx = input(\"Enter your state: \")\nprint(x,y,'is a nice place to live')",
"Enter your city: yeet town\nEnter your state: yeet state\nyeet state yeet town is a nice place to live\n"
]
],
[
[
"What do `x` and `y` represent? Is there a semantic (design) error in this program?\n\nYou might find it easy to figure out the answers to these questions, but consider this more human-friendly version:",
"_____no_output_____"
]
],
[
[
"state = input(\"Enter your city: \")\ncity = input(\"Enter your state: \")\nprint(city,state,'is a nice place to live')",
"_____no_output_____"
]
],
[
[
"Do the aptly-named variables make it easier to find the semantic errors in this second version?\n\n### You Do It:\n\nFinally re-write this program so that it uses well-thought out variables AND in semantically correct:",
"_____no_output_____"
]
],
[
[
"# TODO: Code it re-write the above program to work as it should: Stating City State is a nice place to live\ncity = input(\"Enter your city: \")\nstate = input(\"Enter your state: \")\nprint(city + \",\", state, \"is a nice place to live\")",
"Enter your city: Yeet Town\nEnter your state: Yeet State\nYeet Town, Yeet State is a nice place to live\n"
]
],
[
[
"### Now Try This:\n\nNow try to write a program which asks for two separate inputs: your first name and your last name. The program should then output `Hello` with your first name and last name.\n\nFor example if you enter `Mike` for the first name and `Fudge` for the last name the program should output `Hello Mike Fudge`\n\n**HINTS**\n\n - Use appropriate variable names. If you need to create a two word variable name use an underscore in place of the space between the words. eg. `two_words` \n - You will need a separate set of inputs for each name.\n",
"_____no_output_____"
]
],
[
[
"# TODO: write your code here\nfirst_name = input(\"What's your name? \")\nlast_name = input(\"What's your last name? \")\nprint (\"Hello,\",first_name,last_name)",
"What's your name? Bob\nWhat's your last name? Bobson\nHello, Bob Bobson\n"
]
],
[
[
"### Variable Concatenation: Your First Operator\n\nThe `+` symbol is used to combine to variables containing text values together. Consider the following example:",
"_____no_output_____"
]
],
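A self-contained version of the concatenation example above, with the `input()` call replaced by a hardcoded root word so it runs non-interactively:

```python
prefix = "re"
suffix = "ment"
root = "ship"  # stands in for: input("Enter a root word, like 'ship': ")
word = prefix + root + suffix  # '+' joins the three text values
print(word)  # reshipment
```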
[
[
"prefix = \"re\"\nsuffix = \"ment\"\nroot = input(\"Enter a root word, like 'ship': \")\nprint( prefix + root + suffix)",
"Enter a root word, like 'ship': yeet\nreyeetment\n"
]
],
[
[
"### Now Try This\n\nWrite a program to prompt for three colors as input, then outputs those three colors as a lis, informing me which one was the middle (2nd entered) color. For example if you were to enter `red` then `green` then `blue` the program would output: `Your colors were: red, green, and blue. The middle was is green.`\n\n**HINTS**\n\n - you'll need three variables one fore each input\n - you should try to make the program output like my example. This includes commas and the word `and`. \n ",
"_____no_output_____"
]
],
[
[
"# TODO: write your code here\nfirst_color = input(\"Choose a color: \")\nsecond_color = input(\"Choose another color: \")\nthird_color = input(\"Choose another color: \")\nprint(\"Your colors were\", first_color + \",\", second_color + \", and\", third_color + \". The middle color was\", second_color + \".\")\n",
"Choose a color: red\nChoose another color: yellow\nChoose another color: blue\nYour colors were red, yellow, and blue. The middle color was yellow.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08aab8eec59be664c8d10f991f8aa880e836bb8 | 10,345 | ipynb | Jupyter Notebook | module-2/Intro-Scipy/your-code/main.ipynb | Skye-FY/daft-miami-0120-labs | a5fd22eae0a9950940fad084579b99cc98863e5c | [
"MIT"
] | null | null | null | module-2/Intro-Scipy/your-code/main.ipynb | Skye-FY/daft-miami-0120-labs | a5fd22eae0a9950940fad084579b99cc98863e5c | [
"MIT"
] | 3 | 2019-10-28T21:38:48.000Z | 2019-12-17T01:45:37.000Z | module-2/Intro-Scipy/your-code/main.ipynb | Skye-FY/daft-miami-0120-labs | a5fd22eae0a9950940fad084579b99cc98863e5c | [
"MIT"
] | 7 | 2020-01-21T17:33:11.000Z | 2020-01-22T02:11:37.000Z | 29.898844 | 574 | 0.615756 | [
[
[
"# Before your start:\n- Read the README.md file\n- Comment as much as you can and use the resources (README.md file)\n- Happy learning!",
"_____no_output_____"
]
],
[
[
"#import numpy and pandas\n\n",
"_____no_output_____"
]
],
[
[
"# Challenge 1 - The `stats` Submodule\n\nThis submodule contains statistical functions for conducting hypothesis tests, producing various distributions and other useful tools. Let's examine this submodule using the KickStarter dataset. Load the data using Ironhack's database (db: kickstarter, table: projects).",
"_____no_output_____"
]
],
[
[
"# Your code here:\n",
"_____no_output_____"
]
],
[
[
"Now print the `head` function to examine the dataset.",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Import the `mode` function from `scipy.stats` and find the mode of the `country` and `currency` column.",
"_____no_output_____"
]
],
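What `mode` reports is simply the most frequent value in a column. A standard-library sketch on made-up values (not the real `country` column) to illustrate:

```python
from collections import Counter

countries = ["US", "GB", "US", "CA", "US"]  # illustrative values only
# most_common(1) returns [(value, count)] for the modal value
value, count = Counter(countries).most_common(1)[0]
print(value, count)
```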
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"The trimmed mean is a function that computes the mean of the data with observations removed. The most common way to compute a trimmed mean is by specifying a percentage and then removing elements from both ends. However, we can also specify a threshold on both ends. The goal of this function is to create a more robust method of computing the mean that is less influenced by outliers. SciPy contains a function called `tmean` for computing the trimmed mean. \n\nIn the cell below, import the `tmean` function and then find the 75th percentile of the `goal` column. Compute the trimmed mean between 0 and the 75th percentile of the column. Read more about the `tmean` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tmean.html#scipy.stats.tmean).",
"_____no_output_____"
]
],
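To make the definition concrete, here is a pure-Python sketch of what `tmean` computes with limits `(0, p75)`; the numbers are illustrative, not the KickStarter data, and scipy treats both limits as inclusive by default:

```python
def trimmed_mean(values, lower, upper):
    # Keep only observations inside the inclusive [lower, upper] limits,
    # then average what remains.
    kept = [v for v in values if lower <= v <= upper]
    return sum(kept) / len(kept)

goals = [10, 20, 30, 40, 1_000_000]  # one huge outlier
print(trimmed_mean(goals, 0, 40))  # the outlier no longer drags the mean up
```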
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"#### SciPy contains various statistical tests. One of the tests is Fisher's exact test. This test is used for contingency tables. \n\nThe test originates from the \"Lady Tasting Tea\" experiment. In 1935, Fisher published the results of the experiment in his book. The experiment was based on a claim by Muriel Bristol that she can taste whether tea or milk was first poured into the cup. Fisher devised this test to disprove her claim. The null hypothesis is that the treatments do not affect outcomes, while the alternative hypothesis is that the treatment does affect outcome. To read more about Fisher's exact test, see:\n\n* [Wikipedia's explanation](http://b.link/test61)\n* [A cool deep explanation](http://b.link/handbook47)\n* [An explanation with some important Fisher's considerations](http://b.link/significance76)\n\nLet's perform Fisher's exact test on our KickStarter data. We intend to test the hypothesis that the choice of currency has an impact on meeting the pledge goal. We'll start by creating two derived columns in our dataframe. The first will contain 1 if the amount of money in `usd_pledged_real` is greater than the amount of money in `usd_goal_real`. We can compute this by using the `np.where` function. If the amount in one column is greater than the other, enter a value of 1, otherwise enter a value of zero. Add this column to the dataframe and name it `goal_met`.",
"_____no_output_____"
]
],
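The `goal_met` logic described above, sketched with plain lists (the lab's `np.where` call on the dataframe columns is the vectorized equivalent); the dollar amounts are made up:

```python
usd_pledged_real = [150.0, 20.0, 5000.0]  # illustrative values only
usd_goal_real = [100.0, 50.0, 5000.0]

# 1 where the pledged amount exceeds the goal, 0 otherwise --
# the same rule as np.where(pledged > goal, 1, 0).
goal_met = [1 if pledged > goal else 0
            for pledged, goal in zip(usd_pledged_real, usd_goal_real)]
print(goal_met)  # [1, 0, 0]
```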
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Next, create a column that checks whether the currency of the project is in US Dollars. Create a column called `usd` using the `np.where` function where if the currency is US Dollars, assign a value of 1 to the row and 0 otherwise.",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Now create a contingency table using the `pd.crosstab` function in the cell below to compare the `goal_met` and `usd` columns.",
"_____no_output_____"
],
[
"Import the `fisher_exact` function from `scipy.stats` and conduct the hypothesis test on the contingency table that you have generated above. You can read more about the `fisher_exact` function [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html#scipy.stats.fisher_exact). The output of the function should be the odds ratio and the p-value. The p-value will provide you with the outcome of the test.",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"# Challenge 2 - The `interpolate` submodule\n\nThis submodule allows us to interpolate between two points and create a continuous distribution based on the observed data.\n\nIn the cell below, import the `interp1d` function and first take a sample of 10 rows from `kickstarter`. ",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Next, create a linear interpolation of the backers as a function of `usd_pledged_real`. Create a function `f` that generates a linear interpolation of backers as predicted by the amount of real pledged dollars.",
"_____no_output_____"
]
],
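Between any two known points, `interp1d` with its default linear kind evaluates the straight line through them. A hand-rolled sketch of that rule (not the scipy API itself), with made-up backer counts:

```python
def lerp(x0, y0, x1, y1, x):
    # Straight line through (x0, y0) and (x1, y1), evaluated at x.
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# e.g. if 100 pledged dollars maps to 5 backers and 300 maps to 15,
# the interpolated value at 200 lies halfway between:
print(lerp(100, 5, 300, 15, 200))  # 10.0
```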
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Now create a new variable called `x_new`. This variable will contain all integers between the minimum number of backers in our sample and the maximum number of backers. The goal here is to take the dataset that contains few obeservations due to sampling and fill all observations with a value using the interpolation function. \n\nHint: one option is the `np.arange` function.",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"Plot function f for all values of `x_new`. Run the code below.",
"_____no_output_____"
]
],
[
[
"# Run this code:\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.plot(x_new, f(x_new))",
"_____no_output_____"
]
],
[
[
"Next create a function that will generate a cubic interpolation function. Name the function `g`.",
"_____no_output_____"
]
],
[
[
"# Your code here:\n\n",
"_____no_output_____"
],
[
"# Run this code:\n\nplt.plot(x_new, g(x_new))",
"_____no_output_____"
]
],
[
[
"# Bonus Challenge - The Binomial Distribution\n\nThe binomial distribution allows us to calculate the probability of k successes in n trials for a random variable with two possible outcomes (which we typically label success and failure). \n\nThe probability of success is typically denoted by p and the probability of failure is denoted by 1-p.\n\nThe `scipy.stats` submodule contains a `binom` function for computing the probabilites of a random variable with the binomial distribution. You may read more about the binomial distribution [here](http://b.link/binomial55)\n\n* In the cell below, compute the probability that a dice lands on 5 exactly 3 times in 8 tries.\n",
"_____no_output_____"
]
],
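The requested probability can be checked against the binomial formula with only the standard library (scipy's `binom.pmf(3, 8, 1/6)` should give the same number):

```python
from math import comb

n, k, p = 8, 3, 1 / 6  # 8 rolls, exactly 3 fives, P(roll a 5) = 1/6
# Binomial pmf: C(n, k) * p^k * (1-p)^(n-k)
prob = comb(n, k) * p**k * (1 - p) ** (n - k)
print(round(prob, 4))  # 0.1042
```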
[
[
"# Your code here:\n\n",
"_____no_output_____"
]
],
[
[
"* Do a simulation for the last event: do a function that simulate 8 tries and return a 1 if the result is 5 exactly 3 times and 0 if not. Now launch your simulation.",
"_____no_output_____"
]
],
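One possible shape for that simulation function, using only the standard library with a fixed seed so reruns match (the function name is my own, not prescribed by the lab):

```python
import random

def simulate_event(rng):
    # Roll a fair die 8 times; report 1 if exactly three rolls are a 5.
    rolls = [rng.randint(1, 6) for _ in range(8)]
    return 1 if rolls.count(5) == 3 else 0

rng = random.Random(42)  # fixed seed for reproducibility
results = [simulate_event(rng) for _ in range(1000)]
print(sum(results) / 1000)  # should hover near the analytic 0.1042
```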
[
[
"# Your code here:\n",
"_____no_output_____"
]
],
[
[
"* Launch 10 simulations and represent the result in a bar plot. Now launch 1000 simulations and represent it. What do you see?",
"_____no_output_____"
]
],
[
[
"# Your code here:\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08ab8f3fea0c13a4648d458a97922b67992998e | 19,724 | ipynb | Jupyter Notebook | notebooks/mp-plot-patient-data.ipynb | alistairewj/mortality-prediction | 1a1f44133d99f0214be535be62915b565bf26cfa | [
"MIT"
] | 50 | 2017-09-07T20:06:34.000Z | 2022-01-09T03:24:48.000Z | notebooks/mp-plot-patient-data.ipynb | alistairewj/mortality-prediction | 1a1f44133d99f0214be535be62915b565bf26cfa | [
"MIT"
] | 3 | 2018-12-07T08:50:19.000Z | 2021-05-13T07:34:28.000Z | notebooks/mp-plot-patient-data.ipynb | alistairewj/mortality-prediction | 1a1f44133d99f0214be535be62915b565bf26cfa | [
"MIT"
] | 24 | 2017-07-08T15:46:36.000Z | 2021-05-01T22:03:42.000Z | 38.15087 | 112 | 0.530268 | [
[
[
"# Import libraries\nimport numpy as np\nimport pandas as pd\nimport sklearn as sk\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib.font_manager import FontProperties # for unicode fonts\nimport psycopg2\nimport sys\nimport datetime as dt\nimport mp_utils as mp\n\nfrom sklearn.pipeline import Pipeline\n\n# used to impute mean for data and standardize for computational stability\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.preprocessing import StandardScaler\n\n# logistic regression is our favourite model ever\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.linear_model import LogisticRegressionCV # l2 regularized regression\nfrom sklearn.linear_model import LassoCV\n\n# used to calculate AUROC/accuracy\nfrom sklearn import metrics\n\n# used to create confusion matrix\nfrom sklearn.metrics import confusion_matrix\n\n# gradient boosting - must download package https://github.com/dmlc/xgboost\nimport xgboost as xgb\n\n# default colours for prettier plots\ncol = [[0.9047, 0.1918, 0.1988],\n [0.2941, 0.5447, 0.7494],\n [0.3718, 0.7176, 0.3612],\n [1.0000, 0.5482, 0.1000],\n [0.4550, 0.4946, 0.4722],\n [0.6859, 0.4035, 0.2412],\n [0.9718, 0.5553, 0.7741],\n [0.5313, 0.3359, 0.6523]];\n# \"Tableau 20\" colors as RGB. \ntableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), \n (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), \n (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), \n (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), \n (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] \n \n# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts. 
\nfor i in range(len(tableau20)): \n r, g, b = tableau20[i] \n tableau20[i] = (r / 255., g / 255., b / 255.)\n\nmarker = ['v','o','d','^','s','>','+']\nls = ['-','-','-','-','-','s','--','--']\n\n# bigger font !\nplt.rcParams.update({'font.size': 22})\n\n%matplotlib inline\n\nfrom __future__ import print_function",
"_____no_output_____"
]
],
[
[
"# Plot data from example patient's time-series",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('/tmp/mp_data.csv')",
"_____no_output_____"
],
[
"# load in this patient's deathtime from the actual experiment\ndf_offset = pd.read_csv('/tmp/mp_death.csv')\n\n# get censoring information\ndf_censor = pd.read_csv('/tmp/mp_censor.csv')",
"_____no_output_____"
]
],
[
[
"# Experiment A: First 24 hours",
"_____no_output_____"
]
],
[
[
"# define the patient\niid = 200001\niid2 = 200019\nT_WINDOW = 24\ntime_dict = {iid: 24, iid2: 24}\n\ndf_pat = df.loc[df['icustay_id']==iid, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values\n\n# Two subplots, the axes array is 1-d\nf, axarr = plt.subplots(2, sharex=True, figsize=[10,10])\n\npretty_labels = {'heartrate': 'Heart rate',\n 'meanbp': 'Mean blood pressure',\n 'resprate': 'Respiratory rate',\n 'spo2': 'Peripheral oxygen saturation',\n 'tempc': 'Temperature',\n 'bg_ph': 'pH',\n 'bg_bicarbonate': 'Serum bicarbonate',\n 'hemoglobin': 'Hemoglobin',\n 'potassium': 'Potassium',\n 'inr': 'International normalized ratio',\n 'bg_lactate': 'Lactate',\n 'wbc': 'White blood cell count'}\n#var_list = df.columns\n\n# first plot all the vitals in subfigure 1\nvar_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']\n\ni=0\nt_scale = 1.0 # divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)], markersize=8,\n color=tableau20[i], linewidth=2)\n i+=1\n \n\naxarr[0].set_ylim([0,150])\ny_lim = axarr[0].get_ylim()\n\n# add ICU discharge\nif dischtime is not np.nan:\n axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n\n# add a grey patch to represent the window\nendtime = time_dict[iid]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[0].add_patch(rect)\n# #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16)\n\naxarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16)\n\n\n\n# next plot the vitals for the next patient in subfigure 2\ndf_pat = df.loc[df['icustay_id']==iid2, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values\n\n\ni=0\nt_scale = 1.0 # 
divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed since ICU admission'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)], markersize=8,\n color=tableau20[i], linewidth=2)\n i+=1\n\naxarr[1].set_ylim([0,150])\ny_lim = axarr[1].get_ylim()\n\n# add ICU discharge\nif deathtime is not np.nan:\n axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16)\n \n# add DNR\ndnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values\nif dnrtime.shape[0]>0:\n axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3)\n axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16)\n\n# add a patch to represent the window\nendtime = time_dict[iid2]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[1].add_patch(rect)\n\n\n\n\naxarr[1].set_xlabel(t_unit,fontsize=16)\naxarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16)\naxarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.21),ncol=3)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Experiment B: Random time",
"_____no_output_____"
]
],
[
[
"# generate a random time dictionary\nT_WINDOW=4\ndf_tmp=df_offset.copy().merge(df_censor, how='left', left_on='icustay_id', right_on='icustay_id')\ntime_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True)\n\n\n# define the patient\niid = 200001\niid2 = 200019\n\ndf_pat = df.loc[df['icustay_id']==iid, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values\n\n# Two subplots, the axes array is 1-d\nf, axarr = plt.subplots(2, sharex=True, figsize=[10,10])\n\npretty_labels = {'heartrate': 'Heart rate',\n 'meanbp': 'Mean blood pressure',\n 'resprate': 'Respiratory rate',\n 'spo2': 'Peripheral oxygen saturation',\n 'tempc': 'Temperature',\n 'bg_ph': 'pH',\n 'bg_bicarbonate': 'Serum bicarbonate',\n 'hemoglobin': 'Hemoglobin',\n 'potassium': 'Potassium',\n 'inr': 'International normalized ratio',\n 'bg_lactate': 'Lactate',\n 'wbc': 'White blood cell count'}\n#var_list = df.columns\n\n# first plot all the vitals in subfigure 1\nvar_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']\n\ni=0\nt_scale = 1.0 # divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)],\n color=tableau20[i], linewidth=2)\n i+=1\n \n\naxarr[0].set_ylim([0,150])\ny_lim = axarr[0].get_ylim()\n\n# add ICU discharge\nif dischtime is not np.nan:\n axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n\n# add a grey patch to represent the window\nendtime = time_dict[iid]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[0].add_patch(rect)\n# #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16)\n\naxarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16)\n\n\n\n# next plot the vitals for the next patient in subfigure 2\ndf_pat = 
df.loc[df['icustay_id']==iid2, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values\n\n\ni=0\nt_scale = 1.0 # divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed since ICU admission'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)], markersize=8,\n color=tableau20[i], linewidth=2)\n i+=1\n\naxarr[1].set_ylim([0,150])\ny_lim = axarr[1].get_ylim()\n\n# add ICU discharge\nif deathtime is not np.nan:\n axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16)\n \n# add DNR\ndnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values\nif dnrtime.shape[0]>0:\n axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3)\n axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16)\n\n# add a patch to represent the window\nendtime = time_dict[iid2]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[1].add_patch(rect)\n\n\n\n\naxarr[1].set_xlabel(t_unit,fontsize=16)\naxarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16)\n#axarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.1),ncol=3)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Both 24 hours and 4 hour window",
"_____no_output_____"
]
],
[
[
"# generate a random time dictionary\nT_WINDOW=4\ndf_tmp=df_offset.copy().merge(df_censor, how='left', left_on='icustay_id', right_on='icustay_id')\ntime_dict = mp.generate_times(df_tmp, T=2, seed=111, censor=True)\n\n\n# define the patient\niid = 200001\niid2 = 200019\n\ndf_pat = df.loc[df['icustay_id']==iid, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid, 'deathtime_hours'].values\n\n# Two subplots, the axes array is 1-d\nf, axarr = plt.subplots(2, sharex=True, figsize=[10,10])\n\npretty_labels = {'heartrate': 'Heart rate',\n 'meanbp': 'Mean blood pressure',\n 'resprate': 'Respiratory rate',\n 'spo2': 'Peripheral oxygen saturation',\n 'tempc': 'Temperature',\n 'bg_ph': 'pH',\n 'bg_bicarbonate': 'Serum bicarbonate',\n 'hemoglobin': 'Hemoglobin',\n 'potassium': 'Potassium',\n 'inr': 'International normalized ratio',\n 'bg_lactate': 'Lactate',\n 'wbc': 'White blood cell count'}\n#var_list = df.columns\n\n# first plot all the vitals in subfigure 1\nvar_vitals = [u'heartrate', u'meanbp', u'resprate', u'tempc', u'spo2']\n\ni=0\nt_scale = 1.0 # divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[0].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)],\n color=tableau20[i], linewidth=2)\n i+=1\n \n\naxarr[0].set_ylim([0,150])\ny_lim = axarr[0].get_ylim()\n\n# add ICU discharge\nif dischtime is not np.nan:\n axarr[0].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n\n# add a grey patch to represent the 4 hour window\nendtime = time_dict[iid]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[0].add_patch(rect)\n# #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16)\n\n\n# add a grey patch to represent the 24 hour window\nrect = matplotlib.patches.Rectangle( (0, y_lim[0]), 24, y_lim[1], 
color='#bdbdbd')\naxarr[0].add_patch(rect)\n# #axarr[0].text(starttime/60.0-4-2,4, 'window',fontsize=16)\n\naxarr[0].set_ylabel('Vital signs for {}'.format(iid),fontsize=16)\n\n\n\n# next plot the vitals for the next patient in subfigure 2\ndf_pat = df.loc[df['icustay_id']==iid2, :].set_index('hr')\ndeathtime = df_offset.loc[df_offset['icustay_id']==iid2, 'deathtime_hours'].values\n\n\ni=0\nt_scale = 1.0 # divide by this to get from hours to t_unit\nt_unit = 'Hours elapsed since ICU admission'\nfor v in var_vitals:\n idx = ~df_pat[v].isnull()\n if np.sum(idx) > 0:\n axarr[1].plot(df_pat.loc[idx,v].index/t_scale, df_pat.loc[idx,v].values, '--',\n label=pretty_labels[v],\n marker=marker[np.mod(i,7)], markersize=8,\n color=tableau20[i], linewidth=2)\n i+=1\n\naxarr[1].set_ylim([0,150])\ny_lim = axarr[1].get_ylim()\n\n# add ICU discharge\nif deathtime is not np.nan:\n axarr[1].plot([deathtime,deathtime], y_lim, 'k:',linewidth=3)\n axarr[1].arrow(deathtime-5, 115, 4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(deathtime-12, 112.5, 'Death', fontsize=16)\n \n# add DNR\ndnrtime = df_censor.loc[df_censor['icustay_id']==iid2, 'censortime_hours'].values\nif dnrtime.shape[0]>0:\n axarr[1].plot([dnrtime,dnrtime], y_lim, 'm:', linewidth=3)\n axarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')\n axarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16)\n\n# add a patch to represent the 4 hour window\nendtime = time_dict[iid2]\nrect = matplotlib.patches.Rectangle( (endtime-T_WINDOW, y_lim[0]), T_WINDOW, y_lim[1], color='#bdbdbd')\naxarr[1].add_patch(rect)\n\naxarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', ec='k')\naxarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16)\n\n# add a patch to represent the 24 hour window\nrect = matplotlib.patches.Rectangle( (0, y_lim[0]), 24, y_lim[1], color='#bdbdbd')\naxarr[1].add_patch(rect)\n\n\naxarr[1].arrow(dnrtime+5, 135, -4, 0, head_width=5, head_length=1, fc='k', 
ec='k')\naxarr[1].text(dnrtime+5, 132.5, 'DNR',fontsize=16)\n\naxarr[1].set_xlabel(t_unit,fontsize=16)\naxarr[1].set_ylabel('Vital signs for {}'.format(iid2),fontsize=16)\n#axarr[1].legend(shadow=True, fancybox=True,loc='upper center', bbox_to_anchor=(0.5, 1.1),ncol=3)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08ac373e2ef69f8b0115b37ade446810c91b6ea | 7,269 | ipynb | Jupyter Notebook | advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,610 | 2020-10-01T14:14:53.000Z | 2022-03-31T18:02:31.000Z | advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 1,959 | 2020-09-30T20:22:42.000Z | 2022-03-31T23:58:37.000Z | advanced_functionality/fairseq_translation/fairseq_sagemaker_pretrained_en2fr.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,052 | 2020-09-30T22:11:46.000Z | 2022-03-31T23:02:51.000Z | 30.670886 | 462 | 0.622919 | [
[
[
"# Fairseq in Amazon SageMaker: Pre-trained English to French translation model\n\nIn this notebook, we will show you how to serve an English to French translation model using pre-trained model provided by the [Fairseq toolkit](https://github.com/pytorch/fairseq)\n\n## Permissions\n\nRunning this notebook requires permissions in addition to the regular SageMakerFullAccess permissions. This is because it creates new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy AmazonEC2ContainerRegistryFullAccess to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this, the new permissions will be available immediately.\n\n## Download pre-trained model\n\nFairseq maintains their pre-trained models [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/README.md). We will use the model that was pre-trained on the [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) dataset. As the models are archived in .bz2 format, we need to convert them to .tar.gz as this is the format supported by Amazon SageMaker.\n\n### Convert archive",
"_____no_output_____"
]
],
[
[
"%%sh\n\nwget https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2\n\ntar xvjf wmt14.v2.en-fr.fconv-py.tar.bz2 > /dev/null\ncd wmt14.en-fr.fconv-py\nmv model.pt checkpoint_best.pt\n\ntar czvf wmt14.en-fr.fconv-py.tar.gz checkpoint_best.pt dict.en.txt dict.fr.txt bpecodes README.md > /dev/null",
"_____no_output_____"
]
],
[
[
"The pre-trained model has been downloaded and converted. The next step is upload the data to Amazon S3 in order to make it available for running the inference.\n\n### Upload data to Amazon S3",
"_____no_output_____"
]
],
[
[
"import sagemaker\n\nsagemaker_session = sagemaker.Session()\nregion = sagemaker_session.boto_session.region_name\naccount = sagemaker_session.boto_session.client(\"sts\").get_caller_identity().get(\"Account\")\n\nbucket = sagemaker_session.default_bucket()\nprefix = \"sagemaker/DEMO-pytorch-fairseq/pre-trained-models\"\n\nrole = sagemaker.get_execution_role()",
"_____no_output_____"
],
[
"trained_model_location = sagemaker_session.upload_data(\n path=\"wmt14.en-fr.fconv-py/wmt14.en-fr.fconv-py.tar.gz\", bucket=bucket, key_prefix=prefix\n)",
"_____no_output_____"
]
],
[
[
"## Build Fairseq serving container\n\nNext we need to register a Docker image in Amazon SageMaker that will contain the Fairseq code and that will be pulled at inference time to perform the of the precitions from the pre-trained model we downloaded. ",
"_____no_output_____"
]
],
[
[
"%%sh\nchmod +x create_container.sh \n\n./create_container.sh pytorch-fairseq-serve",
"_____no_output_____"
]
],
[
[
"The Fairseq serving image has been pushed into Amazon ECR, the registry from which Amazon SageMaker will be able to pull that image and launch both training and prediction. ",
"_____no_output_____"
],
[
"## Hosting the pre-trained model for inference\n\nWe first need to define a base JSONPredictor class that will help us with sending predictions to the model once it's hosted on the Amazon SageMaker endpoint. ",
"_____no_output_____"
]
],
[
[
"from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer\n\n\nclass JSONPredictor(RealTimePredictor):\n def __init__(self, endpoint_name, sagemaker_session):\n super(JSONPredictor, self).__init__(\n endpoint_name, sagemaker_session, json_serializer, json_deserializer\n )",
"_____no_output_____"
]
],
[
[
"We can now use the Model class to deploy the model artifacts (the pre-trained model) on a CPU instance. Let's use an `ml.m5.xlarge`. ",
"_____no_output_____"
]
],
[
[
"from sagemaker import Model\n\nalgorithm_name = \"pytorch-fairseq-serve\"\nimage = \"{}.dkr.ecr.{}.amazonaws.com/{}:latest\".format(account, region, algorithm_name)\n\nmodel = Model(\n model_data=trained_model_location,\n role=role,\n image=image,\n predictor_cls=JSONPredictor,\n)",
"_____no_output_____"
],
[
"predictor = model.deploy(initial_instance_count=1, instance_type=\"ml.m5.xlarge\")",
"_____no_output_____"
]
],
[
[
"Now it's your turn to play. Input a sentence in English and get the translation in French by simply calling predict. ",
"_____no_output_____"
]
],
[
[
"import html\n\nresult = predictor.predict(\"I love translation\")\n# Some characters are escaped HTML-style requiring to unescape them before printing\nprint(html.unescape(result))",
"_____no_output_____"
]
],
[
[
"Once you're done with getting predictions, remember to shut down your endpoint as you no longer need it. \n\n## Delete endpoint",
"_____no_output_____"
]
],
[
[
"model.sagemaker_session.delete_endpoint(predictor.endpoint)",
"_____no_output_____"
]
],
[
[
"Voila! For more information, you can check out the [Fairseq toolkit homepage](https://github.com/pytorch/fairseq). ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d08ac8529819f547a7a7cd665ee2e99cdffa7d1c | 15,697 | ipynb | Jupyter Notebook | classification_algorithm_test.ipynb | pyupcgithub/DNA_binding-protein-prediciton | f283aba8013eb6d45bd1d9ef46986f0c1e75aaa9 | [
"MIT"
] | 1 | 2018-01-17T13:15:13.000Z | 2018-01-17T13:15:13.000Z | classification_algorithm_test.ipynb | pyupcgithub/DNA_binding-protein-prediciton | f283aba8013eb6d45bd1d9ef46986f0c1e75aaa9 | [
"MIT"
] | null | null | null | classification_algorithm_test.ipynb | pyupcgithub/DNA_binding-protein-prediciton | f283aba8013eb6d45bd1d9ef46986f0c1e75aaa9 | [
"MIT"
] | null | null | null | 25.988411 | 102 | 0.531248 | [
[
[
"import codecs\nfrom itertools import *\nimport numpy as np\nfrom sklearn import svm\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn import tree\nfrom sklearn import model_selection\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import GradientBoostingClassifier\nimport xgboost as xgb\nfrom sklearn.ensemble import RandomForestClassifier\nimport pylab as pl\ndef load_data(filename):\n file = codecs.open(filename,'r','utf-8')\n data = []\n label = []\n for line in islice(file,0,None):\n line = line.strip().split(',')\n #print (\"reading data....\")\n data.append([float(i) for i in line[1:-1]])\n label.append(line[-1])\n x = np.array(data)\n y = np.array(label)\n #print (x)\n #print (y)\n return x,y\ndef logistic_regression(x_train,y_train):\n print(\"logistic_regression...\")\n clf1 = LogisticRegression()\n score1 = model_selection.cross_val_score(clf1,x_train,y_train,cv=10,scoring=\"accuracy\")\n x = [int(i) for i in range(1,11)]\n y = score1\n pl.ylabel(u'Accuracy')\n pl.xlabel(u'times')\n pl.plot(x,y,label='LogReg')\n pl.legend()\n #pl.savefig(\"picture/LogReg.png\")\n print (np.mean(score1))\ndef svm_(x_train,y_train):\n print(\"svm...\")\n clf2 = svm.LinearSVC(random_state=2016)\n score2 = model_selection.cross_val_score(clf2,x_train,y_train,cv=10,scoring='accuracy')\n #print score2\n print ('The accuracy of linearSVM:')\n print (np.mean(score2))\n x = [int(i) for i in range(1, 11)]\n y = score2\n pl.ylabel(u'Accuracy')\n pl.xlabel(u'times')\n pl.plot(x, y,label='SVM')\n pl.legend()\n #pl.savefig(\"picture/SVM.png\")\ndef gradient_boosting(x_train,y_train):\n print(\"gradient_boosting...\") \n clf5 = GradientBoostingClassifier()\n score5 = model_selection.cross_val_score(clf5,x_train,y_train,cv=10,scoring=\"accuracy\")\n print ('The accuracy of GradientBoosting:')\n print (np.mean(score5))\n x = [int(i) for i in range(1, 11)]\n y = score5\n pl.ylabel(u'Accuracy')\n 
pl.xlabel(u'times')\n pl.plot(x, y,label='GBDT')\n pl.legend()\n #pl.savefig(\"picture/GBDT.png\")\ndef xgb_boost(x_train,y_train):\n print(\"xgboost....\")\n clf = xgb.XGBClassifier()\n score = model_selection.cross_val_score(clf,x_train,y_train,cv=10,scoring=\"accuracy\")\n print ('The accuracy of XGBoosting:')\n print (np.mean(score)) \n x = [int(i) for i in range(1, 11)]\n y = score\n pl.ylabel(u'Accuracy')\n pl.xlabel(u'times')\n pl.plot(x, y,label='xgboost')\n pl.legend()\n #pl.savefig(\"picture/XGBoost.png\")\ndef random_forest(x_train,y_train): \n print(\"random_forest...\") \n clf = RandomForestClassifier(n_estimators=100) \n score = model_selection.cross_val_score(clf,x_train,y_train,cv=10,scoring=\"accuracy\")\n print ('The accuracy of RandomForest:')\n print (np.mean(score))\n x = [int(i) for i in range(1, 11)]\n y = score\n pl.ylabel(u'Accuracy')\n pl.xlabel(u'times')\n pl.plot(x, y,label='RandForest')\n pl.legend()\n #pl.savefig(\"picture/RandomForest.png\")\ndef train_acc(filename):\n x_train,y_train = load_data(filename)\n \n logistic_regression(x_train,y_train)\n \n svm_(x_train,y_train)\n \n gradient_boosting(x_train,y_train)\n \n xgb_boost(x_train,y_train)\n \n random_forest(x_train,y_train)",
"_____no_output_____"
],
[
"train_acc(\"feature1227/feature_all_1227.csv\")",
"logistic_regression...\n0.753132571824\nsvm...\nThe accuracy of linearSVM:\n0.760600553825\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.780183454483\nxgboost....\nThe accuracy of XGBoosting:\n0.769955001731\nrandom_forest...\nThe accuracy of RandomForest:\n0.78769470405\n"
],
[
"train_acc(\"features/feature_all_1223.csv\")",
"logistic_regression...\n0.756827622015\nsvm...\nThe accuracy of linearSVM:\n0.767990654206\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.826773970232\nxgboost....\nThe accuracy of XGBoosting:\n0.834207338179\nrandom_forest...\nThe accuracy of RandomForest:\n0.79981827622\n"
],
[
"train_acc(\"features/feature_amino_acid_freq_2_gram.csv\")",
"logistic_regression...\n0.57099342333\nsvm...\nThe accuracy of linearSVM:\n0.702942194531\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.694565593631\nxgboost....\nThe accuracy of XGBoosting:\n0.690827275874\nrandom_forest...\nThe accuracy of RandomForest:\n0.686197646244\n"
],
[
"train_acc(\"features/feature_all_1224.csv\")",
"logistic_regression...\n0.760531325718\nsvm...\nThe accuracy of linearSVM:\n0.768925233645\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.832268951194\nxgboost....\nThe accuracy of XGBoosting:\n0.829534440983\nrandom_forest...\nThe accuracy of RandomForest:\n0.79604534441\n"
],
[
"train_acc(\"feature1224/feature_amino_acid_freq_2_gram&pssmDT.csv\")",
"logistic_regression...\n0.742912772586\nsvm...\nThe accuracy of linearSVM:\n0.780140186916\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.817384908273\nxgboost....\nThe accuracy of XGBoosting:\n0.816433021807\nrandom_forest...\nThe accuracy of RandomForest:\n0.796028037383\n"
],
[
"train_acc(\"feature1224/feature_amino_acid_freq_2_gram&localDPP.csv\")",
"logistic_regression...\n0.694530979578\nsvm...\nThe accuracy of linearSVM:\n0.777275874005\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.80437002423\nxgboost....\nThe accuracy of XGBoosting:\n0.805261336102\nrandom_forest...\nThe accuracy of RandomForest:\n0.797810661128\n"
],
[
"train_acc(\"feature1224/feature_amino_acid_freq_2_gram&pssmDT&localDPP.csv\")",
"logistic_regression...\n0.751272066459\nsvm...\nThe accuracy of linearSVM:\n0.774567324334\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.82109726549\nxgboost....\nThe accuracy of XGBoosting:\n0.815524402908\nrandom_forest...\nThe accuracy of RandomForest:\n0.796053997923\n"
],
[
"train_acc(\"feature1224/feature_amino_acid_freq_2_gram&amino_acid.csv\")",
"logistic_regression...\n0.715048459675\nsvm...\nThe accuracy of linearSVM:\n0.727102803738\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.708541017653\nxgboost....\nThe accuracy of XGBoosting:\n0.717852197992\nrandom_forest...\nThe accuracy of RandomForest:\n0.709389061959\n"
],
[
"\ntrain_acc(\"feature1225/feature_amino_acid_freq_top_10.csv\")",
"logistic_regression...\n0.511638975424\nsvm...\nThe accuracy of linearSVM:\n0.588655244029\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.642557978539\nxgboost....\nThe accuracy of XGBoosting:\n0.639736933195\nrandom_forest...\nThe accuracy of RandomForest:\n0.63319487712\n"
],
[
"train_acc(\"feature1225/feature_all_1225_1.csv\")",
"logistic_regression...\n0.753097957771\nsvm...\nThe accuracy of linearSVM:\n0.775424022153\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.834172724126\nxgboost....\nThe accuracy of XGBoosting:\n0.835141917619\nrandom_forest...\nThe accuracy of RandomForest:\n0.794184839045\n"
],
[
"train_acc(\"feature1225/feature_all_1225_2.csv\")",
"logistic_regression...\n0.753097957771\nsvm...\nThe accuracy of linearSVM:\n0.775424022153\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.839797507788\nxgboost....\nThe accuracy of XGBoosting:\n0.831394946348\nrandom_forest...\nThe accuracy of RandomForest:\n0.783982346833\n"
],
[
"train_acc(\"feature1225/feature_ACC_1225.csv\")",
"logistic_regression...\n0.511638975424\nsvm...\nThe accuracy of linearSVM:\n0.511638975424\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.719738663898\nxgboost....\nThe accuracy of XGBoosting:\n0.72073381793\nrandom_forest...\nThe accuracy of RandomForest:\n0.712365870543\n"
],
[
"train_acc(\"final1225/feature_all.csv\")",
"logistic_regression...\n0.753097957771\nsvm...\nThe accuracy of linearSVM:\n0.775424022153\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.838862928349\nxgboost....\nThe accuracy of XGBoosting:\n0.831394946348\nrandom_forest...\nThe accuracy of RandomForest:\n0.793241606092\n"
],
[
"train_acc(\"predict1226_2/feature_all.csv\")",
"logistic_regression...\n0.746642436829\nsvm...\nThe accuracy of linearSVM:\n0.770829006577\ngradient_boosting...\nThe accuracy of GradientBoosting:\n0.78766874351\nxgboost....\nThe accuracy of XGBoosting:\n0.781178608515\nrandom_forest...\nThe accuracy of RandomForest:\n0.797014537902\n"
],
[
"from sklearn.externals import joblib\nx,y = load_data(\"predict1226_2/feature_all.csv\")\nrf = RandomForestClassifier(n_estimators=100) \nrf.fit(x,y)\njoblib.dump(rf,\"predict1226_2/rf.model\")\n#y_pred = rf.predict(x)\n#y_preprob = rf.predict_proba(x)[:,1]\n#print (y_pred) \n#print (y_preprob)",
"_____no_output_____"
],
[
"from sklearn.externals import joblib\nx,y = load_data(\"predict1226_2/feature_all.csv\")\nrf = RandomForestClassifier(n_estimators=100) \nrf.fit(x,y)\njoblib.dump(rf,\"predict1226_2/rf.model\")",
"_____no_output_____"
]
]
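Every classifier in the notebook above is scored with the same 10-fold cross-validation pattern (`cross_val_score(..., cv=10, scoring="accuracy")`). The fold bookkeeping behind such a split can be sketched in plain Python; this is a hypothetical illustration (scikit-learn's `KFold` additionally offers shuffling and stratified variants):

```python
def kfold_indices(n, k):
    """Yield (train_indices, test_indices) for k roughly equal contiguous folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print(len(folds))  # -> 5 (each fold holds out 2 of the 10 samples)
```

Averaging the per-fold accuracies, as `np.mean(score)` does above, gives the reported cross-validated accuracy.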
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08adf6737c21b5c7e2387e40bdce28d1fccf02a | 343,311 | ipynb | Jupyter Notebook | ssd300_evaluation.ipynb | mdsmith-cim/ssd_keras | 05755de5e8f6265cdbbab1ba750e8aefc0b9808c | [
"Apache-2.0"
] | null | null | null | ssd300_evaluation.ipynb | mdsmith-cim/ssd_keras | 05755de5e8f6265cdbbab1ba750e8aefc0b9808c | [
"Apache-2.0"
] | null | null | null | ssd300_evaluation.ipynb | mdsmith-cim/ssd_keras | 05755de5e8f6265cdbbab1ba750e8aefc0b9808c | [
"Apache-2.0"
] | null | null | null | 614.152057 | 314,996 | 0.936617 | [
[
[
"# SSD Evaluation Tutorial\n\nThis is a brief tutorial that explains how to compute the average precisions for any trained SSD model using the `Evaluator` class. The `Evaluator` computes the average precisions according to the Pascal VOC pre-2010 or post-2010 detection evaluation algorithms. You can find details about these computation methods [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap).\n\nAs an example we'll evaluate an SSD300 on the Pascal VOC 2007 `test` dataset, but note that the `Evaluator` works for any SSD model and any dataset that is compatible with the `DataGenerator`. If you would like to run the evaluation on a different model and/or dataset, the procedure is analogous to what is shown below, you just have to build the appropriate model and load the relevant dataset.\n\nNote: In case you would like to evaluate a model on MS COCO, I would recommend following the [MS COCO evaluation notebook](https://github.com/pierluigiferrari/ssd_keras/blob/master/ssd300_evaluation_COCO.ipynb) instead, because it can produce the results format required by the MS COCO evaluation server and uses the official MS COCO evaluation code, which computes the mAP slightly differently from the Pascal VOC method.\n\nNote: In case you want to evaluate any of the provided trained models, make sure that you build the respective model with the correct set of scaling factors to reproduce the official results. The models that were trained on MS COCO and fine-tuned on Pascal VOC require the MS COCO scaling factors, not the Pascal VOC scaling factors.",
"_____no_output_____"
]
],
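The Pascal VOC pre-2010 "sample" mode mentioned above averages precision over 11 equidistant recall levels, taking at each level the maximum precision achieved at any recall greater than or equal to it. A minimal sketch of that computation (a hypothetical helper, not the `Evaluator`'s actual code):

```python
def eleven_point_ap(recalls, precisions):
    """11-point interpolated average precision (Pascal VOC pre-2010 'sample' mode)."""
    ap = 0.0
    for t in [i / 10.0 for i in range(11)]:  # recall levels 0.0, 0.1, ..., 1.0
        # Interpolated precision: best precision at any recall >= t (0 if unreached).
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / 11.0

print(eleven_point_ap([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # -> 1.0
```

The mAP reported at the end of the notebook is just the mean of this quantity over the 20 object classes.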
[
[
"from keras import backend as K\nfrom keras.models import load_model\nfrom keras.optimizers import Adam\nfrom imageio import imread\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nfrom models.keras_ssd300 import ssd_300\nfrom keras_loss_function.keras_ssd_loss import SSDLoss\nfrom keras_layers.keras_layer_AnchorBoxes import AnchorBoxes\nfrom keras_layers.keras_layer_DecodeDetections import DecodeDetections\nfrom keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast\nfrom keras_layers.keras_layer_L2Normalization import L2Normalization\nfrom data_generator.object_detection_2d_data_generator import DataGenerator\nfrom eval_utils.average_precision_evaluator import Evaluator\n\n%matplotlib inline\n\nimport os\nimport os.path as p",
"Using TensorFlow backend.\n"
],
[
"# Set a few configuration parameters.\nimg_height = 300\nimg_width = 300\nn_classes = 20\nmodel_mode = 'training'",
"_____no_output_____"
]
],
[
[
"## 1. Load a trained SSD\n\nEither load a trained model or build a model and load trained weights into it. Since the HDF5 files I'm providing contain only the weights for the various SSD versions, not the complete models, you'll have to go with the latter option when using this implementation for the first time. You can then of course save the model and next time load the full model directly, without having to build it.\n\nYou can find the download links to all the trained model weights in the README.",
"_____no_output_____"
],
[
"### 1.1. Build the model and load trained weights into it",
"_____no_output_____"
]
],
[
[
"# 1: Build the Keras model\n\nK.clear_session() # Clear previous models from memory.\n\nmodel = ssd_300(image_size=(img_height, img_width, 3),\n n_classes=n_classes,\n mode=model_mode,\n l2_regularization=0.0005,\n scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05], # The scales for MS COCO [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05]\n aspect_ratios_per_layer=[[1.0, 2.0, 0.5],\n [1.0, 2.0, 0.5, 3.0, 1.0/3.0],\n [1.0, 2.0, 0.5, 3.0, 1.0/3.0],\n [1.0, 2.0, 0.5, 3.0, 1.0/3.0],\n [1.0, 2.0, 0.5],\n [1.0, 2.0, 0.5]],\n two_boxes_for_ar1=True,\n steps=[8, 16, 32, 64, 100, 300],\n offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5],\n clip_boxes=False,\n variances=[0.1, 0.1, 0.2, 0.2],\n normalize_coords=True,\n subtract_mean=[123, 117, 104],\n swap_channels=[2, 1, 0],\n confidence_thresh=0.01,\n iou_threshold=0.45,\n top_k=200,\n nms_max_output_size=400)\n\n# 2: Load the trained weights into the model.\n\nweights_path = '/usr/local/data/msmith/uncertainty/ssd_keras/good_dropout_model/ssd300_dropout_PASCAL2012_train_+12_epoch-58_loss-3.8960_val_loss-5.0832.h5'\n\nmodel.load_weights(weights_path, by_name=True)\n\n# 3: Compile the model so that Keras won't complain the next time you load it.\n\nadam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)\n\nssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)\n\nmodel.compile(optimizer=adam, loss=ssd_loss.compute_loss)",
"WARNING: Logging before flag parsing goes to stderr.\nW1121 16:15:06.265161 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.\n\nW1121 16:15:06.267094 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n\nW1121 16:15:06.297540 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nW1121 16:15:06.299231 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW1121 16:15:06.322895 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n\nW1121 16:15:06.371278 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. 
Please use tf.nn.max_pool2d instead.\n\nW1121 16:15:06.573747 140201693157120 deprecation.py:506] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nW1121 16:15:08.872177 140201693157120 deprecation_wrapper.py:119] From /home/vision/msmith/localDrive/msmith/anaconda3/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\nW1121 16:15:08.889451 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:133: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\nW1121 16:15:08.900078 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:74: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW1121 16:15:08.914669 140201693157120 deprecation.py:323] From /usr/local/data/msmith/uncertainty/ssd_keras/keras_loss_function/keras_ssd_loss.py:166: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n"
]
],
[
[
"Or",
"_____no_output_____"
],
[
"### 1.2. Load a trained model\n\nWe set `model_mode` to 'inference' above, so the evaluator expects that you load a model that was built in 'inference' mode. If you're loading a model that was built in 'training' mode, change the `model_mode` parameter accordingly.",
"_____no_output_____"
]
],
[
[
"# TODO: Set the path to the `.h5` file of the model to be loaded.\nmodel_path = 'ssd300_dropout_pascal_07+12_epoch-114_loss-4.3685_val_loss-4.5034.h5'\n\n# We need to create an SSDLoss object in order to pass that to the model loader.\nssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)\n\nK.clear_session() # Clear previous models from memory.\n\nmodel = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,\n 'L2Normalization': L2Normalization,\n 'DecodeDetections': DecodeDetections,\n 'compute_loss': ssd_loss.compute_loss})",
"_____no_output_____"
],
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"## 2. Create a data generator for the evaluation dataset\n\nInstantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase.",
"_____no_output_____"
]
],
[
[
"ROOT_PATH = '/usr/local/data/msmith/APL/Datasets/PASCAL/'\n# The directories that contain the images.\nVOC_2007_images_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2007/JPEGImages/')\nVOC_2012_images_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2012/JPEGImages/')\n\n# The directories that contain the annotations.\nVOC_2007_annotations_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2007/Annotations/')\nVOC_2012_annotations_dir = p.join(ROOT_PATH,'VOCdevkit/VOC2012/Annotations/')\n\n# The paths to the image sets.\nVOC_2007_train_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/train.txt')\nVOC_2012_train_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/train.txt')\nVOC_2007_val_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/val.txt')\nVOC_2012_val_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/val.txt')\nVOC_2007_trainval_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/trainval.txt')\nVOC_2012_trainval_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2012/ImageSets/Main/trainval.txt')\nVOC_2007_test_image_set_filename = p.join(ROOT_PATH,'VOCdevkit/VOC2007/ImageSets/Main/test.txt')\n\ndataset = DataGenerator(load_images_into_memory=True)\n\n\n# The XML parser needs to know what object class names to look for and in which order to map them to integers.\nclasses = ['background',\n 'aeroplane', 'bicycle', 'bird', 'boat',\n 'bottle', 'bus', 'car', 'cat',\n 'chair', 'cow', 'diningtable', 'dog',\n 'horse', 'motorbike', 'person', 'pottedplant',\n 'sheep', 'sofa', 'train', 'tvmonitor']\n\ndataset.parse_xml(images_dirs=[VOC_2012_images_dir],\n image_set_filenames=[VOC_2012_val_image_set_filename],\n annotations_dirs=[VOC_2012_annotations_dir],\n classes=classes,\n include_classes='all',\n exclude_truncated=False,\n exclude_difficult=False,\n ret=False)",
"Processing image set 'val.txt': 100%|██████████| 5823/5823 [00:43<00:00, 134.57it/s]\nLoading images into memory: 100%|██████████| 5823/5823 [01:06<00:00, 87.20it/s] \n"
]
],
[
[
"## 3. Run the evaluation\n\nNow that we have instantiated a model and a data generator to serve the dataset, we can set up the evaluator and run the evaluation.\n\nThe evaluator is quite flexible: It can compute the average precisions according to the Pascal VOC pre-2010 algorithm, which samples 11 equidistant points of the precision-recall curves, or according to the Pascal VOC post-2010 algorithm, which integrates numerically over the entire precision-recall curves instead of sampling a few individual points. You could also change the number of sampled recall points or the required IoU overlap for a prediction to be considered a true positive, among other things. Check out the `Evaluator`'s documentation for details on all the arguments.\n\nIn its default settings, the evaluator's algorithm is identical to the official Pascal VOC pre-2010 Matlab detection evaluation algorithm, so you don't really need to tweak anything unless you want to.\n\nThe evaluator roughly performs the following steps: It runs predictions over the entire given dataset, then it matches these predictions to the ground truth boxes, then it computes the precision-recall curves for each class, then it samples 11 equidistant points from these precision-recall curves to compute the average precision for each class, and finally it computes the mean average precision over all classes.",
"_____no_output_____"
]
],
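The matching step described above pairs a prediction with a ground truth box when their intersection over union (IoU) reaches `matching_iou_threshold` (0.5 here). For corner-format `(xmin, ymin, xmax, ymax)` boxes, IoU can be sketched as follows (a hypothetical helper; the evaluator's own routine additionally applies the `border_pixels` convention):

```python
def iou(a, b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.14285714285714285 (1/7, below 0.5)
```

A prediction whose best IoU against the unmatched ground truth boxes of its class falls below the threshold is counted as a false positive.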
[
[
"evaluator = Evaluator(model=model,\n n_classes=n_classes,\n data_generator=dataset,\n model_mode=model_mode)\n\nresults = evaluator(img_height=img_height,\n img_width=img_width,\n batch_size=2,\n data_generator_mode='resize',\n round_confidences=False,\n matching_iou_threshold=0.5,\n border_pixels='include',\n sorting_algorithm='quicksort',\n average_precision_mode='sample',\n num_recall_points=11,\n ignore_neutral_boxes=True,\n return_precisions=True,\n return_recalls=True,\n return_average_precisions=True,\n verbose=True)\n\nmean_average_precision, average_precisions, precisions, recalls = results",
"Number of images in the evaluation dataset: 5823\n\nProducing predictions batch-wise: 100%|██████████| 2912/2912 [02:39<00:00, 11.19it/s]\nMatching predictions to ground truth, class 1/20.: 100%|██████████| 9115/9115 [00:00<00:00, 21509.26it/s]\nMatching predictions to ground truth, class 2/20.: 100%|██████████| 4937/4937 [00:00<00:00, 28537.91it/s]\nMatching predictions to ground truth, class 3/20.: 100%|██████████| 24373/24373 [00:00<00:00, 32677.22it/s]\nMatching predictions to ground truth, class 4/20.: 100%|██████████| 20421/20421 [00:00<00:00, 23329.56it/s]\nMatching predictions to ground truth, class 5/20.: 100%|██████████| 37001/37001 [00:00<00:00, 39748.54it/s]\nMatching predictions to ground truth, class 6/20.: 100%|██████████| 3867/3867 [00:00<00:00, 21291.08it/s]\nMatching predictions to ground truth, class 7/20.: 100%|██████████| 38953/38953 [00:01<00:00, 24611.19it/s]\nMatching predictions to ground truth, class 8/20.: 100%|██████████| 5130/5130 [00:00<00:00, 23991.56it/s]\nMatching predictions to ground truth, class 9/20.: 100%|██████████| 95148/95148 [00:03<00:00, 29372.23it/s]\nMatching predictions to ground truth, class 10/20.: 100%|██████████| 8843/8843 [00:00<00:00, 30400.36it/s]\nMatching predictions to ground truth, class 11/20.: 100%|██████████| 6261/6261 [00:00<00:00, 27624.58it/s]\nMatching predictions to ground truth, class 12/20.: 100%|██████████| 7443/7443 [00:00<00:00, 29286.36it/s]\nMatching predictions to ground truth, class 13/20.: 100%|██████████| 3434/3434 [00:00<00:00, 21807.96it/s]\nMatching predictions to ground truth, class 14/20.: 100%|██████████| 3700/3700 [00:00<00:00, 21560.74it/s]\nMatching predictions to ground truth, class 15/20.: 100%|██████████| 233148/233148 [00:10<00:00, 22641.15it/s]\nMatching predictions to ground truth, class 16/20.: 100%|██████████| 30600/30600 [00:00<00:00, 33699.43it/s]\nMatching predictions to ground truth, class 17/20.: 100%|██████████| 14850/14850 [00:00<00:00, 29594.40it/s]\nMatching 
predictions to ground truth, class 18/20.: 100%|██████████| 4445/4445 [00:00<00:00, 29100.47it/s]\nMatching predictions to ground truth, class 19/20.: 100%|██████████| 4338/4338 [00:00<00:00, 19091.32it/s]\nMatching predictions to ground truth, class 20/20.: 100%|██████████| 8865/8865 [00:00<00:00, 24837.17it/s]\nComputing precisions and recalls, class 1/20\nComputing precisions and recalls, class 2/20\nComputing precisions and recalls, class 3/20\nComputing precisions and recalls, class 4/20\nComputing precisions and recalls, class 5/20\nComputing precisions and recalls, class 6/20\nComputing precisions and recalls, class 7/20\nComputing precisions and recalls, class 8/20\nComputing precisions and recalls, class 9/20\nComputing precisions and recalls, class 10/20\nComputing precisions and recalls, class 11/20\nComputing precisions and recalls, class 12/20\nComputing precisions and recalls, class 13/20\nComputing precisions and recalls, class 14/20\nComputing precisions and recalls, class 15/20\nComputing precisions and recalls, class 16/20\nComputing precisions and recalls, class 17/20\nComputing precisions and recalls, class 18/20\nComputing precisions and recalls, class 19/20\nComputing precisions and recalls, class 20/20\nComputing average precision, class 1/20\nComputing average precision, class 2/20\nComputing average precision, class 3/20\nComputing average precision, class 4/20\nComputing average precision, class 5/20\nComputing average precision, class 6/20\nComputing average precision, class 7/20\nComputing average precision, class 8/20\nComputing average precision, class 9/20\nComputing average precision, class 10/20\nComputing average precision, class 11/20\nComputing average precision, class 12/20\nComputing average precision, class 13/20\nComputing average precision, class 14/20\nComputing average precision, class 15/20\nComputing average precision, class 16/20\nComputing average precision, class 17/20\nComputing average precision, class 
18/20\nComputing average precision, class 19/20\nComputing average precision, class 20/20\n"
]
],
[
[
"## 4. Visualize the results\n\nLet's take a look:",
"_____no_output_____"
]
],
[
[
"for i in range(1, len(average_precisions)):\n print(\"{:<14}{:<6}{}\".format(classes[i], 'AP', round(average_precisions[i], 3)))\nprint()\nprint(\"{:<14}{:<6}{}\".format('','mAP', round(mean_average_precision, 3)))",
"aeroplane AP 0.791\nbicycle AP 0.766\nbird AP 0.671\nboat AP 0.529\nbottle AP 0.4\nbus AP 0.794\ncar AP 0.701\ncat AP 0.839\nchair AP 0.487\ncow AP 0.673\ndiningtable AP 0.594\ndog AP 0.811\nhorse AP 0.758\nmotorbike AP 0.801\nperson AP 0.747\npottedplant AP 0.4\nsheep AP 0.7\nsofa AP 0.65\ntrain AP 0.797\ntvmonitor AP 0.674\n\n mAP 0.679\n"
],
[
"m = max((n_classes + 1) // 2, 2)\nn = 2\n\nfig, cells = plt.subplots(m, n, figsize=(n*8,m*8))\nfor i in range(m):\n for j in range(n):\n if n*i+j+1 > n_classes: break\n cells[i, j].plot(recalls[n*i+j+1], precisions[n*i+j+1], color='blue', linewidth=1.0)\n cells[i, j].set_xlabel('recall', fontsize=14)\n cells[i, j].set_ylabel('precision', fontsize=14)\n cells[i, j].grid(True)\n cells[i, j].set_xticks(np.linspace(0,1,11))\n cells[i, j].set_yticks(np.linspace(0,1,11))\n cells[i, j].set_title(\"{}, AP: {:.3f}\".format(classes[n*i+j+1], average_precisions[n*i+j+1]), fontsize=16)",
"_____no_output_____"
]
],
[
[
"## 5. Advanced use\n\n`Evaluator` objects maintain copies of all relevant intermediate results like predictions, precisions and recalls, etc., so in case you want to experiment with different parameters, e.g. different IoU overlaps, there is no need to compute the predictions all over again every time you make a change to a parameter. Instead, you can only update the computation from the point that is affected onwards.\n\nThe evaluator's `__call__()` method is just a convenience wrapper that executes its other methods in the correct order. You could just call any of these other methods individually as shown below (but you have to make sure to call them in the correct order).\n\nNote that the example below uses the same evaluator object as above. Say you wanted to compute the Pascal VOC post-2010 'integrate' version of the average precisions instead of the pre-2010 version computed above. The evaluator object still has an internal copy of all the predictions, and since computing the predictions makes up the vast majority of the overall computation time and since the predictions aren't affected by changing the average precision computation mode, we skip computing the predictions again and instead only compute the steps that come after the prediction phase of the evaluation. We could even skip the matching part, since it isn't affected by changing the average precision mode either. In fact, we would only have to call `compute_average_precisions()` and `compute_mean_average_precision()` again, but for the sake of illustration we'll re-do the other computations, too.",
"_____no_output_____"
]
],
[
[
"evaluator.get_num_gt_per_class(ignore_neutral_boxes=True,\n verbose=False,\n ret=False)\n\nevaluator.match_predictions(ignore_neutral_boxes=True,\n matching_iou_threshold=0.5,\n border_pixels='include',\n sorting_algorithm='quicksort',\n verbose=True,\n ret=False)\n\nprecisions, recalls = evaluator.compute_precision_recall(verbose=True, ret=True)\n\naverage_precisions = evaluator.compute_average_precisions(mode='integrate',\n num_recall_points=11,\n verbose=True,\n ret=True)\n\nmean_average_precision = evaluator.compute_mean_average_precision(ret=True)",
"_____no_output_____"
],
[
"for i in range(1, len(average_precisions)):\n print(\"{:<14}{:<6}{}\".format(classes[i], 'AP', round(average_precisions[i], 3)))\nprint()\nprint(\"{:<14}{:<6}{}\".format('','mAP', round(mean_average_precision, 3)))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d08ae990db4ab0074c5f23c150e6aaa4bbe88b89 | 57,242 | ipynb | Jupyter Notebook | tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb | ofou/course-content | 04cc6450ee20c57c7832f86da6826e516d2daeed | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb | ofou/course-content | 04cc6450ee20c57c7832f86da6826e516d2daeed | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb | ofou/course-content | 04cc6450ee20c57c7832f86da6826e516d2daeed | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 36.114826 | 552 | 0.579277 | [
[
[
"<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/W0D4_Tutorial2.ipynb)",
"_____no_output_____"
],
[
"# Tutorial 2: Differential Equations\n**Week 0, Day 4: Calculus**\n\n**By Neuromatch Academy**\n\n__Content creators:__ John S Butler, Arvind Kumar with help from Rebecca Brady\n\n__Content reviewers:__ Swapnil Kumar, Sirisha Sripada, Matthew McCann, Tessy Tom\n\n__Production editors:__ Matthew McCann, Ella Batty",
"_____no_output_____"
],
[
"**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**\n\n<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>",
"_____no_output_____"
],
[
"---\n# Tutorial Objectives\n*Estimated timing of tutorial: 45 minutes*\n\nA great deal of neuroscience can be modelled using differential equations, from gating channels to single neurons to a network of neurons to blood flow, to behaviour. A simple way to think about differential equations is they are equations that describe how something changes. \n\nThe most famous of these in neuroscience is the Nobel Prize winning Hodgkin Huxley equation, which describes a neuron by modelling the gating of each axon. But we will not start there; we will start a few steps back.\n\nDifferential Equations are mathematical equations that describe how something like population or a neuron changes over time. The reason why differential equations are so useful is they can generalise a process such that one equation can be used to describe many different outcomes.\nThe general form of a first order differential equation is:\n\n\\begin{align*}\n\\frac{d}{dt}y(t)&=f(t,y(t))\\\\\n\\end{align*}\n\nwhich can be read as \"the change in a process $y$ over time $t$ is a function $f$ of time $t$ and itself $y$\". This might initially seem like a paradox as you are using a process $y$ you want to know about to describe itself, a bit like the MC Escher drawing of two hands painting [each other](https://en.wikipedia.org/wiki/Drawing_Hands). But that is the beauty of mathematics - this can be solved some of time, and when it cannot be solved exactly we can use numerical methods to estimate the answer (as we will see in the next tutorial). 
\n\n\nIn this tutorial, we will see how __differential equations are motivated by observations of physical responses.__ We will break down the population differential equation, then the integrate and fire model, which leads nicely into raster plots and frequency-current curves to rate models.\n\n**Steps:**\n- Get an intuitive understanding of a linear population differential equation (humans, not neurons)\n- Visualize the relationship between the change in population and the population\n- Breakdown the Leaky Integrate and Fire (LIF) differential equation\n- Code the exact solution of an LIF for a constant input\n- Visualize and listen to the response of the LIF for different inputs\n",
"_____no_output_____"
]
],
[
[
"# @title Video 1: Why do we care about differential equations?\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1v64y197bW\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"LhX-mUd8lPo\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"---\n# Setup",
"_____no_output_____"
]
],
[
[
"# Imports\n\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# @title Figure Settings\nimport IPython.display as ipd\nfrom matplotlib import gridspec\n\nimport ipywidgets as widgets # interactive display\n%config InlineBackend.figure_format = 'retina'\n\n# use NMA plot style\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")\nmy_layout = widgets.Layout()",
"_____no_output_____"
],
[
"# @title Plotting Functions\n\ndef plot_dPdt(alpha=.3):\n \"\"\" Plots change in population over time\n Args:\n alpha: Birth Rate\n Returns:\n A figure two panel figure\n left panel: change in population as a function of population\n right panel: membrane potential as a function of time\n \"\"\"\n\n with plt.xkcd():\n time=np.arange(0, 10 ,0.01)\n fig = plt.figure(figsize=(12,4))\n gs = gridspec.GridSpec(1, 2)\n\n ## dpdt as a fucntion of p\n plt.subplot(gs[0])\n plt.plot(np.exp(alpha*time), alpha*np.exp(alpha*time))\n plt.xlabel(r'Population $p(t)$ (millions)')\n plt.ylabel(r'$\\frac{d}{dt}p(t)=\\alpha p(t)$')\n\n ## p exact solution\n plt.subplot(gs[1])\n plt.plot(time, np.exp(alpha*time))\n plt.ylabel(r'Population $p(t)$ (millions)')\n plt.xlabel('time (years)')\n plt.show()\n\n\ndef plot_V_no_input(V_reset=-75):\n \"\"\"\n Args:\n V_reset: Reset Potential\n Returns:\n A figure two panel figure\n left panel: change in membrane potential as a function of membrane potential\n right panel: membrane potential as a function of time\n \"\"\"\n E_L=-75\n tau_m=10\n t=np.arange(0,100,0.01)\n V= E_L+(V_reset-E_L)*np.exp(-(t)/tau_m)\n V_range=np.arange(-90,0,1)\n dVdt=-(V_range-E_L)/tau_m\n\n with plt.xkcd():\n time=np.arange(0, 10, 0.01)\n fig = plt.figure(figsize=(12, 4))\n gs = gridspec.GridSpec(1, 2)\n\n plt.subplot(gs[0])\n plt.plot(V_range,dVdt)\n plt.hlines(0,min(V_range),max(V_range), colors='black', linestyles='dashed')\n plt.vlines(-75, min(dVdt), max(dVdt), colors='black', linestyles='dashed')\n plt.plot(V_reset,-(V_reset - E_L)/tau_m, 'o', label=r'$V_{reset}$')\n plt.text(-50, 1, 'Positive')\n plt.text(-50, -2, 'Negative')\n plt.text(E_L - 1, max(dVdt), r'$E_L$')\n plt.legend()\n plt.xlabel('Membrane Potential V (mV)')\n plt.ylabel(r'$\\frac{dV}{dt}=\\frac{-(V(t)-E_L)}{\\tau_m}$')\n\n plt.subplot(gs[1])\n plt.plot(t,V)\n plt.plot(t[0],V_reset,'o')\n plt.ylabel(r'Membrane Potential $V(t)$ (mV)')\n plt.xlabel('time (ms)')\n plt.ylim([-95, -60])\n\n 
plt.show()\n\n## LIF PLOT\ndef plot_IF(t, V,I,Spike_time):\n \"\"\"\n Args:\n t : time\n V : membrane Voltage\n I : Input\n Spike_time : Spike_times\n Returns:\n figure with three panels\n top panel: Input as a function of time\n middle panel: membrane potential as a function of time\n bottom panel: Raster plot\n \"\"\"\n\n with plt.xkcd():\n fig = plt.figure(figsize=(12, 4))\n gs = gridspec.GridSpec(3, 1, height_ratios=[1, 4, 1])\n\n # PLOT OF INPUT\n plt.subplot(gs[0])\n plt.ylabel(r'$I_e(nA)$')\n plt.yticks(rotation=45)\n plt.hlines(I,min(t),max(t),'g')\n plt.ylim((2, 4))\n plt.xlim((-50, 1000))\n\n # PLOT OF ACTIVITY\n plt.subplot(gs[1])\n plt.plot(t,V)\n plt.xlim((-50, 1000))\n plt.ylabel(r'$V(t)$(mV)')\n\n # PLOT OF SPIKES\n plt.subplot(gs[2])\n plt.ylabel(r'Spike')\n plt.yticks([])\n plt.scatter(Spike_time, 1 * np.ones(len(Spike_time)), color=\"grey\", marker=\".\")\n plt.xlim((-50, 1000))\n plt.xlabel('time(ms)')\n plt.show()\n\n\n## Plotting the differential Equation\ndef plot_dVdt(I=0):\n \"\"\"\n Args:\n I : Input Current\n Returns:\n figure of change in membrane potential as a function of membrane potential\n \"\"\"\n\n with plt.xkcd():\n E_L = -75\n tau_m = 10\n V = np.arange(-85, 0, 1)\n g_L = 10.\n fig = plt.figure(figsize=(6, 4))\n\n plt.plot(V,(-(V-E_L) + I*10) / tau_m)\n plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed')\n plt.xlabel('V (mV)')\n plt.ylabel(r'$\\frac{dV}{dt}$')\n plt.show()",
"_____no_output_____"
],
[
"# @title Helper Functions\n\n## EXACT SOLUTION OF LIF\ndef Exact_Integrate_and_Fire(I,t):\n \"\"\"\n Args:\n I : Input Current\n t : time\n Returns:\n Spike : Spike Count\n Spike_time : Spike time\n V_exact : Exact membrane potential\n \"\"\"\n\n Spike = 0\n tau_m = 10\n R = 10\n t_isi = 0\n V_reset = E_L = -75\n V_exact = V_reset * np.ones(len(t))\n V_th = -50\n Spike_time = []\n\n for i in range(0, len(t)):\n\n V_exact[i] = E_L + R*I + (V_reset - E_L - R*I) * np.exp(-(t[i]-t_isi)/tau_m)\n\n # Threshold Reset\n if V_exact[i] > V_th:\n V_exact[i-1] = 0\n V_exact[i] = V_reset\n t_isi = t[i]\n Spike = Spike+1\n Spike_time = np.append(Spike_time, t[i])\n\n return Spike, Spike_time, V_exact",
"_____no_output_____"
]
],
[
[
"---\n# Section 1: Population differential equation",
"_____no_output_____"
]
],
[
[
"# @title Video 2: Population differential equation\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1pg41137CU\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"czgGyoUsRoQ\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"This video covers our first example of a differential equation: a differential equation which models the change in population.\n\n<details>\n<summary> <font color='blue'>Click here for text recap of video </font></summary>\n\nTo get an intuitive feel of a differential equations, we will start with a population differential equation, which models the change in population [1], that is human population not neurons, we will get to neurons later. Mathematically it is written like:\n\\begin{align*}\n\\\\\n\\frac{d}{dt}\\,p(t) &= \\alpha p(t),\\\\\n\\end{align*}\n\nwhere $p(t)$ is the population of the world and $\\alpha$ is a parameter representing birth rate.\n\nAnother way of thinking about the models is that the equation\n\\begin{align*}\n\\\\\n\\frac{d}{dt}\\,p(t) &= \\alpha p(t),\\\\\n\\text{can be written as:}\\\\\n\\text{\"Change in Population\"} &= \\text{ \"Birth rate times Current population.\"}\n\\end{align*}\n\nThe equation is saying something reasonable maybe not the perfect model but a good start.\n</details>",
"_____no_output_____"
],
[
"### Think! 1.1: Interpretating the behavior of a linear population equation\nUsing the plot below of change of population $\\frac{d}{dt} p(t) $ as a function of population $p(t)$ with birth-rate $\\alpha=0.3$, discuss the following questions:\n1. Why is the population differential equation known as a linear differential equation?\n\n2. How does population size affect the rate of change of the population?\n",
"_____no_output_____"
]
],
[
[
"# @markdown Execute the code to plot the rate of change of population as a function of population\np = np.arange(0, 100, 0.1)\n\nwith plt.xkcd():\n\n dpdt = 0.3*p\n fig = plt.figure(figsize=(6, 4))\n plt.plot(p, dpdt)\n plt.xlabel(r'Population $p(t)$ (millions)')\n plt.ylabel(r'$\\frac{d}{dt}p(t)=\\alpha p(t)$')\n plt.show()",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\n 1. The plot of $\\frac{dp}{dt}$ is a line, which is why the differential\n equation is known as a linear differential equation.\n\n 2. As the population increases, the change of population increases. A\n population of 20 has a change of 6 while a population of 100 has a change of\n 30. This makes sense - the larger the population the larger the change.\n\"\"\"",
"_____no_output_____"
]
],
[
[
"## Section 1.1: Exact solution of the population equation",
"_____no_output_____"
],
[
"### Section 1.1.1: Initial condition\nThe linear population differential equation is known as an initial value differential equation because we need an initial population value to solve it. Here we will set our initial population at time 0 to 1:\n\n\\begin{align*}\n&p(0)=1.\\\\\n\\end{align*}\n\nDifferent initial conditions will lead to different answers, but they will not change the differential equation. This is one of the strengths of a differential equation. ",
"_____no_output_____"
],
[
"### Section 1.1.2: Exact Solution\nTo calculate the exact solution of a differential equation, we must integrate both sides. Instead of numerical integration (as you delved into in the last tutorial), we will first try to solve the differential equations using analytical integration. As with derivatives, we can find analytical integrals of simple equations by consulting [a list](https://en.wikipedia.org/wiki/Lists_of_integrals). We can then get integrals for more complex equations using some mathematical tricks - the harder the equation the more obscure the trick. \n\nThe linear population equation \n\\begin{align*}\n\\frac{d}{dt}\\,p(t) &= \\alpha p(t),\\\\\\\\\np(0)=P_0,\\\\\n\\end{align*}\nhas the exact solution:\n\\begin{align*}\np(t)&=P_0e^{\\alpha t}.\\\\\n\\end{align*}\n\nThe exact solution written in words is: \n\n\\begin{align*}\n\\text{\"Population\"}&=\\text{\"grows/declines exponentially as a function of time and birth rate\"}.\\\\\n\\end{align*}\n\nMost differential equations do not have a known exact solution, so in the next tutorial on numerical methods we will show how the solution can be estimated.\n\nA small aside: a good deal of progress in mathematics was due to mathematicians writing taunting letters to each other saying they had a trick that could solve something better than everyone else. So do not worry too much about the tricks.",
"_____no_output_____"
],
[
"#### Example Exact Solution of the Population Equation\nLet's consider the population differential equation with a birth rate $\\alpha=0.3$:\n\n\\begin{align*}\n\\frac{d}{dt}\\,p(t) = 0.3 p(t),\\\\\n\\text{with the initial condition}\\\\\np(0)=1.\\\\\n\\end{align*}\n\nIt has an exact solution \n\\begin{align*}\n\\\\\np(t)=e^{0.3 t}.\n\\end{align*}\n\n",
"_____no_output_____"
]
],
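[
The exact solution is great when it exists, but it is also worth seeing how the same equation could be stepped forward in time when it does not. Below is a minimal forward Euler sketch (an illustrative addition, not part of the original tutorial code; the next tutorial covers numerical methods properly):

```python
import numpy as np

# Forward Euler for dp/dt = alpha * p with p(0) = 1:
# step the population forward using p_new = p_old + dt * (alpha * p_old).
alpha = 0.3
t = np.linspace(0, 10, 10001)    # 10 years in small steps
dt = t[1] - t[0]

p = np.zeros_like(t)
p[0] = 1.0                       # initial population (millions)
for i in range(1, len(t)):
    p[i] = p[i - 1] + dt * alpha * p[i - 1]

# With a small enough step the Euler estimate tracks the exact solution
exact = np.exp(alpha * t[-1])
print(f"Euler: {p[-1]:.3f}, exact: {exact:.3f} million")
```

The relative error shrinks as `dt` shrinks, which is the basic idea behind the numerical methods in the next tutorial.
],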
[
[
"# @markdown Execute code to plot the exact solution\nt = np.arange(0, 10, 0.1) # Time from 0 to 10 years in 0.1 steps\n\nwith plt.xkcd():\n\n p = np.exp(0.3 * t)\n\n fig = plt.figure(figsize=(6, 4))\n plt.plot(t, p)\n plt.ylabel('Population (millions)')\n plt.xlabel('time (years)')\n plt.show()",
"_____no_output_____"
]
],
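[
A quick numerical sanity check (an added sketch, not from the tutorial itself) confirms that $p(t)=e^{0.3t}$ really satisfies $\frac{d}{dt}p(t)=0.3\,p(t)$: a finite-difference estimate of the slope matches $\alpha p(t)$ at every sampled time.

```python
import numpy as np

alpha = 0.3
h = 1e-5                          # small step for the finite difference

def p(t):
    """Exact solution of dp/dt = alpha * p with p(0) = 1."""
    return np.exp(alpha * t)

for t in [0.0, 1.0, 5.0, 10.0]:
    dp_numeric = (p(t + h) - p(t - h)) / (2 * h)   # central difference
    dp_ode = alpha * p(t)                          # right-hand side of the ODE
    assert abs(dp_numeric - dp_ode) < 1e-3
print("dp/dt matches alpha * p at all sampled times")
```
],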
[
[
"## Section 1.2: Parameters of the differential equation\n\n*Estimated timing to here from start of tutorial: 12 min*\n\nOne of the goals when designing a differential equation is to make it generalisable. Which means that the differential equation will give reasonable solutions for different countries with different birth rates $\\alpha$. \n",
"_____no_output_____"
],
[
"### Interactive Demo 1.2: Interactive Parameter Change\nPlay with the widget to see the relationship between $\\alpha$ and the population differential equation as a function of population (left-hand side), and the population solution as a function of time (right-hand side). Pay close attention to the transition point from positive to negative.\n\nHow do changing parameters of the population equation affect the outcome?\n\n1. What happens when $\\alpha < 0$?\n2. What happens when $\\alpha > 0$?\n3. What happens when $\\alpha = 0$?",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to enable the widget!\nmy_layout.width = '450px'\[email protected](\n alpha=widgets.FloatSlider(.3, min=-1., max=1., step=.1, layout=my_layout)\n)\ndef Pop_widget(alpha):\n plot_dPdt(alpha=alpha)\n plt.show()",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\n 1. Negative values of alpha result in an exponential decrease to 0 a stable solution.\n 2. Positive Values of alpha in an exponential increases to infinity.\n 3. Alpha equal to 0 is a unique point known as an equilibrium point when the\n dp/dt=0 and there is no change in population. This is known as a stable point.\n\"\"\"",
"_____no_output_____"
]
],
[
[
"The population differential equation is an over-simplification and has some very obvious limitations: \n1. Population growth is not exponential as there are limited number of resources so the population will level out at some point.\n2. It does not include any external factors on the populations like weather, predators and preys.\n\nThese kind of limitations can be addressed by extending the model.\n\n\nWhile it might not seem that the population equation has direct relevance to neuroscience, a similar equation is used to describe the accumulation of evidence for decision making. This is known as the Drift Diffusion Model and you will see in more detail in the Linear System day in Neuromatch (W2D2).\n\n\nAnother differential equation that is similar to the population equation is the Leaky Integrate and Fire model which you may have seen in the python pre-course materials on W0D1 and W0D2. It will turn up later in Neuromatch as well. Below we will delve in the motivation of the differential equation.",
"_____no_output_____"
],
[
"---\n# Section 2: The leaky integrate and fire model\n",
"_____no_output_____"
]
],
[
[
"# @title Video 3: The leaky integrate and fire model\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1rb4y1C79n\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"ZfWO6MLCa1s\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"This video covers the Leaky Integrate and Fire model (a linear differential equation which describes the membrane potential of a single neuron).\n\n<details>\n<summary> <font color='blue'>Click here for text recap of full LIF equation from video </font></summary>\n\nThe Leaky Integrate and Fire Model is a linear differential equation that describes the membrane potential ($V$) of a single neuron which was proposed by Louis Édouard Lapicque in 1907 [2].\n\nThe subthreshold membrane potential dynamics of a LIF neuron is described by\n\\begin{align}\n\\tau_m\\frac{dV}{dt} = -(V-E_L) + R_mI\\,\n\\end{align}\n\n\nwhere $\\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, $R_m$ is membrane resistance, and $I$ is the external input current. \n\n</details>\n\nIn the next few sections, we will break down the full LIF equation and then build it back up to get an intuitive feel of the different facets of the differential equation.\n",
"_____no_output_____"
],
[
"## Section 2.1: LIF without input\n\n*Estimated timing to here from start of tutorial: 18 min*\n\nAs seen in the video, we will first model an LIF neuron without input, which results in the equation:\n\n\\begin{align}\n\\frac{dV}{dt} &= \\frac{-(V-E_L)}{\\tau_m}.\\\\\n\\end{align}\n\nwhere $\\tau_m$ is the time constant, $V$ is the membrane potential, and $E_L$ is the resting potential.\n\n<details>\n<summary> <font color='blue'>Click here for further details (from video) </font></summary>\n\nRemoving the input gives the equation\n\n\\begin{align}\n\\tau_m\\frac{dV}{dt} &= -V+E_L,\\\\\n\\end{align}\n\nwhich can be written in words as:\n\n\\begin{align}\n\\begin{matrix}\\text{\"Time constant multiplied by the} \\\\ \\text{change in membrane potential\"}\\end{matrix}&=\\begin{matrix}\\text{\"Minus Current} \\\\ \\text{membrane potential\"} \\end{matrix}+\n\\begin{matrix}\\text{\"resting potential\"}\\end{matrix}.\\\\\n\\end{align}\n\n\nThe equation can be re-arranged to look even more like the population equation:\n\n\\begin{align}\n\\frac{dV}{dt} &= \\frac{-(V-E_L)}{\\tau_m}.\\\\\n\\end{align}\n</details>\n",
"_____no_output_____"
],
[
"### Think! 2.1: Effect on membrane potential $V$ on the LIF model\n\nThe plot the below shows the change in membrane potential $\\frac{dV}{dt}$ as a function of membrane potential $V$ with the parameters set as:\n* `E_L = -75`\n* `V_reset = -50`\n* `tau_m = 10.`\n\n1. What is the effect on $\\frac{dV}{dt}$ when $V>-75$ mV?\n2. What is the effect on $\\frac{dV}{dt}$ when $V<-75$ mV\n3. What is the effect on $\\frac{dV}{dt}$ when $V=-75$ mV?",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to plot the relationship between dV/dt and V\n# Parameter definition\nE_L = -75\ntau_m = 10\n\n# Range of Values of V\nV = np.arange(-90, 0, 1)\ndV = -(V - E_L) / tau_m\n\nwith plt.xkcd():\n\n fig = plt.figure(figsize=(6, 4))\n plt.plot(V, dV)\n plt.hlines(0, min(V), max(V), colors='black', linestyles='dashed')\n plt.vlines(-75, min(dV), max(dV), colors='black', linestyles='dashed')\n\n plt.text(-50, 1, 'Positive')\n plt.text(-50, -2, 'Negative')\n plt.text(E_L, max(dV) + 1, r'$E_L$')\n plt.xlabel(r'$V(t)$ (mV)')\n plt.ylabel(r'$\\frac{dV}{dt}=\\frac{-(V-E_L)}{\\tau_m}$')\n plt.ylim(-8, 2)\n plt.show()",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\n 1. For $V>-75$ mV, the derivative is negative.\n 2. For $V<-75$ mV, the derivative is positive.\n 3. For $V=-75$ mV, the derivative is equal to $0$ is and a stable point when nothing changes.\n\"\"\"",
"_____no_output_____"
]
],
[
[
"### Section 2.1.1: Exact Solution of the LIF model without input\nThe LIF model has the exact solution:\n\\begin{align*}\nV(t)=&\\ E_L+(V_{reset}-E_L)e^{\\frac{-t}{\\tau_m}}\\\\\n\\end{align*}\n\nwhere $\\tau_m$ is the time constant, $V$ is the membrane potential, $E_L$ is the resting potential, and $V_{reset}$ is the initial membrane potential. \n\n<details>\n<summary> <font color='blue'>Click here for further details (from video) </font></summary>\n\nSimilar to the population equation, we need an initial membrane potential at time $0$ to solve the LIF model. \n\nWith this equation \n\\begin{align}\n\\frac{dV}{dt} &= \\frac{-(V-E_L)}{\\tau_m}\\,\\\\\nV(0)&=V_{reset},\n\\end{align}\nwhere is $V_{reset}$ is called the reset potential.\n\nThe LIF model has the exact solution:\n\\begin{align*}\nV(t)=&\\ E_L+(V_{reset}-E_L)e^{\\frac{-t}{\\tau_m}}\\\\\n\\text{ which can be written as: }\\\\\n\\begin{matrix}\\text{\"Current membrane} \\\\ \\text{potential}\"\\end{matrix}=&\\text{\"Resting potential\"}+\\begin{matrix}\\text{\"Reset potential minus resting potential} \\\\ \\text{times exponential with rate one over time constant.\"}\\end{matrix}\\\\\n\\end{align*}\n\n</details>",
"_____no_output_____"
],
[
"#### Interactive Demo 2.1.1: Initial Condition $V_{reset}$\nThis exercise is to get an intuitive feel of how the different initial conditions $V_{reset}$ impacts the differential equation of the LIF and the exact solution for the equation:\n\n\\begin{align}\n\\frac{dV}{dt} &= \\frac{-(V-E_L)}{\\tau_m}\\,\\\\\n\\end{align}\nwith the parameters set as:\n* `E_L = -75,`\n* `tau_m = 10.`\n\nThe panel on the left-hand side plots the change in membrane potential $\\frac{dV}{dt}$ as a function of membrane potential $V$ and right-hand side panel plots the exact solution $V$ as a function of time $t,$ the green dot in both panels is the reset potential $V_{reset}$.\n\nPay close attention to when $V_{reset}=E_L=-75$mV.\n\n1. How does the solution look with initial values of $V_{reset} < -75$?\n2. How does the solution look with initial values of $V_{reset} > -75$?\n3. How does the solution look with initial values of $V_{reset} = -75$?\n\n",
"_____no_output_____"
]
],
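[
Before moving the slider, the behaviour described above can be checked numerically (an added sketch using the same parameters, not part of the original tutorial code): the exact no-input solution satisfies the differential equation, and it relaxes to $E_L$ whatever $V_{reset}$ we start from.

```python
import numpy as np

E_L, tau_m = -75.0, 10.0
h = 1e-5                         # small step for finite differences

def V(t, V_reset):
    """Exact no-input LIF solution: E_L + (V_reset - E_L) * exp(-t / tau_m)."""
    return E_L + (V_reset - E_L) * np.exp(-t / tau_m)

for V_reset in [-90.0, -75.0, -61.0]:
    # the finite-difference slope matches -(V - E_L) / tau_m at every time
    for t in [0.0, 5.0, 20.0]:
        lhs = (V(t + h, V_reset) - V(t - h, V_reset)) / (2 * h)
        rhs = -(V(t, V_reset) - E_L) / tau_m
        assert abs(lhs - rhs) < 1e-6
    # after many time constants, V has relaxed to the resting potential
    assert abs(V(10 * tau_m, V_reset) - E_L) < 0.01
print("all initial conditions decay to E_L =", E_L, "mV")
```
],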
[
[
"#@markdown Make sure you execute this cell to enable the widget!\nmy_layout.width = '450px'\[email protected](\n V_reset=widgets.FloatSlider(-77., min=-91., max=-61., step=2,\n layout=my_layout)\n)\n\ndef V_reset_widget(V_reset):\n plot_V_no_input(V_reset)",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\n 1. Initial Values of $V_{reset} < -75$ result in the solution increasing to\n -75mV because $\\frac{dV}{dt} > 0$.\n 2. Initial Values of $V_{reset} > -75$ result in the solution decreasing to\n -75mV because $\\frac{dV}{dt} < 0$.\n 3. Initial Values of $V_{reset} = -75$ result in a constant $V = -75$ mV\n because $\\frac{dV}{dt} = 0$ (Stable point).\n\"\"\"",
"_____no_output_____"
]
],
[
[
"## Section 2.2: LIF with input\n*Estimated timing to here from start of tutorial: 24 min*\n\nWe will re-introduce the input $I$ and membrane resistance $R_m$ giving the original equation:\n\n\\begin{align}\n\\tau_m\\frac{dV}{dt} = -(V-E_L) + \\color{blue}{R_mI}\\,\n\\end{align}\n\nThe input can be other neurons or sensory information.",
"_____no_output_____"
],
[
"### Interactive Demo 2.2: The Impact of Input\nThe interactive plot below manipulates $I$ in the differential equation.\n\n- With increasing input, how does the $\\frac{dV}{dt}$ change? How would this impact the solution? ",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to enable the widget!\nmy_layout.width = '450px'\[email protected](\n I=widgets.FloatSlider(3., min=0., max=20., step=2,\n layout=my_layout)\n)\n\ndef Pop_widget(I):\n plot_dVdt(I=I)\n plt.show()",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\ndV/dt becomes bigger and less of it is below 0. This means the solution will increase well beyond what is bioligically plausible\n\"\"\"",
"_____no_output_____"
]
],
[
[
"### Section 2.2.1: LIF exact solution\n\nThe LIF with a constant input has a known exact solution:\n\\begin{align*}\nV(t)=&\\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\\frac{-t}{\\tau_m}}\\\\\n\\text{which is written as:}\\\\\n\\begin{matrix}\\text{\"Current membrane} \\\\ \\text{potential\"}\\end{matrix}=&\\text{\"Resting potential\"}+\\begin{matrix}\\text{\"Reset potential minus resting potential} \\\\ \\text{times exponential with rate one over time constant.\" }\\end{matrix}\\\\\n\\end{align*}",
"_____no_output_____"
],
[
"The plot below shows the exact solution of the membrane potential with the parameters set as:\n* `V_reset = -75,`\n* `E_L = -75,`\n* `tau_m = 10,`\n* `R_m = 10,`\n* `I = 10.`\n\nAsk yourself, does the result make biological sense? If not, what would you change? We'll delve into this in the next section",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to see the exact solution\ndt = 0.5\nt_rest = 0\n\nt = np.arange(0, 1000, dt)\n\ntau_m = 10\nR_m = 10\nV_reset = E_L = -75\n\nI = 10\n\nV = E_L + R_m*I + (V_reset - E_L - R_m*I) * np.exp(-(t)/tau_m)\n\nwith plt.xkcd():\n\n fig = plt.figure(figsize=(6, 4))\n plt.plot(t,V)\n plt.ylabel('V (mV)')\n plt.xlabel('time (ms)')\n plt.show()",
"_____no_output_____"
]
],
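[
The plateau in the plot can be read straight off the exact solution: as $t \to \infty$ the exponential vanishes, leaving the steady state $V_\infty = E_L + R_mI$, which for these parameters is $-75 + 100 = 25$ mV, far above any real membrane potential. A quick check (an added sketch with the same parameters as the plot, not tutorial code):

```python
import numpy as np

V_reset = E_L = -75.0
tau_m, R_m, I = 10.0, 10.0, 10.0

def V(t):
    """Exact LIF solution with constant input (no spiking mechanism)."""
    return E_L + R_m * I + (V_reset - E_L - R_m * I) * np.exp(-t / tau_m)

V_inf = E_L + R_m * I            # steady state once the exponential dies away
assert abs(V(1000.0) - V_inf) < 1e-6
print("plateau:", V_inf, "mV")   # 25 mV: not a plausible membrane potential
```
],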
[
[
"## Section 2.3: Maths is one thing, but neuroscience matters\n\n*Estimated timing to here from start of tutorial: 30 min*",
"_____no_output_____"
]
],
[
[
"# @title Video 4: Adding firing to the LIF\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1gX4y1P7pZ\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"rLQk-vXRaX0\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"This video first recaps the introduction of input to the leaky integrate and fire model and then delves into how we add spiking behavior (or firing) to the model.\n\n<details>\n<summary> <font color='blue'>Click here for text recap of video </font></summary>\n\nWhile the mathematics of the exact solution is exact, it is not biologically valid as a neuron spikes and definitely does not plateau at a very positive value.\n\nTo model the firing of a spike, we must have a threshold voltage $V_{th}$ such that if the voltage $V(t)$ goes above it, the neuron spikes\n$$V(t)>V_{th}.$$\nWe must record the time of spike $t_{isi}$ and count the number of spikes\n$$t_{isi}=t, $$\n$$𝑆𝑝𝑖𝑘𝑒=𝑆𝑝𝑖𝑘𝑒+1.$$\nThen reset the membrane voltage $V(t)$\n$$V(t_{isi} )=V_{Reset}.$$\n\nTo take into account the spike the exact solution becomes:\n\\begin{align*}\nV(t)=&\\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\\frac{-(t-t_{isi})}{\\tau_m}},&\\qquad V(t)<V_{th} \\\\\nV(t)=&V_{reset},&\\qquad V(t)>V_{th}\\\\\nSpike=&Spike+1,&\\\\\nt_{isi}=&t,\\\\\n\\end{align*}\nwhile this does make the neuron spike, it introduces a discontinuity which is not as elegant mathematically as it could be, but it gets results so that is good.\n</detail>",
"_____no_output_____"
],
[
"### Interactive Demo 2.3.1: Input on spikes\nThis exercise show the relationship between firing rate and the Input for exact solution `V` of the LIF:\n$$\nV(t)=\\ E_L+R_mI+(V_{reset}-E_L-R_mI)e^{\\frac{-(t-t_{isi})}{\\tau_m}},\n$$\nwith the parameters set as:\n* `V_reset = -75,`\n* `E_L = -75,`\n* `tau_m = 10,`\n* `R_m = 10.`\n\n\nBelow is a figure with three panels; \n* the top panel is the input, $I,$\n* the middle panel is the membrane potential $V(t)$. To illustrate the spike, $V(t)$ is set to $0$ and then reset to $-75$ mV when there is a spike. \n* the bottom panel is the raster plot with each dot indicating a spike.\n\nFirst, as electrophysiologist normally listen to spikes when conducting experiments, listen to the music of the firing rate for a single value of $I$. (Note the audio doesn't work in some browsers so don't worry about it if you can't hear anything) ",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to be able to hear the neuron\nI = 3\nt = np.arange(0, 1000, dt)\nSpike, Spike_time, V = Exact_Integrate_and_Fire(I, t)\n\nplot_IF(t, V, I, Spike_time)\nipd.Audio(V, rate=len(V))",
"_____no_output_____"
]
],
[
[
"Manipulate the input into the LIF to see the impact of input on the firing pattern (rate).\n\n* What is the effect of $I$ on spiking?\n* Is this biologically valid?",
"_____no_output_____"
]
],
[
[
"# @markdown Make sure you execute this cell to enable the widget!\nmy_layout.width = '450px'\[email protected](\n I=widgets.FloatSlider(3, min=2.0, max=4., step=.1,\n layout=my_layout)\n)\n\ndef Pop_widget(I):\n Spike, Spike_time, V = Exact_Integrate_and_Fire(I, t)\n plot_IF(t, V, I, Spike_time)",
"_____no_output_____"
],
[
"# to_remove explanation\n\n\"\"\"\n 1. As $I$ increases, the number of spikes increases.\n 2. No, as there is a limit to the number of spikes due to a refractory period, which is not accounted for in this model.\n\"\"\"",
"_____no_output_____"
]
],
[
[
"## Section 2.4 Firing Rate as a function of Input\n\n*Estimated timing to here from start of tutorial: 38 min*\n\nThe firing frequency of a neuron plotted as a function of current is called an input-output curve (F–I curve). It is also known as a transfer function, which you came across in the previous tutorial. This function is one of the starting points for the rate model, which extends from modelling single neurons to the firing rate of a collection of neurons.\n\nBy fitting a function to this curve, we can start to generalise the firing pattern of many neurons, which can be used to build rate models. This will be discussed later in Neuromatch.",
"_____no_output_____"
]
],
[
[
"# @markdown *Execute this cell to visualize the FI curve*\nI_range = np.arange(2.0, 4.0, 0.1)\nSpike_rate = np.ones(len(I_range))\n\nfor i, I in enumerate(I_range):\n  Spike_rate[i], _, _ = Exact_Integrate_and_Fire(I, t)\n\nwith plt.xkcd():\n  fig = plt.figure(figsize=(6, 4))\n  plt.plot(I_range, Spike_rate)\n  plt.xlabel('Input Current (nA)')\n  plt.ylabel('Spikes per Second (Hz)')\n  plt.show()",
"_____no_output_____"
]
],
[
[
"The LIF model is a very nice differential equation to start with in computational neuroscience, as it has been used as a building block in many papers that simulate neuronal responses.\n\n__Strengths of the LIF model:__\n+ Has an exact solution;\n+ Easy to interpret;\n+ Great for building networks of neurons.\n\n__Weaknesses of the LIF model:__\n- Spiking is a discontinuity;\n- Abstraction from biology;\n- Cannot generate different spiking patterns.\n\n",
"_____no_output_____"
],
[
"---\n# Summary\n\n*Estimated timing of tutorial: 45 min*",
"_____no_output_____"
]
],
[
[
"# @title Video 5: Summary\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1jV411x7t9\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"VzwLAW5p4ao\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"In this tutorial, we have seen two differential equations, the population differential equations and the leaky integrate and fire model.\n\n\nWe learned about:\n* The motivation for differential equations.\n* An intuitive relationship between the solution and the form of the differential equation.\n* How different parameters of the differential equation impact the solution.\n* The strengths and limitations of the simple differential equations.\n",
"_____no_output_____"
],
[
"---\n# Links to Neuromatch Days\n\nDifferential equations turn up in a number of different Neuromatch days:\n* The LIF model is discussed in more detail in Model Types (Week 1 Day 1) and Real Neurons (Week 2 Day 3).\n* The Drift Diffusion model, a differential equation for decision making, is discussed in Linear Systems (Week 2 Day 2).\n* Systems of differential equations are discussed in Linear Systems (Week 2 Day 2) and Dynamic Networks (Week 2 Day 4).\n\n\n---\n# References\n1. Lotka, A. L. (1920). Analytical note on certain rhythmic relations in organic systems. Proceedings of the National Academy of Sciences, 6(7):410–415.\n\n2. Brunel, N., & van Rossum, M. C. (2007). Lapicque's 1907 paper: from frogs to integrate-and-fire. Biol Cybern. 2007 Dec;97(5-6):337-9. doi: 10.1007/s00422-007-0190-0. Epub 2007 Oct 30. PMID: 17968583.\n\n\n\n# Bibliography\n1. Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: computational and mathematical modeling of neural systems. Computational Neuroscience Series.\n2. Strogatz, S. (2014). Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering (Studies in Nonlinearity). Westview Press, 2nd edition.\n\n## Supplemental Popular Reading List\n1. Lindsay, G. (2021). Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain. Bloomsbury Publishing.\n2. Strogatz, S. (2004). Sync: The emerging science of spontaneous order. Penguin UK.\n\n## Popular Podcast\n1. Strogatz, S. (Host). (2020–). The Joy of X. https://www.quantamagazine.org/tag/the-joy-of-x/ Quanta Magazine\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d08b141afc40f51f574357cdc3b82b2816df86d5 | 15,523 | ipynb | Jupyter Notebook | kmeans_clustering_demo.ipynb | lyb770/labs_ml_clustering | dd48279ed504910eef8a5acb8a61912bf0592ff9 | [
"MIT"
] | null | null | null | kmeans_clustering_demo.ipynb | lyb770/labs_ml_clustering | dd48279ed504910eef8a5acb8a61912bf0592ff9 | [
"MIT"
] | null | null | null | kmeans_clustering_demo.ipynb | lyb770/labs_ml_clustering | dd48279ed504910eef8a5acb8a61912bf0592ff9 | [
"MIT"
] | null | null | null | 29.511407 | 357 | 0.516137 | [
[
[
"# K-means clustering demo",
"_____no_output_____"
],
[
"## 1. Different distance metrics",
"_____no_output_____"
]
],
[
[
"from math import sqrt\n\ndef manhattan(v1,v2):\n res=0\n dimensions=min(len(v1),len(v2))\n\n for i in range(dimensions):\n res+=abs(v1[i]-v2[i])\n\n return res\n\n\ndef euclidean(v1,v2):\n res=0\n dimensions=min(len(v1),len(v2))\n for i in range(dimensions):\n res+=pow(abs(v1[i]-v2[i]),2)\n\n return sqrt(float(res))\n\n\ndef cosine(v1,v2):\n dotproduct=0\n dimensions=min(len(v1),len(v2))\n\n for i in range(dimensions):\n dotproduct+=v1[i]*v2[i]\n\n v1len=0\n v2len=0\n for i in range (dimensions):\n v1len+=v1[i]*v1[i]\n v2len+=v2[i]*v2[i]\n\n v1len=sqrt(v1len)\n v2len=sqrt(v2len)\n \n # we need distance here - \n # we convert cosine similarity into distance\n return 1.0-(float(dotproduct)/(v1len*v2len))\n \n\ndef pearson(v1,v2):\n # Simple sums\n sum1=sum(v1)\n sum2=sum(v2)\n \n # Sums of the squares\n sum1Sq=sum([pow(v,2) for v in v1])\n sum2Sq=sum([pow(v,2) for v in v2])\n \n # Sum of the products\n pSum=sum([v1[i]*v2[i] for i in range(min(len(v1),len(v2)))])\n \n # Calculate r (Pearson score)\n numerator=pSum-(sum1*sum2/len(v1))\n denominator=sqrt((sum1Sq-pow(sum1,2)/len(v1))*(sum2Sq-pow(sum2,2)/len(v1)))\n if denominator==0: return 1.0\n \n # we need distance here - \n # we convert pearson correlation into distance\n return 1.0-numerator/denominator\n\n\ndef tanimoto(v1,v2):\n c1,c2,shared=0,0,0\n\n for i in range(len(v1)):\n if v1[i]!=0 or v2[i]!= 0:\n if v1[i]!=0: c1+=1 # in v1\n if v2[i]!=0: c2+=1 # in v2\n if v1[i]!=0 and v2[i]!=0: shared+=1 # in both\n \n # we need distance here - \n # we convert tanimoto similarity into distance\n return 1.0-(float(shared)/(c1+c2-shared))",
"_____no_output_____"
]
],
[
[
"## 2. K-means clustering algorithm",
"_____no_output_____"
]
],
[
[
"import random\n\n# k-means clustering\ndef kcluster(rows,distance=euclidean,k=4):\n # Determine the minimum and maximum values for each point\n ranges=[(min([row[i] for row in rows]),max([row[i] for row in rows]))\n for i in range(len(rows[0]))]\n\n # Create k randomly placed centroids\n clusters=[[random.random()*(ranges[i][1]-ranges[i][0])+ranges[i][0]\n for i in range(len(rows[0]))] for j in range(k)]\n \n lastmatches=None\n bestmatches = None\n\n for t in range(100):\n print ('Iteration %d' % t)\n bestmatches=[[] for i in range(k)]\n \n # Find which centroid is the closest for each row\n for j in range(len(rows)):\n row=rows[j]\n bestmatch=0\n for i in range(k):\n d=distance(clusters[i],row)\n if d<distance(clusters[bestmatch],row): bestmatch=i\n bestmatches[bestmatch].append(j)\n\n # If the results are the same as last time, this is complete\n if bestmatches==lastmatches: break\n lastmatches=bestmatches\n \n # Move the centroids to the average of the cluster members\n for i in range(k):\n avgs=[0.0]*len(rows[0])\n if len(bestmatches[i])>0:\n for rowid in bestmatches[i]:\n for m in range(len(rows[rowid])):\n avgs[m]+=rows[rowid][m]\n for j in range(len(avgs)):\n avgs[j]/=len(bestmatches[i])\n clusters[i]=avgs\n \n return bestmatches",
"_____no_output_____"
]
],
[
[
"## 3. Toy demo: clustering papers by title\n### 3.1. Data preparation\nThe input is a list of Computer Science paper titles from file [titles.txt](titles.txt).",
"_____no_output_____"
]
],
[
[
"file_name = \"titles.txt\"\nf = open(file_name, \"r\", encoding=\"utf-8\")\ni = 0\nfor line in f:\n print(\"document\", i, \": \", line.strip())\n i += 1",
"_____no_output_____"
]
],
[
[
"To compare documents written in natural language, we need to decide which attributes of a document are important. The simplest possible model is called a **bag of words**: that is, we consider each word in a document as a separate and independent dimension.\n\nFirst, we collect all the different words occurring across the whole document collection (called a corpus in NLP). These will become our dimensions.\nWe create a vector as big as the entire vocabulary of the corpus.\nNext, we represent each document as a numeric vector: the number of occurrences of a given word becomes the value in the corresponding vector dimension.\n\nHere are the functions for converting documents into bags of words:",
"_____no_output_____"
]
],
[
[
"import re\n\n# Returns dictionary of word counts for a text\ndef get_word_counts(text, all_words):\n wc={}\n words = get_words(text)\n # Loop over all the entries\n\n for word in words:\n if (word not in stopwords) and (word in all_words):\n wc[word] = wc.get(word,0)+1\n\n return wc\n\n# splits text into words\ndef get_words(txt):\n # Split words by all non-alpha characters\n words=re.compile(r'[^A-Z^a-z]+').split(txt)\n\n # Convert to lowercase\n return [word.lower() for word in words if word!='']\n\n\n# converts counts into a vector\ndef get_word_vector(word_list, wc):\n v = [0]*len(word_list)\n for i in range(len(word_list)):\n if word_list[i] in wc:\n v[i] = wc[word_list[i]]\n return v\n\n\n# prints matrix\ndef print_word_matrix(docs):\n for d in docs:\n print (d[0], d[1])",
"_____no_output_____"
]
],
[
[
"Some words of the document should be ignored. These are words that are used very commonly in all documents, no matter the topic: \"the\", \"it\", \"and\", etc. Such words are called **stop words**. Which words to consider as stop words is application-dependent. One possible stop word collection is given in the file [stop_words.txt](stop_words.txt).",
"_____no_output_____"
]
],
[
[
"stop_words_file = \"stop_words.txt\"\nf = open(stop_words_file, \"r\", encoding=\"utf-8\")\n\nstopwords = []\nfor line in f:\n stopwords.append(line.strip())\n \nf.close()\n\nprint(stopwords[:20])",
"_____no_output_____"
]
],
[
[
"We collect all unique words and for each document we will count how many times each word is present.",
"_____no_output_____"
]
],
[
[
"file_name = \"titles.txt\"\nf = open(file_name, \"r\", encoding=\"utf-8\")\n\ndocuments = []\ndoc_id = 1\nall_words = {}\n\n# transfer content of a file into a list of lines\nlines = [line for line in f]\n\n# create a dictionary of all words and their total counts\nfor line in lines:\n doc_words = get_words(line)\n for w in doc_words :\n if w not in stopwords:\n all_words[w] = all_words.get(w,0)+1\n\nunique_words = set()\nfor w, count in all_words.items():\n if all_words[w] > 1 :\n unique_words.add(w)\n\n# create a matrix of word presence in each document\nfor line in lines:\n documents.append([\"d\"+str(doc_id), get_word_counts(line,unique_words)])\n doc_id += 1\n\nunique_words=list(unique_words)\nprint(\"All unique words:\",unique_words)\nprint(documents)",
"_____no_output_____"
]
],
[
[
"Now we want to convert each document into a numeric vector:",
"_____no_output_____"
]
],
[
[
"out = open(file_name.split('.')[0] + \"_vectors.txt\", \"w\")\n\n# write a header which contains the words themselves\nfor w in unique_words:\n out.write('\\t' + w)\nout.write('\\n')\n\n# print_word_matrix to file\nfor i in range(len(documents)):\n vector = get_word_vector(unique_words, documents[i][1])\n out.write(documents[i][0])\n for x in vector:\n out.write('\\t' + str(x))\n out.write('\\n')\nout.close()",
"_____no_output_____"
]
],
[
[
"Our data now looks like this matrix:",
"_____no_output_____"
]
],
[
[
"doc_vectors_file = \"titles_vectors.txt\"\nf = open(doc_vectors_file, \"r\", encoding=\"utf-8\")\ns = f.read()\nprint(s)",
"_____no_output_____"
],
[
"# This function will read document vectors file and produce 2D data matrix, \n# plus the names of the rows and the names of the columns.\ndef read_vector_file(file_name):\n f = open(file_name)\n lines=[line for line in f]\n \n # First line is the column headers\n colnames=lines[0].strip().split('\\t')[:]\n # print(colnames)\n rownames=[]\n data=[]\n for line in lines[1:]:\n p=line.strip().split('\\t')\n # First column in each row is the rowname\n if len(p)>1:\n rownames.append(p[0])\n # The data for this row is the remainder of the row\n data.append([float(x) for x in p[1:]])\n return rownames,colnames,data\n\n\n# This function will transpose the data matrix\ndef rotatematrix(data):\n newdata=[]\n for i in range(len(data[0])):\n newrow=[data[j][i] for j in range(len(data))]\n newdata.append(newrow)\n return newdata",
"_____no_output_____"
]
],
[
[
"As a result of all this, we have a matrix whose rows are document vectors.\nEach vector dimension represents a unique word in the collection.\nThe value in each dimension is the count of that word in a particular document.",
"_____no_output_____"
],
[
"### 3.2. Clustering documents \n\nPerforming k-means clustering.",
"_____no_output_____"
]
],
[
[
"doc_vectors_file = \"titles_vectors.txt\"\ndocs,words,data=read_vector_file(doc_vectors_file)\n\nnum_clusters=2\nprint('Searching for {} clusters:'.format(num_clusters))",
"_____no_output_____"
],
[
"clust=kcluster(data,distance=pearson,k=num_clusters)\nprint()\n\nprint ('Document clusters')\nprint ('=================')\nfor i in range(num_clusters):\n print ('cluster {}:'.format(i+1))\n print ([docs[r] for r in clust[i]])\n print()",
"_____no_output_____"
]
],
[
[
"Does this grouping make sense?",
"_____no_output_____"
]
],
[
[
"for d in documents:\n print(d)",
"_____no_output_____"
]
],
[
[
"### 3.3. Clustering words by their occurrence in documents\nWe may consider words to be similar if they occur in the same documents. We say that such words are connected: they belong to the same topic and occur in similar contexts.\nIf we want to cluster words by their occurrences in the documents, all we need to do is transpose the document matrix.",
"_____no_output_____"
]
],
[
[
"rdata=rotatematrix(data)\nnum_clusters = 3\nprint ('Grouping words into {} clusters:'.format(num_clusters))",
"_____no_output_____"
],
[
"clust=kcluster(rdata,distance=cosine,k=num_clusters)\nprint()\nprint ('word clusters:')\nprint(\"=============\")\nfor i in range(num_clusters):\n print(\"cluster {}\".format(i+1))\n print ([words[r] for r in clust[i]])\n print()",
"_____no_output_____"
]
],
[
[
"Copyright © 2022 Marina Barsky. All rights reserved.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d08b1c0bdf049fdfae9411d8367f122f41434f1c | 35,188 | ipynb | Jupyter Notebook | old-exp/adapt/adapt16/convert-evc0.1.1-EJM04.ipynb | Akio-m/evgmm | b7f89f4da34a1d4ca4b233dd2fab4d146512e8f7 | [
"MIT"
] | 2 | 2018-10-29T06:50:24.000Z | 2019-09-29T01:44:11.000Z | old-exp/adapt/adapt16/convert-evc0.1.1-EJM04.ipynb | Akio-m/evgmm | b7f89f4da34a1d4ca4b233dd2fab4d146512e8f7 | [
"MIT"
] | 2 | 2018-01-29T13:33:05.000Z | 2018-02-03T15:06:30.000Z | old-exp/adapt/adapt16/convert-evc0.1.1-EJM04.ipynb | Akio-m/evgmm | b7f89f4da34a1d4ca4b233dd2fab4d146512e8f7 | [
"MIT"
] | null | null | null | 44.654822 | 233 | 0.582102 | [
[
[
"# -*- coding: utf-8 -*-\n\n\"\"\"\nConverting by EVC.\nCheck detail : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf\n\"\"\"",
"_____no_output_____"
],
[
"from __future__ import division, print_function\n\nimport os\nfrom shutil import rmtree\nimport argparse\nimport glob\nimport pickle\nimport time\n\nimport numpy as np\nfrom numpy.linalg import norm \nfrom sklearn.decomposition import PCA\nfrom sklearn.mixture import GMM # not available from sklearn 0.20.0 onward\nfrom sklearn.preprocessing import StandardScaler\nimport scipy.signal\nimport scipy.sparse\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython \nfrom IPython.display import Audio \n\nimport soundfile as sf\nimport wave \nimport pyworld as pw\nimport librosa.display\n\nfrom dtw import dtw\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"\"\"\"\nParameters\n\n__Mixtured : number of GMM mixture components\n__versions : experiment set\n__convert_source : path to the source speaker's utterances\n__convert_target : path to the target speaker's utterances\n\"\"\"\n# parameters \n__Mixtured = 40\n__versions = 'pre-stored0.1.1'\n__convert_source = 'input/EJM10/V01/T01/TIMIT/000/*.wav' \n__convert_target = 'adaptation/EJM04/V01/T01/ATR503/A/*.wav'\n__measure_target = 'adaptation/EJM04/V01/T01/TIMIT/000/*.wav'\n\n# settings\n__same_path = './utterance/' + __versions + '/'\n__output_path = __same_path + 'output/EJM04/' # EJF01, EJF07, EJM04, EJM05\n\nMixtured = __Mixtured\npre_stored_pickle = __same_path + __versions + '.pickle'\npre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav'\npre_stored_list = __same_path + \"pre/**/V01/T01/**/*.wav\"\n#pre_stored_target_list = \"\" (not yet)\npre_stored_gmm_init_pickle = __same_path + __versions + '_init-gmm.pickle'\npre_stored_sv_npy = __same_path + __versions + '_sv.npy'\n\nsave_for_evgmm_covarXX = __output_path + __versions + '_covarXX.npy'\nsave_for_evgmm_covarYX = __output_path + __versions + '_covarYX.npy'\nsave_for_evgmm_fitted_source = __output_path + __versions + '_fitted_source.npy'\nsave_for_evgmm_fitted_target = __output_path + __versions + '_fitted_target.npy'\nsave_for_evgmm_weights = __output_path + __versions + '_weights.npy'\nsave_for_evgmm_source_means = __output_path + __versions + '_source_means.npy'\n\nfor_convert_source = __same_path + __convert_source\nfor_convert_target = __same_path + __convert_target\nfor_measure_target = __same_path + __measure_target\nconverted_voice_npy = __output_path + 'sp_converted_' + __versions \nconverted_voice_wav = __output_path + 'sp_converted_' + __versions \nmfcc_save_fig_png = __output_path + 'mfcc3dim_' + __versions \nf0_save_fig_png = __output_path + 'f0_converted' + __versions\nconverted_voice_with_f0_wav = __output_path + 'sp_f0_converted' + __versions\nmcd_text = __output_path + __versions + '_MCD.txt'",
"_____no_output_____"
],
[
"EPSILON = 1e-8\n\nclass MFCC:\n    \"\"\"\n    MFCC() : a class that computes mel-frequency cepstral coefficients (MFCC) from a spectrum and converts MFCC back into a spectrum.\n    Dynamic features (delta) are only partially implemented.\n    ref : http://aidiary.hatenablog.com/entry/20120225/1330179868\n    \"\"\"\n    \n    \n    def __init__(self, frequency, nfft=1026, dimension=24, channels=24):\n        \"\"\"\n        Set the parameters.\n        nfft : number of FFT sample points\n        frequency : sampling frequency\n        dimension : number of MFCC dimensions\n        channels : number of mel filter bank channels (depends on dimension)\n        fscale : frequency scale axis\n        filterbank, fcenters : filter bank matrix and filter bank center frequencies\n        \"\"\"\n        self.nfft = nfft\n        self.frequency = frequency\n        self.dimension = dimension\n        self.channels = channels\n        self.fscale = np.fft.fftfreq(self.nfft, d = 1.0 / self.frequency)[: int(self.nfft / 2)]\n        self.filterbank, self.fcenters = self.melFilterBank()\n    \n    def hz2mel(self, f):\n        \"\"\"\n        Convert frequency to mel frequency.\n        \"\"\"\n        return 1127.01048 * np.log(f / 700.0 + 1.0)\n    \n    def mel2hz(self, m):\n        \"\"\"\n        Convert mel frequency to frequency.\n        \"\"\" \n        return 700.0 * (np.exp(m / 1127.01048) - 1.0)\n\n    def melFilterBank(self):\n        \"\"\"\n        Generate a mel filter bank.\n        \"\"\" \n        fmax = self.frequency / 2\n        melmax = self.hz2mel(fmax)\n        nmax = int(self.nfft / 2)\n        df = self.frequency / self.nfft\n        dmel = melmax / (self.channels + 1)\n        melcenters = np.arange(1, self.channels + 1) * dmel\n        fcenters = self.mel2hz(melcenters)\n        indexcenter = np.round(fcenters / df)\n        indexstart = np.hstack(([0], indexcenter[0:self.channels - 1]))\n        indexstop = np.hstack((indexcenter[1:self.channels], [nmax]))\n\n        filterbank = np.zeros((self.channels, nmax))\n        for c in np.arange(0, self.channels):\n            increment = 1.0 / (indexcenter[c] - indexstart[c])\n            # np.int_ casts np.arange's float output [0. 1. 2. ...] to int\n            for i in np.int_(np.arange(indexstart[c], indexcenter[c])):\n                filterbank[c, i] = (i - indexstart[c]) * increment\n            decrement = 1.0 / (indexstop[c] - indexcenter[c])\n            # np.int_ casts np.arange's float output [0. 1. 2. ...] to int\n            for i in np.int_(np.arange(indexcenter[c], indexstop[c])):\n                filterbank[c, i] = 1.0 - ((i - indexcenter[c]) * decrement)\n\n        return filterbank, fcenters\n    \n    def mfcc(self, spectrum):\n        \"\"\"\n        Compute MFCC from a spectrum.\n        \"\"\"\n        mspec = []\n        mspec = np.log10(np.dot(spectrum, self.filterbank.T))\n        mspec = np.array(mspec)\n        \n        return scipy.fftpack.realtransforms.dct(mspec, type=2, norm=\"ortho\", axis=-1)\n    \n    def delta(self, mfcc):\n        \"\"\"\n        Compute dynamic features from MFCC.\n        Currently, feature frame t is taken as the average slope between frames t-1 and t+1.\n        \"\"\"\n        mfcc = np.concatenate([\n            [mfcc[0]], \n            mfcc, \n            [mfcc[-1]]\n        ]) # prepend a copy of the first frame and append a copy of the last frame\n        delta = None\n        for i in range(1, mfcc.shape[0] - 1):\n            slope = (mfcc[i+1] - mfcc[i-1]) / 2\n            if delta is None:\n                delta = slope\n            else:\n                delta = np.vstack([delta, slope])\n        return delta\n    \n    def imfcc(self, mfcc, spectrogram):\n        \"\"\"\n        Reconstruct a spectrum from MFCC.\n        \"\"\"\n        im_sp = np.array([])\n        for i in range(mfcc.shape[0]):\n            mfcc_s = np.hstack([mfcc[i], [0] * (self.channels - self.dimension)])\n            mspectrum = scipy.fftpack.idct(mfcc_s, norm='ortho')\n            # splrep builds a spline interpolation function\n            tck = scipy.interpolate.splrep(self.fcenters, np.power(10, mspectrum))\n            # splev evaluates the interpolant at the given coordinates\n            im_spectrogram = scipy.interpolate.splev(self.fscale, tck)\n            im_sp = np.concatenate((im_sp, im_spectrogram), axis=0)\n        \n        return im_sp.reshape(spectrogram.shape)\n    \n    def trim_zeros_frames(x, eps=1e-7):\n        \"\"\"\n        Remove silent (all-zero) frames.\n        \"\"\"\n        T, D = x.shape\n        s = np.sum(np.abs(x), axis=1)\n        s[s < 1e-7] = 0.\n        return x[s > eps]",
"_____no_output_____"
],
[
"def analyse_by_world_with_harverst(x, fs):\n    \"\"\"\n    Compute the fundamental frequency F0, spectral envelope, and aperiodicity with the WORLD vocoder.\n    The fundamental frequency F0 is estimated more accurately using the Harvest method.\n    \"\"\"\n    # 4 Harvest with F0 refinement (using Stonemask)\n    frame_period = 5\n    _f0_h, t_h = pw.harvest(x, fs, frame_period=frame_period)\n    f0_h = pw.stonemask(x, _f0_h, t_h, fs)\n    sp_h = pw.cheaptrick(x, f0_h, t_h, fs)\n    ap_h = pw.d4c(x, f0_h, t_h, fs)\n    \n    return f0_h, sp_h, ap_h\n\ndef wavread(file):\n    \"\"\"\n    Extract the audio track and sampling frequency from a wav file.\n    \"\"\"\n    wf = wave.open(file, \"r\")\n    fs = wf.getframerate()\n    x = wf.readframes(wf.getnframes())\n    x = np.frombuffer(x, dtype= \"int16\") / 32768.0\n    wf.close()\n    return x, float(fs)\n\ndef preEmphasis(signal, p=0.97):\n    \"\"\"\n    High-frequency emphasis filter for MFCC extraction.\n    Passing a waveform through it emphasizes the high-frequency components.\n    \"\"\"\n    return scipy.signal.lfilter([1.0, -p], 1, signal)\n\ndef alignment(source, target, path):\n    \"\"\"\n    Perform time alignment.\n    Adjusts the target speech to match the length of the source speech.\n    \"\"\"\n    # here, match to 814 frames (i.e., match to the target)\n    # p_p = 0 if source.shape[0] > target.shape[0] else 1\n\n    #shapes = source.shape if source.shape[0] > target.shape[0] else target.shape \n    shapes = source.shape\n    align = np.array([])\n    for (i, p) in enumerate(path[0]):\n        if i != 0:\n            if j != p:\n                temp = np.array(target[path[1][i]])\n                align = np.concatenate((align, temp), axis=0)\n        else:\n            temp = np.array(target[path[1][i]])\n            align = np.concatenate((align, temp), axis=0) \n        \n        j = p\n    \n    return align.reshape(shapes)",
"_____no_output_____"
],
[
"covarXX = np.load(save_for_evgmm_covarXX)\ncovarYX = np.load(save_for_evgmm_covarYX)\nfitted_source = np.load(save_for_evgmm_fitted_source)\nfitted_target = np.load(save_for_evgmm_fitted_target)\nweights = np.load(save_for_evgmm_weights)\nsource_means = np.load(save_for_evgmm_source_means)",
"_____no_output_____"
],
[
"\"\"\"\nLoad the source voice to be converted and the target voice used for voice conversion.\n\"\"\"\n\ntimer_start = time.time()\nsource_mfcc_for_convert = []\nsource_sp_for_convert = []\nsource_f0_for_convert = []\nsource_ap_for_convert = []\nfs_source = None\nfor name in sorted(glob.iglob(for_convert_source, recursive=True)):\n    print(\"source = \", name)\n    x_source, fs_source = sf.read(name)\n    f0_source, sp_source, ap_source = analyse_by_world_with_harverst(x_source, fs_source)\n    mfcc_source = MFCC(fs_source)\n    #mfcc_s_tmp = mfcc_s.mfcc(sp)\n    #source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)])\n    source_mfcc_for_convert.append(mfcc_source.mfcc(sp_source))\n    source_sp_for_convert.append(sp_source)\n    source_f0_for_convert.append(f0_source)\n    source_ap_for_convert.append(ap_source)\n\ntarget_mfcc_for_fit = []\ntarget_f0_for_fit = []\ntarget_ap_for_fit = []\nfor name in sorted(glob.iglob(for_convert_target, recursive=True)):\n    print(\"target = \", name)\n    x_target, fs_target = sf.read(name)\n    f0_target, sp_target, ap_target = analyse_by_world_with_harverst(x_target, fs_target)\n    mfcc_target = MFCC(fs_target)\n    #mfcc_target_tmp = mfcc_target.mfcc(sp_target)\n    #target_mfcc_for_fit = np.hstack([mfcc_t_tmp, mfcc_t.delta(mfcc_t_tmp)])\n    target_mfcc_for_fit.append(mfcc_target.mfcc(sp_target))\n    target_f0_for_fit.append(f0_target)\n    target_ap_for_fit.append(ap_target)\n\n# convert everything to numpy arrays\nsource_data_mfcc = np.array(source_mfcc_for_convert)\nsource_data_sp = np.array(source_sp_for_convert)\nsource_data_f0 = np.array(source_f0_for_convert)\nsource_data_ap = np.array(source_ap_for_convert)\n\ntarget_mfcc = np.array(target_mfcc_for_fit)\ntarget_f0 = np.array(target_f0_for_fit)\ntarget_ap = np.array(target_ap_for_fit)\n\nprint(\"Load Input and Target Voice time = \", time.time() - timer_start , \"[sec]\") ",
"source = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A11.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A14.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A17.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A18.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A19.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A20.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A21.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A22.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A23.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A24.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A25.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A26.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A27.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A28.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A29.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A30.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A31.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A32.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A33.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A34.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A35.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A36.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A37.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A38.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A39.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A40.wav\nsource = 
./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A41.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A42.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A43.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A44.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A45.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A46.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A47.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A48.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A49.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A50.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A51.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A52.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A53.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A54.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A55.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A56.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A57.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A58.wav\nsource = ./utterance/pre-stored0.1.1/input/EJM10/V01/T01/TIMIT/000/A59.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A01.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A02.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A03.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A05.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A06.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A07.wav\ntarget = 
./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A08.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A09.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A10.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A11.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A13.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A14.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A15.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A16.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A17.wav\ntarget = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/ATR503/A/A18.wav\nLoad Input and Target Voice time = 161.93552207946777 [sec]\n"
],
[
"def convert(source, covarXX, fitted_source, fitted_target, covarYX, weights, source_means):\n \"\"\"\n 声質変換を行う.\n \"\"\"\n Mixtured = 40\n \n D = source.shape[0]\n E = np.zeros((Mixtured, D))\n\n for m in range(Mixtured):\n xx = np.linalg.solve(covarXX[m], source - fitted_source[m])\n E[m] = fitted_target[m] + np.dot(covarYX[m], xx)\n\n px = GMM(n_components = Mixtured, covariance_type = 'full')\n px.weights_ = weights\n px.means_ = source_means\n px.covars_ = covarXX\n\n posterior = px.predict_proba(np.atleast_2d(source))\n return np.dot(posterior, E)",
"_____no_output_____"
],
[
"def calc_std_mean(input_f0):\n \"\"\"\n F0変換のために標準偏差と平均を求める.\n \"\"\"\n tempF0 = input_f0[ np.where(input_f0 > 0)]\n fixed_logF0 = np.log(tempF0)\n #logF0 = np.ma.log(input_f0) # 0要素にlogをするとinfになるのでmaskする\n #fixed_logF0 = np.ma.fix_invalid(logF0).data # maskを取る\n return np.std(fixed_logF0), np.mean(fixed_logF0) # 標準偏差と平均を返す",
"_____no_output_____"
],
[
"\"\"\"\n距離を測るために,正しい目標音声を読み込む\n\"\"\"\nsource_mfcc_for_measure_target = []\nsource_sp_for_measure_target = []\nsource_f0_for_measure_target = []\nsource_ap_for_measure_target = []\nfor name in sorted(glob.iglob(for_measure_target, recursive=True)):\n print(\"measure_target = \", name)\n x_measure_target, fs_measure_target = sf.read(name)\n f0_measure_target, sp_measure_target, ap_measure_target = analyse_by_world_with_harverst(x_measure_target, fs_measure_target)\n mfcc_measure_target = MFCC(fs_measure_target)\n #mfcc_s_tmp = mfcc_s.mfcc(sp)\n #source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)])\n source_mfcc_for_measure_target.append(mfcc_measure_target.mfcc(sp_measure_target))\n source_sp_for_measure_target.append(sp_measure_target)\n source_f0_for_measure_target.append(f0_measure_target)\n source_ap_for_measure_target.append(ap_measure_target)\n \nmeasure_target_data_mfcc = np.array(source_mfcc_for_measure_target)\nmeasure_target_data_sp = np.array(source_sp_for_measure_target)\nmeasure_target_data_f0 = np.array(source_f0_for_measure_target)\nmeasure_target_data_ap = np.array(source_ap_for_measure_target)",
"measure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A11.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A14.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A17.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A18.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A19.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A20.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A21.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A22.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A23.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A24.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A25.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A26.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A27.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A28.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A29.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A30.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A31.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A32.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A33.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A34.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A35.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A36.wav\nmeasure_target = 
./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A37.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A38.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A39.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A40.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A41.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A42.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A43.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A44.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A45.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A46.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A47.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A48.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A49.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A50.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A51.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A52.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A53.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A54.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A55.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A56.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A57.wav\nmeasure_target = ./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A58.wav\nmeasure_target = 
./utterance/pre-stored0.1.1/adaptation/EJM04/V01/T01/TIMIT/000/A59.wav\n"
],
[
"def calc_mcd(source, convert, target):\n \"\"\"\n 変換する前の音声と目標音声でDTWを行う.\n その後,変換後の音声と目標音声とのMCDを計測する.\n \"\"\"\n dist, cost, acc, path = dtw(source, target, dist=lambda x, y: norm(x-y, ord=1))\n aligned = alignment(source, target, path)\n \n return 10.0 / np.log(10) * np.sqrt(2 * np.sum(np.square(aligned - convert))), aligned\n\n",
"_____no_output_____"
],
[
"\"\"\"\n変換を行う.\n\"\"\"\n\ntimer_start = time.time()\n\n# 事前に目標話者の標準偏差と平均を求めておく\ntemp_f = None\nfor x in range(len(target_f0)):\n temp = target_f0[x].flatten()\n if temp_f is None:\n temp_f = temp\n else:\n temp_f = np.hstack((temp_f, temp)) \ntarget_std, target_mean = calc_std_mean(temp_f)\n\n# 変換\noutput_mfcc = []\nfiler = open(mcd_text, 'a')\nfor i in range(len(source_data_mfcc)): \n print(\"voice no = \", i)\n # convert\n source_temp = source_data_mfcc[i]\n output_mfcc = np.array([convert(source_temp[frame], covarXX, fitted_source, fitted_target, covarYX, weights, source_means)[0] for frame in range(source_temp.shape[0])])\n \n # syntehsis\n source_sp_temp = source_data_sp[i]\n source_f0_temp = source_data_f0[i]\n source_ap_temp = source_data_ap[i]\n output_imfcc = mfcc_source.imfcc(output_mfcc, source_sp_temp)\n y_source = pw.synthesize(source_f0_temp, output_imfcc, source_ap_temp, fs_source, 5)\n np.save(converted_voice_npy + \"s{0}.npy\".format(i), output_imfcc)\n sf.write(converted_voice_wav + \"s{0}.wav\".format(i), y_source, fs_source)\n \n # calc MCD\n measure_temp = measure_target_data_mfcc[i]\n mcd, aligned_measure = calc_mcd(source_temp, output_mfcc, measure_temp)\n filer.write(\"MCD No.{0} = {1} , shape = {2}\\n\".format(i, mcd, source_temp.shape))\n \n # save figure spectram\n range_s = output_imfcc.shape[0]\n scale = [x for x in range(range_s)]\n MFCC_sample_s = [source_temp[x][0] for x in range(range_s)]\n MFCC_sample_c = [output_mfcc[x][0] for x in range(range_s)]\n MFCC_sample_t = [aligned_measure[x][0] for x in range(range_s)]\n \n plt.subplot(311)\n plt.plot(scale, MFCC_sample_s, label=\"source\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_c, label=\"convert\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_t, label=\"target\", linewidth = 1.0, linestyle=\"dashed\")\n plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=3, mode=\"expand\", borderaxespad=0.)\n #plt.xlabel(\"Flame\")\n #plt.ylabel(\"amplitude MFCC\")\n \n 
MFCC_sample_s = [source_temp[x][1] for x in range(range_s)]\n MFCC_sample_c = [output_mfcc[x][1] for x in range(range_s)]\n MFCC_sample_t = [aligned_measure[x][1] for x in range(range_s)]\n \n plt.subplot(312)\n plt.plot(scale, MFCC_sample_s, label=\"source\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_c, label=\"convert\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_t, label=\"target\", linewidth = 1.0, linestyle=\"dashed\")\n plt.ylabel(\"amplitude MFCC\")\n \n MFCC_sample_s = [source_temp[x][2] for x in range(range_s)]\n MFCC_sample_c = [output_mfcc[x][2] for x in range(range_s)]\n MFCC_sample_t = [aligned_measure[x][2] for x in range(range_s)]\n \n plt.subplot(313)\n plt.plot(scale, MFCC_sample_s, label=\"source\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_c, label=\"convert\", linewidth = 1.0)\n plt.plot(scale, MFCC_sample_t, label=\"target\", linewidth = 1.0, linestyle=\"dashed\")\n plt.xlabel(\"Frame\")\n\n plt.savefig(mfcc_save_fig_png + \"s{0}.png\".format(i) , format='png', dpi=300)\n plt.close()\n \n # synthesis with converted f0\n source_std, source_mean = calc_std_mean(source_f0_temp)\n std_ratio = target_std / source_std\n log_conv_f0 = std_ratio * (source_f0_temp - source_mean) + target_mean\n conv_f0 = np.maximum(log_conv_f0, 0)\n np.save(converted_voice_npy + \"f{0}.npy\".format(i), conv_f0)\n \n y_conv = pw.synthesize(conv_f0, output_imfcc, source_ap_temp, fs_source, 5)\n sf.write(converted_voice_with_f0_wav + \"sf{0}.wav\".format(i) , y_conv, fs_source)\n \n # save figure f0\n F0_s = [source_f0_temp[x] for x in range(range_s)]\n F0_c = [conv_f0[x] for x in range(range_s)]\n \n plt.plot(scale, F0_s, label=\"source\", linewidth = 1.0)\n plt.plot(scale, F0_c, label=\"convert\", linewidth = 1.0)\n plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode=\"expand\", borderaxespad=0.)\n plt.xlabel(\"Frame\")\n plt.ylabel(\"Amplitude\")\n \n plt.savefig(f0_save_fig_png + \"f{0}.png\".format(i), format='png', dpi=300)\n 
plt.close()\n \nfiler.close()\nprint(\"Make Converted Spectram time = \", time.time() - timer_start , \"[sec]\")",
"voice no = 0\nvoice no = 1\nvoice no = 2\nvoice no = 3\nvoice no = 4\nvoice no = 5\nvoice no = 6\nvoice no = 7\nvoice no = 8\nvoice no = 9\nvoice no = 10\nvoice no = 11\nvoice no = 12\nvoice no = 13\nvoice no = 14\nvoice no = 15\nvoice no = 16\nvoice no = 17\nvoice no = 18\nvoice no = 19\nvoice no = 20\nvoice no = 21\nvoice no = 22\nvoice no = 23\nvoice no = 24\nvoice no = 25\nvoice no = 26\nvoice no = 27\nvoice no = 28\nvoice no = 29\nvoice no = 30\nvoice no = 31\nvoice no = 32\nvoice no = 33\nvoice no = 34\nvoice no = 35\nvoice no = 36\nvoice no = 37\nvoice no = 38\nvoice no = 39\nvoice no = 40\nvoice no = 41\nvoice no = 42\nvoice no = 43\nvoice no = 44\nMake Converted Spectram time = 1616.8302788734436 [sec]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08b4755b12e0fe37fe2ecaac56540ee428c3903 | 2,441 | ipynb | Jupyter Notebook | testing.ipynb | jennysheng14/3CL_protease_DMS | 13d1ecd39260d01ee0826970ab19b27de44d814e | [
"MIT"
] | null | null | null | testing.ipynb | jennysheng14/3CL_protease_DMS | 13d1ecd39260d01ee0826970ab19b27de44d814e | [
"MIT"
] | 1 | 2021-04-13T19:27:36.000Z | 2021-04-13T19:27:36.000Z | testing.ipynb | jennysheng14/3CL_protease_DMS | 13d1ecd39260d01ee0826970ab19b27de44d814e | [
"MIT"
] | 1 | 2021-02-09T23:16:38.000Z | 2021-02-09T23:16:38.000Z | 25.164948 | 83 | 0.459648 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"spreadsheet = \"sample_spreadsheet_021521.csv\"\nsamples = pd.read_csv(spreadsheet, comment = '#')",
"_____no_output_____"
],
[
"sets = set(list(samples['Set']))\ndef sets_and_residues(spreadsheet):\n '''\n Define which residues to take from which sets, especially for repeated\n residues.\n '''\n samples = pd.read_csv(spreadsheet, comment = '#')\n set_ = []\n res = []\n for s in sets:\n x = str(s)\n if 'R' in str(s) and str(s)!= 'R1':\n sites = list(samples[samples['Set'] == str(x)]['Sites'])[0]\n sites = [str(x) for x in sites.split(',')] \n for site in sites:\n print(set_, res)\n set_.append(x)\n res.append(site)\n for s in sets:\n x = str(s)\n if 'R' not in str(s) or str(s) == 'R1':\n start = list(samples[samples['Set'] == x]['Start range'])[0]\n end = list(samples[samples['Set'] == x]['End range'])[0]\n for site in range(start, end):\n if site not in res:\n set_.append(x)\n res.append(site)\n return(list(zip(set_, res)))",
"_____no_output_____"
],
[
"matrix = pd.DataFrame()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d08b4cf98483e7fc6774886ae701674647181ec9 | 4,061 | ipynb | Jupyter Notebook | Index.ipynb | idlegene/pandas-tutorial | 9db30f6fd92748fa4360bb813c5ea38cb17263f7 | [
"BSD-2-Clause"
] | 183 | 2016-08-24T12:32:07.000Z | 2022-03-26T14:05:04.000Z | Index.ipynb | idlegene/pandas-tutorial | 9db30f6fd92748fa4360bb813c5ea38cb17263f7 | [
"BSD-2-Clause"
] | 1 | 2017-08-28T14:22:40.000Z | 2017-08-28T14:22:40.000Z | Index.ipynb | idlegene/pandas-tutorial | 9db30f6fd92748fa4360bb813c5ea38cb17263f7 | [
"BSD-2-Clause"
] | 204 | 2016-08-24T14:22:58.000Z | 2022-03-29T15:09:03.000Z | 32.488 | 304 | 0.606008 | [
[
[
"<CENTER>\n <header>\n <h1>Pandas Tutorial</h1>\n <h3>EuroScipy, Erlangen DE, August 24th, 2016</h3>\n <h2>Joris Van den Bossche</h2>\n <p></p>\nSource: <a href=\"https://github.com/jorisvandenbossche/pandas-tutorial\">https://github.com/jorisvandenbossche/pandas-tutorial</a>\n </header>\n</CENTER>",
"_____no_output_____"
],
[
"Two data files are not included in the repo, you can download them from: [`titles.csv`](https://drive.google.com/file/d/0B3G70MlBnCgKa0U4WFdWdGdVOFU/view?usp=sharing) and [`cast.csv`](https://drive.google.com/file/d/0B3G70MlBnCgKRzRmTWdQTUdjNnM/view?usp=sharing) and put them in the `/data` folder.",
"_____no_output_____"
],
[
"## Requirements to run this tutorial",
"_____no_output_____"
],
[
"To follow this tutorial you need to have the following packages installed:\n\n- Python version 2.6-2.7 or 3.3-3.5\n- `pandas` version 0.18.0 or later: http://pandas.pydata.org/\n- `numpy` version 1.7 or later: http://www.numpy.org/\n- `matplotlib` version 1.3 or later: http://matplotlib.org/\n- `ipython` version 3.x with notebook support, or `ipython 4.x` combined with `jupyter`: http://ipython.org\n- `seaborn` (this is used for some plotting, but not necessary to follow the tutorial): http://stanford.edu/~mwaskom/software/seaborn/\n",
"_____no_output_____"
],
[
"## Downloading the tutorial materials",
"_____no_output_____"
],
[
"If you have git installed, you can get the material in this tutorial by cloning this repo:\n\n git clone https://github.com/jorisvandenbossche/pandas-tutorial.git\n\nAs an alternative, you can download it as a zip file:\nhttps://github.com/jorisvandenbossche/pandas-tutorial/archive/master.zip.\nI will probably make some changes until the start of the tutorial, so best to download\nthe latest version then (or do a `git pull` if you are using git).\n\nTwo data files are not included in the repo, you can download them from: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/data` folder.",
"_____no_output_____"
],
[
"## Contents\n\nBeginners track:\n\n- [01 - Introduction - beginners.ipynb](01 - Introduction - beginners.ipynb)\n- [02 - Data structures](02 - Data structures.ipynb)\n- [03 - Indexing and selecting data](03 - Indexing and selecting data.ipynb)\n- [04 - Groupby operations](04 - Groupby operations.ipynb)\n\nAdvanced track:\n\n- [03b - Some more advanced indexing](03b - Some more advanced indexing.ipynb)\n- [04b - Advanced groupby operations](04b - Advanced groupby operations.ipynb)\n- [05 - Time series data](05 - Time series data.ipynb)\n- [06 - Reshaping data](06 - Reshaping data.ipynb)\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d08b5fa5acb2c65bcf8510bf9be4480af7b2e44f | 29,846 | ipynb | Jupyter Notebook | docs/src/notebooks/2_pubmed_mesh_to_umls_map.ipynb | nicoleepp/BioMedQuery.jl | 368347241f7e3d0ea7ed1b7121ce1705b9c0b417 | [
"MIT"
] | null | null | null | docs/src/notebooks/2_pubmed_mesh_to_umls_map.ipynb | nicoleepp/BioMedQuery.jl | 368347241f7e3d0ea7ed1b7121ce1705b9c0b417 | [
"MIT"
] | null | null | null | docs/src/notebooks/2_pubmed_mesh_to_umls_map.ipynb | nicoleepp/BioMedQuery.jl | 368347241f7e3d0ea7ed1b7121ce1705b9c0b417 | [
"MIT"
] | null | null | null | 55.996248 | 1,802 | 0.576225 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d08b7a6e714923738dfb0847dd39352295877527 | 7,581 | ipynb | Jupyter Notebook | NLP/8.Using Named Entity Recognition (NER).ipynb | SuryaReginaAA/Learn | 4a98741e564ffd0cd58344fa597b1875e3d35cd1 | [
"MIT"
] | null | null | null | NLP/8.Using Named Entity Recognition (NER).ipynb | SuryaReginaAA/Learn | 4a98741e564ffd0cd58344fa597b1875e3d35cd1 | [
"MIT"
] | null | null | null | NLP/8.Using Named Entity Recognition (NER).ipynb | SuryaReginaAA/Learn | 4a98741e564ffd0cd58344fa597b1875e3d35cd1 | [
"MIT"
] | null | null | null | 27.667883 | 360 | 0.553225 | [
[
[
"# Using Named Entity Recognition (NER)",
"_____no_output_____"
],
[
"**Named entities** are noun phrases that refer to specific locations, people, organizations, and so on. With **named entity recognition**, you can find the named entities in your texts and also determine what kind of named entity they are.\n\nHere’s the list of named entity types from the <a href = \"https://www.nltk.org/book/ch07.html#sec-ner\">NLTK book</a>:",
"_____no_output_____"
],
[
"<table>\n <tr><th>NEtype</th>\t<th>Examples</th></tr>\n <tr><td>ORGANIZATION</td>\t<td>Georgia-Pacific Corp., WHO</td></tr>\n <tr><td>PERSON</td>\t<td>Eddy Bonte, President Obama</td></tr>\n <tr><td>LOCATION</td>\t<td>Murray River, Mount Everest</td></tr>\n <tr><td>DATE</td>\t<td>June, 2008-06-29</td></tr>\n <tr><td>TIME</td>\t<td>two fifty a m, 1:30 p.m.</td></tr>\n <tr><td>MONEY</td>\t<td>175 million Canadian dollars, GBP 10.40</td></tr>\n <tr><td>PERCENT</td>\t<td>twenty pct, 18.75 %</td></tr>\n <tr><td>FACILITY</td>\t<td>Washington Monument, Stonehenge</td></tr>\n <tr><td>GPE</td>\t<td>South East Asia, Midlothian</td></tr>\n<table>\nYou can use nltk.ne_chunk() to recognize named entities. Let’s use lotr_pos_tags again to test it out:",
"_____no_output_____"
]
],
[
[
"import nltk\nfrom nltk.tokenize import word_tokenize",
"_____no_output_____"
],
[
"lotr_quote = \"It's a dangerous business, Frodo, going out your door.\"",
"_____no_output_____"
],
[
"words_in_lotr_quote = word_tokenize(lotr_quote)\nprint(words_in_lotr_quote)",
"['It', \"'s\", 'a', 'dangerous', 'business', ',', 'Frodo', ',', 'going', 'out', 'your', 'door', '.']\n"
],
[
"lotr_pos_tags = nltk.pos_tag(words_in_lotr_quote)\nprint(lotr_pos_tags)",
"[('It', 'PRP'), (\"'s\", 'VBZ'), ('a', 'DT'), ('dangerous', 'JJ'), ('business', 'NN'), (',', ','), ('Frodo', 'NNP'), (',', ','), ('going', 'VBG'), ('out', 'RP'), ('your', 'PRP$'), ('door', 'NN'), ('.', '.')]\n"
],
[
"tree = nltk.ne_chunk(lotr_pos_tags)",
"_____no_output_____"
]
],
[
[
"Now take a look at the visual representation:",
"_____no_output_____"
]
],
[
[
"tree.draw()",
"_____no_output_____"
]
],
[
[
"Here’s what you get:\n\n",
"_____no_output_____"
],
[
"See how Frodo has been tagged as a PERSON? You also have the option to use the parameter binary=True if you just want to know what the named entities are but not what kind of named entity they are:",
"_____no_output_____"
]
],
[
[
"tree = nltk.ne_chunk(lotr_pos_tags, binary=True)\ntree.draw()",
"_____no_output_____"
]
],
[
[
"Now all you see is that Frodo is an NE:",
"_____no_output_____"
],
[
"That’s how you can identify named entities! But you can take this one step further and extract named entities directly from your text. Create a string from which to extract named entities. You can use this quote from <a href = \"https://en.wikipedia.org/wiki/The_War_of_the_Worlds\" >The War of the Worlds</a>:",
"_____no_output_____"
]
],
[
[
"quote = \"\"\"\nMen like Schiaparelli watched the red planet—it is odd, by-the-bye, that\nfor countless centuries Mars has been the star of war—but failed to\ninterpret the fluctuating appearances of the markings they mapped so well.\nAll that time the Martians must have been getting ready.\n\nDuring the opposition of 1894 a great light was seen on the illuminated\npart of the disk, first at the Lick Observatory, then by Perrotin of Nice,\nand then by other observers. English readers heard of it first in the\nissue of Nature dated August 2.\"\"\"",
"_____no_output_____"
]
],
[
[
"Now create a function to extract named entities:",
"_____no_output_____"
]
],
[
[
"def extract_ne(quote):\n words = word_tokenize(quote, language='english')\n tags = nltk.pos_tag(words)\n tree = nltk.ne_chunk(tags, binary=True)\n tree.draw()\n return set(\n \" \".join(i[0] for i in t)\n for t in tree\n if hasattr(t, \"label\") and t.label() == \"NE\"\n )",
"_____no_output_____"
]
],
[
[
"With this function, you gather all named entities, with no repeats. In order to do that, you tokenize by word, apply part of speech tags to those words, and then extract named entities based on those tags. Because you included binary=True, the named entities you’ll get won’t be labeled more specifically. You’ll just know that they’re named entities.\n\nTake a look at the information you extracted:",
"_____no_output_____"
]
],
[
[
"extract_ne(quote)",
"_____no_output_____"
]
],
[
[
"You missed the city of Nice, possibly because NLTK interpreted it as a regular English adjective, but you still got the following:\n\n1.**An institution**: 'Lick Observatory'\n\n2.**A planet**: 'Mars'\n\n3.**A publication**: 'Nature'\n\n4.**People**: 'Perrotin', 'Schiaparelli'",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d08b809153ccc8b8be53b95457f859923f5424af | 10,783 | ipynb | Jupyter Notebook | examples/urban_driver/closed_loop_test.ipynb | ronamit/l5kit | a27349e2f8862d4a2a5ff91ae842fe87016e76d2 | [
"Apache-2.0"
] | 1 | 2021-12-04T17:48:53.000Z | 2021-12-04T17:48:53.000Z | examples/urban_driver/closed_loop_test.ipynb | ramitnv/l5kit | a27349e2f8862d4a2a5ff91ae842fe87016e76d2 | [
"Apache-2.0"
] | null | null | null | examples/urban_driver/closed_loop_test.ipynb | ramitnv/l5kit | a27349e2f8862d4a2a5ff91ae842fe87016e76d2 | [
"Apache-2.0"
] | null | null | null | 30.985632 | 184 | 0.597515 | [
[
[
"# Closed-Loop Evaluation\nIn this notebook you are going to evaluate Urban Driver to control the SDV with a protocol named *closed-loop* evaluation.\n\n**Note: this notebook assumes you've already run the [training notebook](./train.ipynb) and stored your model successfully (or that you have stored a pre-trained one).**\n\n**Note: for a detailed explanation of what closed-loop evaluation (CLE) is, please refer to our [planning notebook](../planning/closed_loop_test.ipynb)**",
"_____no_output_____"
],
[
"### Imports",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom prettytable import PrettyTable\n\nfrom l5kit.configs import load_config_data\nfrom l5kit.data import LocalDataManager, ChunkedDataset\n\nfrom l5kit.dataset import EgoDatasetVectorized\nfrom l5kit.vectorization.vectorizer_builder import build_vectorizer\n\nfrom l5kit.simulation.dataset import SimulationConfig\nfrom l5kit.simulation.unroll import ClosedLoopSimulator\nfrom l5kit.cle.closed_loop_evaluator import ClosedLoopEvaluator, EvaluationPlan\nfrom l5kit.cle.metrics import (CollisionFrontMetric, CollisionRearMetric, CollisionSideMetric,\n DisplacementErrorL2Metric, DistanceToRefTrajectoryMetric)\nfrom l5kit.cle.validators import RangeValidator, ValidationCountingAggregator\n\nfrom l5kit.visualization.visualizer.zarr_utils import simulation_out_to_visualizer_scene\nfrom l5kit.visualization.visualizer.visualizer import visualize\nfrom bokeh.io import output_notebook, show\nfrom l5kit.data import MapAPI\n\nfrom collections import defaultdict\nimport os",
"_____no_output_____"
]
],
[
[
"## Prepare data path and load cfg\n\nBy setting the `L5KIT_DATA_FOLDER` variable, we can point the script to the folder where the data lies.\n\nThen, we load our config file with relative paths and other configurations (rasteriser, training params ...).",
"_____no_output_____"
]
],
[
[
"# set env variable for data\nfrom l5kit.data import get_dataset_path\nos.environ[\"L5KIT_DATA_FOLDER\"], project_path = get_dataset_path()\ndm = LocalDataManager(None)\n# get config\ncfg = load_config_data(\"./config.yaml\")",
"_____no_output_____"
]
],
[
[
"## Load the model",
"_____no_output_____"
]
],
[
[
"model_path = project_path + \"/urban_driver_dummy_model.pt\"\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel = torch.load(model_path).to(device)\nmodel = model.eval()\ntorch.set_grad_enabled(False)",
"_____no_output_____"
]
],
[
[
"## Load the evaluation data\nDifferently from training and open loop evaluation, this setting is intrinsically sequential. As such, we won't be using any of PyTorch's parallelisation functionalities.",
"_____no_output_____"
]
],
[
[
"# ===== INIT DATASET\neval_cfg = cfg[\"val_data_loader\"]\neval_zarr = ChunkedDataset(dm.require(eval_cfg[\"key\"])).open()\nvectorizer = build_vectorizer(cfg, dm)\neval_dataset = EgoDatasetVectorized(cfg, eval_zarr, vectorizer)\nprint(eval_dataset)",
"_____no_output_____"
]
],
[
[
"## Define some simulation properties\nWe define here some common simulation properties such as the length of the simulation and how many scene to simulate.\n\n**NOTE: these properties have a significant impact on the execution time. We suggest you to increase them only if your setup includes a GPU.**",
"_____no_output_____"
]
],
[
[
"num_scenes_to_unroll = 10\nnum_simulation_steps = 50",
"_____no_output_____"
]
],
[
[
"# Closed-loop simulation\n\nWe define a closed-loop simulation that drives the SDV for `num_simulation_steps` steps while using the log-replayed agents.\n\nThen, we unroll the selected scenes.\nThe simulation output contains all the information related to the scene, including the annotated and simulated positions, states, and trajectories of the SDV and the agents. \nIf you want to know more about what the simulation output contains, please refer to the source code of the class `SimulationOutput`.",
"_____no_output_____"
]
],
[
[
"# ==== DEFINE CLOSED-LOOP SIMULATION\nsim_cfg = SimulationConfig(use_ego_gt=False, use_agents_gt=True, disable_new_agents=True,\n distance_th_far=500, distance_th_close=50, num_simulation_steps=num_simulation_steps,\n start_frame_index=0, show_info=True)\n\nsim_loop = ClosedLoopSimulator(sim_cfg, eval_dataset, device, model_ego=model, model_agents=None)",
"_____no_output_____"
],
[
"# ==== UNROLL\nscenes_to_unroll = list(range(0, len(eval_zarr.scenes), len(eval_zarr.scenes)//num_scenes_to_unroll))\nsim_outs = sim_loop.unroll(scenes_to_unroll)",
"_____no_output_____"
]
],
[
[
"# Closed-loop metrics\n\n**Note: for a detailed explanation of CLE metrics, please refer again to our [planning notebook](../planning/closed_loop_test.ipynb)**",
"_____no_output_____"
]
],
[
[
"metrics = [DisplacementErrorL2Metric(),\n DistanceToRefTrajectoryMetric(),\n CollisionFrontMetric(),\n CollisionRearMetric(),\n CollisionSideMetric()]\n\nvalidators = [RangeValidator(\"displacement_error_l2\", DisplacementErrorL2Metric, max_value=30),\n RangeValidator(\"distance_ref_trajectory\", DistanceToRefTrajectoryMetric, max_value=4),\n RangeValidator(\"collision_front\", CollisionFrontMetric, max_value=0),\n RangeValidator(\"collision_rear\", CollisionRearMetric, max_value=0),\n RangeValidator(\"collision_side\", CollisionSideMetric, max_value=0)]\n\nintervention_validators = [\"displacement_error_l2\",\n \"distance_ref_trajectory\",\n \"collision_front\",\n \"collision_rear\",\n \"collision_side\"]\n\ncle_evaluator = ClosedLoopEvaluator(EvaluationPlan(metrics=metrics,\n validators=validators,\n composite_metrics=[],\n intervention_validators=intervention_validators))",
"_____no_output_____"
]
],
[
[
"# Quantitative evaluation\n\nWe can now compute the metric evaluation, collect the results and aggregate them.",
"_____no_output_____"
]
],
[
[
"cle_evaluator.evaluate(sim_outs)\nvalidation_results = cle_evaluator.validation_results()\nagg = ValidationCountingAggregator().aggregate(validation_results)\ncle_evaluator.reset()",
"_____no_output_____"
]
],
[
[
"## Reporting errors from the closed-loop\n\nWe can now report the metrics and plot them.",
"_____no_output_____"
]
],
[
[
"fields = [\"metric\", \"value\"]\ntable = PrettyTable(field_names=fields)\n\nvalues = []\nnames = []\n\nfor metric_name in agg:\n table.add_row([metric_name, agg[metric_name].item()])\n values.append(agg[metric_name].item())\n names.append(metric_name)\n\nprint(table)\n\nplt.bar(np.arange(len(names)), values)\nplt.xticks(np.arange(len(names)), names, rotation=60, ha='right')\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Qualitative evaluation",
"_____no_output_____"
],
[
"## Visualise the closed-loop\n\nWe can visualise the scenes we have obtained previously. \n\n**The policy is now in full control of the SDV as this moves through the annotated scene.**",
"_____no_output_____"
]
],
[
[
"output_notebook()\nmapAPI = MapAPI.from_cfg(dm, cfg)\nfor sim_out in sim_outs: # for each scene\n vis_in = simulation_out_to_visualizer_scene(sim_out, mapAPI)\n show(visualize(sim_out.scene_id, vis_in))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d08b85a6ce32741f39e34ae019cafd94902bbb5c | 69,776 | ipynb | Jupyter Notebook | DQN_practice.ipynb | AmanPriyanshu/Reinforcement-Learning | c27d23ee249de2891c974e72bddbf9e0ef8ba46e | [
"MIT"
] | null | null | null | DQN_practice.ipynb | AmanPriyanshu/Reinforcement-Learning | c27d23ee249de2891c974e72bddbf9e0ef8ba46e | [
"MIT"
] | null | null | null | DQN_practice.ipynb | AmanPriyanshu/Reinforcement-Learning | c27d23ee249de2891c974e72bddbf9e0ef8ba46e | [
"MIT"
] | null | null | null | 195.45098 | 38,794 | 0.883327 | [
[
[
"<a href=\"https://colab.research.google.com/github/AmanPriyanshu/Reinforcement-Learning/blob/master/DQN_practice.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import torch\r\nimport numpy as np\r\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"torch.manual_seed(0)\r\nnp.random.seed(0)",
"_____no_output_____"
],
[
"class Environment:\r\n def __init__(self):\r\n self.constant_function_details = {'value':np.random.randint(10, 90)}\r\n self.uniform_function_details = {'min': 25, 'max': 75}\r\n self.gaussian_function_details = {'mean': 50, 'std': 25}\r\n self.quadratic_growth_details = {'m':0.0175, 'count':0}\r\n self.bandits = None\r\n self.generate_bandit_instance()\r\n\r\n def return_constant(self):\r\n return self.constant_function_details['value'] + np.random.random()*10\r\n\r\n def return_uniform(self):\r\n return np.random.uniform(self.uniform_function_details['min'], self.uniform_function_details['max'])\r\n\r\n def return_gaussian(self):\r\n return np.random.normal(loc=self.gaussian_function_details['mean'], scale=self.gaussian_function_details['std'])\r\n\r\n def return_quadratic_growth(self):\r\n self.quadratic_growth_details['count'] += 1\r\n return np.power((self.quadratic_growth_details['m'] * self.quadratic_growth_details['count']), 2)\r\n\r\n def generate_bandit_instance(self):\r\n self.bandits = np.array([self.return_constant, self.return_uniform, self.return_gaussian, self.return_quadratic_growth])\r\n np.random.shuffle(self.bandits)\r\n\r\n def observe_all_bandits(self):\r\n vals = []\r\n for func in self.bandits:\r\n vals.append(func()/100)\r\n return np.array(vals)",
"_____no_output_____"
],
[
"env = Environment()",
"_____no_output_____"
],
[
"values = []\r\nfor _ in range(1000):\r\n values.append(env.observe_all_bandits())\r\n\r\nvalues = np.array(values)",
"_____no_output_____"
],
[
"for index, function in enumerate(env.bandits):\r\n plt.plot(np.arange(values.shape[0]), values.T[index], label=function.__name__[len('return_'):])\r\n\r\nplt.legend()\r\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Model:",
"_____no_output_____"
]
],
[
[
"def model_generator():\r\n model = torch.nn.Sequential(\r\n torch.nn.Linear(2, 4),\r\n torch.nn.ReLU(),\r\n torch.nn.Linear(4, 8),\r\n torch.nn.ReLU(),\r\n torch.nn.Linear(8, 4),\r\n torch.nn.Softmax(dim=1),\r\n )\r\n return model",
"_____no_output_____"
],
[
"class Agent:\r\n def __init__(self):\r\n self.transition = {'state': None, 'action': None, 'next_state': None, 'reward': None}\r\n self.replay_memory = self.ReplayMemory()\r\n self.policy_net = model_generator()\r\n self.target_net = model_generator()\r\n self.target_net.eval()\r\n self.target_net.load_state_dict(self.policy_net.state_dict())\r\n self.epsilon = 1\r\n self.epsilon_limit = 0.01\r\n self.steps_taken = 0\r\n self.gamma = 0.\r\n self.optimizer = torch.optim.Adam(self.policy_net.parameters())\r\n self.batch_size = 5\r\n\r\n def loss_calculator(self):\r\n samples = self.replay_memory.sample(self.batch_size)\r\n losses = []\r\n for sample in samples:\r\n action = sample['action']\r\n state = sample['state']\r\n next_state = sample['next_state']\r\n reward = sample['reward']\r\n\r\n loss = self.policy_pass(state)[0][action] - (reward + self.gamma * torch.max(self.target_pass(next_state)))\r\n losses.append(loss)\r\n \r\n loss = torch.mean(torch.stack(losses))\r\n \r\n if abs(loss.item()) < 1:\r\n loss = 0.5 * torch.pow(loss, 2)\r\n else:\r\n loss = torch.abs(loss) - 0.5\r\n\r\n return loss\r\n\r\n def policy_update(self):\r\n loss = self.loss_calculator()\r\n self.optimizer.zero_grad()\r\n loss.backward()\r\n self.optimizer.step()\r\n\r\n def target_update(self):\r\n self.target_net.load_state_dict(self.policy_net.state_dict())\r\n\r\n def target_pass(self, state):\r\n input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float)\r\n actions = self.target_net(input_state)\r\n return actions\r\n\r\n def policy_pass(self, state):\r\n input_state = torch.tensor([[state['rank'], state['reward']]], dtype=torch.float)\r\n actions = self.policy_net(input_state)\r\n return actions\r\n \r\n def take_action(self, state):\r\n if np.random.random() < self.epsilon:\r\n action = torch.randint(0, 4, (1,))\r\n else:\r\n actions = self.policy_pass(state)\r\n action = torch.argmax(actions, 1)\r\n return action\r\n\r\n def take_transition(self, transition):\r\n self.steps_taken += 1\r\n self.replay_memory.push(transition)\r\n\r\n if self.steps_taken%self.batch_size == 0 and self.steps_taken>20:\r\n self.policy_update()\r\n if self.steps_taken%25 == 0 and self.steps_taken>20:\r\n self.target_update()\r\n\r\n self.epsilon -= self.epsilon_limit/6\r\n if self.epsilon<self.epsilon_limit:\r\n self.epsilon = self.epsilon_limit\r\n\r\n class ReplayMemory(object):\r\n def __init__(self, capacity=15):\r\n self.capacity = capacity\r\n self.memory = [None] * self.capacity\r\n self.position = 0\r\n\r\n def push(self, transition):\r\n self.memory[self.position] = transition\r\n self.position = (self.position + 1) % self.capacity\r\n \r\n def sample(self, batch_size=5):\r\n return np.random.choice(np.array(self.memory), batch_size)\r\n\r\n def __len__(self):\r\n return len(self.memory)",
"_____no_output_____"
],
[
"env = Environment()\r\nagent1 = Agent()\r\n\r\nrewards = []\r\n\r\nstate = {'rank':0, 'reward':0}\r\nfor _ in range(1000):\r\n with torch.no_grad():\r\n action = agent1.take_action(state)\r\n observation = env.observe_all_bandits()\r\n reward = observation[action]\r\n rank = np.argsort(observation)[action]\r\n next_state = {'rank': rank, 'reward':reward}\r\n transition = {'state': state, 'action': action, 'next_state': next_state, 'reward': reward}\r\n\r\n agent1.take_transition(transition)\r\n\r\n rewards.append(reward)",
"_____no_output_____"
],
[
"plt.plot([i for i in range(len(rewards))], rewards, label='rewards')\r\nplt.legend()\r\nplt.title('Rewards Progression')\r\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
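The `Agent` in the DQN notebook above combines a ring-buffer `ReplayMemory` with ε-greedy action selection. Those two pieces can be sketched torch-free in pure Python — note one deliberate difference from the notebook: instead of `np.random.choice` over a `None`-prefilled list, this buffer grows until full, so samples never contain placeholder entries:

```python
import random

class ReplayBuffer:
    """Fixed-capacity ring buffer for transitions."""
    def __init__(self, capacity=15):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, transition):
        if len(self.memory) < self.capacity:
            self.memory.append(transition)
        else:  # overwrite the oldest entry once full
            self.memory[self.position] = transition
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size=5):
        return random.sample(self.memory, batch_size)

def epsilon_greedy(q_values, epsilon):
    """Random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.push({"step": t})
print(len(buf.memory))                                    # 3
print(epsilon_greedy([0.1, 0.9, 0.2, 0.4], epsilon=0.0))  # 1 (pure greedy)
```

Decaying `epsilon` toward `epsilon_limit`, as the notebook does, moves the policy smoothly from exploration to exploitation.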
d08b8c9f9b0809a191cbf60634031394d53f2bb0 | 862,710 | ipynb | Jupyter Notebook | demos/basic/multimode/MultiModeSystem.ipynb | Phionx/quantumnetworks | 9e89cac61cdbc1885689336cd31bb5f7dbdde22a | [
"MIT"
] | 7 | 2021-11-22T18:45:00.000Z | 2022-02-21T16:01:10.000Z | demos/basic/multimode/MultiModeSystem.ipynb | Phionx/quantumnetworks | 9e89cac61cdbc1885689336cd31bb5f7dbdde22a | [
"MIT"
] | 35 | 2021-09-29T08:05:35.000Z | 2022-02-20T02:11:15.000Z | demos/basic/multimode/MultiModeSystem.ipynb | Phionx/quantumnetworks | 9e89cac61cdbc1885689336cd31bb5f7dbdde22a | [
"MIT"
] | 1 | 2022-02-01T18:05:34.000Z | 2022-02-01T18:05:34.000Z | 2,044.336493 | 135,544 | 0.962756 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from quantumnetworks import MultiModeSystem, plot_full_evolution\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"# Trapezoidal Method",
"_____no_output_____"
]
],
[
[
"# params stored in txt\nsys = MultiModeSystem(params={\"dir\":\"data/\"})\nx_0 = np.array([1,0,0,1])\nts = np.linspace(0, 10, 101)\nX = sys.trapezoidal(x_0, ts)\nfig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\",\"$q_b$\",\"$p_b$\"])\nax.legend()",
"_____no_output_____"
]
],
[
[
"# Forward Euler",
"_____no_output_____"
]
],
[
[
"# params stored in txt\nsys = MultiModeSystem(params={\"dir\":\"data/\"})\nx_0 = np.array([1,0,0,1])\nts = np.linspace(0, 10, 10001)\nX = sys.forward_euler(x_0, ts)\nfig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\",\"$q_b$\",\"$p_b$\"])\nax.legend()",
"_____no_output_____"
],
[
"u = sys.eval_u(0)\nsys.eval_Jf(x_0, u)",
"_____no_output_____"
],
[
"sys.eval_Jf_numerical(x_0, u)",
"_____no_output_____"
],
[
"# params directly provided\nomegas = [1,2]\nkappas = [0.001,0.005]\ngammas = [0.002,0.002]\nkerrs = [0.001, 0.001]\ncouplings = [[0,1,0.002]]\nsys = MultiModeSystem(params={\"omegas\":omegas, \"kappas\":kappas, \"gammas\":gammas, \"kerrs\": kerrs, \"couplings\":couplings})\nx_0 = np.array([1,0,0,1])\nts = np.linspace(0, 10, 1001)\nX = sys.forward_euler(x_0, ts)\nfig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\",\"$q_b$\",\"$p_b$\"])\nax.legend()",
"_____no_output_____"
],
[
"# single mode system\nomegas = [2*np.pi*1]\nkappas = [2*np.pi*0.001]\ngammas = [2*np.pi*0.002]\nkerrs = [2*np.pi*0.001]\ncouplings = []\nsys = MultiModeSystem(params={\"omegas\":omegas, \"kappas\":kappas,\"gammas\":gammas,\"kerrs\":kerrs,\"couplings\":couplings})\nx_0 = np.array([1,0])\nts = np.linspace(0, 10, 100001)\nX = sys.forward_euler(x_0, ts)\nfig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\"])\nax.legend()",
"_____no_output_____"
],
[
"# params directly provided\nomegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1]\nkappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001]\ngammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002]\nkerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001]\ncouplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]]\nsys = MultiModeSystem(params={\"omegas\":omegas, \"kappas\":kappas, \"gammas\":gammas, \"kerrs\":kerrs, \"couplings\":couplings})\nprint(sys.A)\n# x_0 = np.array([1,0,0,1])\n# ts = np.linspace(0, 10, 1001)\n# X = sys.forward_euler(x_0, ts)\n# fig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\",\"$q_b$\",\"$p_b$\"])\n# ax.legend()",
"[[-9.42477796e-03 6.28318531e+00 0.00000000e+00 1.25663706e-02\n 0.00000000e+00 0.00000000e+00]\n [-6.28318531e+00 -9.42477796e-03 -1.25663706e-02 0.00000000e+00\n 0.00000000e+00 0.00000000e+00]\n [ 0.00000000e+00 1.25663706e-02 -2.19911486e-02 1.25663706e+01\n 0.00000000e+00 1.25663706e-02]\n [-1.25663706e-02 0.00000000e+00 -1.25663706e+01 -2.19911486e-02\n -1.25663706e-02 0.00000000e+00]\n [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.25663706e-02\n -9.42477796e-03 6.28318531e+00]\n [ 0.00000000e+00 0.00000000e+00 -1.25663706e-02 0.00000000e+00\n -6.28318531e+00 -9.42477796e-03]]\n"
]
],
[
[
"# Linearization",
"_____no_output_____"
]
],
[
[
"omegas = [2*np.pi*1,2*np.pi*2,2*np.pi*1]\nkappas = [2*np.pi*0.001,2*np.pi*0.005,2*np.pi*0.001]\ngammas = [2*np.pi*0.002,2*np.pi*0.002,2*np.pi*0.002]\nkerrs = [2*np.pi*0.001, 2*np.pi*0.001, 2*np.pi*0.001]\ncouplings = [[0,1,2*np.pi*0.002],[1,2,2*np.pi*0.002]]\nsys = MultiModeSystem(params={\"omegas\":omegas, \"kappas\":kappas, \"gammas\":gammas, \"kerrs\":kerrs, \"couplings\":couplings})\n\nx_0 = np.array([1,0, 0,1, 1,0])\nts = np.linspace(0, 1, 1001)\nX = sys.forward_euler(x_0, ts)\nfig, ax = plot_full_evolution(X, ts, labels=[\"$q_a$\",\"$p_a$\", \"$q_b$\",\"$p_b$\", \"$q_c$\",\"$p_c$\"])\nax.legend()\n\nX_linear = sys.forward_euler_linear(x_0, ts, x_0, 0)\nfig, ax = plot_full_evolution(X_linear, ts, labels=[\"$q_{a,linear}$\",\"$p_{a,linear}$\",\"$q_{b,linear}$\",\"$p_{b,linear}$\",\"$q_{c,linear}$\",\"$p_{c,linear}$\"])\n\nDelta_X = (X-X_linear)/X\nplot_full_evolution(Delta_X[:,:50], ts[:50], labels=[\"$q_a - q_{a,linear}$\",\"$p_a - p_{a,linear}$\",\"$q_b - q_{b,linear}$\",\"$p_b - p_{b,linear}$\",\"$q_c - q_{c,linear}$\",\"$p_c - p_{c,linear}$\"])\nax.legend()",
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: invalid value encountered in true_divide\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
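The `MultiModeSystem` runs above all reduce to calls like `sys.forward_euler(x_0, ts)`. The explicit Euler update that names is simply x[n+1] = x[n] + Δt·f(x[n], t[n]); here is a scalar sketch on a damped mode — a generic integrator under that assumption, not quantumnetworks' implementation:

```python
import math

def forward_euler(f, x0, ts):
    """Explicit Euler: x[n+1] = x[n] + dt * f(x[n], t[n])."""
    xs = [x0]
    for t0, t1 in zip(ts, ts[1:]):
        xs.append(xs[-1] + (t1 - t0) * f(xs[-1], t0))
    return xs

# Damped amplitude dx/dt = -kappa * x, exact solution x0 * exp(-kappa * t).
kappa, x0, n = 0.5, 1.0, 10000
ts = [10.0 * i / n for i in range(n + 1)]
xs = forward_euler(lambda x, t: -kappa * x, x0, ts)
err = abs(xs[-1] - x0 * math.exp(-kappa * ts[-1]))
print(err)  # small; a first-order method, so the error shrinks with dt
```

The dense grids in the notebook (e.g. `np.linspace(0, 10, 10001)`) play the same role: forward Euler is only first-order accurate, so oscillatory modes need small steps to stay faithful.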
d08b9e8181511c207cac3ecc9f25c10f5fca1360 | 21,931 | ipynb | Jupyter Notebook | labs/Getting_Started_with_Azure_ML.ipynb | PeakIndicatorsHub/Getting-Started-On-Azure-ML | ee637ddfe7d213cf4759da0ed5f59c7382148870 | [
"MIT"
] | null | null | null | labs/Getting_Started_with_Azure_ML.ipynb | PeakIndicatorsHub/Getting-Started-On-Azure-ML | ee637ddfe7d213cf4759da0ed5f59c7382148870 | [
"MIT"
] | null | null | null | labs/Getting_Started_with_Azure_ML.ipynb | PeakIndicatorsHub/Getting-Started-On-Azure-ML | ee637ddfe7d213cf4759da0ed5f59c7382148870 | [
"MIT"
] | 1 | 2021-02-17T08:54:32.000Z | 2021-02-17T08:54:32.000Z | 40.166667 | 699 | 0.637955 | [
[
[
"# Getting Started with Azure Machine Learning\n\nAzure Machine Learning (*Azure ML*) is a cloud-based service for creating and managing machine learning solutions. It's designed to help data scientists leverage their existing data processing and model development skills and frameworks, and help them scale their workloads to the cloud. The Azure ML SDK for Python provides classes you can use to work with Azure ML in your Azure subscription.\n\n## Before You Start\n\n1. Complete the steps in [Lab 1 - Getting Started with Azure Machine Learning](./labdocs/Lab01.md) to create an Azure Machine Learning workspace and a compute instance with the contents of this repo.\n2. Open this notebook in the compute instance and run it there.\n\n## Check the Azure ML SDK Version\n\nLet's start by importing the **azureml-core** package and checking the version of the SDK that is installed. Click the cell below and then use the **► Run** button on the toolbar to run it.",
"_____no_output_____"
]
],
[
[
"import azureml.core\nprint(\"Ready to use Azure ML\", azureml.core.VERSION)",
"_____no_output_____"
]
],
[
[
"## Connect to Your Workspace\n\nAll experiments and associated resources are managed within your Azure ML workspace. You can connect to an existing workspace, or create a new one using the Azure ML SDK.\n\nIn most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your workspace, the configuration file has already been downloaded to the root folder.\n\nThe code below uses the configuration file to connect to your workspace. The first time you run it in a notebook session, you'll be prompted to sign into Azure by clicking the https://microsoft.com/devicelogin link, entering an automatically generated code, and signing into Azure. After you have successfully signed in, you can close the browser tab that was opened, return to this notebook, and wait for the sign-in process to complete.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace\n\nws = Workspace.from_config()\nprint(ws.name, \"loaded\")",
"_____no_output_____"
]
],
[
[
"## Run an Experiment\n\nOne of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline \n\n# Create an Azure ML experiment in your workspace\nexperiment = Experiment(workspace = ws, name = \"diabetes-experiment\")\n\n# Start logging data from the experiment\nrun = experiment.start_logging()\nprint(\"Starting experiment:\", experiment.name)\n\n# load the data from a local file\ndata = pd.read_csv('data/diabetes.csv')\n\n# Count the rows and log the result\nrow_count = (len(data))\nrun.log('observations', row_count)\nprint('Analyzing {} rows of data'.format(row_count))\n\n# Plot and log the count of diabetic vs non-diabetic patients\ndiabetic_counts = data['Diabetic'].value_counts()\nfig = plt.figure(figsize=(6,6))\nax = fig.gca() \ndiabetic_counts.plot.bar(ax = ax) \nax.set_title('Patients with Diabetes') \nax.set_xlabel('Diagnosis') \nax.set_ylabel('Patients')\nplt.show()\nrun.log_image(name = 'label distribution', plot = fig)\n\n# log distinct pregnancy counts\npregnancies = data.Pregnancies.unique()\nrun.log_list('pregnancy categories', pregnancies)\n\n# Log summary statistics for numeric columns\nmed_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI']\nsummary_stats = data[med_columns].describe().to_dict()\nfor col in summary_stats:\n keys = list(summary_stats[col].keys())\n values = list(summary_stats[col].values())\n for index in range(len(keys)):\n run.log_row(col, stat = keys[index], value = values[index])\n \n# Save a sample of the data and upload it to the experiment output\ndata.sample(100).to_csv('sample.csv', index=False, header=True)\nrun.upload_file(name = 'outputs/sample.csv', path_or_stream = './sample.csv')\n\n# Complete the run\nrun.complete()",
"_____no_output_____"
]
],
[
[
"## View Experiment Results\n\nAfter the experiment has finished, you can use the **run** object to get information about the run and its outputs:",
"_____no_output_____"
]
],
[
[
"import json\n\n# Get run details\ndetails = run.get_details()\nprint(details)\n\n# Get logged metrics\nmetrics = run.get_metrics()\nprint(json.dumps(metrics, indent=2))\n\n# Get output files\nfiles = run.get_file_names()\nprint(json.dumps(files, indent=2))",
"_____no_output_____"
]
],
[
[
"In Jupyter Notebooks, you can use the **RunDetails** widget to get a better visualization of the run details, while the experiment is running or after it has finished.",
"_____no_output_____"
]
],
[
[
"from azureml.widgets import RunDetails\n\nRunDetails(run).show()",
"_____no_output_____"
]
],
[
[
"Note that the **RunDetails** widget includes a link to view the run in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:\n\n- The **Details** tab contains the general properties of the experiment run.\n- The **Metrics** tab enables you to select logged metrics and view them as tables or charts.\n- The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot)\n- The **Child Runs** tab lists any child runs (in this experiment there are none).\n- The **Outputs + Logs** tab shows the output or log files generated by the experiment.\n- The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).\n- The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).\n- The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none).\n\n## Run an Experiment Script\n\nIn the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, and store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.\n\nFirst, let's create a folder for the experiment files, and copy the data into it:",
"_____no_output_____"
]
],
[
[
"import os, shutil\n\n# Create a folder for the experiment files\nfolder_name = 'diabetes-experiment-files'\nexperiment_folder = './' + folder_name\nos.makedirs(folder_name, exist_ok=True)\n\n# Copy the data file into the experiment folder\nshutil.copy('data/diabetes.csv', os.path.join(folder_name, \"diabetes.csv\"))",
"_____no_output_____"
]
],
[
[
"Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.\n\n> **Note**: running the following cell just *creates* the script file - it doesn't run it!",
"_____no_output_____"
]
],
[
[
"%%writefile $folder_name/diabetes_experiment.py\nfrom azureml.core import Run\nimport pandas as pd\nimport os\n\n# Get the experiment run context\nrun = Run.get_context()\n\n# load the diabetes dataset\ndata = pd.read_csv('diabetes.csv')\n\n# Count the rows and log the result\nrow_count = (len(data))\nrun.log('observations', row_count)\nprint('Analyzing {} rows of data'.format(row_count))\n\n# Count and log the label counts\ndiabetic_counts = data['Diabetic'].value_counts()\nprint(diabetic_counts)\nfor k, v in diabetic_counts.items():\n run.log('Label:' + str(k), v)\n \n# Save a sample of the data in the outputs folder (which gets uploaded automatically)\nos.makedirs('outputs', exist_ok=True)\ndata.sample(100).to_csv(\"outputs/sample.csv\", index=False, header=True)\n\n# Complete the run\nrun.complete()",
"_____no_output_____"
]
],
[
[
"This code is a simplified version of the inline code used before. However, note the following:\n- It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.\n- It loads the diabetes data from the folder where the script is located.\n- It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run\n\nNow you're almost ready to run the experiment. There are just a few configuration issues you need to deal with:\n\n1. Create a *Run Configuration* that defines the Python code execution environment for the script - in this case, it will automatically create a Conda environment with some default Python packages installed.\n2. Create a *Script Configuration* that identifies the Python script file to be run in the experiment, and the environment in which to run it.\n\nThe following cell sets up these configuration objects, and then submits the experiment.\n\n> **Note**: This will take a little longer to run the first time, as the conda environment must be created. ",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nfrom azureml.core import Experiment, RunConfiguration, ScriptRunConfig\nfrom azureml.widgets import RunDetails\n\n# create a new RunConfig object\nexperiment_run_config = RunConfiguration()\n\n# Create a script config\nsrc = ScriptRunConfig(source_directory=experiment_folder, \n script='diabetes_experiment.py',\n run_config=experiment_run_config) \n\n# submit the experiment\nexperiment = Experiment(workspace = ws, name = 'diabetes-experiment')\nrun = experiment.submit(config=src)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))\nprint('\\n')\nfor file in run.get_file_names():\n print(file)",
"_____no_output_____"
]
],
[
[
"## View Experiment Run History\n\nNow that you've run experiments multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, Run\n\ndiabetes_experiment = ws.experiments['diabetes-experiment']\nfor logged_run in diabetes_experiment.get_runs():\n print('Run ID:', logged_run.id)\n metrics = logged_run.get_metrics()\n for key in metrics.keys():\n print('-', key, metrics.get(key))",
"_____no_output_____"
]
],
[
[
"## Use MLflow\n\nMLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics instead of the native log functionality if you desire.\n\n### Use MLflow with an Inline Experiment\n\nTo use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\nimport pandas as pd\nimport mlflow\n\n# Set the MLflow tracking URI to the workspace\nmlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())\n\n# Create an Azure ML experiment in your workspace\nexperiment = Experiment(workspace=ws, name='diabetes-mlflow-experiment')\nmlflow.set_experiment(experiment.name)\n\n# start the MLflow experiment\nwith mlflow.start_run():\n \n print(\"Starting experiment:\", experiment.name)\n \n # Load data\n data = pd.read_csv('data/diabetes.csv')\n\n # Count the rows and log the result\n row_count = (len(data))\n print('observations:', row_count)\n mlflow.log_metric('observations', row_count)\n \n# Get a link to the experiment in Azure ML studio \nexperiment_url = experiment.get_portal_url()\nprint('See details at', experiment_url)",
"_____no_output_____"
]
],
[
[
"After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric.\n\n### Use MLflow in an Experiment Script\n\nYou can also use MLflow to track metrics in an experiment script.\n\nRun the following two cells to create a folder and a script for an experiment that uses MLflow.",
"_____no_output_____"
]
],
[
[
"import os, shutil\n\n# Create a folder for the experiment files\nfolder_name = 'mlflow-experiment-files'\nexperiment_folder = './' + folder_name\nos.makedirs(folder_name, exist_ok=True)\n\n# Copy the data file into the experiment folder\nshutil.copy('data/diabetes.csv', os.path.join(folder_name, \"diabetes.csv\"))",
"_____no_output_____"
],
[
"%%writefile $folder_name/mlflow_diabetes.py\nfrom azureml.core import Run\nimport pandas as pd\nimport mlflow\n\n# start the MLflow experiment\nwith mlflow.start_run():\n \n # Load data\n data = pd.read_csv('diabetes.csv')\n\n # Count the rows and log the result\n row_count = (len(data))\n print('observations:', row_count)\n mlflow.log_metric('observations', row_count)",
"_____no_output_____"
]
],
[
[
"When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, RunConfiguration, ScriptRunConfig\nfrom azureml.core.conda_dependencies import CondaDependencies\nfrom azureml.widgets import RunDetails\n\n# create a new RunConfig object\nexperiment_run_config = RunConfiguration()\n\n# Ensure the required packages are installed\npackages = CondaDependencies.create(pip_packages=['mlflow', 'azureml-mlflow'])\nexperiment_run_config.environment.python.conda_dependencies=packages\n\n# Create a script config\nsrc = ScriptRunConfig(source_directory=experiment_folder,\n script='mlflow_diabetes.py',\n run_config=experiment_run_config) \n\n# submit the experiment\nexperiment = Experiment(workspace = ws, name = 'diabetes-mlflow-experiment')\nrun = experiment.submit(config=src)\nRunDetails(run).show()\nrun.wait_for_completion()",
"_____no_output_____"
]
],
[
[
"As usual, you can get the logged metrics from the experiment run when it's finished.",
"_____no_output_____"
]
],
[
[
"# Get logged metrics\nmetrics = run.get_metrics()\nfor key in metrics.keys():\n print(key, metrics.get(key))",
"_____no_output_____"
]
],
[
[
"Now you've seen how to use the Azure ML SDK to view the resources in your workspace and run experiments. \n\n### Learn More\n\n- For more details about the SDK, see the [Azure ML SDK documentation](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py).\n- To find out more about running experiments, see [Start, monitor, and cancel training runs in Python](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation.\n- For details of how to log metrics in a run, see [Monitor Azure ML experiment runs and metrics](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments).\n- For more information about integrating Azure ML experiments with MLflow, see [Track model metrics and deploy ML models with MLflow and Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow).\n\n## Clean Up\n\nOn the **File** menu, click **Close and Halt** to close this notebook. Then close all Jupyter tabs in your browser and **stop** your compute instance to minimize costs.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
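The experiment pattern in the Azure ML notebook above is always log-then-retrieve: `run.log(name, value)` during the run, `run.get_metrics()` afterwards. A toy stand-in showing just that flow — `ToyRun` is invented here for illustration and is not the azureml-core `Run` class:

```python
class ToyRun:
    """Illustrative metric store mimicking run.log / run.get_metrics."""
    def __init__(self):
        self._metrics = {}

    def log(self, name, value):
        self._metrics[name] = value

    def get_metrics(self):
        return dict(self._metrics)

# Mirror the notebook: log an observation count and per-label counts.
rows = [{"Diabetic": 1}, {"Diabetic": 0}, {"Diabetic": 1}]
run = ToyRun()
run.log("observations", len(rows))
for label in (0, 1):
    run.log("Label:" + str(label),
            sum(1 for r in rows if r["Diabetic"] == label))
print(run.get_metrics())  # {'observations': 3, 'Label:0': 1, 'Label:1': 2}
```

In the real SDK the same calls additionally persist the metrics to the workspace, which is what makes them visible in the studio's Metrics tab.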
d08ba8de0f41d2da7ab637e3151d09f1ba36a820 | 17,200 | ipynb | Jupyter Notebook | notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb | jonasspenger/clinical.notes.phenotyping | 5aa656a50a690500c6a9f304082a3876f57554f6 | [
"MIT"
] | null | null | null | notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb | jonasspenger/clinical.notes.phenotyping | 5aa656a50a690500c6a9f304082a3876f57554f6 | [
"MIT"
] | null | null | null | notebooks/2.6-JS-ctakes-relationgram-bow-tfidf.ipynb | jonasspenger/clinical.notes.phenotyping | 5aa656a50a690500c6a9f304082a3876f57554f6 | [
"MIT"
] | null | null | null | 31.97026 | 191 | 0.522733 | [
[
[
"# Introduction\nImplementation of the cTAKES BoW method with relation pairs (e.g. CUI-Relationship-CUI), added to the original cTAKES BoW pairs (Polarity-CUI), evaluated against the annotations from:\n> Gehrmann, Sebastian, et al. \"Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.\" PloS one 13.2 (2018): e0192360.",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"# imported packages\nimport multiprocessing\nimport collections\nimport itertools\nimport re\nimport os\n\n# xml and xmi\nfrom lxml import etree\n\n# arrays and dataframes\nimport pandas\nimport numpy\nfrom pandasql import sqldf\n\n# classifier\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.preprocessing import FunctionTransformer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.svm import SVC\n\n# plotting\nimport matplotlib \nmatplotlib.use('Agg') # server\ntry:\n get_ipython\n # jupyter notebook\n %matplotlib inline \nexcept:\n pass\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# import custom modules\nimport context # set search path to one level up\nfrom src import evaluation # method for evaluation of classifiers",
"_____no_output_____"
]
],
[
[
"## Define variables and parameters",
"_____no_output_____"
]
],
[
[
"# variables and parameters\n# filenames\ninput_directory = '../data/interim/cTAKES_output'\ninput_filename = '../data/raw/annotations.csv'\nresults_filename = '../reports/ctakes_relationgram_bow_tfidf_results.csv'\nplot_filename_1 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_1.png'\nplot_filename_2 = '../reports/ctakes_relationgram_bow_tfidf_boxplot_2.png'\n\n# number of splits and repeats for cross validation\nn_splits = 5\nn_repeats = 10\n# n_repeats = 1 # for testing\n\n# number of workers\nn_workers=multiprocessing.cpu_count()\n# n_workers = 1 # for testing\n\n# keep the conditions for which results are reported in the publication\nconditions = [ \n# 'cohort',\n 'Obesity',\n# 'Non.Adherence',\n# 'Developmental.Delay.Retardation',\n 'Advanced.Heart.Disease', \n 'Advanced.Lung.Disease', \n 'Schizophrenia.and.other.Psychiatric.Disorders',\n 'Alcohol.Abuse', \n 'Other.Substance.Abuse',\n 'Chronic.Pain.Fibromyalgia', \n 'Chronic.Neurological.Dystrophies', \n 'Advanced.Cancer',\n 'Depression',\n# 'Dementia',\n# 'Unsure',\n]",
"_____no_output_____"
]
],
[
[
"## Load and prepare data",
"_____no_output_____"
],
[
"### Load and parse xmi data",
"_____no_output_____"
]
],
[
[
"%load_ext ipycache",
"_____no_output_____"
],
[
"%%cache --read 2.6-JS-ctakes-relationgram-bow-tfidf_cache.pkl X \n\ndef ctakes_xmi_to_df(xmi_path):\n records = []\n \n tree = etree.parse(xmi_path)\n root = tree.getroot()\n\n mentions = []\n for mention in root.iterfind('*[@{http://www.omg.org/XMI}id][@typeID][@polarity]'):\n if 'ontologyConceptArr' in mention.attrib:\n for concept in mention.attrib['ontologyConceptArr'].split(\" \"):\n d = dict(mention.attrib)\n d['ontologyConceptArr'] = concept\n mentions.append(d)\n else:\n d = dict(mention.attrib)\n mentions.append(d)\n mentions_df = pandas.DataFrame(mentions)\n \n concepts = []\n for concept in root.iterfind('*[@{http://www.omg.org/XMI}id][@cui][@tui]'):\n concepts.append(dict(concept.attrib))\n concepts_df = pandas.DataFrame(concepts)\n \n events = []\n for event in root.iterfind('*[@{http://www.omg.org/XMI}id][@properties]'):\n events.append(dict(event.attrib))\n events_df = pandas.DataFrame(events)\n \n eventproperties = []\n for eventpropertie in root.iterfind('*[@{http://www.omg.org/XMI}id][@docTimeRel]'):\n eventproperties.append(dict(eventpropertie.attrib))\n eventproperties_df = pandas.DataFrame(eventproperties)\n \n merged_df = mentions_df.add_suffix('_1')\\\n .merge(right=concepts_df, left_on='ontologyConceptArr_1', right_on='{http://www.omg.org/XMI}id')\\\n .merge(right=events_df, left_on='event_1', right_on='{http://www.omg.org/XMI}id')\\\n .merge(right=eventproperties_df, left_on='properties', right_on='{http://www.omg.org/XMI}id')\n \n# # unique cui and tui per event IDEA: consider keeping all\n# merged_df = merged_df.drop_duplicates(subset=['event', 'cui', 'tui'])\n \n # merge polarity of the *mention and the cui\n merged_df = merged_df.dropna(subset=['cui']) # remove any NaN\n merged_df['polaritycui'] = merged_df['polarity_1'] + merged_df['cui']\n \n # extract relations\n textrelations = []\n for tr in root.iterfind('*[@{http://www.omg.org/XMI}id][@category][@arg1][@arg2]'):\n textrelations.append(dict(tr.attrib))\n textrelations_df = 
pandas.DataFrame(textrelations)\n \n relationarguments = []\n for relationargument in root.iterfind('*[@{http://www.omg.org/XMI}id][@argument][@role]'):\n relationarguments.append(dict(relationargument.attrib))\n relationarguments_df = pandas.DataFrame(relationarguments) \n \n # transforms\n tdf = textrelations_df\n tdf['xmiid'] = tdf['{http://www.omg.org/XMI}id']\n rdf = relationarguments_df\n rdf['xmiid'] = rdf['{http://www.omg.org/XMI}id']\n mdf = mentions_df\n mdf['xmiid'] = mdf['{http://www.omg.org/XMI}id']\n cdf = concepts_df\n cdf['xmiid'] = cdf['{http://www.omg.org/XMI}id']\n\n subquery_1 = \"\"\"\n -- table with:\n -- (from *Relation): category\n -- (from RelationArgument): argument (as argument1 and argument2) (Foreign Key *Mentions.xmiid)\n -- (from *Mention): begin - end (as begin1 - end1 - begin2 - end2)\n SELECT\n r.category,\n m1.begin as begin1,\n m1.end as end1,\n m2.begin as begin2,\n m2.end as end2\n FROM\n tdf r\n INNER JOIN\n rdf a1\n ON r.arg1 = a1.xmiid\n INNER JOIN\n rdf a2\n ON r.arg2 = a2.xmiid\n INNER JOIN\n mdf m1\n ON a1.argument = m1.xmiid\n INNER JOIN\n mdf m2\n ON a2.argument = m2.xmiid\n \"\"\"\n\n subquery_2 = \"\"\"\n -- table with: \n -- (from *Mentions): begin - end - polarity\n -- (from Concepts): cui\n SELECT\n m.begin,\n m.end,\n m.polarity,\n c.cui\n FROM\n mdf m\n INNER JOIN\n cdf c\n ON\n m.ontologyConceptArr = c.xmiid\n \"\"\"\n\n # run subqueries and save in new tables\n sq1 = sqldf(subquery_1, locals())\n sq2 = sqldf(subquery_2, locals())\n\n query = \"\"\"\n -- table with:\n -- (from Concept): cui1, cui2\n -- (from *Mention): polarity1, polarity2\n -- (from *Relation): category (what kind of relation)\n SELECT\n sq1.category,\n sq21.cui as cui1,\n sq22.cui as cui2,\n sq21.polarity as polarity1,\n sq22.polarity as polarity2\n FROM\n sq1 sq1\n INNER JOIN\n sq2 sq21\n ON sq21.begin >= sq1.begin1\n and sq21.end <= sq1.end1\n INNER JOIN\n sq2 sq22\n ON sq22.begin >= sq1.begin2\n and sq22.end <= sq1.end2\n \"\"\"\n\n res = 
sqldf(query, locals())\n\n # remove duplicates\n res = res.drop_duplicates(subset=['cui1', 'cui2', 'category', 'polarity1', 'polarity2'])\n\n res['string'] = res['polarity1'] + res['cui1'] + res['category'] + res['polarity2'] + res['cui2']\n\n # return as a string\n return ' '.join(list(res['string']) + list(merged_df['polaritycui']))\n\nX = []\n\n# key function for sorting the files according to the integer of the filename\ndef key_fn(x):\n i = x.split(\".\")[0]\n if i != \"\":\n return int(i)\n return None\n\nfor f in sorted(os.listdir(input_directory), key=key_fn): # for each file in the input directory\n if f.endswith(\".xmi\"):\n fpath = os.path.join(input_directory, f)\n # parse file and append as a dataframe to x_df\n try:\n X.append(ctakes_xmi_to_df(fpath))\n except Exception as e:\n print(e)\n X.append('NaN')\n\nX = numpy.array(X)",
"_____no_output_____"
]
],
[
[
"### Load annotations and classification data ",
"_____no_output_____"
]
],
[
[
"# read and parse csv file\ndata = pandas.read_csv(input_filename)\n# data = data[0:100] # for testing\n# X = X[0:100] # for testing\ndata.head()",
"_____no_output_____"
],
[
"# groups: the subject ids\n# used in order to ensure that \n# \"patients’ notes stay within the set, so that all discharge notes in the \n# test set are from patients not previously seen by the model.\" Gehrmann17.\ngroups_df = data.filter(items=['subject.id']) \ngroups = groups_df.values # .as_matrix() is deprecated; .values works across pandas versions\n# y: the annotated classes\ny_df = data.filter(items=conditions) # filter the conditions\ny = y_df.values",
"_____no_output_____"
],
[
"print(X.shape, groups.shape, y.shape)",
"_____no_output_____"
]
],
[
[
"## Define classifiers",
"_____no_output_____"
]
],
[
[
"# dictionary of classifiers (sklearn estimators)\nclassifiers = collections.OrderedDict()",
"_____no_output_____"
],
[
"def tokenizer(text):\n pattern = r'[\\s]+' # match any sequence of whitespace characters\n repl = r' ' # replace with space\n temp_text = re.sub(pattern, repl, text)\n return temp_text.lower().split(' ') # lower-case and split on space",
"_____no_output_____"
],
[
"prediction_models = [\n ('logistic_regression', LogisticRegression(random_state=0)),\n (\"random_forest\", RandomForestClassifier(random_state=0)),\n (\"naive_bayes\", MultinomialNB()),\n (\"svm_linear\", SVC(kernel=\"linear\", random_state=0, probability=True)),\n (\"gradient_boosting\", GradientBoostingClassifier(random_state=0)),\n]\n\n# BoW\nrepresentation_models = [('ctakes_relationgram_bow_tfidf', TfidfVectorizer(tokenizer=tokenizer))] # IDEA: Use Tfidf on normal BoW model as well?\n\n# cross product of representation models and prediction models\n# save to classifiers as pipelines of rep. model into pred. model\nfor rep_model, pred_model in itertools.product(representation_models, prediction_models):\n classifiers.update({ # add this classifier to classifiers dictionary\n '{rep_model}_{pred_model}'.format(rep_model=rep_model[0], pred_model=pred_model[0]): # classifier name\n Pipeline([rep_model, pred_model]), # concatenate representation model with prediction model in a pipeline\n })",
"_____no_output_____"
]
],
[
[
"## Run and evaluate",
"_____no_output_____"
]
],
[
[
"results = evaluation.run_evaluation(X=X, \n y=y, \n groups=groups,\n conditions=conditions,\n classifiers=classifiers,\n n_splits=n_splits, \n n_repeats=n_repeats, \n n_workers=n_workers)",
"_____no_output_____"
]
],
[
[
"## Save and plot results",
"_____no_output_____"
]
],
[
[
"# save results \nresults_df = pandas.DataFrame(results)\nresults_df.to_csv(results_filename)",
"_____no_output_____"
],
[
"results_df.head(100)",
"_____no_output_____"
],
[
"## load results for plotting\n# import pandas\n# results = pandas.read_csv('output/results.csv')",
"_____no_output_____"
],
[
"# plot and save\naxs = results_df.groupby('name').boxplot(column='AUROC', by='condition', rot=90, figsize=(10,10))\nfor ax in axs:\n ax.set_ylim(0,1)\n\nplt.savefig(plot_filename_1)",
"_____no_output_____"
],
[
"# plot and save\naxs = results_df.groupby('condition').boxplot(column='AUROC', by='name', rot=90, figsize=(10,10))\nfor ax in axs:\n ax.set_ylim(0,1)\n\nplt.savefig(plot_filename_2)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d08bb8cd512406fe2e277fd57fc52e299432cc90 | 124,282 | ipynb | Jupyter Notebook | res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb | Calvibert/machine-learning-exercises | 8184a8338505ea8075992f419385620be6522d14 | [
"MIT"
] | 4 | 2018-10-26T05:01:04.000Z | 2022-01-29T00:04:32.000Z | res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb | Calvibert/machine-learning-exercises | 8184a8338505ea8075992f419385620be6522d14 | [
"MIT"
] | 5 | 2021-05-12T03:00:56.000Z | 2022-02-10T04:52:10.000Z | res/Machine Learning Sections/Principal-Component-Analysis/Principal Component Analysis.ipynb | Calvibert/machine-learning-exercises | 8184a8338505ea8075992f419385620be6522d14 | [
"MIT"
] | 5 | 2018-02-18T10:54:48.000Z | 2020-06-01T05:18:28.000Z | 150.46247 | 62,870 | 0.848715 | [
[
[
"___\n\n<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n___",
"_____no_output_____"
],
[
"# Principal Component Analysis\n\nLet's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA).\n\n## PCA Review\n\nMake sure to watch the video lecture and theory presentation for a full overview of PCA! \nRemember that PCA is just a transformation of your data and attempts to find out what features explain the most variance in your data. For example:",
"_____no_output_____"
],
[
"<img src='PCA.png' />",
"_____no_output_____"
],
[
"## Libraries",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## The Data\n\nLet's work with the cancer data set again since it had so many features.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_breast_cancer",
"_____no_output_____"
],
[
"cancer = load_breast_cancer()",
"_____no_output_____"
],
[
"cancer.keys()",
"_____no_output_____"
],
[
"print(cancer['DESCR'])",
"Breast Cancer Wisconsin (Diagnostic) Database\n\nNotes\n-----\nData Set Characteristics:\n :Number of Instances: 569\n\n :Number of Attributes: 30 numeric, predictive attributes and the class\n\n :Attribute Information:\n - radius (mean of distances from center to points on the perimeter)\n - texture (standard deviation of gray-scale values)\n - perimeter\n - area\n - smoothness (local variation in radius lengths)\n - compactness (perimeter^2 / area - 1.0)\n - concavity (severity of concave portions of the contour)\n - concave points (number of concave portions of the contour)\n - symmetry \n - fractal dimension (\"coastline approximation\" - 1)\n \n The mean, standard error, and \"worst\" or largest (mean of the three\n largest values) of these features were computed for each image,\n resulting in 30 features. For instance, field 3 is Mean Radius, field\n 13 is Radius SE, field 23 is Worst Radius.\n \n - class:\n - WDBC-Malignant\n - WDBC-Benign\n\n :Summary Statistics:\n\n ===================================== ======= ========\n Min Max\n ===================================== ======= ========\n radius (mean): 6.981 28.11\n texture (mean): 9.71 39.28\n perimeter (mean): 43.79 188.5\n area (mean): 143.5 2501.0\n smoothness (mean): 0.053 0.163\n compactness (mean): 0.019 0.345\n concavity (mean): 0.0 0.427\n concave points (mean): 0.0 0.201\n symmetry (mean): 0.106 0.304\n fractal dimension (mean): 0.05 0.097\n radius (standard error): 0.112 2.873\n texture (standard error): 0.36 4.885\n perimeter (standard error): 0.757 21.98\n area (standard error): 6.802 542.2\n smoothness (standard error): 0.002 0.031\n compactness (standard error): 0.002 0.135\n concavity (standard error): 0.0 0.396\n concave points (standard error): 0.0 0.053\n symmetry (standard error): 0.008 0.079\n fractal dimension (standard error): 0.001 0.03\n radius (worst): 7.93 36.04\n texture (worst): 12.02 49.54\n perimeter (worst): 50.41 251.2\n area (worst): 185.2 4254.0\n smoothness (worst): 
0.071 0.223\n compactness (worst): 0.027 1.058\n concavity (worst): 0.0 1.252\n concave points (worst): 0.0 0.291\n symmetry (worst): 0.156 0.664\n fractal dimension (worst): 0.055 0.208\n ===================================== ======= ========\n\n :Missing Attribute Values: None\n\n :Class Distribution: 212 - Malignant, 357 - Benign\n\n :Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian\n\n :Donor: Nick Street\n\n :Date: November, 1995\n\nThis is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.\nhttps://goo.gl/U2Uwz2\n\nFeatures are computed from a digitized image of a fine needle\naspirate (FNA) of a breast mass. They describe\ncharacteristics of the cell nuclei present in the image.\nA few of the images can be found at\nhttp://www.cs.wisc.edu/~street/images/\n\nSeparating plane described above was obtained using\nMultisurface Method-Tree (MSM-T) [K. P. Bennett, \"Decision Tree\nConstruction Via Linear Programming.\" Proceedings of the 4th\nMidwest Artificial Intelligence and Cognitive Science Society,\npp. 97-101, 1992], a classification method which uses linear\nprogramming to construct a decision tree. Relevant features\nwere selected using an exhaustive search in the space of 1-4\nfeatures and 1-3 separating planes.\n\nThe actual linear program used to obtain the separating plane\nin the 3-dimensional space is that described in:\n[K. P. Bennett and O. L. Mangasarian: \"Robust Linear\nProgramming Discrimination of Two Linearly Inseparable Sets\",\nOptimization Methods and Software 1, 1992, 23-34].\n\nThis database is also available through the UW CS ftp server:\n\nftp ftp.cs.wisc.edu\ncd math-prog/cpo-dataset/machine-learn/WDBC/\n\nReferences\n----------\n - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction \n for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on \n Electronic Imaging: Science and Technology, volume 1905, pages 861-870, \n San Jose, CA, 1993. \n - O.L. Mangasarian, W.N. 
Street and W.H. Wolberg. Breast cancer diagnosis and \n prognosis via linear programming. Operations Research, 43(4), pages 570-577, \n July-August 1995.\n - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques\n to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) \n 163-171.\n\n"
],
[
"df = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])\n#(['DESCR', 'data', 'feature_names', 'target_names', 'target'])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"## PCA Visualization\n\nAs we've noticed before, it is difficult to visualize high-dimensional data. We can use PCA to find the first two principal components and visualize the data in this new, two-dimensional space with a single scatter plot. Before we do this, though, we'll need to scale our data so that each feature has unit variance.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"scaler = StandardScaler()\nscaler.fit(df)",
"_____no_output_____"
],
[
"scaled_data = scaler.transform(df)",
"_____no_output_____"
]
],
[
[
"PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform().\n\nWe can also specify how many components we want to keep when creating the PCA object.",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components=2)",
"_____no_output_____"
],
[
"pca.fit(scaled_data)",
"_____no_output_____"
]
],
[
[
"Now we can transform this data to its first 2 principal components.",
"_____no_output_____"
]
],
[
[
"x_pca = pca.transform(scaled_data)",
"_____no_output_____"
],
[
"scaled_data.shape",
"_____no_output_____"
],
[
"x_pca.shape",
"_____no_output_____"
]
],
[
[
"Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8,6))\nplt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma')\nplt.xlabel('First principal component')\nplt.ylabel('Second Principal Component')",
"_____no_output_____"
]
],
[
[
"Clearly by using these two components we can easily separate these two classes.\n\n## Interpreting the components \n\nUnfortunately, this great power of dimensionality reduction comes at the cost of interpretability: it is no longer easy to understand what these components represent.\n\nThe components correspond to combinations of the original features; the components themselves are stored as an attribute of the fitted PCA object:",
"_____no_output_____"
]
],
[
[
"pca.components_",
"_____no_output_____"
]
],
[
[
"In this NumPy array, each row represents a principal component, and each column relates back to the original features. We can visualize this relationship with a heatmap:",
"_____no_output_____"
]
],
[
[
"df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names'])",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nsns.heatmap(df_comp, cmap='plasma')",
"_____no_output_____"
]
],
[
[
"This heatmap and the color bar basically represent the correlation between the various feature and the principal component itself.\n\n## Conclusion\n\nHopefully this information is useful to you when dealing with high dimensional data!",
"_____no_output_____"
],
[
"# Great Job!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d08bce3aca67635e96577bcca62b2753796cd833 | 2,091 | ipynb | Jupyter Notebook | examples/notebooks/dolfin/demo_submesh.ipynb | Singlesnail/vedo | c61ad3aca5c926d4b41b8a468aefe8fc02f242ab | [
"CC0-1.0"
] | null | null | null | examples/notebooks/dolfin/demo_submesh.ipynb | Singlesnail/vedo | c61ad3aca5c926d4b41b8a468aefe8fc02f242ab | [
"CC0-1.0"
] | null | null | null | examples/notebooks/dolfin/demo_submesh.ipynb | Singlesnail/vedo | c61ad3aca5c926d4b41b8a468aefe8fc02f242ab | [
"CC0-1.0"
] | null | null | null | 26.1375 | 77 | 0.527499 | [
[
[
"\"\"\"\nHow to extract matching sub meshes from a common mesh.\n\"\"\"\nfrom dolfin import *\n\nclass Structure(SubDomain):\n    def inside(self, x, on_boundary):\n        return x[0] > 1.4 - DOLFIN_EPS and x[0] < 1.6 \\\n               + DOLFIN_EPS and x[1] < 0.6 + DOLFIN_EPS\n\nmesh = RectangleMesh(Point(0.0, 0.0), Point(3.0, 1.0), 60, 20)\n\n# Create sub domain markers and mark everything as 0\nsub_domains = MeshFunction(\"size_t\", mesh, mesh.topology().dim())\nsub_domains.set_all(0)\n\n# Mark structure domain as 1\nstructure = Structure()\nstructure.mark(sub_domains, 1)\n\n# Extract sub meshes\nfluid_mesh = SubMesh(mesh, sub_domains, 0)\nstructure_mesh = SubMesh(mesh, sub_domains, 1)\n\n# Move structure mesh\nfor x in structure_mesh.coordinates():\n    x[0] += 0.1*x[0]*x[1]\n\n# Move fluid mesh according to structure mesh\nALE.move(fluid_mesh, structure_mesh)\nfluid_mesh.smooth()\n\n#############################################\nfrom vedo.dolfin import *\n\n# embedWindow('itkwidgets') # backends are: itkwidgets, k3d or False\n\nplot(fluid_mesh)\nplot(structure_mesh, c='tomato', add=True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d08bda8842f067a7cfa1ed23b81ffa508917cf13 | 25,449 | ipynb | Jupyter Notebook | dd_1/Part 1/Section 06 - First-Class Functions/10 - The operator Module.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | dd_1/Part 1/Section 06 - First-Class Functions/10 - The operator Module.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | dd_1/Part 1/Section 06 - First-Class Functions/10 - The operator Module.ipynb | rebekka-halal/bg | 616a40286fe1d34db2916762c477676ed8067cdb | [
"Apache-2.0"
] | null | null | null | 18.388006 | 1,423 | 0.467052 | [
[
[
"### The **operator** Module",
"_____no_output_____"
]
],
[
[
"import operator",
"_____no_output_____"
],
[
"dir(operator)",
"_____no_output_____"
]
],
[
[
"#### Arithmetic Operators",
"_____no_output_____"
],
[
"A variety of arithmetic operators are implemented.",
"_____no_output_____"
]
],
[
[
"operator.add(1, 2)",
"_____no_output_____"
],
[
"operator.mul(2, 3)",
"_____no_output_____"
],
[
"operator.pow(2, 3)",
"_____no_output_____"
],
[
"operator.mod(13, 2)",
"_____no_output_____"
],
[
"operator.floordiv(13, 2)",
"_____no_output_____"
],
[
"operator.truediv(3, 2)",
"_____no_output_____"
]
],
[
[
"These would have been very handy in our previous section:",
"_____no_output_____"
]
],
[
[
"from functools import reduce",
"_____no_output_____"
],
[
"reduce(lambda x, y: x*y, [1, 2, 3, 4])",
"_____no_output_____"
]
],
[
[
"Instead of defining a lambda, we could simply use **operator.mul**:",
"_____no_output_____"
]
],
[
[
"reduce(operator.mul, [1, 2, 3, 4])",
"_____no_output_____"
]
],
[
[
"#### Comparison and Boolean Operators",
"_____no_output_____"
],
[
"Comparison and Boolean operators are also implemented as functions:",
"_____no_output_____"
]
],
[
[
"operator.lt(10, 100)",
"_____no_output_____"
],
[
"operator.le(10, 10)",
"_____no_output_____"
],
[
"operator.is_('abc', 'def')",
"_____no_output_____"
]
],
[
[
"We can even get the truthyness of an object:",
"_____no_output_____"
]
],
[
[
"operator.truth([1,2])",
"_____no_output_____"
],
[
"operator.truth([])",
"_____no_output_____"
],
[
"operator.and_(True, False)",
"_____no_output_____"
],
[
"operator.or_(True, False)",
"_____no_output_____"
]
],
[
[
"#### Element and Attribute Getters and Setters",
"_____no_output_____"
],
[
"We generally select an item by index from a sequence by using **[n]**:",
"_____no_output_____"
]
],
[
[
"my_list = [1, 2, 3, 4]\nmy_list[1]",
"_____no_output_____"
]
],
[
[
"We can do the same thing using:",
"_____no_output_____"
]
],
[
[
"operator.getitem(my_list, 1)",
"_____no_output_____"
]
],
[
[
"If the sequence is mutable, we can also set or remove items:",
"_____no_output_____"
]
],
[
[
"my_list = [1, 2, 3, 4]\nmy_list[1] = 100\ndel my_list[3]\nprint(my_list)",
"[1, 100, 3]\n"
],
[
"my_list = [1, 2, 3, 4]\noperator.setitem(my_list, 1, 100)\noperator.delitem(my_list, 3)\nprint(my_list)",
"[1, 100, 3]\n"
]
],
[
[
"We can also do the same thing using the **operator** module's **itemgetter** function.\n\nThe difference is that this returns a callable:",
"_____no_output_____"
]
],
[
[
"f = operator.itemgetter(2)",
"_____no_output_____"
]
],
[
[
"Now, **f(my_list)** will return **my_list[2]**",
"_____no_output_____"
]
],
[
[
"f(my_list)",
"_____no_output_____"
],
[
"x = 'python'\nf(x)",
"_____no_output_____"
]
],
[
[
"Furthermore, we can pass more than one index to **itemgetter**:",
"_____no_output_____"
]
],
[
[
"f = operator.itemgetter(2, 3)",
"_____no_output_____"
],
[
"my_list = [1, 2, 3, 4]\nf(my_list)",
"_____no_output_____"
],
[
"x = 'python'\nf(x)",
"_____no_output_____"
]
],
[
[
"Similarly, **operator.attrgetter** does the same thing, but with object attributes.",
"_____no_output_____"
]
],
[
[
"class MyClass:\n def __init__(self):\n self.a = 10\n self.b = 20\n self.c = 30\n \n def test(self):\n print('test method running...')",
"_____no_output_____"
],
[
"obj = MyClass()",
"_____no_output_____"
],
[
"obj.a, obj.b, obj.c",
"_____no_output_____"
],
[
"f = operator.attrgetter('a')",
"_____no_output_____"
],
[
"f(obj)",
"_____no_output_____"
],
[
"my_var = 'b'\noperator.attrgetter(my_var)(obj)",
"_____no_output_____"
],
[
"my_var = 'c'\noperator.attrgetter(my_var)(obj)",
"_____no_output_____"
],
[
"f = operator.attrgetter('a', 'b', 'c')",
"_____no_output_____"
],
[
"f(obj)",
"_____no_output_____"
]
],
[
[
"Of course, attributes can also be methods.\n\nIn this case, **attrgetter** will return the object's **test** method - a callable that can then be called using **()**:",
"_____no_output_____"
]
],
[
[
"f = operator.attrgetter('test')",
"_____no_output_____"
],
[
"obj_test_method = f(obj)",
"_____no_output_____"
],
[
"obj_test_method()",
"test method running...\n"
]
],
[
[
"Just like lambdas, we do not need to assign them to a variable name in order to use them:",
"_____no_output_____"
]
],
[
[
"operator.attrgetter('a', 'b')(obj)",
"_____no_output_____"
],
[
"operator.itemgetter(2, 3)('python')",
"_____no_output_____"
]
],
[
[
"Of course, we can achieve the same thing using functions or lambdas:",
"_____no_output_____"
]
],
[
[
"f = lambda x: (x.a, x.b, x.c)",
"_____no_output_____"
],
[
"f(obj)",
"_____no_output_____"
],
[
"f = lambda x: (x[2], x[3])",
"_____no_output_____"
],
[
"f([1, 2, 3, 4])",
"_____no_output_____"
],
[
"f('python')",
"_____no_output_____"
]
],
[
[
"##### Use Case Example: Sorting",
"_____no_output_____"
],
[
"Suppose we want to sort a list of complex numbers based on the real part of the numbers:",
"_____no_output_____"
]
],
[
[
"a = 2 + 5j\na.real",
"_____no_output_____"
],
[
"l = [10+1j, 8+2j, 5+3j]\nsorted(l, key=operator.attrgetter('real'))",
"_____no_output_____"
]
],
[
[
"Or if we want to sort a list of string based on the last character of the strings:",
"_____no_output_____"
]
],
[
[
"l = ['aaz', 'aad', 'aaa', 'aac']\nsorted(l, key=operator.itemgetter(-1))",
"_____no_output_____"
]
],
[
[
"Or maybe we want to sort a list of tuples based on the first item of each tuple:",
"_____no_output_____"
]
],
[
[
"l = [(2, 3, 4), (1, 2, 3), (4, ), (3, 4)]\nsorted(l, key=operator.itemgetter(0))",
"_____no_output_____"
]
],
[
[
"#### Slicing",
"_____no_output_____"
]
],
[
[
"l = [1, 2, 3, 4]",
"_____no_output_____"
],
[
"l[0:2]",
"_____no_output_____"
],
[
"l[0:2] = ['a', 'b', 'c']\nprint(l)",
"['a', 'b', 'c', 3, 4]\n"
],
[
"del l[3:5]\nprint(l)",
"['a', 'b', 'c']\n"
]
],
[
[
"We can do the same thing this way:",
"_____no_output_____"
]
],
[
[
"l = [1, 2, 3, 4]",
"_____no_output_____"
],
[
"operator.getitem(l, slice(0,2))",
"_____no_output_____"
],
[
"operator.setitem(l, slice(0,2), ['a', 'b', 'c'])\nprint(l)",
"['a', 'b', 'c', 3, 4]\n"
],
[
"operator.delitem(l, slice(3, 5))\nprint(l)",
"['a', 'b', 'c']\n"
]
],
[
[
"#### Calling another Callable",
"_____no_output_____"
]
],
[
[
"x = 'python'\nx.upper()",
"_____no_output_____"
],
[
"operator.methodcaller('upper')('python')",
"_____no_output_____"
]
],
[
[
"Of course, since **upper** is just an attribute of the string object **x**, we could also have used:",
"_____no_output_____"
]
],
[
[
"operator.attrgetter('upper')(x)()",
"_____no_output_____"
]
],
[
[
"If the callable takes in more than one parameter, they can be specified as additional arguments in **methodcaller**:",
"_____no_output_____"
]
],
[
[
"class MyClass:\n def __init__(self):\n self.a = 10\n self.b = 20\n \n def do_something(self, c):\n print(self.a, self.b, c)",
"_____no_output_____"
],
[
"obj = MyClass()",
"_____no_output_____"
],
[
"obj.do_something(100)",
"10 20 100\n"
],
[
"operator.methodcaller('do_something', 100)(obj)",
"10 20 100\n"
],
[
"class MyClass:\n def __init__(self):\n self.a = 10\n self.b = 20\n \n def do_something(self, *, c):\n print(self.a, self.b, c)",
"_____no_output_____"
],
[
"obj.do_something(c=100)",
"10 20 100\n"
],
[
"operator.methodcaller('do_something', c=100)(obj)",
"10 20 100\n"
]
],
[
[
"More information on the **operator** module can be found here:\n\nhttps://docs.python.org/3/library/operator.html",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d08bdd5464b12ba48b44d342071219d73c6e5b78 | 3,479 | ipynb | Jupyter Notebook | semester2/notebooks/1.2-data-types-solutions.ipynb | pedrohserrano/global-studies | 991513c694a6e6042492882b2e3df8ad0a93bb37 | [
"MIT"
] | null | null | null | semester2/notebooks/1.2-data-types-solutions.ipynb | pedrohserrano/global-studies | 991513c694a6e6042492882b2e3df8ad0a93bb37 | [
"MIT"
] | null | null | null | semester2/notebooks/1.2-data-types-solutions.ipynb | pedrohserrano/global-studies | 991513c694a6e6042492882b2e3df8ad0a93bb37 | [
"MIT"
] | null | null | null | 23.348993 | 317 | 0.559931 | [
[
[
"# Data types\n---",
"_____no_output_____"
],
[
"#### <i style=\"color:red\">**EXERCISES**</i>",
"_____no_output_____"
],
[
"_1. What type of value is 3.4? How can you find out?_",
"_____no_output_____"
],
[
"**Solution**\n\n_It is a floating-point number (often abbreviated “float”)._",
"_____no_output_____"
]
],
[
[
"print(type(3.4))",
"<class 'float'>\n"
]
],
[
[
"---\n_2. What type of value (integer, floating point number, or character string) would you use to represent each of the following?_ Try to come up with more than one good answer for each problem. For example, in # 1, when would counting days with a floating point variable make more sense than using an integer?_\n\n1. Number of days since the start of the year.\n2. Country code according to 2. Country code according to [The International Standard Organization](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes).\n4. Current population of a city.\n5. Average population of a city over time.\n\n",
"_____no_output_____"
],
[
"**Solution**\n\n1. Integer, since the number of days would lie between 1 and 365.\n2. Character string since ISO has character and numbers.\n3. Choose floating point to represent population as large aggregates (eg millions), or integer to represent population in units of individuals.\n4. Floating point number, since an average is likely to have a fractional part.\n---\n\n",
"_____no_output_____"
],
[
"_3. Can you concatenate strings? Can you explain how with an example in a cell code?_",
"_____no_output_____"
],
[
"**Solution**\n\noh yeahhh, as previously mentioned in point 4\n\n```python\ncountry_profile = 'AFGHANISTAN ' + 'Gross domestic product growth rate in 2019' + '+ 2.4%'\nprint(country_profile)\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d08be9c1a6295738381c6acfe34352699984aa97 | 679,543 | ipynb | Jupyter Notebook | Notebook/HereItIs.ipynb | soddencarpenter/dataviz | 289ac890b04820acf1c0fc516e0cb502570626e4 | [
"MIT"
] | null | null | null | Notebook/HereItIs.ipynb | soddencarpenter/dataviz | 289ac890b04820acf1c0fc516e0cb502570626e4 | [
"MIT"
] | null | null | null | Notebook/HereItIs.ipynb | soddencarpenter/dataviz | 289ac890b04820acf1c0fc516e0cb502570626e4 | [
"MIT"
] | null | null | null | 371.944718 | 68,452 | 0.92473 | [
[
[
"## This is the basic load and clean stuff",
"_____no_output_____"
]
],
[
[
"# %load ~/dataviz/ExplorePy/clean-divvy-explore.py\nimport pandas as pd\nimport numpy as np\nimport datetime as dt\nimport pandas.api.types as pt\nimport pytz as pytz\n\nfrom astral import LocationInfo\nfrom astral.sun import sun\nfrom astral.geocoder import add_locations, database, lookup\n\nfrom dateutil import parser as du_pr\n\nfrom pathlib import Path\n\ndb = database()\n\nTZ=pytz.timezone('US/Central')\nchi_town = lookup('Chicago', db)\nprint(chi_town)\n\n\nrev = \"5\"\n\ninput_dir = '/mnt/d/DivvyDatasets'\ninput_divvy_basename = \"divvy_trip_history_201909-202108\"\ninput_divvy_base = input_dir + \"/\" + input_divvy_basename\ninput_divvy_raw = input_divvy_base + \".csv\"\ninput_divvy_rev = input_dir + \"/rev\" + rev + \"-\" + input_divvy_basename + \".csv\"\ninput_chitemp = input_dir + \"/\" + \"ChicagoTemperature.csv\"\n\n\n#\n# returns true if the rev file is already present\n#\ndef rev_file_exists():\n path = Path(input_divvy_rev)\n return path.is_file()\n\n\n\n\ndef update_dow_to_category(df):\n #\n # we need to get the dow properly set\n #\n cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n\n cats_type = pt.CategoricalDtype(categories=cats, ordered=True)\n df['day_of_week'] = df['day_of_week'].astype(cats_type)\n\n return df\n\n\n\n\ndef update_start_cat_to_category(df):\n cats = ['AM_EARLY', 'AM_RUSH', 'AM_MID',\n 'LUNCH',\n 'PM_EARLY', 'PM_RUSH', 'PM_EVENING', 'PM_LATE']\n\n cats_type = pt.CategoricalDtype(categories=cats, ordered=True)\n df['start_cat'] = df['start_cat'].astype(cats_type)\n\n return df\n\n\n\n#\n# loads and returns the rev file as a data frame. 
It handles\n# the need to specify some column types\n#\n# filename : the filename to load\n#\ndef load_divvy_dataframe(filename):\n print(\"Loading \" + filename)\n # so need to set the type on a couple of columns\n col_names = pd.read_csv(filename, nrows=0).columns\n\n types_dict = { 'ride_id': str, \n 'start_station_id': str,\n 'end_station_id': str,\n 'avg_temperature_celsius': float,\n 'avg_temperature_fahrenheit': float,\n 'duration': float,\n 'start_lat': float,\n 'start_lng': float,\n 'end_lat': float,\n 'end_lng': float,\n 'avg_rain_intensity_mm/hour': float,\n 'avg_wind_speed': float,\n 'max_wind_speed': float,\n 'total_solar_radiation': int,\n 'is_dark': bool\n }\n types_dict.update({col: str for col in col_names if col not in types_dict})\n\n date_cols=['started_at','ended_at','date']\n\n df = pd.read_csv(filename, dtype=types_dict, parse_dates=date_cols)\n\n if 'start_time' in df:\n print(\"Converting start_time\")\n df['start_time'] = df['start_time'].apply(lambda x: dt.datetime.strptime(x, \"%H:%M:%S\"))\n\n return df\n\n\n\ndef yrmo(year, month):\n return \"{}-{}\".format(year, month)\n\n\n\n\ndef calc_duration_in_minutes(started_at, ended_at):\n diff = ended_at - started_at\n return diff.total_seconds() / 60\n\n\n\n\n#\n# load the chicago temperature into a data frame\n#\ndef load_temperature_dataframe():\n print(\"Loading \" + input_chitemp)\n df = pd.read_csv(input_chitemp)\n\n print(\"Converting date\")\n df['date'] = df['date'].apply(lambda x: dt.datetime.strptime(x, \"%Y-%m-%d\"))\n\n return df\n\n\n\n\ndef add_start_time(started_at):\n return started_at.time()\n\n\n\n\ndef add_start_cat(started_at):\n start_time = started_at.time()\n time_new_day = dt.time(00,00)\n time_am_rush_start = dt.time(7,00)\n time_am_rush_end = dt.time(9,00)\n\n time_lunch_start = dt.time(11,30)\n time_lunch_end = dt.time(13,00)\n\n time_pm_rush_start = dt.time(15,30)\n time_pm_rush_end = dt.time(19,00)\n\n time_evening_end = dt.time(23,00)\n\n\n if start_time >= 
time_new_day and start_time < time_am_rush_start:\n return 'AM_EARLY'\n\n if start_time >= time_am_rush_start and start_time < time_am_rush_end:\n return 'AM_RUSH'\n\n if start_time >= time_am_rush_end and start_time < time_lunch_start:\n return 'AM_MID'\n\n if start_time >= time_lunch_start and start_time < time_lunch_end:\n return 'LUNCH'\n\n # slight change on Chi rush from 15:00 to 15:30\n if start_time >= time_lunch_end and start_time < time_pm_rush_start:\n return 'PM_EARLY'\n\n if start_time >= time_pm_rush_start and start_time < time_pm_rush_end:\n return 'PM_RUSH'\n\n if start_time >= time_pm_rush_end and start_time < time_evening_end:\n return 'PM_EVENING'\n\n return 'PM_LATE'\n\n\n\n\ndef add_is_dark(started_at):\n st = started_at.replace(tzinfo=TZ)\n chk = sun(chi_town.observer, date=st, tzinfo=chi_town.timezone)\n return st >= chk['dusk'] or st <= chk['dawn']\n\n\n\n#\n# handles loading and processing the divvy raw data by\n# adding columns, removing bad data, etc.\n#\ndef process_raw_divvy(filename):\n df_divvy = load_divvy_dataframe(filename)\n\n print(\"Creating additional columns\")\n data = pd.Series(df_divvy.apply(lambda x: [\n add_start_time(x['started_at']),\n add_is_dark(x['started_at']),\n yrmo(x['year'], x['month']),\n calc_duration_in_minutes(x['started_at'], x['ended_at']),\n add_start_cat(x['started_at'])\n ], axis = 1))\n\n new_df = pd.DataFrame(data.tolist(),\n data.index, \n columns=['start_time','is_dark','yrmo','duration','start_cat'])\n\n df_divvy = df_divvy.merge(new_df, left_index=True, right_index=True)\n\n\n # #\n # # add a simplistic time element\n # #\n # print(\"Adding start_time\")\n # df_divvy['start_time'] = df_divvy.apply(lambda row: add_start_time(row['started_at']), axis = 1)\n\n # print(\"Adding start_cat\")\n # df_divvy['start_cat'] = df_divvy.apply(lambda row: add_start_cat(row['start_time']), axis = 1)\n\n # #\n # # is it dark\n # #\n # print(\"Adding is_dark\")\n # df_divvy['is_dark'] = df_divvy.apply(lambda row: 
add_is_dark(row['started_at']), axis = 1)\n\n\n # #\n # # add a year-month column to the divvy dataframe\n # # this uses a function with the row; it is not\n # # the absolute fastest way\n # #\n # print(\"Adding year-month as yrmo\")\n # df_divvy['yrmo'] = df_divvy.apply(lambda row: yrmo(row['year'], row['month']),\n # axis = 1)\n\n # #\n # # we also want a duration to be calculated\n # #\n # print(\"Adding duration\")\n # df_divvy['duration'] = df_divvy.apply(lambda row: calc_duration_in_minutes(row['started_at'],\n # row['ended_at']),\n # axis = 1)\n\n #\n # add the temperature\n #\n df_chitemp = load_temperature_dataframe()\n\n print(\"Merging in temperature\")\n df_divvy = pd.merge(df_divvy, df_chitemp, on=\"date\")\n print(df_divvy.shape)\n print(df_divvy.head())\n # print(df_divvy.loc[df_divvy['date'] == '2020-02-21']) # 2020-02-21 was missing in org. temp\n\n # print(df_divvy[['ride_id','member_casual','date','duration','yrmo','avg_temperature_fahrenheit','start_time','start_cat']])\n\n #\n # clean the dataframe to remove invalid durations\n # which are really only (about) < 1 minute, or > 12 hours\n #\n print(\"Removing invalid durations\")\n df_divvy = df_divvy[(df_divvy.duration >= 1.2) & (df_divvy.duration < 60 * 12)]\n # print(df_divvy.shape)\n\n df_divvy = update_dow_to_category(df_divvy)\n df_divvy = update_start_cat_to_category(df_divvy)\n\n #\n # drop some bogus columns\n #\n print(\"Dropping columns\")\n df_divvy.drop(df_divvy.columns[[0,-1]], axis=1, inplace=True)\n\n return df_divvy\n\n\n\n\n#\n# writes the dataframe to the specified filename\n#\ndef save_dataframe(df, filename):\n print(\"Saving dataframe to \" + filename)\n df_out = df.copy()\n df_out['date'] = df_out['date'].map(lambda x: dt.datetime.strftime(x, '%Y-%m-%d'))\n df_out.to_csv(filename, index=False, date_format=\"%Y-%m-%d %H:%M:%S\")\n\n\n#\n# load the divvy csv into a data frame\n#\n\n\nif rev_file_exists():\n df_divvy = load_divvy_dataframe(input_divvy_rev)\n df_divvy = 
update_dow_to_category(df_divvy)\n df_divvy = update_start_cat_to_category(df_divvy)\nelse:\n df_divvy = process_raw_divvy(input_divvy_raw)\n save_dataframe(df_divvy, input_divvy_rev)\n\nprint(df_divvy)\ndf_divvy.info()\n\n\n\n\n\n\n\n\n\n#\n# btw, can just pass the row and let the function figure it out\n#\n#def procone(row):\n# print(row['date'])\n# return 0\n#df_divvy.apply(lambda row: procone(row), axis = 1)\n\n\n\n\n\n",
"LocationInfo(name='Chicago', region='USA', timezone='US/Central', latitude=41.833333333333336, longitude=-87.68333333333334)\nLoading /mnt/d/DivvyDatasets/rev5-divvy_trip_history_201909-202108.csv\nConverting start_time\n ID ...1 ride_id rideable_type started_at \\\n0 1147482 8267501 24710636 docked_bike 2019-09-01 00:00:15 \n1 1147483 8267502 24710637 docked_bike 2019-09-01 00:00:48 \n2 1147484 8267503 24710638 docked_bike 2019-09-01 00:01:13 \n3 1147485 8267504 24710639 docked_bike 2019-09-01 00:01:34 \n4 1147486 8267505 24710640 docked_bike 2019-09-01 00:03:29 \n... ... ... ... ... ... \n8190302 9464757 6162300 98EAC61BBAAF73C9 classic_bike 2021-08-31 07:46:16 \n8190303 9464758 6162336 B060D2DF6AC0D65B classic_bike 2021-08-31 17:57:04 \n8190304 9464759 6162481 D22E7B3E5C0D162E classic_bike 2021-08-31 17:17:09 \n8190305 9464760 6162496 4C86FE37842CD185 classic_bike 2021-08-31 17:22:15 \n8190306 9464761 6162497 6BEE9ACBA8E9BD51 classic_bike 2021-08-31 13:20:57 \n\n ended_at start_station_name start_station_id \\\n0 2019-09-01 00:05:00 Southport Ave & Waveland Ave 227 \n1 2019-09-01 00:06:46 Wells St & Concord Ln 289 \n2 2019-09-01 00:07:53 Broadway & Waveland Ave 304 \n3 2019-09-01 00:11:04 Wells St & Concord Ln 289 \n4 2019-09-01 00:21:53 Wells St & Concord Ln 289 \n... ... ... ... \n8190302 2021-08-31 07:53:51 Wells St & Huron St TA1306000012 \n8190303 2021-08-31 18:08:36 Wells St & Huron St TA1306000012 \n8190304 2021-08-31 17:35:18 Wells St & Evergreen Ave TA1308000049 \n8190305 2021-08-31 17:30:12 Lakeview Ave & Fullerton Pkwy TA1309000019 \n8190306 2021-08-31 13:26:28 Lakeview Ave & Fullerton Pkwy TA1309000019 \n\n end_station_name end_station_id ... yrmo \\\n0 Ashland Ave & Belle Plaine Ave 246 ... 2019-09 \n1 Sedgwick St & Webster Ave 143 ... 2019-09 \n2 Broadway & Belmont Ave 296 ... 2019-09 \n3 Clark St & Drummond Pl 220 ... 2019-09 \n4 Western Ave & Walton St 374 ... 2019-09 \n... ... ... ... ... \n8190302 Franklin St & Adams St (Temp) TA1309000008 ... 
2021-08 \n8190303 Clark St & Lincoln Ave 13179 ... 2021-08 \n8190304 Lincoln Ave & Diversey Pkwy TA1307000064 ... 2021-08 \n8190305 Clark St & Lincoln Ave 13179 ... 2021-08 \n8190306 Clark St & Lincoln Ave 13179 ... 2021-08 \n\n duration start_cat avg_temperature_celsius \\\n0 4.750000 AM_EARLY 19.1 \n1 5.966667 AM_EARLY 19.1 \n2 6.666667 AM_EARLY 19.1 \n3 9.500000 AM_EARLY 19.1 \n4 18.400000 AM_EARLY 19.1 \n... ... ... ... \n8190302 7.583333 AM_RUSH 23.7 \n8190303 11.533333 PM_RUSH 23.7 \n8190304 18.150000 PM_RUSH 23.7 \n8190305 7.950000 PM_RUSH 23.7 \n8190306 5.516667 PM_EARLY 23.7 \n\n avg_temperature_fahrenheit avg_humidity avg_rain_intensity_mm/hour \\\n0 66.3 82% 1.0 \n1 66.3 82% 1.0 \n2 66.3 82% 1.0 \n3 66.3 82% 1.0 \n4 66.3 82% 1.0 \n... ... ... ... \n8190302 74.6 65% 0.0 \n8190303 74.6 65% 0.0 \n8190304 74.6 65% 0.0 \n8190305 74.6 65% 0.0 \n8190306 74.6 65% 0.0 \n\n avg_wind_speed max_wind_speed total_solar_radiation \n0 1.4 8.9 4351 \n1 1.4 8.9 4351 \n2 1.4 8.9 4351 \n3 1.4 8.9 4351 \n4 1.4 8.9 4351 \n... ... ... ... 
\n8190302 2.7 9.5 13771 \n8190303 2.7 9.5 13771 \n8190304 2.7 9.5 13771 \n8190305 2.7 9.5 13771 \n8190306 2.7 9.5 13771 \n\n[8190307 rows x 32 columns]\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8190307 entries, 0 to 8190306\nData columns (total 32 columns):\n # Column Dtype \n--- ------ ----- \n 0 ID object \n 1 ...1 object \n 2 ride_id object \n 3 rideable_type object \n 4 started_at datetime64[ns]\n 5 ended_at datetime64[ns]\n 6 start_station_name object \n 7 start_station_id object \n 8 end_station_name object \n 9 end_station_id object \n 10 start_lat float64 \n 11 start_lng float64 \n 12 end_lat float64 \n 13 end_lng float64 \n 14 member_casual object \n 15 date datetime64[ns]\n 16 month object \n 17 day object \n 18 year object \n 19 day_of_week category \n 20 start_time datetime64[ns]\n 21 is_dark bool \n 22 yrmo object \n 23 duration float64 \n 24 start_cat category \n 25 avg_temperature_celsius float64 \n 26 avg_temperature_fahrenheit float64 \n 27 avg_humidity object \n 28 avg_rain_intensity_mm/hour float64 \n 29 avg_wind_speed float64 \n 30 max_wind_speed float64 \n 31 total_solar_radiation int64 \ndtypes: bool(1), category(2), datetime64[ns](4), float64(10), int64(1), object(14)\nmemory usage: 1.8+ GB\n"
]
],
[
[
"## Look at the average duration by rider type & day of week\n### average duration by day of week for rider types",
"_____no_output_____"
]
],
[
[
"type(df_divvy['duration'])",
"_____no_output_____"
],
[
"df_divvy.info()\ndf_divvy.shape\n\ndf_rider_by_dow = df_divvy.groupby(['member_casual','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2)\ndf_rider_by_dow\n\ndf_rider_by_dow.sort_values(by=['member_casual','day_of_week'])",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8190307 entries, 0 to 8190306\nData columns (total 32 columns):\n # Column Dtype \n--- ------ ----- \n 0 ID object \n 1 ...1 object \n 2 ride_id object \n 3 rideable_type object \n 4 started_at datetime64[ns]\n 5 ended_at datetime64[ns]\n 6 start_station_name object \n 7 start_station_id object \n 8 end_station_name object \n 9 end_station_id object \n 10 start_lat float64 \n 11 start_lng float64 \n 12 end_lat float64 \n 13 end_lng float64 \n 14 member_casual object \n 15 date datetime64[ns]\n 16 month object \n 17 day object \n 18 year object \n 19 day_of_week category \n 20 start_time datetime64[ns]\n 21 is_dark bool \n 22 yrmo object \n 23 duration float64 \n 24 start_cat category \n 25 avg_temperature_celsius float64 \n 26 avg_temperature_fahrenheit float64 \n 27 avg_humidity object \n 28 avg_rain_intensity_mm/hour float64 \n 29 avg_wind_speed float64 \n 30 max_wind_speed float64 \n 31 total_solar_radiation int64 \ndtypes: bool(1), category(2), datetime64[ns](4), float64(10), int64(1), object(14)\nmemory usage: 1.8+ GB\n"
]
],
[
[
"### Now we want to plot",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
]
],
[
[
"#### bar plot of Duration by Rider Type and Day of Week",
"_____no_output_____"
]
],
[
[
"df_rider_by_dow.unstack('member_casual').plot(kind='bar')",
"_____no_output_____"
],
[
"df_rider_by_dow.reset_index(inplace=True)\nsns.set(rc={\"figure.figsize\":(16,8)})\nsns.barplot(data=df_rider_by_dow, x=\"day_of_week\", y=\"mean_time\", hue=\"member_casual\")",
"_____no_output_____"
]
],
[
[
"## Look at the number of riders by type and day of week\n### grouping",
"_____no_output_____"
]
],
[
[
"df_rider_by_dow = df_divvy.groupby(['member_casual','day_of_week']).agg(num_rides = ('ID', 'count'))\ndf_rider_by_dow\n\n#df_rider_by_dow['day_of_week'] = df_rider_by_dow['day_of_week'].astype(cats_type)\ndf_rider_by_dow.sort_values(by=['member_casual','day_of_week'])",
"_____no_output_____"
]
],
[
[
"### plot of Number of Rides by Rider Type and Day of Week",
"_____no_output_____"
]
],
[
[
"df_rider_by_dow.unstack('member_casual').plot(kind='bar')",
"_____no_output_____"
],
[
"df_rider_by_dow.reset_index(inplace=True)\nsns.set(rc={\"figure.figsize\":(16,8)})\nsns.barplot(data=df_rider_by_dow, x=\"day_of_week\", y=\"num_rides\", hue=\"member_casual\")",
"_____no_output_____"
],
[
"df_member_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'member'].groupby(['year','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2)\ndf_casual_by_yr_dow = df_divvy[df_divvy['member_casual'] == 'casual'].groupby(['year','day_of_week']).agg(mean_time = ('duration', 'mean')).round(2)",
"_____no_output_____"
],
[
"df_member_by_yr_dow.unstack('year').plot(kind='bar', title='Member Rider mean time by year and day of week')",
"_____no_output_____"
],
[
"df_casual_by_yr_dow.unstack('year').plot(kind='bar', title='Casual Rider mean time by year and day of week')",
"_____no_output_____"
],
[
"df_rider_by_yrmo = df_divvy.groupby(['member_casual','yrmo']).agg(mean_time = ('duration', 'mean')).round(2)",
"_____no_output_____"
],
[
"df_rider_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider mean time by yrmo')",
"_____no_output_____"
],
[
"df_rider_count_by_yrmo = df_divvy.groupby(['member_casual','yrmo']).agg(count = ('ID', 'count'))\ndf_rider_count_by_yrmo.unstack('member_casual').plot(kind='bar', title='Rider count by yrmo')",
"_____no_output_____"
],
[
"df_rider_count_by_yrmo.unstack('member_casual').plot(kind='line', title='Rider count by yrmo')",
"_____no_output_____"
]
],
[
[
"## Let's look at starting in the dark by rider",
"_____no_output_____"
]
],
[
[
"df_rider_count_by_is_dark = df_divvy.groupby(['member_casual','is_dark']).agg(count = ('ID', 'count'))\ndf_rider_count_by_is_dark.unstack('member_casual').plot(kind='bar', title='Rider count by starting in the dark')",
"_____no_output_____"
],
[
"df_rider_by_time = df_divvy.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count'))\ndf_rider_by_time.unstack('start_cat').plot(kind='bar')",
"_____no_output_____"
],
[
"weekdays = ['Monday','Tuesday','Wednesday','Thursday','Friday']\nweekends = ['Saturday','Sunday']\nweekday_riders = df_divvy[df_divvy.day_of_week.isin(weekdays)]\nweekend_riders = df_divvy[df_divvy.day_of_week.isin(weekends)]",
"_____no_output_____"
],
[
"weekday_riders.shape\nweekend_riders.shape",
"_____no_output_____"
],
[
"df_rider_by_time_weekday = weekday_riders.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count'))\n\ndf_rider_by_time_weekday.unstack('start_cat').plot(kind='bar', title=\"Weekday times\")",
"_____no_output_____"
],
[
"df_rider_by_time_weekday.to_csv(date_format=\"%Y-%m-%d %H:%M:%S\")",
"_____no_output_____"
],
[
"df_rider_by_time_weekend = weekend_riders.groupby(['member_casual','start_cat']).agg(count = ('ID', 'count'))\ndf_rider_by_time_weekend.unstack('start_cat').plot(kind='bar', title=\"Weekend times\")",
"_____no_output_____"
],
[
"df_rider_by_time_weekend.to_csv()",
"_____no_output_____"
]
],
[
[
"## Starting stations -- member",
"_____no_output_____"
]
],
[
[
"df_starting_member = df_divvy[df_divvy['member_casual']=='member'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_member = df_starting_member.sort_values(by='count', ascending=False)\ndf_starting_member_top = df_starting_member.iloc[0:19]\ndf_starting_member_top.plot(kind='bar', title=\"Starting Stations - Member\")\n",
"_____no_output_____"
],
[
"\ndf_starting_member_weekday = weekday_riders[weekday_riders.member_casual=='member'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_member_weekday = df_starting_member_weekday.sort_values(by='count', ascending=False)\ndf_starting_member_weekday_top = df_starting_member_weekday.iloc[0:19]\ndf_starting_member_weekday_top.plot(kind='bar', title=\"Starting Stations Weekday - Member\")",
"_____no_output_____"
],
[
"from io import StringIO",
"_____no_output_____"
],
[
"output = StringIO()\ndf_starting_member_weekday_top.to_csv(output)\nprint(output.getvalue())",
"start_station_name,count\nCanal St & Adams St,35273\nClinton St & Madison St,34717\nKingsbury St & Kinzie St,33102\nClinton St & Washington Blvd,32659\nClark St & Elm St,31045\nSt. Clair St & Erie St,29506\nColumbus Dr & Randolph St,27767\nDearborn St & Erie St,26663\nWells St & Huron St,26391\nWells St & Concord Ln,25320\nDaley Center Plaza,23766\nFranklin St & Monroe St,23604\nDesplaines St & Kinzie St,23542\nWells St & Elm St,22189\nBroadway & Barry Ave,21582\nClinton St & Lake St,21176\nWabash Ave & Grand Ave,21165\nLarrabee St & Kingsbury St,21155\nKingsbury St & Erie St,20737\n\n"
],
[
"df_starting_member_weekend = weekend_riders[weekend_riders.member_casual=='member'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_member_weekend = df_starting_member_weekend.sort_values(by='count', ascending=False)\ndf_starting_member_weekend_top = df_starting_member_weekend.iloc[0:19]\ndf_starting_member_weekend_top.plot(kind='bar', title=\"Starting Stations Weekend - Member\")",
"_____no_output_____"
],
[
"output = StringIO()\ndf_starting_member_weekend_top.to_csv(output)\nprint(output.getvalue())",
"start_station_name,count\nClark St & Elm St,11804\nTheater on the Lake,11412\nWells St & Concord Ln,11294\nBroadway & Barry Ave,9624\nClark St & Lincoln Ave,9435\nClark St & Armitage Ave,9319\nLake Shore Dr & North Blvd,9245\nWells St & Elm St,8896\nStreeter Dr & Grand Ave,8820\nDearborn St & Erie St,8260\nLarrabee St & Webster Ave,8083\nDesplaines St & Kinzie St,8021\nKingsbury St & Kinzie St,7777\nWabash Ave & Grand Ave,7440\nWilton Ave & Belmont Ave,7427\nWells St & Huron St,7309\nClark St & Wrightwood Ave,7251\nWells St & Evergreen Ave,7215\nBroadway & Cornelia Ave,7171\n\n"
]
],
[
[
"## Starting Stations - casual",
"_____no_output_____"
]
],
[
[
"df_starting_casual = df_divvy[df_divvy['member_casual']=='casual'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_casual = df_starting_casual.sort_values(by='count', ascending=False)\ndf_starting_casual_top = df_starting_casual.iloc[0:19]\ndf_starting_casual_top.head()\n",
"_____no_output_____"
],
[
"df_starting_casual_weekday = weekday_riders[weekday_riders.member_casual=='casual'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_casual_weekday = df_starting_casual_weekday.sort_values(by='count', ascending=False)\ndf_starting_casual_weekday_top = df_starting_casual_weekday.iloc[0:19]\n\noutput = StringIO()\ndf_starting_casual_weekday_top.to_csv(output)\nprint(output.getvalue())\n\n",
"start_station_name,count\nStreeter Dr & Grand Ave,44736\nMillennium Park,24934\nLake Shore Dr & Monroe St,24582\nMichigan Ave & Oak St,20782\nShedd Aquarium,17311\nTheater on the Lake,16601\nLake Shore Dr & North Blvd,15762\nIndiana Ave & Roosevelt Rd,15170\nClark St & Elm St,13864\nWells St & Concord Ln,13838\nMichigan Ave & Lake St,13401\nWabash Ave & Grand Ave,12614\nMichigan Ave & Washington St,12559\nWells St & Elm St,12452\nClark St & Lincoln Ave,12262\nClark St & Armitage Ave,12175\nMichigan Ave & 8th St,12008\nDusable Harbor,11541\nBuckingham Fountain,11505\n\n"
],
[
"df_starting_casual_weekday_top.shape",
"_____no_output_____"
],
[
"df_starting_casual_weekend = weekend_riders[weekend_riders.member_casual=='casual'].groupby(['start_station_name']).agg(count=('ID','count'))\ndf_starting_casual_weekend = df_starting_casual_weekend.sort_values(by='count', ascending=False)\ndf_starting_casual_weekend_top = df_starting_casual_weekend.iloc[0:19]\n\noutput = StringIO()\ndf_starting_casual_weekend_top.to_csv(output)\nprint(output.getvalue())",
"start_station_name,count\nStreeter Dr & Grand Ave,42683\nLake Shore Dr & Monroe St,24422\nMillennium Park,23273\nMichigan Ave & Oak St,19327\nTheater on the Lake,17348\nLake Shore Dr & North Blvd,14798\nShedd Aquarium,14373\nIndiana Ave & Roosevelt Rd,12122\nClark St & Lincoln Ave,11290\nDusable Harbor,10879\nWells St & Concord Ln,10573\nClark St & Armitage Ave,10339\nWabash Ave & Grand Ave,10207\nMichigan Ave & Washington St,10082\nMichigan Ave & Lake St,9802\nClark St & Elm St,9795\nBuckingham Fountain,9458\nMichigan Ave & 8th St,9345\nFairbanks Ct & Grand Ave,8833\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d08bf1fbbc547e70f28fe57d61fd97d98e53e7df | 15,395 | ipynb | Jupyter Notebook | v7.0/mania_Colab.ipynb | jsstwright/osumapper | 5a2df1ef4b1817e3d00450751db89ad8d5aec4e5 | [
"Apache-2.0"
] | 296 | 2018-08-18T08:05:20.000Z | 2022-03-30T16:56:11.000Z | v7.0/mania_Colab.ipynb | jsstwright/osumapper | 5a2df1ef4b1817e3d00450751db89ad8d5aec4e5 | [
"Apache-2.0"
] | 27 | 2018-08-19T21:03:33.000Z | 2022-02-10T00:10:40.000Z | v7.0/mania_Colab.ipynb | jsstwright/osumapper | 5a2df1ef4b1817e3d00450751db89ad8d5aec4e5 | [
"Apache-2.0"
] | 44 | 2018-08-19T18:41:07.000Z | 2022-03-26T06:50:16.000Z | 49.822006 | 6,930 | 0.765898 | [
[
[
"## osumapper: create osu! map using TensorFlow and Colab\n\n### -- For osu!mania game mode --\n\nFor mappers who don't know how this colaboratory thing works:\n- Press Ctrl+Enter in code blocks to run them one by one\n- It will ask you to upload the .osu file and audio.mp3 after the third block of code\n- The .osu file needs to have correct timing (you can use [statementreply](https://osu.ppy.sh/users/126198)'s TimingAnlyz tool)\n- After uploading them, wait for a few minutes until the download pops up\n\nGithub: https://github.com/kotritrona/osumapper",
"_____no_output_____"
],
[
"### Step 1: Installation\n\nFirst of all, check the Notebook Settings under Edit tab.<br>\nActivate GPU to make the training faster.\n\nThen, clone the git repository and install dependencies.",
"_____no_output_____"
]
],
[
[
"%cd /content/\n!git clone https://github.com/kotritrona/osumapper.git\n%cd osumapper/v7.0\n!apt install -y ffmpeg\n!apt install -y nodejs\n!cp requirements_colab.txt requirements.txt\n!cp package_colab.json package.json\n!pip install -r requirements.txt\n!npm install",
"_____no_output_____"
]
],
[
[
"### Step 2: Choose a pre-trained model\nSet the select_model variable to one of:\n\n- \"default\": default model (choose only after training it)\n- \"lowkey\": model trained with 4-key and 5-key maps (☆2.5-5.5)\n- \"highkey\": model trained with 6-key to 9-key maps (☆2.5-5.5)",
"_____no_output_____"
]
],
[
[
"from mania_setup_colab import *\n\nselect_model = \"highkey\"\n\nmodel_params = load_pretrained_model(select_model);",
"_____no_output_____"
]
],
[
[
"### Step 3: Upload map and music file<br>\nMap file = .osu file with correct timing (**Important:** Set to mania mode and the wished key count!)<br>\nMusic file = the mp3 file in the osu folder\n",
"_____no_output_____"
]
],
[
[
"from google.colab import files\nprint(\"Please upload the map file:\")\nmapfile_upload = files.upload()\nfor fn in mapfile_upload.keys():\n uploaded_osu_name = fn\n print('Uploaded map file: \"{name}\" {length} bytes'.format(name=fn, length=len(mapfile_upload[fn])))\nprint(\"Please upload the music file:\")\nmusic_upload = files.upload()\nfor fn in music_upload.keys():\n print('Uploaded music file: \"{name}\" {length} bytes'.format(name=fn, length=len(music_upload[fn])))\n",
"_____no_output_____"
]
],
[
[
"### Step 4: Read the map and convert it to a Python-readable format\n\n",
"_____no_output_____"
]
],
[
[
"from act_newmap_prep import *\n\nstep4_read_new_map(uploaded_osu_name);",
"_____no_output_____"
]
],
[
[
"### Step 5: Use model to calculate map rhythm\n\nParameters:\n\n\"note_density\" determines how many notes will be placed on the timeline, ranges from 0 to 1.<br>\n\"hold_favor\" determines how the model favors holds against circles, ranges from -1 to 1.<br>\n\"divisor_favor\" determines how the model favors notes to be on X divisors starting from a beat (white, blue, red, blue), ranges from -1 to 1 each.<br>\n\"hold_max_ticks\" determines the max amount of time a hold can hold off, ranges from 1 to +∞.<br>\n\"hold_min_return\" determines the final granularity of the pattern dataset, ranges from 1 to +∞.<br>\n\"rotate_mode\" determines how the patterns from the dataset gets rotated. modes (0,1,2,3,4)\n- 0 = no rotation\n- 1 = random\n- 2 = mirror\n- 3 = circulate\n- 4 = circulate + mirror",
"_____no_output_____"
]
],
[
[
"from mania_act_rhythm_calc import *\n\nmodel = step5_load_model(model_file=model_params[\"rhythm_model\"]);\nnpz = step5_load_npz();\nparams = model_params[\"rhythm_param\"]\n# Or set the parameters here...\n# params = step5_set_params(note_density=0.6, hold_favor=0.2, divisor_favor=[0] * divisor, hold_max_ticks=8, hold_min_return=1, rotate_mode=4);\n\npredictions = step5_predict_notes(model, npz, params);\nnotes_each_key = step5_build_pattern(predictions, params, pattern_dataset=model_params[\"pattern_dataset\"]);",
"_____no_output_____"
]
],
[
[
"Do a little modding to the map.\n\nParameters:\n\n- key_fix: remove continuous notes on single key modes (0,1,2,3) 0=inactive 1=remove late note 2=remove early note 3=divert",
"_____no_output_____"
]
],
[
[
"modding_params = model_params[\"modding\"]\n# modding_params = {\n# \"key_fix\" : 3\n# }\n\nnotes_each_key = mania_modding(notes_each_key, modding_params);\nnotes, key_count = merge_objects_each_key(notes_each_key)",
"_____no_output_____"
]
],
[
[
"Finally, save the data into an .osu file!",
"_____no_output_____"
]
],
[
[
"from google.colab import files\nfrom mania_act_final import *\n\nsaved_osu_name = step8_save_osu_mania_file(notes, key_count);\n\nfiles.download(saved_osu_name)",
"_____no_output_____"
],
[
"# clean up if you want to make another map!\n# colab_clean_up(uploaded_osu_name)",
"_____no_output_____"
]
],
[
[
"That's it! Now you can try out the AI-created map in osu!.\n\nFor bug reports and feedbacks either report it on github or use discord: <br>\n[https://discord.com/invite/npmSy7K](https://discord.com/invite/npmSy7K)\n\n<img src=\"https://i.imgur.com/Ko2wogO.jpg\" />",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d08bff105226ad0b5ad82d8062233eead92c628e | 1,060 | ipynb | Jupyter Notebook | cclhm0069/_build/jupyter_execute/mod2/sem7.ipynb | ericbrasiln/intro-historia-digital | 5733dc55396beffeb916693c552fd4eb987472d0 | [
"MIT"
] | null | null | null | cclhm0069/_build/jupyter_execute/mod2/sem7.ipynb | ericbrasiln/intro-historia-digital | 5733dc55396beffeb916693c552fd4eb987472d0 | [
"MIT"
] | null | null | null | cclhm0069/_build/jupyter_execute/mod2/sem7.ipynb | ericbrasiln/intro-historia-digital | 5733dc55396beffeb916693c552fd4eb987472d0 | [
"MIT"
] | null | null | null | 20 | 133 | 0.49434 | [
[
[
"# Week 7\n\n**Period**: 22/11/2021 to 27/11/2021\n\n**Workload**: 3h30",
"_____no_output_____"
],
[
"## Virtual Meeting 4 (AS)\n\n**Topic**: Text processing with OCR\n\n**Date**: 24/11/2021\n\n**Workload**: 2h30min\n\n**Platform**: `Google Meet` - link sent by e-mail.\n\n```{Attention}\n[_Click here to access the class presentation_](https://ericbrasiln.github.io/intro-historia-digital/mod2/sem7_ap.html).\n```",
"_____no_output_____"
],
[
"## Task 4 (AS)\n\n**Date**: by 24/11/2021\n\n1. Watch the gImageReader installation tutorial;\n2. Install the program on your computer.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d08c3174ee3295127be89bc1a6436f35d3da09f7 | 97,060 | ipynb | Jupyter Notebook | TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | 2 | 2021-01-09T15:57:26.000Z | 2021-11-29T01:44:21.000Z | TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | 5 | 2019-11-15T02:00:26.000Z | 2021-01-06T04:26:40.000Z | TimeSeriesAnalysisWithPython-master/SciPyTimeSeries/.ipynb_checkpoints/09a. AR + MA processes-checkpoint.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | null | null | null | 263.75 | 23,982 | 0.911622 | [
[
[
"%matplotlib inline\nimport matplotlib.pylab as plt\n\nimport pandas as pd\nimport numpy as np\n\nfrom statsmodels.tsa.stattools import acf, pacf",
"_____no_output_____"
]
],
[
[
"### First let's figure out how to generate an AR process",
"_____no_output_____"
]
],
[
[
"def ar1(phi = .9, n = 100, init = 0):\n    # simulate an AR(1) process: x_t = phi * x_{t-1} + e_t\n    time_series = [init]\n    error = np.random.randn(n)\n    for period in range(n):\n        time_series.append(error[period] + phi*time_series[-1])\n    return pd.Series(time_series[1:], index = range(n))\n    \ndef ar2(phi1 = .9, phi2 = .8, n = 100, init = 0):\n    # simulate an AR(2) process: x_t = phi1 * x_{t-1} + phi2 * x_{t-2} + e_t\n    time_series = [init, init]\n    error = np.random.randn(n)\n    for period in range(2,n):\n        time_series.append(error[period] + phi1*time_series[-1] + phi2*time_series[-2])\n    return pd.Series(time_series[1:], index = range(1,n))\n    ",
"_____no_output_____"
],
[
"# try out different values of phi >=1 as compared to < 1\n# sometimes you need to make a large n to see lack of stationarity\na1 = ar1(phi = .5, n = 10)\na1.plot()",
"_____no_output_____"
],
[
"# try out different values of phi >=1 as compared to < 1\n# sometimes you need to make a large n to see lack of stationarity\na2 = ar2(n = 100)\na2.plot()",
"_____no_output_____"
]
],
[
[
"### Now let's generate an MA process",
"_____no_output_____"
]
],
[
[
"def ma1(theta = .5, n = 100):\n    # simulate an MA(1) process: x_t = e_t + theta * e_{t-1}\n    time_series = []\n    error = np.random.randn(n)\n    for period in range(1,n):\n        time_series.append(error[period] + theta*error[period-1])\n    return pd.Series(time_series[1:], index = range(1,n-1))",
"_____no_output_____"
],
[
"m1 = ma1(theta = -1000)\nm1.plot()",
"_____no_output_____"
]
],
[
[
"### Let's look at ACF + PACF for each kind of process",
"_____no_output_____"
]
],
[
[
"a1 = ar1(phi = .5, n = 1000)\na1_acf = acf(a1, nlags = 50)\nplt.plot(a1_acf)\nplt.axhline(y=0,linestyle='--', color = 'black')\nplt.axhline(y=-1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')\nplt.axhline(y=1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')",
"_____no_output_____"
],
[
"a1 = ar1(phi = .5, n = 1000)\na1_pacf = pacf(a1, nlags = 50)\nplt.plot(a1_pacf)\nplt.axhline(y=0,linestyle='--', color = 'black')\nplt.axhline(y=-1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')\nplt.axhline(y=1.96/np.sqrt(len(a1)),linestyle='--', color = 'red')",
"_____no_output_____"
],
[
"m1 = ma1(n = 1000, theta = .9)\nm1_acf = acf(m1, nlags = 50)\nplt.plot(m1_acf)\nplt.axhline(y=0,linestyle='--', color = 'black')\nplt.axhline(y=-1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')\nplt.axhline(y=1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')",
"_____no_output_____"
],
[
"m1 = ma1(n = 1000, theta = .9)\nm1_pacf = pacf(m1, nlags = 50)\nplt.plot(m1_pacf)\nplt.axhline(y=0,linestyle='--', color = 'black')\nplt.axhline(y=-1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')\nplt.axhline(y=1.96/np.sqrt(len(m1)),linestyle='--', color = 'red')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d08c36be31b9658110e25f512fa396947e336c75 | 5,572 | ipynb | Jupyter Notebook | study/MLP_Course_Chapter_2.ipynb | bitgn/ml-pipelines | 904b6fa200aac1638491658c2d51ea40c33cbffa | [
"BSD-2-Clause"
] | 7 | 2019-07-25T05:36:21.000Z | 2020-08-23T18:04:53.000Z | study/MLP_Course_Chapter_2.ipynb | bitgn/ml-pipelines | 904b6fa200aac1638491658c2d51ea40c33cbffa | [
"BSD-2-Clause"
] | 1 | 2019-08-18T14:05:49.000Z | 2019-08-18T17:51:39.000Z | study/MLP_Course_Chapter_2.ipynb | bitgn/ml-pipelines | 904b6fa200aac1638491658c2d51ea40c33cbffa | [
"BSD-2-Clause"
] | 4 | 2019-08-18T13:42:45.000Z | 2021-04-04T11:15:21.000Z | 34.608696 | 70 | 0.37509 | [
[
[
"# import required modules\nimport pandas as pd\nimport matplotlib as mpl\n\n\nfrom io import StringIO\n\nTSV = \"\"\"\nId\tDate\tQuantity\tCurrency\tNetAmount\n10206\t2017-08-15\t2\tUSD\t34.7999992\n10206\t2017-08-22\t36\tUSD\t626.400024\n10206\t2017-08-25\t24\tUSD\t417.600006\n10206\t2017-08-25\t24\tUSD\t417.600006\n10206\t2017-08-28\t2\tUSD\t34.7999992\n10206\t2017-08-31\t2\tUSD\t34.7999992\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t2\tUSD\t34.7999992\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t2\tUSD\t34.7999992\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t6\tUSD\t104.400002\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-08-31\t4\tUSD\t69.5999985\n10206\t2017-09-05\t2\tUSD\t34.7999992\n10206\t2017-09-11\t4\tUSD\t69.5999985\n10206\t2017-09-11\t2\tUSD\t34.7999992\n10206\t2017-09-13\t4\tUSD\t69.5999985\n10206\t2017-09-18\t6\tUSD\t104.400002\n10206\t2017-09-18\t2\tUSD\t34.7999992\n10206\t2017-09-20\t6\tUSD\t104.400002\n10206\t2017-09-22\t2\tUSD\t34.7999992\"\"\"\n\n\ndf = pd.read_csv(StringIO(TSV), sep='\\t')\ndf.head()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d08c3f2603d0cce505be6c045e9b37f47c62dace | 798,988 | ipynb | Jupyter Notebook | Codes/Vizathon1.ipynb | Roshni1999/Vizathon2021 | 8b609edacddb20ff91c7b41fbef484c282db6559 | [
"MIT"
] | null | null | null | Codes/Vizathon1.ipynb | Roshni1999/Vizathon2021 | 8b609edacddb20ff91c7b41fbef484c282db6559 | [
"MIT"
] | null | null | null | Codes/Vizathon1.ipynb | Roshni1999/Vizathon2021 | 8b609edacddb20ff91c7b41fbef484c282db6559 | [
"MIT"
] | null | null | null | 314.06761 | 160,250 | 0.890847 | [
[
[
"#setup\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport matplotlib.pyplot as plt\nimport plotly\nimport seaborn as sns\nimport plotly.express as px\nimport plotly.graph_objects as go\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n\nprint(\"Setup Complete\")",
"Setup Complete\n"
],
[
"\nimport pandas as pd\nimport io\nfrom google.colab import files\nuploaded = files.upload()\n",
"_____no_output_____"
],
[
" #loading the data\ncountry_vaccinations = pd.read_csv(io.BytesIO(uploaded['country_vaccinations.csv']))\nprint(country_vaccinations)",
" country ... source_website\n0 Afghanistan ... https://covid19.who.int/\n1 Afghanistan ... https://covid19.who.int/\n2 Afghanistan ... https://covid19.who.int/\n3 Afghanistan ... https://covid19.who.int/\n4 Afghanistan ... https://covid19.who.int/\n... ... ... ...\n33720 Zimbabwe ... https://twitter.com/MoHCCZim/status/1419767795...\n33721 Zimbabwe ... https://twitter.com/MoHCCZim/status/1419767795...\n33722 Zimbabwe ... https://twitter.com/MoHCCZim/status/1419767795...\n33723 Zimbabwe ... https://twitter.com/MoHCCZim/status/1419767795...\n33724 Zimbabwe ... https://twitter.com/MoHCCZim/status/1419767795...\n\n[33725 rows x 15 columns]\n"
],
[
"# Print the first 5 rows of the data\ncountry_vaccinations.head()\n",
"_____no_output_____"
],
[
"#data preprocessing and cleaning\ncountry_vaccinations.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 33725 entries, 0 to 33724\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 country 33725 non-null object \n 1 iso_code 33725 non-null object \n 2 date 33725 non-null object \n 3 total_vaccinations 18746 non-null float64\n 4 people_vaccinated 17879 non-null float64\n 5 people_fully_vaccinated 15058 non-null float64\n 6 daily_vaccinations_raw 15449 non-null float64\n 7 daily_vaccinations 33464 non-null float64\n 8 total_vaccinations_per_hundred 18746 non-null float64\n 9 people_vaccinated_per_hundred 17879 non-null float64\n 10 people_fully_vaccinated_per_hundred 15058 non-null float64\n 11 daily_vaccinations_per_million 33464 non-null float64\n 12 vaccines 33725 non-null object \n 13 source_name 33725 non-null object \n 14 source_website 33725 non-null object \ndtypes: float64(9), object(6)\nmemory usage: 3.9+ MB\n"
],
[
"country_vaccinations.columns",
"_____no_output_____"
],
[
"country_vaccinations.describe()",
"_____no_output_____"
],
[
"#Detect missing values\ncountry_vaccinations.isnull().sum()",
"_____no_output_____"
],
[
"#Data spiltting\ncountry_vaccinations.fillna(value=0, inplace=True)\ndate = country_vaccinations.date.str.split('-', expand=True)\ndate",
"_____no_output_____"
],
[
"\ncountry_vaccinations['year'] = date[0]\ncountry_vaccinations['month'] = date[1]\ncountry_vaccinations['day'] = date[2]\n\ncountry_vaccinations.year = pd.to_numeric(country_vaccinations.year)\ncountry_vaccinations.month = pd.to_numeric(country_vaccinations.month)\ncountry_vaccinations.day = pd.to_numeric(country_vaccinations.day)\n\ncountry_vaccinations.date = pd.to_datetime(country_vaccinations.date)\n\ncountry_vaccinations.head()",
"_____no_output_____"
],
[
"country_vaccinations.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 33725 entries, 0 to 33724\nData columns (total 18 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 country 33725 non-null object \n 1 iso_code 33725 non-null object \n 2 date 33725 non-null datetime64[ns]\n 3 total_vaccinations 33725 non-null float64 \n 4 people_vaccinated 33725 non-null float64 \n 5 people_fully_vaccinated 33725 non-null float64 \n 6 daily_vaccinations_raw 33725 non-null float64 \n 7 daily_vaccinations 33725 non-null float64 \n 8 total_vaccinations_per_hundred 33725 non-null float64 \n 9 people_vaccinated_per_hundred 33725 non-null float64 \n 10 people_fully_vaccinated_per_hundred 33725 non-null float64 \n 11 daily_vaccinations_per_million 33725 non-null float64 \n 12 vaccines 33725 non-null object \n 13 source_name 33725 non-null object \n 14 source_website 33725 non-null object \n 15 year 33725 non-null int64 \n 16 month 33725 non-null int64 \n 17 day 33725 non-null int64 \ndtypes: datetime64[ns](1), float64(9), int64(3), object(5)\nmemory usage: 4.6+ MB\n"
],
[
"#visualization\nimport seaborn as sns\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nsns.set_style('darkgrid')\nmatplotlib.rcParams['font.size'] = 16\nmatplotlib.rcParams['figure.figsize'] = (10, 6)\n",
"_____no_output_____"
],
[
"#explore mean, min, max\ncountry_vaccinations.mean()",
"_____no_output_____"
],
[
"country_vaccinations.min()",
"_____no_output_____"
],
[
"country_vaccinations.max()",
"_____no_output_____"
],
[
"#explore country column\ncountry_vaccinations.country.value_counts()",
"_____no_output_____"
],
[
"country_vaccinations.country",
"_____no_output_____"
],
[
"country_vaccinations.people_fully_vaccinated.max()",
"_____no_output_____"
],
[
"country_vaccinations.date.min()",
"_____no_output_____"
],
[
"country_vaccinations.date.max()",
"_____no_output_____"
],
[
"#visualization\nplt.figure(figsize=(18,10))\nsns.lineplot(x=country_vaccinations.date, y=country_vaccinations.daily_vaccinations)\nplt.title('The number of daily vaccinations over time')\nplt.show()",
"_____no_output_____"
],
[
"#explore the vaccination rate\ncountries = country_vaccinations.groupby('country')['total_vaccinations'].max().sort_values(ascending= False)[:5].index\n\ntop_countries = pd.DataFrame(columns= country_vaccinations.columns)\nfor country in countries:\n top_countries = top_countries.append(country_vaccinations.loc[country_vaccinations['country'] == country])",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,8))\nsns.lineplot(top_countries['date'], \n             top_countries['daily_vaccinations_per_million'], \n             hue= top_countries['country'], ci= False)\nplt.title('The vaccination procedure is progressing rapidly')",
"_____no_output_____"
],
[
"fully_vaccinated = country_vaccinations.groupby(\"country\")[\"people_fully_vaccinated\"].max().sort_values(ascending= False).head(25)",
"_____no_output_____"
],
[
"fully_vaccinated.reset_index()",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,12))\nax = sns.barplot(x=fully_vaccinated, y=fully_vaccinated.index)\nplt.xlabel(\"Fully Vaccinated\")\nplt.ylabel(\"Country\");\nplt.title('Which country has the most fully vaccinated people?');\n\nfor patch in ax.patches:\n    width = patch.get_width()\n    height = patch.get_height()\n    x = patch.get_x()\n    y = patch.get_y()\n    \n    plt.text(width + x, height + y, '{:.1f} '.format(width))",
"_____no_output_____"
],
[
"daily_vaccinations_per_million = country_vaccinations.groupby(\"country\")[\"daily_vaccinations_per_million\"].max().sort_values(ascending= False).head(15)",
"_____no_output_____"
],
[
"daily_vaccinations_per_million.reset_index()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,8))\nax = sns.barplot(x=daily_vaccinations_per_million, y=daily_vaccinations_per_million.index )\nplt.xlabel(\"daily vaccinations per million\")\nplt.ylabel(\"Country\")\nplt.title(\"Daily COVID-19 vaccine doses administered per million people\");\n\nfor patch in ax.patches:\n width = patch.get_width()\n height = patch.get_height()\n x = patch.get_x()\n y = patch.get_y()\n \n plt.text(width + x, height + y, '{:.1f} '.format(width))",
"_____no_output_____"
],
[
"#number of people daily vaccinated in India\nindia_df = country_vaccinations[country_vaccinations['country'] == 'India']\nindia_df",
"_____no_output_____"
],
[
"india_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 194 entries, 13684 to 13877\nData columns (total 18 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 country 194 non-null object \n 1 iso_code 194 non-null object \n 2 date 194 non-null datetime64[ns]\n 3 total_vaccinations 194 non-null float64 \n 4 people_vaccinated 194 non-null float64 \n 5 people_fully_vaccinated 194 non-null float64 \n 6 daily_vaccinations_raw 194 non-null float64 \n 7 daily_vaccinations 194 non-null float64 \n 8 total_vaccinations_per_hundred 194 non-null float64 \n 9 people_vaccinated_per_hundred 194 non-null float64 \n 10 people_fully_vaccinated_per_hundred 194 non-null float64 \n 11 daily_vaccinations_per_million 194 non-null float64 \n 12 vaccines 194 non-null object \n 13 source_name 194 non-null object \n 14 source_website 194 non-null object \n 15 year 194 non-null int64 \n 16 month 194 non-null int64 \n 17 day 194 non-null int64 \ndtypes: datetime64[ns](1), float64(9), int64(3), object(5)\nmemory usage: 28.8+ KB\n"
],
[
"india_df.daily_vaccinations_raw.sum()",
"_____no_output_____"
],
[
"plt.figure(figsize=(19,9))\nsns.lineplot(x=india_df.date, y=india_df.daily_vaccinations_raw)\nplt.xlabel(\"Date\")\nplt.ylabel(\"Daily_Vaccination\")\nplt.title('How many people are vaccinated daily in India?')",
"_____no_output_____"
],
[
"#people fully vaccinated in India\nfully_vaccinated_india = india_df.people_fully_vaccinated.max()/1000000",
"_____no_output_____"
],
[
"print(\"Total fully vaccinated people in India: {0:.2f}M\".format(fully_vaccinated_india))",
"Total fully vaccinated people in India: 96.83M\n"
],
[
"#country which fully vaccinated most of the people\npopulation_country=country_vaccinations.groupby('country')['total_vaccinations_per_hundred'].max().sort_values(ascending=False).head(15)",
"_____no_output_____"
],
[
"population_country.reset_index()",
"_____no_output_____"
],
[
"plt.figure(figsize= (15, 8))\nax = sns.barplot(x=population_country, y=population_country.index)\nplt.title('Total Vaccinations / Population')\nplt.xlabel('Total Vaccinations')\nplt.ylabel('Country')\n\nfor patch in ax.patches:\n    width = patch.get_width()\n    height = patch.get_height()\n    x = patch.get_x()\n    y = patch.get_y()\n    \n    plt.text(width + x, height + y, '{:.1f} %'.format(width))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08c56ae15fcd2f392bc29e0d3c1214fe7c249ab | 25,412 | ipynb | Jupyter Notebook | database/MongoDB notebooks/06_linear regression on titanic data set.ipynb | neo-mashiro/BU | 2d2e789342ada0da6c1676e93d7d4e4839bdd6ef | [
"CC0-1.0"
] | null | null | null | database/MongoDB notebooks/06_linear regression on titanic data set.ipynb | neo-mashiro/BU | 2d2e789342ada0da6c1676e93d7d4e4839bdd6ef | [
"CC0-1.0"
] | 1 | 2021-09-25T14:29:00.000Z | 2021-09-25T14:29:00.000Z | database/MongoDB notebooks/06_linear regression on titanic data set.ipynb | neo-mashiro/BU | 2d2e789342ada0da6c1676e93d7d4e4839bdd6ef | [
"CC0-1.0"
] | null | null | null | 26.360996 | 368 | 0.361916 | [
[
[
"from pandas.io.json import json_normalize\nfrom pymongo import MongoClient\nfrom sklearn import linear_model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error\nimport numpy as np\nimport pprint",
"_____no_output_____"
],
[
"course_cluster_uri = \"mongodb://agg-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin\"\ncourse_client = MongoClient(course_cluster_uri)",
"_____no_output_____"
],
[
"titanic = course_client['coursera-agg']['titanic']",
"_____no_output_____"
],
[
"unique_gender_stage = {\n \"$group\": {\n \"_id\": \"$gender\",\n \"count\": {\"$sum\": 1}\n }\n}",
"_____no_output_____"
],
[
"possible_gender_values = titanic.aggregate([\n {\n \"$match\": {\n \"age\": {\"$type\": \"number\"},\n \"point_of_embarkation\": {\"$ne\": \"\"}\n }\n },\n unique_gender_stage\n])",
"_____no_output_____"
],
[
"pprint.pprint(list(possible_gender_values))",
"[{'_id': 'female', 'count': 259}, {'_id': 'male', 'count': 453}]\n"
],
[
"unique_point_of_embarkation_stage = {\n \"$group\": {\n \"_id\": \"$point_of_embarkation\",\n \"count\": {\"$sum\": 1}\n }\n}",
"_____no_output_____"
],
[
"possible_point_of_embarkation_values = titanic.aggregate([\n {\n \"$match\": {\n \"age\": {\"$type\": \"number\"},\n \"point_of_embarkation\": {\"$ne\": \"\"}\n }\n },\n unique_point_of_embarkation_stage\n])",
"_____no_output_____"
],
[
"pprint.pprint(list(possible_point_of_embarkation_values))",
"[{'_id': 'Q', 'count': 28},\n {'_id': 'C', 'count': 130},\n {'_id': 'S', 'count': 554}]\n"
],
[
"# convert \"gender\" and \"point_of_embarkation\" to integer, just like one-hot encoding\ngender_and_point_of_embarkation_conversion_stage = {\n \"$project\": {\n \"passenger_id\": 1,\n \"survived\": 1,\n \"class\": 1,\n \"name\": 1,\n \"age\": 1,\n \"siblings_spouse\": 1,\n \"parents_children\": 1,\n \"ticket_number\": 1,\n \"fare_paid\": 1,\n \"cabin\": 1,\n \"gender\": \n {\n \"$switch\":\n {\n \"branches\": [\n {\"case\": {\"$eq\": [\"$gender\", \"female\"]}, \"then\": 0},\n {\"case\": {\"$eq\": [\"$gender\", \"male\"]}, \"then\": 1}\n ],\n \"default\": \"?\"\n }\n },\n \"point_of_embarkation\":\n {\n \"$switch\":\n {\n \"branches\": [\n {\"case\": {\"$eq\": [\"$point_of_embarkation\", \"Q\"]}, \"then\": 0},\n {\"case\": {\"$eq\": [\"$point_of_embarkation\", \"C\"]}, \"then\": 1},\n {\"case\": {\"$eq\": [\"$point_of_embarkation\", \"S\"]}, \"then\": 2}\n ],\n \"default\": \"?\"\n }\n }\n }\n}",
"_____no_output_____"
],
[
"cursor = titanic.aggregate([\n {\n \"$match\": {\n \"age\": {\"$type\": \"number\"},\n \"point_of_embarkation\": {\"$ne\": \"\"}\n }\n },\n gender_and_point_of_embarkation_conversion_stage,\n {\n \"$project\": {\n \"_id\": 0,\n \"ticket_number\": 0,\n \"name\": 0,\n \"passenger_id\": 0,\n \"cabin\": 0\n }\n }\n])",
"_____no_output_____"
],
[
"# Exhaust our cursor into a list\ntitanic_data = list(cursor)",
"_____no_output_____"
],
[
"titanic_data[:2]",
"_____no_output_____"
],
[
"# pandas.io.json.json_normalize() will convert a list of json data into a pandas data frame\ndf = json_normalize(titanic_data)\ndf.head()",
"_____no_output_____"
],
[
"df_x = df.drop(['survived'], axis=1)",
"_____no_output_____"
],
[
"df_x.head()",
"_____no_output_____"
],
[
"df_y = df['survived'] # careful, this is a pitfall!",
"_____no_output_____"
],
[
"df_y.shape # the dimension is not correct!",
"_____no_output_____"
]
],
[
[
"__Pitfall__: if you get a dimension like `(134,)`, be careful! For linear regression and some models, this works just fine, but for some other models such as CNN/RNN, this dimension will result in something unexpected that is very hard to debug. As a good habit, you should always check your one-dimensional array and make sure that the 2nd shape parameter is not missing.",
"_____no_output_____"
]
],
[
[
"df_y.head()",
"_____no_output_____"
],
[
"df_y = df.filter(items=['survived']) # to get the right shape, use filter()",
"_____no_output_____"
],
[
"df_y.shape",
"_____no_output_____"
],
[
"df_y.head()",
"_____no_output_____"
],
[
"reg = linear_model.LinearRegression()",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size=0.2, random_state=0)",
"_____no_output_____"
],
[
"reg.fit(x_train, y_train)",
"_____no_output_____"
],
[
"reg.predict(x_test)",
"_____no_output_____"
],
[
"mean_squared_error(y_test, reg.predict(x_test))",
"_____no_output_____"
],
[
"# age: 25,\n# class: 1,\n# fare_paid: 45,\n# gender: 1 ('male')\n# parents_children: 0,\n# point_of_embarkation: 1 ('C')\n# siblings_spouse: 1\n\nfake_passenger = [[25, 1, 45, 1, 0, 1, 1]]",
"_____no_output_____"
],
[
"reg.predict(fake_passenger)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08c6ee026c2e49061b4c109f4e15de13eace8b1 | 408,355 | ipynb | Jupyter Notebook | sessions/privacy/privacy_policy.ipynb | russellfunk/data_privacy | bb0bf6c4bfdc42e2e8d3992597b4f5cd88d7b9d3 | [
"MIT"
] | null | null | null | sessions/privacy/privacy_policy.ipynb | russellfunk/data_privacy | bb0bf6c4bfdc42e2e8d3992597b4f5cd88d7b9d3 | [
"MIT"
] | null | null | null | sessions/privacy/privacy_policy.ipynb | russellfunk/data_privacy | bb0bf6c4bfdc42e2e8d3992597b4f5cd88d7b9d3 | [
"MIT"
] | 3 | 2021-01-29T16:25:48.000Z | 2022-02-07T23:43:39.000Z | 231.756527 | 253,920 | 0.715 | [
[
[
"# Evaluate a privacy policy\n\nToday, virtually every organization with which you interact will collect or use some about you. Most typically, the collection and use of these data will be disclosed according to an organization's privacy policy. We encounter these privacy polices all the time, when we create an account on a website, open a new credit card, or even sign up for grocery store loyalty program. Yet despite (or perhaps because of) their ubiquity, most people have never read a privacy policy from start to finish. Moreover, even if we took the time to read privacy policies, many of us would struggle to fully understand them due to their frequent use of complex, legalistic, and opaque language. These considerations raise many potential ethical questions regarding whether organizations are sufficiently transparent about the increasingly vast sums of data they collect about their users, customers, employees, and other stakeholders.\n\nThe purpose of this notebook is to help you gain a better understanding of the landscape of contemporary privacy policies, using a data-driven approach. We'll leverage a [new dataset](https://github.com/ansgarw/privacy) that provides the full text of privacy policies for hundreds of publicly-traded companies, which we'll analyze using some techniques from natural language processing. By the time you make your way through this notebook, you should have a better understanding of the diverse form and content of modern privacy policies, their linguistic characteristics, and a few neat tricks for analyzing large textual data with Python. Without further ado, let's get started!",
"_____no_output_____"
],
[
"# Roadmap\n * Preliminaries (packages + data wrangling)\n * Topic models\n * Keywords in context\n * Named entities\n * Readability\n * Embeddings\n * Exercises",
"_____no_output_____"
],
[
"# Preliminaries",
"_____no_output_____"
],
[
"Let's start out by loading some packages. We'll be using pandas to help with data wrangling and to hold the data in an easy-to-work-with data frame format. The json package is part of the Python Standard Library and will help us with reading the raw data. Matplotlib is for plotting; umap is for clustering policies and is not completely necessary. Finally, we'll use several natural language processing packages, spacy, textacy, and gensim, for the actual text analysis. ",
"_____no_output_____"
]
],
[
[
"# run the following commands to install the needed packages\n\"\"\"\npip install pandas\npip install spacy\npython -m spacy download en_core_web_lg\npip install textacy\npip install gensim\npip install umap-learn\npip install matplotlib\n\"\"\"",
"_____no_output_____"
],
[
"# load some packages\nimport pandas as pd\nimport json\nimport spacy\nimport textacy\nimport gensim\nimport matplotlib.pyplot as plt\nimport umap\nimport umap.plot\nfrom bokeh.plotting import show, output_notebook\nimport tqdm\ntqdm.tqdm.pandas()\n\n# for umap warnings\nfrom matplotlib.axes._axes import _log as matplotlib_axes_logger\nmatplotlib_axes_logger.setLevel(\"ERROR\")\n\n# load spacy nlp model\nnlp = spacy.load(\"en_core_web_lg\", disable=[\"parser\"])\nnlp.max_length = 2000000",
"/Users/rfunk/.pyenv/versions/anaconda3-2019.10/lib/python3.7/site-packages/tqdm/std.py:697: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version\n from pandas import Panel\n"
]
],
[
[
"Now, let's go ahead and load the data. ",
"_____no_output_____"
]
],
[
[
"# load the data\nwith open(\"data/policies.json\") as f:\n policies_df = pd.DataFrame({k:\" \".join(v) for k,v in json.load(f).items()}.items(), \n columns=[\"url\",\"policy_text\"])",
"_____no_output_____"
],
[
"# check out the results\npolicies_df.head()",
"_____no_output_____"
]
],
[
[
"Looks pretty reasonable. We have one column for the URL and one for the full text of the privacy policy. Note that the original data come in a json format, and there, each URL is associated with a set of paragraphs that constitute each privacy policy. In the code above, when we load the data, we concatenate these paragraphs to a single text string, which will be easier for us to work with in what follows. ",
"_____no_output_____"
],
[
"Our next step will be to process the documents with spacy. We'll add a column to our data frame with the processed documents (that way we still have the raw text handy). This might take a minute. If it takes too long on your machine, you can just look at a random sample of policies. Just uncomment the code below.",
"_____no_output_____"
]
],
[
[
"#policies_df = policies_df.sample(frac=0.20) # set frac to some fraction that will run in a reasonable time on your machine",
"_____no_output_____"
],
[
"policies_df[\"policy_text_processed\"] = policies_df.policy_text.progress_apply(nlp)",
"100%|██████████| 4062/4062 [09:28<00:00, 7.15it/s] \n"
]
],
[
[
"With that simple line of code, spacy has done a bunch of hard work for us, including things like tokenization, part-of-speech tagging, entity parsing, and other stuff that goes well beyond our needs today. Let's take a quick look.",
"_____no_output_____"
]
],
[
[
"policies_df.head()",
"_____no_output_____"
]
],
[
[
"Okay, at this point, we've loaded all the packages we need, and we've done some of the basic wrangling necessary to get the data into shape. We'll need to do a little more data wrangling to prepare for a few of the analyses in store below, but we've already done enough to let us get started. So without further ado, let's take our first peek at the data. ",
"_____no_output_____"
],
[
"# Topic models\n\nWe'll start out by trying to get a better sense for __what__ is discussed in corporate privacy policies. To do so, we'll make use of an approach in natural language processing known as topic models. Given our focus, we're not going to go into any of the methodological details of how these models work, but in essence, what they're going to do is search for a set of latent topics in our corpus of documents (here, privacy policies). You can think of topics as clusters of related words on a particular subject (e.g., if we saw the words \"homework\", \"teacher\", \"student\", \"lesson\" we might infer that the topic was school); documents can contain discussions of multiple topics.\n\nTo start out, we'll do some more processing on the privacy policies to make them more useable for our topic modeling library (called gensim).",
"_____no_output_____"
]
],
[
[
"# define a processing function\nprocess_gensim = lambda tokens: [token.lemma_.lower() for token in tokens if not(token.is_punct or token.is_stop or token.is_space or token.is_digit)]\n\n# apply the function\npolicies_df[\"policy_text_gensim\"] = policies_df.policy_text_processed.apply(process_gensim)",
"_____no_output_____"
],
[
"# create a gensim dictionary\ngensim_dict = gensim.corpora.dictionary.Dictionary(policies_df[\"policy_text_gensim\"])",
"_____no_output_____"
],
[
"# create a gensim corpus\ngensim_corpus = [gensim_dict.doc2bow(policy_text) for policy_text in policies_df[\"policy_text_gensim\"]]",
"_____no_output_____"
],
[
"# fit the topic model\nlda_model = gensim.models.LdaModel(gensim_corpus,\n id2word=gensim_dict,\n num_topics=10)",
"_____no_output_____"
],
[
"# show the results\nlda_model.show_topics(num_topics=-1, num_words=8)",
"_____no_output_____"
]
],
[
[
"As a bonus, we can also check the coherence, essentially a measure of model fit (generally, coherence measures look at similarity among high-scoring words within topics). If you're so inclined, you can re-run the topic model above with different hyperparameters to see if you can get a better fit; I didn't spend a whole lot of time tuning. ",
"_____no_output_____"
]
],
[
[
"# get coherence\ncoherence_model_lda = gensim.models.CoherenceModel(model=lda_model, texts=policies_df[\"policy_text_gensim\"], dictionary=gensim_dict, coherence=\"c_v\")\ncoherence_model_lda.get_coherence()",
"_____no_output_____"
]
],
[
[
"Take a look at the topics identified by the models above. Can you assign human-interpretable labels to them? What can you learn about the different topics of discussion in privacy policies?",
"_____no_output_____"
],
[
"# Keywords in context\n\nTopic models are nice, but they're a bit abstract. They give us an overview of interesting clusters of words, but they don't tell us much about how particular words are used or the details of the topics. For that, we can actually learn a lot just by picking out particular words of interest and pulling out their context from the document, known as a \"keyword in context\" approach. \n\nAs an illustration, the code below pulls out uses of the term \"third party\" in the policies of 20 random firms. There's no random seed set, so if you run the code again you'll get a different set of results. In the comment on the first line, I've given you a few additional words you may want to check.",
"_____no_output_____"
]
],
[
[
"KEYWORD = \"right\" # \"third party\" # privacy, right, duty, selling, disclose, trust, inform\nNUM_FIRMS = 20\nwith pd.option_context(\"display.max_colwidth\", 100, \"display.min_rows\", NUM_FIRMS, \"display.max_rows\", NUM_FIRMS):\n display(\n pd.DataFrame(policies_df.sample(n=NUM_FIRMS).apply(lambda row: list(textacy.text_utils.KWIC(row[\"policy_text\"], \n keyword=KEYWORD, \n window_width=35, \n print_only=False)), axis=1).explode()).head(NUM_FIRMS)\n)",
"_____no_output_____"
]
],
[
[
"Run the code for some different words, not just the ones in my list, but also those that interest you. Can you learn anything about corporate mindsets on privacy? What kind of rights are discussed?",
"_____no_output_____"
],
[
"# Named entities\n\nAnother way we can gain some insight into the content of privacy policies is by seeing who exactly they discuss. Once again, spacy gives us an easy (if sometimes rough) way to do this. Specifically, when we process a document using spacy, it will automatically extract several different categories of named entities (e.g., person, organization, place; you can find the full list [here](https://spacy.io/api/annotation)). In the code below, we'll pull out all the organization and person entities. ",
"_____no_output_____"
]
],
[
[
"# extract named entities from the privacy policies\npull_entities = lambda policy_text: list(set([entity.text.lower() for entity in policy_text.ents if entity.label_ in (\"ORG\", \"PERSON\")]))\npolicies_df[\"named_entities\"] = policies_df.policy_text_processed.apply(pull_entities)",
"_____no_output_____"
]
],
[
[
"Let's take a quick peek at our data frame and see what the results look like.",
"_____no_output_____"
]
],
[
[
"# look at the entities\nwith pd.option_context(\"display.max_colwidth\", 100, \"display.min_rows\", 50, \"display.max_rows\", 50):\n display(policies_df[[\"url\",\"named_entities\"]].head(50))",
"_____no_output_____"
]
],
[
[
"Now let's add a bit more structure. We'll run a little code to help us identify the most frequently discussed organizations and people in the corporate privacy policies. ",
"_____no_output_____"
]
],
[
[
"# pull the most frequent entities\nentities = policies_df[\"named_entities\"].explode(\"named_entities\")\nNUM_WANTED = 50\nwith pd.option_context(\"display.min_rows\", NUM_WANTED, \"display.max_rows\", NUM_WANTED):\n    display(entities.groupby(entities).size().sort_values(ascending=False).head(NUM_WANTED))",
"_____no_output_____"
]
],
[
[
"What do you make of the most frequent entities? Are you surprised? Do they fit with what you expected? Can we make any inferences about the kind of data sharing companies might be engaging in by looking at these entities?",
"_____no_output_____"
],
[
"# Readability\n\nNext, we'll evaluate the privacy policies according to their readability. There are many different measures of readability, but the basic idea is to evaluate a text according to various metrics (e.g., words per sentence, number of syllables per word) that correlate with, well, how easy it is to read. The textacy package makes it easy to quickly evaluate a bunch of different metrics of readability. Let's compute them and then do some exploration.",
"_____no_output_____"
]
],
[
[
"# compute a bunch of text statistics (including readability)\npolicies_df[\"TextStats\"] = policies_df.policy_text_processed.apply(textacy.text_stats.TextStats)",
"_____no_output_____"
]
],
[
[
"You can now access the various statistics for individual documents as follows (e.g., for the document at index 0).",
"_____no_output_____"
]
],
[
[
"policies_df.iloc[0][\"TextStats\"].flesch_kincaid_grade_level",
"_____no_output_____"
]
],
[
[
"This tells us that the Flesch-Kincaid grade level for the policy is just under 12th grade. We're probably not terribly interested in the readability of any single policy, though, so let's do a little wrangling with pandas to extract various metrics for all policies and add them to the data frame. Below, I'll pull out the Flesch-Kincaid grade level and the Gunning-Fog index (both are grade-level measures). ",
"_____no_output_____"
]
],
[
[
"# pull out a few readability metrics\npolicies_df[\"flesch_kincaid_grade_level\"] = policies_df.TextStats.apply(lambda ts: ts.flesch_kincaid_grade_level)\npolicies_df[\"gunning_fog_index\"] = policies_df.TextStats.apply(lambda ts: ts.gunning_fog_index)\n\n# let's also clean up some extreme values\npolicies_df.loc[(policies_df.flesch_kincaid_grade_level < 0) | (policies_df.flesch_kincaid_grade_level > 20), \"flesch_kincaid_grade_level\"] = None\npolicies_df.loc[(policies_df.gunning_fog_index < 0) | (policies_df.gunning_fog_index > 20), \"gunning_fog_index\"] = None",
"_____no_output_____"
]
],
[
[
"I would encourage you to adapt the code above to pull out some other readability-related features that seem interesting. You can find the full list available in our `TextStats` object [here](https://textacy.readthedocs.io/en/stable/api_reference/misc.html), in the textacy documentation. Let's plot the values we just extracted.",
"_____no_output_____"
]
],
[
[
"# plot with matplotlib\nfig, axes = plt.subplots(1, 2)\npolicies_df[\"flesch_kincaid_grade_level\"].hist(ax=axes[0])\npolicies_df[\"gunning_fog_index\"].hist(ax=axes[1])\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"These results are pretty striking, especially when you consider them alongside statistics on the literacy rate in the United States. According to [surveys](https://www.oecd.org/skills/piaac/Country%20note%20-%20United%20States.pdf) by the OECD, about half of adults in the United States can read at an 8th grade level or lower. ",
"_____no_output_____"
],
[
"# Embeddings\nYet another way that we can gain some intuition on privacy policies is by seeing how similar or different particular policies are from one another. For example, we might not be all that surprised if we saw that Google's privacy policy was quite similar to Facebook's. We might raise an eyebrow if we saw that Nike and Facebook also had very similar privacy policies. What kind of data are they collecting on us when we buy our sneakers? One way we can compare the similarity among documents (here, privacy policies) is by embedding them in some high-dimensional vector space, and then using linear algebra to find the distance between vectors. Classically, we would do this by representing documents as vectors of words, where entries represent word frequencies, and perhaps weighting those frequencies (e.g., using TF-IDF). Here, we'll use a slightly more sophisticated approach. When we process the privacy policies using spacy, we get a vector representation of each document, which is based on the word embeddings for its constituent terms. Again, given the focus of this class, we're not going to go into the methodological details of word embeddings, but you can think of them as a vectorization that aims to capture semantic relationships.\n\nBelow, we'll pull the document embeddings from spacy. We'll then do some dimension reduction using a cool algorithm from topological data analysis known as [Uniform Manifold Approximation and Projection](https://arxiv.org/abs/1802.03426) (UMAP), and visualize the results using an interactive plot. ",
"_____no_output_____"
]
],
[
[
"# pull the document embeddings from spacy and format for clustering\nembeddings_df = policies_df[[\"url\", \"policy_text_processed\"]]\nembeddings_df = embeddings_df.set_index(\"url\")\nembeddings_df[\"policy_text_processed\"] = embeddings_df[\"policy_text_processed\"].apply(lambda text: text.vector)\nembeddings_df = embeddings_df.policy_text_processed.apply(pd.Series)",
"_____no_output_____"
],
[
"# non-interactive plot\nmapper = umap.UMAP().fit(embeddings_df.to_numpy())\numap.plot.points(mapper)",
"_____no_output_____"
],
[
"# interactive plot\noutput_notebook()\nhover_df = embeddings_df.reset_index()\nhover_df[\"index\"] = hover_df.index\np = umap.plot.interactive(mapper, labels=hover_df[\"index\"], hover_data=hover_df[[\"index\",\"url\"]], point_size=2)\numap.plot.show(p)",
"_____no_output_____"
]
],
[
[
"Explore the plots a bit. Can you observe any patterns in the results? Did you expect more or less variation? What do you make of the different clusters?",
"_____no_output_____"
],
[
"# Exercises\n * Going back to the keyword-in-context exercise, consider several additional keywords that may give you insight into how different companies are thinking about privacy. How often, for instance, do you see the word \"rights\" used? How often in conjunction with the word privacy? Do you find evidence of considerations for fairness? \n * We've seen that the reading level for most privacy policies is quite high, but it's often a little difficult to interpret what, for example, a document written at a grade 14 reading level looks like. To gain some intuition, compute readability scores for some of your own writing (e.g., a prior course paper) and/or for some page on Wikipedia (you can use python, or do a quick Google search for an online readability calculator). How does the writing level of those compare to the privacy policies? \n * There is a general presumption that many companies use fairly standardized (or boilerplate) privacy policies that are aimed primarily at avoiding legal liability, and that do not describe their particular data practices in detail. Do we see support for these views in the data? Do the privacy policies seem more or less variable than you expected? What are the implications for customers and other stakeholders?\n * Spend some time exploring the data using any of the techniques above, or your own favorite analytical approach or tools. What additional insights can we learn about privacy policies?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d08c7228baa3b471e389857a3296a7cb10fb39b9 | 6,275 | ipynb | Jupyter Notebook | colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb | ocean-data-factory-sweden/koster_zooniverse | 208273da2419b7a4227e0fa5acac5141b99c6aa0 | [
"MIT"
] | null | null | null | colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb | ocean-data-factory-sweden/koster_zooniverse | 208273da2419b7a4227e0fa5acac5141b99c6aa0 | [
"MIT"
] | null | null | null | colab_tutorials/COLAB_12_Display_movies_available_on_the_server.ipynb | ocean-data-factory-sweden/koster_zooniverse | 208273da2419b7a4227e0fa5acac5141b99c6aa0 | [
"MIT"
] | null | null | null | 24.227799 | 177 | 0.478725 | [
[
[
"<img align=\"left\" src=\"https://panoptes-uploads.zooniverse.org/project_avatar/86c23ca7-bbaa-4e84-8d8a-876819551431.png\" type=\"image/png\" height=100 width=100>\n<h1 align=\"right\">Colab KSO Tutorial #12: Display movies available on the server</h1>\n<h3 align=\"right\">Written by @jannesgg and @vykanton</h3>\n<h5 align=\"right\">Last updated: Jun 19, 2022</h5>",
"_____no_output_____"
],
[
"# Set up and requirements",
"_____no_output_____"
],
[
"## Install kso_data_management and its requirements",
"_____no_output_____"
]
],
[
[
"# Clone koster_data_management repo\n!git clone --recurse-submodules https://github.com/ocean-data-factory-sweden/koster_data_management.git\n!pip install -r koster_data_management/requirements.txt\n\n# Restart the session to load the latest packages\nexit()",
"_____no_output_____"
]
],
[
[
"### Import Python packages",
"_____no_output_____"
]
],
[
[
"# Set the directory of the libraries\nimport sys, os\nfrom pathlib import Path\n\n# Enables testing changes in utils\n%load_ext autoreload\n%autoreload 2\n\n# Specify the path of the tutorials\nos.chdir(\"koster_data_management/tutorials\")\nsys.path.append('..')\n\n# Import required modules\nimport kso_utils.tutorials_utils as t_utils\nimport kso_utils.project_utils as p_utils\nimport kso_utils.server_utils as s_utils\n\nprint(\"Packages loaded successfully\")",
"_____no_output_____"
]
],
[
[
"### Choose your project",
"_____no_output_____"
]
],
[
[
"project_name = t_utils.choose_project()",
"_____no_output_____"
]
],
[
[
"## Initiate database",
"_____no_output_____"
]
],
[
[
"# Initiate db\nproject = p_utils.find_project(project_name = project_name.value)\ndb_info_dict = t_utils.initiate_db(project)",
"_____no_output_____"
]
],
[
[
"## Retrieve info of movies available on the server",
"_____no_output_____"
]
],
[
[
"available_movies_df = s_utils.retrieve_movie_info_from_server(\n project = project,\n db_info_dict = db_info_dict\n)",
"_____no_output_____"
]
],
[
[
"# Select the movie of interest",
"_____no_output_____"
]
],
[
[
"movie_selected = t_utils.select_movie(available_movies_df)",
"_____no_output_____"
]
],
[
[
"# Display the movie",
"_____no_output_____"
]
],
[
[
"movie_display, movie_path = t_utils.preview_movie(\n project = project,\n db_info_dict = db_info_dict, \n available_movies_df = available_movies_df, \n movie_i = movie_selected.value\n)\nmovie_display",
"_____no_output_____"
],
[
"#END",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d08c72f957ef2ef7c09c40fe9675f75e491e12f9 | 1,451 | ipynb | Jupyter Notebook | Arvind/Introduction to Pandas Series and Creating Series.ipynb | Arvind-collab/Data-Science | 4d7027c308adba2b414f97abfe151c8881674da4 | [
"MIT"
] | null | null | null | Arvind/Introduction to Pandas Series and Creating Series.ipynb | Arvind-collab/Data-Science | 4d7027c308adba2b414f97abfe151c8881674da4 | [
"MIT"
] | null | null | null | Arvind/Introduction to Pandas Series and Creating Series.ipynb | Arvind-collab/Data-Science | 4d7027c308adba2b414f97abfe151c8881674da4 | [
"MIT"
] | null | null | null | 18.844156 | 97 | 0.474845 | [
[
[
"#### Create a pandas Series having the values 4, 7, -5, 3, NaN and the index d, b, a, c, e",
"_____no_output_____"
]
],
[
[
"# Write a code here\nimport numpy as np\nimport pandas as pd\nseries = pd.Series([4,7,-5,3,np.nan], index=['d','b','a','c','e'])\nseries",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
d08c74fcb6e41837d4a3766676a27daeda50e4f8 | 12,450 | ipynb | Jupyter Notebook | src/executables/model_tf.ipynb | frederik-schmidt/Handwritten-digits | f520b029d170685ab8e58371839ca753d083eae8 | [
"MIT"
] | null | null | null | src/executables/model_tf.ipynb | frederik-schmidt/Handwritten-digits | f520b029d170685ab8e58371839ca753d083eae8 | [
"MIT"
] | null | null | null | src/executables/model_tf.ipynb | frederik-schmidt/Handwritten-digits | f520b029d170685ab8e58371839ca753d083eae8 | [
"MIT"
] | null | null | null | 44.30605 | 4,892 | 0.683695 | [
[
[
"### Import modules",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Flatten, Dense\nfrom handwritten_digits.utils_np import test_prediction\nfrom handwritten_digits.data import one_hot",
"_____no_output_____"
]
],
[
[
"### Load data",
"_____no_output_____"
]
],
[
[
"(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train = tf.keras.utils.normalize(X_train, axis=1)\nX_test = tf.keras.utils.normalize(X_test, axis=1)",
"_____no_output_____"
]
],
[
[
"### Define model architecture",
"_____no_output_____"
]
],
[
[
"model = Sequential([\n Flatten(),\n Dense(784, activation=tf.nn.relu),\n Dense(128, activation=tf.nn.relu),\n Dense(32, activation=tf.nn.relu),\n Dense(10, activation=tf.nn.softmax),\n])",
"_____no_output_____"
]
],
[
[
"### Train model",
"_____no_output_____"
]
],
[
[
"model.compile(\n optimizer=\"SGD\",\n loss=\"sparse_categorical_crossentropy\",\n metrics=[\"accuracy\"]\n)",
"_____no_output_____"
],
[
"model.fit(X_train, y_train, epochs=15)",
"Epoch 1/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.9401 - accuracy: 0.7679\nEpoch 2/15\n1875/1875 [==============================] - 22s 12ms/step - loss: 0.3231 - accuracy: 0.9072\nEpoch 3/15\n1875/1875 [==============================] - 22s 12ms/step - loss: 0.2497 - accuracy: 0.9272\nEpoch 4/15\n1875/1875 [==============================] - 22s 12ms/step - loss: 0.2080 - accuracy: 0.9402\nEpoch 5/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1786 - accuracy: 0.9483\nEpoch 6/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1565 - accuracy: 0.9546\nEpoch 7/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1386 - accuracy: 0.9598\nEpoch 8/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1239 - accuracy: 0.9646\nEpoch 9/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1116 - accuracy: 0.96810s - l\nEpoch 10/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.1012 - accuracy: 0.9712\nEpoch 11/15\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.0922 - accuracy: 0.9740\nEpoch 12/15\n1875/1875 [==============================] - 23s 12ms/step - loss: 0.0841 - accuracy: 0.9759\nEpoch 13/15\n1875/1875 [==============================] - 20s 11ms/step - loss: 0.0769 - accuracy: 0.9784\nEpoch 14/15\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.0705 - accuracy: 0.9803\nEpoch 15/15\n1875/1875 [==============================] - 22s 12ms/step - loss: 0.0648 - accuracy: 0.9820\n"
]
],
[
[
"### Evaluate model performance",
"_____no_output_____"
]
],
[
[
"training_loss, training_accuracy = model.evaluate(x=X_train, y=y_train)\ntest_loss, test_accuracy = model.evaluate(x=X_test, y=y_test)",
"1875/1875 [==============================] - 14s 7ms/step - loss: 0.0609 - accuracy: 0.9833\n313/313 [==============================] - 3s 8ms/step - loss: 0.0963 - accuracy: 0.9692\n"
]
],
[
[
"### Evaluate predictions",
"_____no_output_____"
]
],
[
[
"# bring preds to same shape as in numpy model\nprobs = model.predict(X_test)\npreds = probs == np.amax(probs, axis=1, keepdims=True)\npreds = preds.T.astype(float)",
"_____no_output_____"
],
[
"# bring data to same shape as in numpy model\nX_test_reshaped = X_test.reshape(X_test.shape[0], -1).T\ny_test_reshaped = one_hot(y_test)",
"_____no_output_____"
],
[
"random_index = np.random.randint(0, preds.shape[1])\ntest_prediction(\n X=X_test_reshaped,\n Y=y_test_reshaped,\n pred=preds,\n index=random_index,\n)",
"Prediction: 1\nTrue label: 1\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d08c7eead4f5db2a89a900bca08d4c4d8468f058 | 16,975 | ipynb | Jupyter Notebook | ddsp/colab/tutorials/0_processor.ipynb | kyjohnso/ddsp | 508decff1a15cce5f2d86c01f4594cb7ed8df0a5 | [
"Apache-2.0"
] | 9 | 2020-11-19T16:21:52.000Z | 2022-03-13T12:16:12.000Z | ddsp/colab/tutorials/0_processor.ipynb | kyjohnso/ddsp | 508decff1a15cce5f2d86c01f4594cb7ed8df0a5 | [
"Apache-2.0"
] | null | null | null | ddsp/colab/tutorials/0_processor.ipynb | kyjohnso/ddsp | 508decff1a15cce5f2d86c01f4594cb7ed8df0a5 | [
"Apache-2.0"
] | 1 | 2020-02-21T09:03:04.000Z | 2020-02-21T09:03:04.000Z | 32.581574 | 297 | 0.505214 | [
[
[
"<a href=\"https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/tutorials/0_processor.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"\n##### Copyright 2020 Google LLC.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# Copyright 2020 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"_____no_output_____"
]
],
[
[
"# DDSP Processor Demo\n\nThis notebook provides an introduction to the signal `Processor()` object. The main object type in the DDSP library, it is the base class used for Synthesizers and Effects, which share the methods:\n\n* `get_controls()`: inputs -> controls.\n* `get_signal()`: controls -> signal.\n* `__call__()`: inputs -> signal. (i.e. `get_signal(**get_controls())`)\n\nWhere:\n* `inputs` is a variable number of tensor arguments (depending on processor). Often the outputs of a neural network.\n* `controls` is a dictionary of tensors scaled and constrained specifically for the processor\n* `signal` is an output tensor (usually audio or control signal for another processor)\n\nLet's see why this is a helpful approach by looking at the specific example of the `Additive()` synthesizer processor. ",
"_____no_output_____"
]
],
[
[
"#@title Install and import dependencies\n\n%tensorflow_version 2.x\n!pip install -qU ddsp\n\n# Ignore a bunch of deprecation warnings\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport ddsp\nimport ddsp.training\nfrom ddsp.colab.colab_utils import play, specplot, DEFAULT_SAMPLE_RATE\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nsample_rate = DEFAULT_SAMPLE_RATE # 16000",
"_____no_output_____"
]
],
[
[
"# Example: additive synthesizer\n\nThe additive synthesizer models a sound as a linear combination of harmonic sinusoids. Amplitude envelopes are generated with 50% overlapping hann windows. The final audio is cropped to n_samples.",
"_____no_output_____"
],
[
"## `__init__()`\n\nAll member variables are initialized in the constructor, which makes it easy to change them as hyperparameters using the [gin](https://github.com/google/gin-config) dependency injection library. All processors also have a `name` that is used by `ProcessorGroup()`.",
"_____no_output_____"
]
],
[
[
"n_frames = 1000\nhop_size = 64\nn_samples = n_frames * hop_size\n\n# Create a synthesizer object.\nadditive_synth = ddsp.synths.Additive(n_samples=n_samples,\n sample_rate=sample_rate,\n name='additive_synth')",
"_____no_output_____"
]
],
[
[
"\n## `get_controls()` \n\nThe outputs of a neural network are often not properly scaled and constrained. The `get_controls` method gives a dictionary of valid control parameters based on neural network outputs.\n\n",
"_____no_output_____"
],
[
"**3 inputs (amps, hd, f0)**\n* `amplitude`: Amplitude envelope of the synthesizer output.\n* `harmonic_distribution`: Normalized amplitudes of each harmonic.\n* `fundamental_frequency`: Frequency in Hz of base oscillator\n\n",
"_____no_output_____"
]
],
[
[
"# Generate some arbitrary inputs.\n\n# Amplitude [batch, n_frames, 1].\n# Make amplitude linearly decay over time.\namps = np.linspace(1.0, -3.0, n_frames)\namps = amps[np.newaxis, :, np.newaxis]\n\n# Harmonic Distribution [batch, n_frames, n_harmonics].\n# Make harmonics decrease linearly with frequency.\nn_harmonics = 30\nharmonic_distribution = (np.linspace(-2.0, 2.0, n_frames)[:, np.newaxis] + \n np.linspace(3.0, -3.0, n_harmonics)[np.newaxis, :])\nharmonic_distribution = harmonic_distribution[np.newaxis, :, :]\n\n# Fundamental frequency in Hz [batch, n_frames, 1].\nf0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32)",
"_____no_output_____"
],
[
"# Plot it!\ntime = np.linspace(0, n_samples / sample_rate, n_frames)\n\nplt.figure(figsize=(18, 4))\nplt.subplot(131)\nplt.plot(time, amps[0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Amplitude')\n\nplt.subplot(132)\nplt.plot(time, harmonic_distribution[0, :, :])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Harmonic Distribution')\n\nplt.subplot(133)\nplt.plot(time, f0_hz[0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\n_ = plt.title('Fundamental Frequency')",
"_____no_output_____"
]
],
[
[
"Consider the plots above as outputs of a neural network. These outputs violate the synthesizer's expectations:\n* Amplitude is not >= 0 (avoids phase shifts)\n* Harmonic distribution is not normalized (factorizes timbre and amplitude)\n* Fundamental frequency * n_harmonics > nyquist frequency (440 * 20 > 8000), which will lead to [aliasing](https://en.wikipedia.org/wiki/Aliasing).\n",
"_____no_output_____"
]
],
[
[
"controls = additive_synth.get_controls(amps, harmonic_distribution, f0_hz)\nprint(controls.keys())",
"_____no_output_____"
],
[
"# Now let's see what they look like...\ntime = np.linspace(0, n_samples / sample_rate, n_frames)\n\nplt.figure(figsize=(18, 4))\nplt.subplot(131)\nplt.plot(time, controls['amplitudes'][0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Amplitude')\n\nplt.subplot(132)\nplt.plot(time, controls['harmonic_distribution'][0, :, :])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Harmonic Distribution')\n\nplt.subplot(133)\nplt.plot(time, controls['f0_hz'][0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\n_ = plt.title('Fundamental Frequency')",
"_____no_output_____"
]
],
[
[
"Notice that \n* Amplitudes are now all positive\n* The harmonic distribution sums to 1.0\n* All harmonics that are above the Nyquist frequency now have an amplitude of 0.",
"_____no_output_____"
],
[
"The amplitudes and harmonic distribution are scaled by an \"exponentiated sigmoid\" function (`ddsp.core.exp_sigmoid`). There is nothing particularly special about this function (other functions can be specified as `scale_fn=` during construction), but it has several nice properties:\n* Output scales logarithmically with input (as does human perception of loudness).\n* Centered at 0, with max and min in reasonable range for normalized neural network outputs.\n* Max value of 2.0 to prevent signal getting too loud.\n* Threshold value of 1e-7 for numerical stability during training.",
"_____no_output_____"
]
],
[
[
"x = tf.linspace(-10.0, 10.0, 1000)\ny = ddsp.core.exp_sigmoid(x)\n\nplt.figure(figsize=(18, 4))\nplt.subplot(121)\nplt.plot(x, y)\n\nplt.subplot(122)\n_ = plt.semilogy(x, y)",
"_____no_output_____"
]
],
[
[
"## `get_signal()`\n\nSynthesizes audio from controls.",
"_____no_output_____"
]
],
[
[
"audio = additive_synth.get_signal(**controls)\n\nplay(audio)\nspecplot(audio)",
"_____no_output_____"
]
],
[
[
"## `__call__()` \n\nSynthesizes audio directly from the raw inputs. `get_controls()` is called internally to turn them into valid control parameters.",
"_____no_output_____"
]
],
[
[
"audio = additive_synth(amps, harmonic_distribution, f0_hz)\n\nplay(audio)\nspecplot(audio)",
"_____no_output_____"
]
],
[
[
"# Example: Just for fun... \nLet's run another example where we tweak some of the controls...",
"_____no_output_____"
]
],
[
[
"## Some weird control envelopes...\n\n# Amplitude [batch, n_frames, 1].\namps = np.ones([n_frames]) * -5.0\namps[:50] += np.linspace(0, 7.0, 50)\namps[50:200] += 7.0\namps[200:900] += (7.0 - np.linspace(0.0, 7.0, 700))\namps *= np.abs(np.cos(np.linspace(0, 2*np.pi * 10.0, n_frames)))\namps = amps[np.newaxis, :, np.newaxis]\n\n# Harmonic Distribution [batch, n_frames, n_harmonics].\nn_harmonics = 20\nharmonic_distribution = np.ones([n_frames, 1]) * np.linspace(1.0, -1.0, n_harmonics)[np.newaxis, :]\nfor i in range(n_harmonics):\n harmonic_distribution[:, i] = 1.0 - np.linspace(i * 0.09, 2.0, 1000)\n harmonic_distribution[:, i] *= 5.0 * np.abs(np.cos(np.linspace(0, 2*np.pi * 0.1 * i, n_frames)))\n if i % 2 != 0:\n harmonic_distribution[:, i] = -3\nharmonic_distribution = harmonic_distribution[np.newaxis, :, :]\n\n# Fundamental frequency in Hz [batch, n_frames, 1].\nf0_hz = np.ones([n_frames]) * 200.0\nf0_hz[:100] *= np.linspace(2, 1, 100)**2\nf0_hz[200:1000] += 20 * np.sin(np.linspace(0, 8.0, 800) * 2 * np.pi * np.linspace(0, 1.0, 800)) * np.linspace(0, 1.0, 800)\nf0_hz = f0_hz[np.newaxis, :, np.newaxis]\n\n# Get valid controls\ncontrols = additive_synth.get_controls(amps, harmonic_distribution, f0_hz)",
"_____no_output_____"
],
[
"# Plot!\ntime = np.linspace(0, n_samples / sample_rate, n_frames)\n\nplt.figure(figsize=(18, 4))\nplt.subplot(131)\nplt.plot(time, controls['amplitudes'][0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Amplitude')\n\nplt.subplot(132)\nplt.plot(time, controls['harmonic_distribution'][0, :, :])\nplt.xticks([0, 1, 2, 3, 4])\nplt.title('Harmonic Distribution')\n\nplt.subplot(133)\nplt.plot(time, controls['f0_hz'][0, :, 0])\nplt.xticks([0, 1, 2, 3, 4])\n_ = plt.title('Fundamental Frequency')",
"_____no_output_____"
],
[
"audio = additive_synth.get_signal(**controls)\n\nplay(audio)\nspecplot(audio)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d08c88eea9414cfc9d08dd33a728f6ca4e785c75 | 170,249 | ipynb | Jupyter Notebook | plot/plot_robust_sample.ipynb | lunliu454/infect_place | 46a7a2d69e74a55be8cc0c26631fd84cda2d23bb | [
"MIT"
] | null | null | null | plot/plot_robust_sample.ipynb | lunliu454/infect_place | 46a7a2d69e74a55be8cc0c26631fd84cda2d23bb | [
"MIT"
] | null | null | null | plot/plot_robust_sample.ipynb | lunliu454/infect_place | 46a7a2d69e74a55be8cc0c26631fd84cda2d23bb | [
"MIT"
] | null | null | null | 208.894479 | 44,444 | 0.887001 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set_context('talk')",
"_____no_output_____"
],
[
"sns.palplot(sns.color_palette(\"gray\", 100))",
"_____no_output_____"
],
[
"new_gray=sns.color_palette(\"gray\",4)",
"_____no_output_____"
],
[
"new_gray=[(0, 0, 0), (0.85, 0.85, 0.85)]",
"_____no_output_____"
]
],
[
[
"## Brazil",
"_____no_output_____"
]
],
[
[
"plot_bra2 = pd.read_csv('sensi_withhold_bra.csv')",
"_____no_output_____"
],
[
"eff_new = pd.DataFrame(\n np.array([np.repeat(list(plot_bra2['intervention']),320), \n plot_bra2[plot_bra2.columns[2:]].values.reshape(1,-1)[0]]).T,\n columns=['intervention','x'])\neff_new['x'] = eff_new['x'].astype(float)",
"_____no_output_____"
],
[
"eff_new['color'] =([1]*319+[0.1])*10",
"_____no_output_____"
],
[
"fig1,ax = plt.subplots(figsize=(10,6))\nax.spines['left'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nplt.axvline(x=0,ls=\"-\",linewidth=1,c=\"black\")\nsns.scatterplot(data = eff_new, \n x='x',\n y='intervention',\n hue='color',\n s=200,\n palette=new_gray, \n alpha=0.3, \n legend=False,\n edgecolor=None, \n ax=ax)\nplt.xlabel('Brazil',c=\"black\",fontsize=24,fontname='Helvetica') \nplt.ylabel('') \n#plt.xlim(-1.5,1)",
"_____no_output_____"
],
[
"fig1.savefig(\"sensi_withhold_bra\",bbox_inches='tight',dpi=300)",
"_____no_output_____"
]
],
[
[
"## Japan",
"_____no_output_____"
]
],
[
[
"plot_jp2 = pd.read_csv('sensi_withhold_jp.csv')",
"_____no_output_____"
],
[
"plot_jp2",
"_____no_output_____"
],
[
"eff_new = pd.DataFrame(\n np.array([np.repeat(list(plot_jp2['intervention']),46), \n plot_jp2[plot_jp2.columns[2:]].values.reshape(1,-1)[0]]).T,\n columns=['intervention','x'])\neff_new['x'] = eff_new['x'].astype(float)",
"_____no_output_____"
],
[
"eff_new['color'] =([1]*45+[0.1])*10",
"_____no_output_____"
],
[
"fig1,ax = plt.subplots(figsize=(10,6))\nax.spines['left'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nplt.axvline(x=0,ls=\"-\",linewidth=1,c=\"black\")\nsns.scatterplot(data = eff_new, \n x='x',\n y='intervention',\n hue='color',\n s=200,\n palette=new_gray, \n alpha=0.3, \n legend=False,\n edgecolor=None, \n ax=ax)\nplt.xlabel('Japan',c=\"black\",fontsize=24,fontname='Helvetica') \nplt.ylabel('') ",
"_____no_output_____"
],
[
"fig1.savefig(\"sensi_withhold_jp\",bbox_inches='tight',dpi=300)",
"_____no_output_____"
]
],
[
[
"## UK",
"_____no_output_____"
]
],
[
[
"plot_uk2 = pd.read_csv('sensi_withhold_uk.csv')",
"_____no_output_____"
],
[
"eff_new4 = pd.DataFrame(\n np.array([np.repeat(list(plot_uk2['intervention']),235), \n plot_uk2[plot_uk2.columns[2:]].values.reshape(1,-1)[0]]).T,\n columns=['intervention','x'])\neff_new4['x'] = eff_new4['x'].astype(float)",
"_____no_output_____"
],
[
"eff_new4['color'] =([1]*234+[0.1])*5",
"_____no_output_____"
],
[
"fig4,ax = plt.subplots(figsize=(10,4))\n\nax.spines['left'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nplt.axvline(x=0,ls=\"-\",linewidth=1,c=\"black\")\nsns.scatterplot(data = eff_new4, \n x='x',\n y='intervention',\n hue='color',\n s=200,\n palette=new_gray, \n alpha=0.3, \n legend=False,\n edgecolor=None, \n ax=ax)\nplt.xlabel('United Kingdom',c=\"black\",fontsize=24,fontname='Helvetica') \nplt.ylabel('') \n#plt.ylim(-0.6,2.5)",
"_____no_output_____"
],
[
"fig4.savefig(\"sensi_withhold_uk\",bbox_inches='tight',dpi=300)",
"_____no_output_____"
]
],
[
[
"## US",
"_____no_output_____"
]
],
[
[
"plot_us2 = pd.read_csv('sensi_withhold_us.csv')",
"_____no_output_____"
],
[
"eff_new = pd.DataFrame(\n np.array([np.repeat(list(plot_us2['intervention']),310), \n plot_us2[plot_us2.columns[2:]].values.reshape(1,-1)[0]]).T,\n columns=['intervention','x'])\neff_new['x'] = eff_new['x'].astype(float)",
"_____no_output_____"
],
[
"eff_new['color'] =([1]*309+[0.1])*9",
"_____no_output_____"
],
[
"fig4,ax = plt.subplots(figsize=(10,5.5))\nax.spines['left'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nplt.axvline(x=0,ls=\"-\",linewidth=1,c=\"black\")\nsns.scatterplot(data = eff_new, \n x='x',\n y='intervention',\n hue='color',\n s=200,\n palette=new_gray, \n alpha=0.3, \n legend=False,\n edgecolor=None, \n ax=ax)\nplt.xlabel('United States',c=\"black\",fontsize=24,fontname='Helvetica') \nplt.ylabel('') ",
"_____no_output_____"
],
[
"fig4.savefig(\"sensi_withhold_us\",bbox_inches='tight',dpi=300)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d08c9132a85782cfa28543f31646e1ed099228e0 | 12,895 | ipynb | Jupyter Notebook | tutorials/PHM tutorial.ipynb | eleGAN23/Hyper | 9136e62d59ff3b68a554b58c749e3ee6c3d76fc3 | [
"MIT"
] | 13 | 2021-10-09T20:11:04.000Z | 2022-03-04T18:02:57.000Z | tutorials/PHM tutorial.ipynb | eleGAN23/Hyper | 9136e62d59ff3b68a554b58c749e3ee6c3d76fc3 | [
"MIT"
] | 2 | 2021-11-16T00:07:32.000Z | 2021-12-31T10:44:15.000Z | tutorials/PHM tutorial.ipynb | eleGAN23/Hyper | 9136e62d59ff3b68a554b58c749e3ee6c3d76fc3 | [
"MIT"
] | 2 | 2021-12-28T09:02:17.000Z | 2022-03-08T11:52:17.000Z | 32.077114 | 142 | 0.470725 | [
[
[
"### Tutorial: Parameterized Hypercomplex Multiplication (PHM) Layer\n\n#### Author: Eleonora Grassucci\n\nOriginal paper: Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters.\n\nAston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu.\n\n[ArXiv link](https://arxiv.org/pdf/2102.08597.pdf).",
"_____no_output_____"
]
],
[
[
"# Imports\n\nimport numpy as np\nimport math\nimport time\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nimport torch.utils.data as Data\nfrom torch.nn import init",
"_____no_output_____"
],
[
"# Check Pytorch version: torch.kron is available from 1.8.0\ntorch.__version__",
"_____no_output_____"
],
[
"# Define the PHM class\n\nclass PHM(nn.Module):\n '''\n Simple PHM Module, the only parameter is A, since S is passed from the trainset.\n '''\n\n def __init__(self, n, kernel_size, **kwargs):\n super().__init__(**kwargs)\n self.n = n\n A = torch.empty((n-1, n, n))\n self.A = nn.Parameter(A)\n self.kernel_size = kernel_size\n\n def forward(self, X, S):\n H = torch.zeros((self.n*self.kernel_size, self.n*self.kernel_size))\n \n # Sum of Kronecker products\n for i in range(n-1):\n H = H + torch.kron(self.A[i], S[i])\n return torch.matmul(X, H.T)",
"_____no_output_____"
]
],
[
[
"### Learn the Hamilton product between two pure quaternions\n\nA pure quaternion is a quaternion with scalar part equal to 0.",
"_____no_output_____"
]
],
[
[
"# Setup the training set\n\nx = torch.FloatTensor([0, 1, 2, 3]).view(4, 1) # Scalar part equal to 0\nW = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]]) # Scalar parts equal to 0\n\ny = torch.matmul(W, x)\n\nnum_examples = 1000\nbatch_size = 1\n\nX = torch.zeros((num_examples, 16))\nS = torch.zeros((num_examples, 16))\nY = torch.zeros((num_examples, 16))\n\nfor i in range(num_examples):\n    x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)\n    s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)\n    \n    s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12]\n    s1 = s1.view(2,2)\n    s2 = s2.view(2,2)\n    s3 = s3.view(2,2)\n    s4 = s4.view(2,2)\n\n    s_1 = torch.cat([s1,-s2,-s3,-s4])\n    s_2 = torch.cat([s2,s1,-s4,s3])\n    s_3 = torch.cat([s3,s4,s1,-s2])\n    s_4 = torch.cat([s4,-s3,s2,s1])\n\n    W = torch.cat([s_1,s_2, s_3, s_4], dim=1) \n    x = torch.cat([torch.FloatTensor([0]*4), x])\n    s = torch.cat([torch.FloatTensor([0]*4), s])\n    x_mult = x.view(2, 8)\n    y = torch.matmul(x_mult, W.T) \n    y = y.view(16, )\n\n    X[i, :] = x\n    S[i, :] = s\n    Y[i, :] = y\n\nX = torch.FloatTensor(X).view(num_examples, 16, 1)\nS = torch.FloatTensor(S).view(num_examples, 16, 1)\nY = torch.FloatTensor(Y).view(num_examples, 16, 1)\n\ndata = torch.cat([X, S, Y], dim=2)\ntrain_iter = torch.utils.data.DataLoader(data, batch_size=batch_size)\n\n### Setup the test set\n\nnum_examples = 1\nbatch_size = 1\n\nX = torch.zeros((num_examples, 16))\nS = torch.zeros((num_examples, 16))\nY = torch.zeros((num_examples, 16))\n\nfor i in range(num_examples):\n    x = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)\n    s = torch.randint(low=-10, high=10, size=(12, ), dtype=torch.float)\n    \n    s1, s2, s3, s4 = torch.FloatTensor([0]*4), s[0:4], s[4:8], s[8:12]\n    s1 = s1.view(2,2)\n    s2 = s2.view(2,2)\n    s3 = s3.view(2,2)\n    s4 = s4.view(2,2)\n\n    s_1 = torch.cat([s1,-s2,-s3,-s4])\n    s_2 = torch.cat([s2,s1,-s4,s3])\n    s_3 = torch.cat([s3,s4,s1,-s2])\n    s_4 = torch.cat([s4,-s3,s2,s1])\n\n    W = torch.cat([s_1,s_2, s_3, s_4], dim=1) \n    x = torch.cat([torch.FloatTensor([0]*4), x])\n    s = torch.cat([torch.FloatTensor([0]*4), s])\n    x_mult = x.view(2, 8)\n    y = torch.matmul(x_mult, W.T) \n    y = y.view(16, )\n\n    X[i, :] = x\n    S[i, :] = s\n    Y[i, :] = y\n\nX = torch.FloatTensor(X).view(num_examples, 16, 1)\nS = torch.FloatTensor(S).view(num_examples, 16, 1)\nY = torch.FloatTensor(Y).view(num_examples, 16, 1)\n\ndata = torch.cat([X, S, Y], dim=2)\ntest_iter = torch.utils.data.DataLoader(data, batch_size=batch_size)",
"_____no_output_____"
],
[
"# Define training function\n\ndef train(net, lr, phm=True):\n # Squared loss\n loss = nn.MSELoss()\n optimizer = torch.optim.Adam(net.parameters(), lr=lr)\n \n for epoch in range(5):\n for data in train_iter:\n optimizer.zero_grad()\n X = data[:, :, 0]\n S = data[:, 4:, 1]\n Y = data[:, :, 2]\n \n if phm:\n out = net(X.view(2, 8), S.view(3, 2, 2))\n else:\n out = net(X)\n \n l = loss(out, Y.view(2, 8))\n l.backward()\n optimizer.step()\n print(f'epoch {epoch + 1}, loss {float(l.sum() / batch_size):.6f}')",
"_____no_output_____"
],
[
"# Initialize model parameters\ndef weights_init_uniform(m):\n m.A.data.uniform_(-0.07, 0.07)\n \n# Create layer instance\nn = 4\nphm_layer = PHM(n, kernel_size=2)\nphm_layer.apply(weights_init_uniform)\n\n# Train the model\ntrain(phm_layer, 0.005)",
"epoch 1, loss 0.021605\nepoch 2, loss 0.000000\nepoch 3, loss 0.000000\nepoch 4, loss 0.000000\nepoch 5, loss 0.000000\n"
],
[
"# Check parameters of the layer require grad\nfor name, param in phm_layer.named_parameters():\n if param.requires_grad:\n print(name, param.data)",
"A tensor([[[-6.0884e-08, 1.0000e+00, -1.6100e-08, 2.6916e-08],\n [-1.0000e+00, -1.8684e-08, -2.1245e-08, -8.8355e-08],\n [-1.2780e-08, 1.2693e-07, -3.8119e-08, 1.0000e+00],\n [-1.0182e-07, 4.7619e-08, -1.0000e+00, 3.8946e-08]],\n\n [[ 1.5405e-08, -3.1784e-08, 1.0000e+00, 2.9003e-08],\n [-3.5486e-08, -3.5375e-08, 3.3766e-08, -1.0000e+00],\n [-1.0000e+00, -2.9093e-08, -5.3595e-08, 3.2789e-08],\n [ 6.2255e-09, 1.0000e+00, 3.7168e-08, 8.2059e-09]],\n\n [[-3.9100e-08, -5.8766e-09, 2.8090e-09, 1.0000e+00],\n [-1.5466e-07, 5.3471e-08, 1.0000e+00, 3.3222e-08],\n [ 3.3584e-08, -1.0000e+00, -6.5275e-08, 1.9724e-07],\n [-1.0000e+00, -3.0299e-08, 1.3472e-08, -2.8102e-08]]])\n"
],
[
"# Take a look at the convolution performed on the test set\n\nfor data in test_iter:\n X = data[:, :, 0]\n S = data[:, 4:, 1]\n Y = data[:, :, 2]\n\n y_phm = phm_layer(X.view(2, 8), S.view(3, 2, 2))\n \n print('Hamilton product result from test set:\\n', Y.view(2, 8))\n print('Performing Hamilton product learned by PHM:\\n', y_phm)",
"Hamilton product result from test set:\n tensor([[ 82., -198., 2., 70., -4., 54., -160., 52.],\n [ 51., 45., -133., 86., -103., 225., -125., 92.]])\nPerforming Hamilton product learned by PHM:\n tensor([[ 82.0000, -198.0000, 2.0000, 70.0000, -4.0000, 54.0000,\n -160.0000, 52.0000],\n [ 51.0000, 45.0000, -133.0000, 86.0000, -103.0000, 225.0001,\n -125.0000, 92.0000]], grad_fn=<MmBackward>)\n"
],
[
"# Check the PHM layer has learnt the proper algebra for the matrix A\n\nW = torch.FloatTensor([[0,-1,-1,-1], [1,0,-1,1], [1,1,0,-1], [1,-1,1,0]])\n\nprint('Ground-truth Hamilton product matrix:\\n', W)\nprint()\nprint('Learned A in PHM:\\n', phm_layer.A)\nprint()\nprint('Learned A sum in PHM:\\n', sum(phm_layer.A).T)",
"Ground-truth Hamilton product matrix:\n tensor([[ 0., -1., -1., -1.],\n [ 1., 0., -1., 1.],\n [ 1., 1., 0., -1.],\n [ 1., -1., 1., 0.]])\n\nLearned A in PHM:\n Parameter containing:\ntensor([[[-6.0884e-08, 1.0000e+00, -1.6100e-08, 2.6916e-08],\n [-1.0000e+00, -1.8684e-08, -2.1245e-08, -8.8355e-08],\n [-1.2780e-08, 1.2693e-07, -3.8119e-08, 1.0000e+00],\n [-1.0182e-07, 4.7619e-08, -1.0000e+00, 3.8946e-08]],\n\n [[ 1.5405e-08, -3.1784e-08, 1.0000e+00, 2.9003e-08],\n [-3.5486e-08, -3.5375e-08, 3.3766e-08, -1.0000e+00],\n [-1.0000e+00, -2.9093e-08, -5.3595e-08, 3.2789e-08],\n [ 6.2255e-09, 1.0000e+00, 3.7168e-08, 8.2059e-09]],\n\n [[-3.9100e-08, -5.8766e-09, 2.8090e-09, 1.0000e+00],\n [-1.5466e-07, 5.3471e-08, 1.0000e+00, 3.3222e-08],\n [ 3.3584e-08, -1.0000e+00, -6.5275e-08, 1.9724e-07],\n [-1.0000e+00, -3.0299e-08, 1.3472e-08, -2.8102e-08]]],\n requires_grad=True)\n\nLearned A sum in PHM:\n tensor([[-8.4579e-08, -1.0000e+00, -1.0000e+00, -1.0000e+00],\n [ 1.0000e+00, -5.8745e-10, -1.0000e+00, 1.0000e+00],\n [ 1.0000e+00, 1.0000e+00, -1.5699e-07, -1.0000e+00],\n [ 1.0000e+00, -1.0000e+00, 1.0000e+00, 1.9049e-08]],\n grad_fn=<PermuteBackward>)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08c964796862158c217380ed57cc6b3d6355129 | 7,359 | ipynb | Jupyter Notebook | Running tests.ipynb | maxoboe/QUOQAS | a54f55c147cc6bff2d7a2fe995685b7bb0921fbd | [
"MIT"
] | null | null | null | Running tests.ipynb | maxoboe/QUOQAS | a54f55c147cc6bff2d7a2fe995685b7bb0921fbd | [
"MIT"
] | null | null | null | Running tests.ipynb | maxoboe/QUOQAS | a54f55c147cc6bff2d7a2fe995685b7bb0921fbd | [
"MIT"
] | null | null | null | 45.147239 | 439 | 0.606468 | [
[
[
"from sklearn import linear_model\nimport numpy as np\nfrom collections import namedtuple\ntokenized_row = namedtuple('tokenized_row', 'sent_count sentences word_count words')\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport pickle\nimport csv\n\n\ndef train_sgd(train_targets, train_regressors):\n sgd = linear_model.SGDClassifier()\n sgd.fit(train_regressors, train_targets)\n return sgd\n\ndef error_rate(train_targets, train_regressors, test_targets, test_regressors):\n sgd = train_sgd(train_targets, train_regressors)\n test_predictions = sgd.predict(test_regressors)\n rounded_predictions = np.rint(test_predictions)\n false_pos = 0\n false_neg = 0\n correct = 0\n for i in range(len(rounded_predictions)):\n if rounded_predictions[i] == 1 and test_targets[i] == 0: false_pos += 1\n if rounded_predictions[i] == 0 and test_targets[i] == 1: false_neg += 1\n if rounded_predictions[i] == test_targets[i]: correct += 1\n errors = false_pos + false_neg\n corrects = len(rounded_predictions) - errors\n assert(correct == corrects)\n error_rate = float(errors) / len(test_predictions)\n return (error_rate, false_pos, false_neg)\n\nfilenames = ['combined_train_test.p', 'r_train_so_test.p', 'so_train_r_test.p',\n 'so_alone.p', 'reddit_alone.p']\n",
"_____no_output_____"
],
[
"def baseline(filename):\n with open(filename, 'rb') as pfile:\n train, test = pickle.load(pfile)\n train_targets = train['answer_good'].values.reshape(-1, 1)\n train_regressors = train['AnswerCount'].values.reshape(-1, 1)\n test_targets = test['answer_good'].values.reshape(-1, 1)\n test_regressors = test['AnswerCount'].values.reshape(-1, 1)\n return error_rate(train_targets, train_regressors, test_targets, test_regressors)\n\nwith open('baseline_results.csv', 'w+', newline=\"\") as csvfile:\n fieldnames = ['Test Name', 'Success Rate', 'false +', 'false -']\n writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n writer.writeheader()\n for name in filenames:\n errors, false_pos, false_neg = baseline(name)\n success_rate = 1 - errors\n writer.writerow({'Test Name': name, 'Success Rate': success_rate, \n 'false +': false_pos, 'false -': false_neg})\n ",
"c:\\program files\\python36\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDClassifier'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nc:\\program files\\python36\\lib\\site-packages\\sklearn\\utils\\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
],
[
"def length_only(filename):\n with open(filename, 'rb') as pfile:\n train, test = pickle.load(pfile)\n # Get length from the dict! Word count and sentence count\n directory_name = filename.split('.p')[0]\n with open(directory_name + \"/tokenized_dict.p\", 'rb') as pfile:\n train_token_dict, test_token_dict = pickle.load(pfile)\n train_length = len(train.index.values)\n train_regressors = np.empty([train_length, 4])\n test_length = len(test.index.values)\n test_regressors = np.empty([test_length, 4])\n for i in range(train_length):\n index = train.index.values[i]\n row = train_token_dict[index]\n train_regressors[i] = [row[0].word_count, row[0].sent_count, row[1].word_count, row[1].sent_count]\n for i in range(test_length):\n index = test.index.values[i]\n row = test_token_dict[index]\n test_regressors[i] = [row[0].word_count, row[0].sent_count, row[1].word_count, row[1].sent_count]\n test_targets = test['answer_good'].values.reshape(-1, 1)\n train_targets = train['answer_good'].values.reshape(-1, 1)\n return error_rate(train_targets, train_regressors, test_targets, test_regressors)\n\nwith open('length_only_results.csv', 'w+', newline=\"\") as csvfile:\n fieldnames = ['Test Name', 'Success Rate', 'false +', 'false -']\n writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n writer.writeheader()\n for name in filenames:\n errors, false_pos, false_neg = length_only(name)\n success_rate = 1 - errors\n writer.writerow({'Test Name': name, 'Success Rate': success_rate, \n 'false +': false_pos, 'false -': false_neg})",
"c:\\program files\\python36\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDClassifier'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nc:\\program files\\python36\\lib\\site-packages\\sklearn\\utils\\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d08ca29b28589834525aa83b9cf60c8c0449daa6 | 521,841 | ipynb | Jupyter Notebook | example_boston_discover_feature_relationships.ipynb | PeteBleackley/discover_feature_relationships | e6782b13f91620dcf72af98ceecd413c7455d4d4 | [
"MIT"
] | null | null | null | example_boston_discover_feature_relationships.ipynb | PeteBleackley/discover_feature_relationships | e6782b13f91620dcf72af98ceecd413c7455d4d4 | [
"MIT"
] | null | null | null | example_boston_discover_feature_relationships.ipynb | PeteBleackley/discover_feature_relationships | e6782b13f91620dcf72af98ceecd413c7455d4d4 | [
"MIT"
] | null | null | null | 158.373596 | 114,284 | 0.796415 | [
[
[
"# Explore feature-to-feature relationship in Boston",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport seaborn as sns\nfrom sklearn import datasets\nimport discover\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# watermark is optional - it shows the versions of installed libraries\n# so it is useful to confirm your library versions when you submit bug reports to projects\n# install watermark using\n# %install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py\n%load_ext watermark\n# show a watermark for this environment\n%watermark -d -m -v -p numpy,matplotlib,sklearn -g",
"2018-09-17 \n\nCPython 3.5.2\nIPython 6.3.1\n\nnumpy 1.14.2\nmatplotlib 2.2.2\nsklearn 0.19.1\n\ncompiler : GCC 5.4.0 20160609\nsystem : Linux\nrelease : 4.4.0-134-generic\nmachine : x86_64\nprocessor : x86_64\nCPU cores : 8\ninterpreter: 64bit\nGit hash : 7c969c1c4551ca860ef2a5473d90df8e15db9d3b\n"
],
[
"example_dataset = datasets.load_boston()\ndf_boston = pd.DataFrame(example_dataset.data, columns=example_dataset.feature_names)\ndf_boston['target'] = example_dataset.target\n\ndf = df_boston\ncols = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX',\n 'PTRATIO', 'B', 'LSTAT', 'target']\nclassifier_overrides = set()\n\ndf = df_boston\ndf.head()",
"_____no_output_____"
]
],
[
[
"# Discover non-linear relationships\n\n_Github note_ colours for `style` don't show up in Github, you'll have to grab a local copy of the Notebook.\n\n* NOX predicts RAD, INDUS, TAX and DIS\n* RAD predicts DIS poorly, NOX better, TAX better\n* CRIM predicts RAD but RAD poorly predicts CRIM",
"_____no_output_____"
]
],
[
[
"%time df_results = discover.discover(df[cols].sample(frac=1), classifier_overrides)\n\nfig, ax = plt.subplots(figsize=(12, 8))\nsns.heatmap(df_results.pivot(index='target', columns='feature', values='score').fillna(1),\n annot=True, center=0, ax=ax, vmin=-0.1, vmax=1, cmap=\"viridis\");",
"CPU times: user 21.7 s, sys: 4 ms, total: 21.7 s\nWall time: 21.7 s\n"
],
[
"# we can also output a DataFrame using style (note - doesn't render on github with colours, look at a local Notebook!)\ndf_results.pivot(index='target', columns='feature', values='score').fillna(1) \\\n.style.background_gradient(cmap=\"viridis\", low=0.7, axis=1) \\\n.set_precision(2)",
"_____no_output_____"
]
],
[
[
"# We can drill in to some of the discovered relationships",
"_____no_output_____"
]
],
[
[
"print(example_dataset.DESCR)",
"Boston House Prices dataset\n===========================\n\nNotes\n------\nData Set Characteristics: \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive\n \n :Median Value (attribute 14) is usually the target\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttp://archive.ics.uci.edu/ml/datasets/Housing\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n**References**\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n - many more! (see http://archive.ics.uci.edu/ml/datasets/Housing)\n\n"
],
[
"# NOX (pollution) predicts AGE of properties - lower pollution means more houses built after 1940 than before\ndf.plot(kind=\"scatter\", x=\"NOX\", y=\"AGE\", alpha=0.1);",
"_____no_output_____"
],
[
"# NOX (pollution) predicts DIStance, lower pollution means larger distance to places of work\ndf.plot(kind=\"scatter\", x=\"NOX\", y=\"DIS\", alpha=0.1);",
"_____no_output_____"
],
[
"# More lower-status people means lower house prices\nax = df.plot(kind=\"scatter\", x=\"LSTAT\", y=\"target\", alpha=0.1);",
"_____no_output_____"
],
[
"# closer to employment centres means higher proportion of owner-occupied residences built prior to 1940 (i.e. more older houses)\nax = df.plot(kind=\"scatter\", x=\"DIS\", y=\"AGE\", alpha=0.1);",
"_____no_output_____"
]
],
[
[
"# Try correlations\n\nCorrelations can give us a direction and information about linear and rank-based relationships which we won't get from RF.",
"_____no_output_____"
],
[
"## Pearson (linear)",
"_____no_output_____"
]
],
[
[
"df_results = discover.discover(df[cols], classifier_overrides, method='pearson')\n\ndf_results.pivot(index='target', columns='feature', values='score').fillna(1) \\\n.style.background_gradient(cmap=\"viridis\", axis=1) \\\n.set_precision(2)",
"_____no_output_____"
]
],
[
[
"## Spearman (rank-based)",
"_____no_output_____"
]
],
[
[
"df_results = discover.discover(df[cols], classifier_overrides, method='spearman')\n\ndf_results.pivot(index='target', columns='feature', values='score').fillna(1) \\\n.style.background_gradient(cmap=\"viridis\", axis=1) \\\n.set_precision(2)",
"_____no_output_____"
],
[
"ax = df.plot(kind=\"scatter\", x=\"CRIM\", y=\"LSTAT\", alpha=0.1);",
"_____no_output_____"
],
[
"ax = df.plot(kind=\"scatter\", x=\"CRIM\", y=\"NOX\", alpha=0.1);",
"_____no_output_____"
]
],
[
[
"## Mutual Information\nMutual information represents the amount of information that each column predicts about the others.",
"_____no_output_____"
]
],
[
[
"df_results = discover.discover(df[cols], classifier_overrides, method='mutual_information')\n\ndf_results.pivot(index='target', columns='feature', values='score').fillna(1) \\\n.style.background_gradient(cmap=\"viridis\", axis=1) \\\n.set_precision(2)",
"_____no_output_____"
],
[
"ax = df.plot(kind=\"scatter\", x=\"TAX\", y=\"INDUS\", alpha=0.1)",
"_____no_output_____"
],
[
"ax = df.plot(kind=\"scatter\", x=\"TAX\", y=\"NOX\", alpha=0.1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d08cbcef1e4d0617210b6ed5e52cc369aed54019 | 9,958 | ipynb | Jupyter Notebook | Notebooks/Production.ipynb | MikeAnderson89/Dare_In_Reality_Hackathon | 92800edc3eb038ca735c1d0014de044e492753ca | [
"MIT"
] | 1 | 2021-11-09T00:51:08.000Z | 2021-11-09T00:51:08.000Z | Notebooks/Production.ipynb | MikeAnderson89/Dare_In_Reality_Hackathon | 92800edc3eb038ca735c1d0014de044e492753ca | [
"MIT"
] | null | null | null | Notebooks/Production.ipynb | MikeAnderson89/Dare_In_Reality_Hackathon | 92800edc3eb038ca735c1d0014de044e492753ca | [
"MIT"
] | null | null | null | 24.406863 | 130 | 0.49528 | [
[
[
"import sys\nimport pandas as pd\nimport numpy as np\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\n\nsys.path.append('../Scripts')\nfrom Data_Processing import DataProcessing\n\nfrom tensorflow import keras\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.models import load_model\nfrom keras import backend as K\nfrom datetime import datetime\nfrom sklearn.preprocessing import PowerTransformer\n\nimport joblib\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"ColumnTransformer = joblib.load('../Models/Column_Transformer.pkl')\n#PowerTransformer = joblib.load('../Models/Power_Transformer.pkl')\nColumnTransformer_NN = joblib.load('../Models/Column_Transformer_NN.pkl')",
"_____no_output_____"
],
[
"df = DataProcessing('../Data/test.csv')",
"_____no_output_____"
],
[
"y = df['Lap_Time']\nX = df.drop(columns=['Lap_Time'])\n\nobj_columns = list(X.select_dtypes(include=object).columns)\nobj_columns.append('Lap_Number')\nobj_columns.append('Lap_Improvement')\n\nnum_columns = list(X.select_dtypes(include='number').columns)\nnum_columns.remove('Lap_Number')\nnum_columns.remove('Lap_Improvement')",
"_____no_output_____"
],
[
"#NN Only\ny = df['Lap_Time']\nX = df.drop(columns=['Lap_Time'])\n\nobj_columns = list(X.select_dtypes(include=object).columns)\n\nobj_columns.append('Lap_Improvement')\nobj_columns.append('Lap_Number')\nobj_columns.append('S1_Improvement')\nobj_columns.append('S2_Improvement')\nobj_columns.append('S3_Improvement')\n\nnum_columns = list(X.select_dtypes(include='number').columns)\nnum_columns.remove('Lap_Number')\nnum_columns.remove('Lap_Improvement')\nnum_columns.remove('S1_Improvement')\nnum_columns.remove('S2_Improvement')\nnum_columns.remove('S3_Improvement')",
"_____no_output_____"
],
[
"#X[num_columns] = PowerTransformer.transform(X[num_columns])\ntrans_X_nn = ColumnTransformer_NN.transform(X)\n\n#trans_X = trans_X.toarray()\n#trans_X = trans_X[:,[0, 2, 4, 6, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 72, 73]]\n#trans_X_nn = trans_X_nn.toarray()",
"_____no_output_____"
],
[
"def root_mean_squared_log_error(y_true, y_pred):\n return K.sqrt(K.mean(K.square(K.log(1+y_pred) - K.log(1+y_true))))",
"_____no_output_____"
],
[
"#Neural Network\nnn_model = load_model('../Models/NN_model_test.h5')\n\n#nn_model = load_model('../Models/NN_model.h5', custom_objects={'root_mean_squared_log_error': root_mean_squared_log_error})",
"_____no_output_____"
],
[
"#Random Forest\nrf_model = joblib.load('../Models/RF_Model.h5')",
"_____no_output_____"
],
[
"#Gradient Boost\ngb_model = joblib.load('../Models/Gradient_Boost_Model.h5')",
"_____no_output_____"
],
[
"nn_y_scaler = joblib.load('../Models/NN_Y_Scaler.pkl')\n\ny_predicted_nn = nn_y_scaler.inverse_transform(nn_model.predict(trans_X_nn))\ny_predicted_nn = ((1 / y_predicted_nn) - 1).ravel()\ny_predicted_nn = nn_y_scaler.inverse_transform(nn_model.predict(trans_X_nn))\n#y_predicted_rf = rf_model.predict(trans_X)\n#y_predicted_gb = gb_model.predict(trans_X)",
"_____no_output_____"
],
[
"results = pd.DataFrame()\nresults['NN'] = y_predicted_nn\nresults['RF'] = y_predicted_rf\nresults['GB'] = y_predicted_gb\nresults['LAP_TIME'] = (results['NN'] + results['RF'] + results['GB']) / 3\nsubmission = results[['LAP_TIME']]\nresults",
"_____no_output_____"
],
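The `LAP_TIME` blend above is a plain unweighted mean of the three models' predictions. A minimal standalone sketch of that blending step (the array values below are made up for illustration):

```python
import numpy as np

def average_predictions(*preds):
    """Unweighted ensemble: element-wise mean of several models' predictions."""
    stacked = np.vstack([np.asarray(p, dtype=float).ravel() for p in preds])
    return stacked.mean(axis=0)

# Made-up lap-time predictions from three hypothetical models
nn = np.array([90.0, 92.0])
rf = np.array([88.0, 94.0])
gb = np.array([92.0, 90.0])
print(average_predictions(nn, rf, gb))  # -> [90. 92.]
```

A weighted mean (e.g. giving the strongest validation model a larger weight) is a common next step, but the notebook uses equal weights.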
[
"#Random Forest Only\nsubmission = results[['RF']]\nsubmission = submission.rename(columns={'RF': 'LAP_TIME'})",
"_____no_output_____"
],
[
"today = datetime.today().strftime('%m-%d-%y %H-%M')\nsubmission.to_csv(f'../Submissions/Dare_In_Reality {today}.csv', index=False)",
"_____no_output_____"
]
],
[
[
"### Just Neural Network",
"_____no_output_____"
]
],
[
[
"submission = pd.DataFrame()\nsubmission['LAP_TIME'] = y_predicted_nn.ravel()\nsubmission",
"_____no_output_____"
],
[
"submission.to_csv(f'../Submissions/Dare_In_Reality NN Only.csv', index=False)",
"_____no_output_____"
],
[
"y_predicted_nn",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d08cc24648d4a9a7642d565feaacc15bdf4ff155 | 267,176 | ipynb | Jupyter Notebook | #11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb | wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset | dc57bd845a05d4b0d1cfed98b9ebb8e24d8497b6 | [
"MIT"
] | 1 | 2019-07-18T05:41:08.000Z | 2019-07-18T05:41:08.000Z | #11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb | wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset | dc57bd845a05d4b0d1cfed98b9ebb8e24d8497b6 | [
"MIT"
] | null | null | null | #11_Amazon_Fine_Food_Reviews_Analysis_Truncated_SVD.ipynb | wizard-kv/Truncated-SVD-algorithm-on-Amazon-reviews-dataset | dc57bd845a05d4b0d1cfed98b9ebb8e24d8497b6 | [
"MIT"
] | null | null | null | 116.264578 | 163,248 | 0.831295 | [
[
[
"# Amazon Fine Food Reviews Analysis\n\n\nData Source: https://www.kaggle.com/snap/amazon-fine-food-reviews <br>\n\nEDA: https://nycdatascience.com/blog/student-works/amazon-fine-foods-visualization/\n\n\nThe Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.<br>\n\nNumber of reviews: 568,454<br>\nNumber of users: 256,059<br>\nNumber of products: 74,258<br>\nTimespan: Oct 1999 - Oct 2012<br>\nNumber of Attributes/Columns in data: 10 \n\nAttribute Information:\n\n1. Id\n2. ProductId - unique identifier for the product\n3. UserId - unique identifier for the user\n4. ProfileName\n5. HelpfulnessNumerator - number of users who found the review helpful\n6. HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not\n7. Score - rating between 1 and 5\n8. Time - timestamp for the review\n9. Summary - brief summary of the review\n10. Text - text of the review\n\n\n#### Objective:\nGiven a review, determine whether the review is positive (rating of 4 or 5) or negative (rating of 1 or 2).\n\n<br>\n[Q] How to determine if a review is positive or negative?<br>\n<br> \n[Ans] We could use the Score/Rating. A rating of 4 or 5 can be considered a positive review, and a rating of 1 or 2 a negative one. Reviews with a rating of 3 are considered neutral and are ignored in our analysis. This is an approximate and proxy way of determining the polarity (positivity/negativity) of a review.\n\n\n",
"_____no_output_____"
],
[
"# [1]. Reading Data",
"_____no_output_____"
],
[
"## [1.1] Loading the data\n\nThe dataset is available in two forms\n1. .csv file\n2. SQLite Database\n\nIn order to load the data, we have used the SQLite database as it is easier to query and visualise the data efficiently.\n<br> \n\nHere, as we only want the global sentiment of the recommendations (positive or negative), we will purposefully ignore all Scores equal to 3. If the score is above 3, the recommendation will be set to \"positive\"; otherwise, it will be set to \"negative\".",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n\nimport sqlite3\nimport pandas as pd\nimport numpy as np\nimport nltk\nimport string\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.feature_extraction.text import TfidfTransformer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn import metrics\nfrom sklearn.metrics import roc_curve, auc\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.decomposition import TruncatedSVD\nfrom sklearn.cluster import KMeans\nfrom wordcloud import WordCloud, STOPWORDS\n\nimport re\n# Tutorial about Python regular expressions: https://pymotw.com/2/re/\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import PorterStemmer\nfrom nltk.stem.wordnet import WordNetLemmatizer\n\nfrom gensim.models import Word2Vec\nfrom gensim.models import KeyedVectors\nimport pickle\n\nfrom tqdm import tqdm\nimport os",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"# using SQLite Table to read data.\ncon = sqlite3.connect('drive/My Drive/database.sqlite') \n\n# filtering only positive and negative reviews i.e. \n# not taking into consideration those reviews with Score=3\n# SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000, will give top 500000 data points\n# you can change the number to any other number based on your computing power\n\n# filtered_data = pd.read_sql_query(\"\"\" SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000\"\"\", con) \n# for tsne assignment you can take 5k data points\n\nfiltered_data = pd.read_sql_query(\"\"\" SELECT * FROM Reviews WHERE Score != 3 LIMIT 200000\"\"\", con) \n\n# Give reviews with Score>3 a positive rating(1), and reviews with a score<3 a negative rating(0).\ndef partition(x):\n    if x < 3:\n        return 0\n    return 1\n\n# mapping reviews with score less than 3 to 0 (negative) and the rest to 1 (positive)\nactualScore = filtered_data['Score']\npositiveNegative = actualScore.map(partition) \nfiltered_data['Score'] = positiveNegative\nprint(\"Number of data points in our data\", filtered_data.shape)\nfiltered_data.head(3)",
"Number of data points in our data (200000, 10)\n"
],
[
"display = pd.read_sql_query(\"\"\"\nSELECT UserId, ProductId, ProfileName, Time, Score, Text, COUNT(*)\nFROM Reviews\nGROUP BY UserId\nHAVING COUNT(*)>1\n\"\"\", con)",
"_____no_output_____"
],
[
"print(display.shape)\ndisplay.head()",
"(80668, 7)\n"
],
[
"display[display['UserId']=='AZY10LLTJ71NX']",
"_____no_output_____"
],
[
"display['COUNT(*)'].sum()",
"_____no_output_____"
]
],
[
[
"# [2] Exploratory Data Analysis",
"_____no_output_____"
],
[
"## [2.1] Data Cleaning: Deduplication\n\nIt is observed (as shown in the table below) that the reviews data had many duplicate entries. Hence it was necessary to remove duplicates in order to get unbiased results for the analysis of the data. Following is an example:",
"_____no_output_____"
]
],
[
[
"display= pd.read_sql_query(\"\"\"\nSELECT *\nFROM Reviews\nWHERE Score != 3 AND UserId=\"AR5J8UI46CURR\"\nORDER BY ProductID\n\"\"\", con)\ndisplay.head()",
"_____no_output_____"
]
],
[
[
"As can be seen above, the same user has multiple reviews with the same values for HelpfulnessNumerator, HelpfulnessDenominator, Score, Time, Summary and Text, and on doing analysis it was found that <br>\n<br> \nProductId=B000HDOPZG was Loacker Quadratini Vanilla Wafer Cookies, 8.82-Ounce Packages (Pack of 8)<br>\n<br> \nProductId=B000HDL1RQ was Loacker Quadratini Lemon Wafer Cookies, 8.82-Ounce Packages (Pack of 8) and so on<br>\n\nIt was inferred after analysis that reviews with the same parameters other than ProductId belonged to the same product, just with a different flavour or quantity. Hence, in order to reduce redundancy, it was decided to eliminate the rows having the same parameters.<br>\n\nThe method used was to first sort the data according to ProductId and then keep only the first of the similar product reviews and delete the others; e.g. in the above, just the review for ProductId=B000HDL1RQ remains. This method ensures that there is only one representative for each product, whereas deduplication without sorting could leave different representatives for the same product.",
"_____no_output_____"
]
],
[
[
"#Sorting data according to ProductId in ascending order\nsorted_data=filtered_data.sort_values('ProductId', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last')",
"_____no_output_____"
],
[
"#Deduplication of entries\nfinal=sorted_data.drop_duplicates(subset={\"UserId\",\"ProfileName\",\"Time\",\"Text\"}, keep='first', inplace=False)\nfinal.shape",
"_____no_output_____"
],
[
"#Checking to see how much % of data still remains\n(final['Id'].size*1.0)/(filtered_data['Id'].size*1.0)*100",
"_____no_output_____"
]
],
[
[
"<b>Observation:-</b> It was also seen that in the two rows given below the value of HelpfulnessNumerator is greater than HelpfulnessDenominator, which is not practically possible; hence these two rows are also removed from the calculations",
"_____no_output_____"
]
],
[
[
"display= pd.read_sql_query(\"\"\"\nSELECT *\nFROM Reviews\nWHERE Score != 3 AND Id=44737 OR Id=64422\nORDER BY ProductID\n\"\"\", con)\n\ndisplay.head()",
"_____no_output_____"
],
[
"final=final[final.HelpfulnessNumerator<=final.HelpfulnessDenominator]",
"_____no_output_____"
],
[
"#Before starting the next phase of preprocessing lets see the number of entries left\nprint(final.shape)\n\n#How many positive and negative reviews are present in our dataset?\nfinal['Score'].value_counts()",
"(160176, 10)\n"
]
],
[
[
"# [3] Preprocessing",
"_____no_output_____"
],
[
"## [3.1]. Preprocessing Review Text\n\nNow that we have finished deduplication, our data requires some preprocessing before we go further with the analysis and build the prediction model.\n\nHence in the Preprocessing phase we do the following in the order below:-\n\n1. Begin by removing the html tags\n2. Remove any punctuations or limited set of special characters like , or . or # etc.\n3. Check if the word is made up of english letters and is not alpha-numeric\n4. Check to see if the length of the word is greater than 2 (as it was researched that there is no adjective in 2-letters)\n5. Convert the word to lowercase\n6. Remove Stopwords\n7. Finally Snowball Stemming the word (it was observed to be better than Porter Stemming)<br>\n\nAfter which we collect the words used to describe positive and negative reviews",
"_____no_output_____"
]
],
[
[
"# printing some random reviews\nsent_0 = final['Text'].values[0]\nprint(sent_0)\nprint(\"=\"*50)\n\nsent_1000 = final['Text'].values[1000]\nprint(sent_1000)\nprint(\"=\"*50)\n\nsent_1500 = final['Text'].values[1500]\nprint(sent_1500)\nprint(\"=\"*50)\n\nsent_4900 = final['Text'].values[4900]\nprint(sent_4900)\nprint(\"=\"*50)",
"I remembered this book from my childhood and got it for my kids. It's just as good as I remembered and my kids love it too. My older daughter now reads it to her sister. Good rhymes and nice pictures.\n==================================================\nThe qualitys not as good as the lamb and rice but it didn't seem to bother his stomach, you get 10 more pounds and it is cheaper wich is a plus for me. You can always ad your own rice and veggies. Its fresher that way and better for him in my opinion. Plus if you you can get it deliverd to your house for free its even better. Gotta love pitbulls\n==================================================\nThis is the Japanese version of breadcrumb (pan=bread, a Portuguese loan-word, and"ko-" is "child of" or of "derived from".) Panko are used for katsudon, tonkatsu or cutlets served on rice or in soups. The cutlets, pounded chicken or pork, are coated with these light and crispy crumbs and fried. They are not gritty and dense like regular crumbs. They are very nice on deep fried shrimps and decorative for a more gourmet touch.\n==================================================\nWhat can I say... If Douwe Egberts was good enough for my dutch grandmother, it's perfect for me. I like this flavor best with my Senseo... It has a nice dark full body flavor without the burt bean taste I tend sense with starbucks. It's a shame most americans haven't bought into single serve coffe makers as our Dutch counter parts have. Every cup is fresh brewed and doesn't sit long enough on my desk to get that old taste either.\n==================================================\n"
],
[
"# remove urls from text python: https://stackoverflow.com/a/40823105/4084039\nsent_0 = re.sub(r\"http\\S+\", \"\", sent_0)\nsent_1000 = re.sub(r\"http\\S+\", \"\", sent_1000)\nsent_150 = re.sub(r\"http\\S+\", \"\", sent_1500)\nsent_4900 = re.sub(r\"http\\S+\", \"\", sent_4900)\n\nprint(sent_0)",
"I remembered this book from my childhood and got it for my kids. It's just as good as I remembered and my kids love it too. My older daughter now reads it to her sister. Good rhymes and nice pictures.\n"
],
[
"# https://stackoverflow.com/questions/16206380/python-beautifulsoup-how-to-remove-all-tags-from-an-element\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(sent_0, 'lxml')\ntext = soup.get_text()\nprint(text)\nprint(\"=\"*50)\n\nsoup = BeautifulSoup(sent_1000, 'lxml')\ntext = soup.get_text()\nprint(text)\nprint(\"=\"*50)\n\nsoup = BeautifulSoup(sent_1500, 'lxml')\ntext = soup.get_text()\nprint(text)\nprint(\"=\"*50)\n\nsoup = BeautifulSoup(sent_4900, 'lxml')\ntext = soup.get_text()\nprint(text)",
"I remembered this book from my childhood and got it for my kids. It's just as good as I remembered and my kids love it too. My older daughter now reads it to her sister. Good rhymes and nice pictures.\n==================================================\nThe qualitys not as good as the lamb and rice but it didn't seem to bother his stomach, you get 10 more pounds and it is cheaper wich is a plus for me. You can always ad your own rice and veggies. Its fresher that way and better for him in my opinion. Plus if you you can get it deliverd to your house for free its even better. Gotta love pitbulls\n==================================================\nThis is the Japanese version of breadcrumb (pan=bread, a Portuguese loan-word, and\"ko-\" is \"child of\" or of \"derived from\".) Panko are used for katsudon, tonkatsu or cutlets served on rice or in soups. The cutlets, pounded chicken or pork, are coated with these light and crispy crumbs and fried. They are not gritty and dense like regular crumbs. They are very nice on deep fried shrimps and decorative for a more gourmet touch.\n==================================================\nWhat can I say... If Douwe Egberts was good enough for my dutch grandmother, it's perfect for me. I like this flavor best with my Senseo... It has a nice dark full body flavor without the burt bean taste I tend sense with starbucks. It's a shame most americans haven't bought into single serve coffe makers as our Dutch counter parts have. Every cup is fresh brewed and doesn't sit long enough on my desk to get that old taste either.\n"
],
[
"# https://stackoverflow.com/a/47091490/4084039\nimport re\n\ndef decontracted(phrase):\n    # specific\n    phrase = re.sub(r\"won't\", \"will not\", phrase)\n    phrase = re.sub(r\"can\\'t\", \"can not\", phrase)\n\n    # general\n    phrase = re.sub(r\"n\\'t\", \" not\", phrase)\n    phrase = re.sub(r\"\\'re\", \" are\", phrase)\n    phrase = re.sub(r\"\\'s\", \" is\", phrase)\n    phrase = re.sub(r\"\\'d\", \" would\", phrase)\n    phrase = re.sub(r\"\\'ll\", \" will\", phrase)\n    phrase = re.sub(r\"\\'t\", \" not\", phrase)\n    phrase = re.sub(r\"\\'ve\", \" have\", phrase)\n    phrase = re.sub(r\"\\'m\", \" am\", phrase)\n    return phrase",
"_____no_output_____"
],
[
"sent_1500 = decontracted(sent_1500)\nprint(sent_1500)\nprint(\"=\"*50)",
"This is the Japanese version of breadcrumb (pan=bread, a Portuguese loan-word, and"ko-" is "child of" or of "derived from".) Panko are used for katsudon, tonkatsu or cutlets served on rice or in soups. The cutlets, pounded chicken or pork, are coated with these light and crispy crumbs and fried. They are not gritty and dense like regular crumbs. They are very nice on deep fried shrimps and decorative for a more gourmet touch.\n==================================================\n"
],
[
"#remove words with numbers python: https://stackoverflow.com/a/18082370/4084039\nsent_0 = re.sub(\"\\S*\\d\\S*\", \"\", sent_0).strip()\nprint(sent_0)",
"I remembered this book from my childhood and got it for my kids. It's just as good as I remembered and my kids love it too. My older daughter now reads it to her sister. Good rhymes and nice pictures.\n"
],
[
"#remove special characters: https://stackoverflow.com/a/5843547/4084039\nsent_1500 = re.sub('[^A-Za-z0-9]+', ' ', sent_1500)\nprint(sent_1500)",
"This is the Japanese version of breadcrumb pan bread a Portuguese loan word and quot ko quot is quot child of quot or of quot derived from quot Panko are used for katsudon tonkatsu or cutlets served on rice or in soups The cutlets pounded chicken or pork are coated with these light and crispy crumbs and fried They are not gritty and dense like regular crumbs They are very nice on deep fried shrimps and decorative for a more gourmet touch \n"
],
[
"# https://gist.github.com/sebleier/554280\n# we are removing the words from the stop words list: 'no', 'nor', 'not'\n# <br /><br /> ==> after the above steps, we are getting \"br br\"\n# we are including them into stop words list\n# instead of <br /> if we have <br/> these tags would have revmoved in the 1st step\n\nstopwords= set(['br', 'the', 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', \"you're\", \"you've\",\\\n \"you'll\", \"you'd\", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \\\n 'she', \"she's\", 'her', 'hers', 'herself', 'it', \"it's\", 'its', 'itself', 'they', 'them', 'their',\\\n 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', \"that'll\", 'these', 'those', \\\n 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \\\n 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \\\n 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\\\n 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\\\n 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\\\n 'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \\\n 's', 't', 'can', 'will', 'just', 'don', \"don't\", 'should', \"should've\", 'now', 'd', 'll', 'm', 'o', 're', \\\n 've', 'y', 'ain', 'aren', \"aren't\", 'couldn', \"couldn't\", 'didn', \"didn't\", 'doesn', \"doesn't\", 'hadn',\\\n \"hadn't\", 'hasn', \"hasn't\", 'haven', \"haven't\", 'isn', \"isn't\", 'ma', 'mightn', \"mightn't\", 'mustn',\\\n \"mustn't\", 'needn', \"needn't\", 'shan', \"shan't\", 'shouldn', \"shouldn't\", 'wasn', \"wasn't\", 'weren', \"weren't\", \\\n 'won', \"won't\", 'wouldn', \"wouldn't\"])",
"_____no_output_____"
],
[
"# Combining all the above steps \nfrom tqdm import tqdm\npreprocessed_reviews = []\n# tqdm is for printing the status bar\nfor sentance in tqdm(final['Text'].values):\n    sentance = re.sub(r\"http\\S+\", \"\", sentance)\n    sentance = BeautifulSoup(sentance, 'lxml').get_text()\n    sentance = decontracted(sentance)\n    sentance = re.sub(\"\\S*\\d\\S*\", \"\", sentance).strip()\n    sentance = re.sub('[^A-Za-z]+', ' ', sentance)\n    # https://gist.github.com/sebleier/554280\n    sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)\n    preprocessed_reviews.append(sentance.strip())",
"100%|██████████| 160176/160176 [01:15<00:00, 2133.20it/s]\n"
],
[
"preprocessed_reviews[100000]",
"_____no_output_____"
]
],
[
[
"# [4] Featurization",
"_____no_output_____"
],
[
"## [4.1] BAG OF WORDS",
"_____no_output_____"
]
],
[
[
"#BoW\ncount_vect = CountVectorizer() #in scikit-learn\ncount_vect.fit(preprocessed_reviews)\nprint(\"some feature names \", count_vect.get_feature_names()[:10])\nprint('='*50)\n\nfinal_counts = count_vect.transform(preprocessed_reviews)\nprint(\"the type of count vectorizer \",type(final_counts))\nprint(\"the shape of out text BOW vectorizer \",final_counts.get_shape())\nprint(\"the number of unique words \", final_counts.get_shape()[1])",
"some feature names ['aa', 'aahhhs', 'aback', 'abandon', 'abates', 'abbott', 'abby', 'abdominal', 'abiding', 'ability']\n==================================================\nthe type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>\nthe shape of out text BOW vectorizer (4986, 12997)\nthe number of unique words 12997\n"
]
],
[
[
"## [4.2] Bi-Grams and n-Grams.",
"_____no_output_____"
]
],
[
[
"#bi-gram, tri-gram and n-gram\n\n#removing stop words like \"not\" should be avoided before building n-grams\n# count_vect = CountVectorizer(ngram_range=(1,2))\n# please do read the CountVectorizer documentation http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html\n\n# you can choose these numbers min_df=10, max_features=5000 as per your choice\ncount_vect = CountVectorizer(ngram_range=(1,2), min_df=10, max_features=5000)\nfinal_bigram_counts = count_vect.fit_transform(preprocessed_reviews)\nprint(\"the type of count vectorizer \",type(final_bigram_counts))\nprint(\"the shape of out text BOW vectorizer \",final_bigram_counts.get_shape())\nprint(\"the number of unique words including both unigrams and bigrams \", final_bigram_counts.get_shape()[1])",
"the type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>\nthe shape of out text BOW vectorizer (4986, 3144)\nthe number of unique words including both unigrams and bigrams 3144\n"
]
],
[
[
"## [4.3] TF-IDF",
"_____no_output_____"
]
],
[
[
"tf_idf_vect = TfidfVectorizer(ngram_range=(1,2), min_df=10)\ntf_idf_vect.fit(preprocessed_reviews)\nprint(\"some sample features(unique words in the corpus)\",tf_idf_vect.get_feature_names()[0:10])\nprint('='*50)\n\nfinal_tf_idf = tf_idf_vect.transform(preprocessed_reviews)\nprint(\"the type of count vectorizer \",type(final_tf_idf))\nprint(\"the shape of out text TFIDF vectorizer \",final_tf_idf.get_shape())\nprint(\"the number of unique words including both unigrams and bigrams \", final_tf_idf.get_shape()[1])",
"some sample features(unique words in the corpus) ['ability', 'able', 'able find', 'able get', 'absolute', 'absolutely', 'absolutely delicious', 'absolutely love', 'absolutely no', 'according']\n==================================================\nthe type of count vectorizer <class 'scipy.sparse.csr.csr_matrix'>\nthe shape of out text TFIDF vectorizer (4986, 3144)\nthe number of unique words including both unigrams and bigrams 3144\n"
]
],
[
[
"## [4.4] Word2Vec",
"_____no_output_____"
]
],
[
[
"# Train your own Word2Vec model using your own text corpus\ni=0\nlist_of_sentance=[]\nfor sentance in preprocessed_reviews:\n list_of_sentance.append(sentance.split())",
"_____no_output_____"
],
[
"# Using Google News Word2Vectors\n\n# in this project we are using a pretrained model by google\n# it's a 3.3G file; once you load this into your memory \n# it occupies ~9Gb, so please do this step only if you have >12G of ram\n# we will provide a pickle file which contains a dict, \n# with all our corpus words as keys and model[word] as values\n# To use this code-snippet, download \"GoogleNews-vectors-negative300.bin\" \n# from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit\n# it's 1.9GB in size.\n\n\n# http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/#.W17SRFAzZPY\n# you can comment this whole cell\n# or change these variables according to your need\n\nis_your_ram_gt_16g=False\nwant_to_use_google_w2v = False\nwant_to_train_w2v = True\n\nif want_to_train_w2v:\n    # min_count = 5 considers only words that occurred at least 5 times\n    w2v_model=Word2Vec(list_of_sentance,min_count=5,size=50, workers=4)\n    print(w2v_model.wv.most_similar('great'))\n    print('='*50)\n    print(w2v_model.wv.most_similar('worst'))\n\nelif want_to_use_google_w2v and is_your_ram_gt_16g:\n    if os.path.isfile('GoogleNews-vectors-negative300.bin'):\n        w2v_model=KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)\n        print(w2v_model.wv.most_similar('great'))\n        print(w2v_model.wv.most_similar('worst'))\n    else:\n        print(\"you don't have google's word2vec file; keep want_to_train_w2v = True to train your own w2v\")",
"[('snack', 0.9951335191726685), ('calorie', 0.9946465492248535), ('wonderful', 0.9946032166481018), ('excellent', 0.9944332838058472), ('especially', 0.9941144585609436), ('baked', 0.9940600395202637), ('salted', 0.994047224521637), ('alternative', 0.9937226176261902), ('tasty', 0.9936816692352295), ('healthy', 0.9936649799346924)]\n==================================================\n[('varieties', 0.9994194507598877), ('become', 0.9992934465408325), ('popcorn', 0.9992750883102417), ('de', 0.9992610216140747), ('miss', 0.9992451071739197), ('melitta', 0.999218761920929), ('choice', 0.9992102384567261), ('american', 0.9991837739944458), ('beef', 0.9991780519485474), ('finish', 0.9991567134857178)]\n"
],
[
"w2v_words = list(w2v_model.wv.vocab)\nprint(\"number of words that occured minimum 5 times \",len(w2v_words))\nprint(\"sample words \", w2v_words[0:50])",
"number of words that occured minimum 5 times 3817\nsample words ['product', 'available', 'course', 'total', 'pretty', 'stinky', 'right', 'nearby', 'used', 'ca', 'not', 'beat', 'great', 'received', 'shipment', 'could', 'hardly', 'wait', 'try', 'love', 'call', 'instead', 'removed', 'easily', 'daughter', 'designed', 'printed', 'use', 'car', 'windows', 'beautifully', 'shop', 'program', 'going', 'lot', 'fun', 'everywhere', 'like', 'tv', 'computer', 'really', 'good', 'idea', 'final', 'outstanding', 'window', 'everybody', 'asks', 'bought', 'made']\n"
]
],
[
[
"## [4.4.1] Converting text into vectors using Avg W2V, TFIDF-W2V",
"_____no_output_____"
],
[
"#### [4.4.1.1] Avg W2v",
"_____no_output_____"
]
],
[
[
"# average Word2Vec\n# compute average word2vec for each review.\nsent_vectors = [] # the avg-w2v for each sentence/review is stored in this list\nfor sent in tqdm(list_of_sentance): # for each review/sentence\n    sent_vec = np.zeros(50) # as word vectors are of length 50; you might need to change this to 300 if you use google's w2v\n    cnt_words = 0 # num of words with a valid vector in the sentence/review\n    for word in sent: # for each word in a review/sentence\n        if word in w2v_words:\n            vec = w2v_model.wv[word]\n            sent_vec += vec\n            cnt_words += 1\n    if cnt_words != 0:\n        sent_vec /= cnt_words\n    sent_vectors.append(sent_vec)\nprint(len(sent_vectors))\nprint(len(sent_vectors[0]))",
"100%|████████████████████████████████████████████████████████████████████████████| 4986/4986 [00:03<00:00, 1330.47it/s]\n"
]
],
[
[
"#### [4.4.1.2] TFIDF weighted W2v",
"_____no_output_____"
]
],
[
[
"# S = [\"abc def pqr\", \"def def def abc\", \"pqr pqr def\"]\nmodel = TfidfVectorizer()\ntf_idf_matrix = model.fit_transform(preprocessed_reviews)\n# we are converting a dictionary with word as a key, and the idf as a value\ndictionary = dict(zip(model.get_feature_names(), list(model.idf_)))",
"_____no_output_____"
],
[
"# TF-IDF weighted Word2Vec\ntfidf_feat = model.get_feature_names() # tfidf words/col-names\n# final_tf_idf is the sparse matrix with row= sentence, col=word and cell_val = tfidf\n\ntfidf_sent_vectors = [] # the tfidf-w2v for each sentence/review is stored in this list\nrow = 0\nfor sent in tqdm(list_of_sentance): # for each review/sentence\n    sent_vec = np.zeros(50) # as word vectors are of length 50\n    weight_sum = 0 # sum of tf-idf weights of words with a valid vector in the sentence/review\n    for word in sent: # for each word in a review/sentence\n        if word in w2v_words and word in tfidf_feat:\n            vec = w2v_model.wv[word]\n            # tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)]\n            # to reduce the computation we use\n            # dictionary[word] = idf value of word in the whole corpus\n            # sent.count(word)/len(sent) = tf value of word in this review\n            tf_idf = dictionary[word]*(sent.count(word)/len(sent))\n            sent_vec += (vec * tf_idf)\n            weight_sum += tf_idf\n    if weight_sum != 0:\n        sent_vec /= weight_sum\n    tfidf_sent_vectors.append(sent_vec)\n    row += 1",
"100%|█████████████████████████████████████████████████████████████████████████████| 4986/4986 [00:20<00:00, 245.63it/s]\n"
]
],
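The weighting in the cell above — `idf(word) * tf(word in review)` — can be checked on a tiny hand-sized example. This is a standalone sketch with made-up 2-D "word vectors" and idf values, not the notebook's 50-dimensional model:

```python
import numpy as np

def tfidf_weighted_vector(sent, word_vecs, idf):
    """Weighted mean of word vectors, weight = idf * (in-review term frequency)."""
    dim = len(next(iter(word_vecs.values())))
    vec = np.zeros(dim)
    weight_sum = 0.0
    for word in sent:
        if word in word_vecs and word in idf:
            w = idf[word] * sent.count(word) / len(sent)
            vec += w * np.array(word_vecs[word])
            weight_sum += w
    return vec / weight_sum if weight_sum else vec

# Made-up vectors and idf values: 'tea' is rarer (higher idf), so it dominates
vecs = {'good': [1.0, 0.0], 'tea': [0.0, 1.0]}
idf = {'good': 1.0, 'tea': 3.0}
print(tfidf_weighted_vector(['good', 'tea'], vecs, idf))  # -> [0.25 0.75]
```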
[
[
"## Truncated-SVD",
"_____no_output_____"
],
[
"### [5.1] Taking top features from TFIDF,<font color='red'> SET 2</font>",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\nX = preprocessed_reviews[:] \ny = final['Score'][:]\n\ntf_idf = TfidfVectorizer()\ntfidf_data = tf_idf.fit_transform(X)\n\ntfidf_feat = tf_idf.get_feature_names()\n\n# take the 3000 features with the highest idf as the top features;\n# top_feat is used below to build the co-occurrence matrix\nindices = np.argsort(tf_idf.idf_)[::-1]\ntop_feat = [tfidf_feat[i] for i in indices[0:3000]]\n",
"_____no_output_____"
]
],
[
[
"### [5.2] Calculation of Co-occurrence matrix",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\n#Ref:https://datascience.stackexchange.com/questions/40038/how-to-implement-word-to-word-co-occurence-matrix-in-python\n#Ref:# https://github.com/PushpendraSinghChauhan/Amazon-Fine-Food-Reviews/blob/master/Computing%20Word%20Vectors%20using%20TruncatedSVD.ipynb\n\ndef Co_Occurrence_Matrix(neighbour_num, list_words):\n\n    # Storing all words with their indices in the dictionary\n    corpus = dict()\n    # List of all words in the corpus\n    doc = []\n    index = 0\n    for sent in preprocessed_reviews:\n        for word in sent.split():\n            doc.append(word)\n            corpus.setdefault(word,[])\n            corpus[word].append(index)\n            index += 1\n\n    # Co-occurrence matrix\n    matrix = []\n    # rows in co-occurrence matrix\n    for row in list_words:\n        # row in co-occurrence matrix\n        temp = []\n        # column in co-occurrence matrix\n        for col in list_words:\n            if(col != row):\n                # No. of times col word is in neighbourhood of row word\n                count = 0\n                # Value of neighbourhood\n                num = neighbour_num\n                # Indices of row word in the corpus\n                positions = corpus[row]\n                for i in positions:\n                    if i<(num-1):\n                        # Checking for col word in neighbourhood of row\n                        if col in doc[i:i+num]:\n                            count +=1\n                    elif (i>=(num-1)) and (i<=(len(doc)-num)):\n                        # Check col word in both neighbourhoods of row\n                        if (col in doc[i-(num-1):i+1]) and (col in doc[i:i+num]):\n                            count +=2\n                        # Check col word in either neighbourhood of row\n                        elif (col in doc[i-(num-1):i+1]) or (col in doc[i:i+num]):\n                            count +=1\n                    else:\n                        if (col in doc[i-(num-1):i+1]):\n                            count +=1\n\n                # appending the col count to the row of the co-occurrence matrix\n                temp.append(count)\n            else:\n                # Append 0 in the column if row and col words are equal\n                temp.append(0)\n        # appending the row to the co-occurrence matrix\n        matrix.append(temp)\n    # Return co-occurrence matrix\n    return np.array(matrix)",
"_____no_output_____"
],
[
"X_new = Co_Occurrence_Matrix(15, top_feat)\n",
"_____no_output_____"
]
],
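The `Co_Occurrence_Matrix` function above rescans the corpus for every (row, col) word pair, which gets slow for large vocabularies. The same idea — counting vocabulary words inside a symmetric window around each token — can be done in a single pass. A compact sketch on a toy corpus (the vocabulary and window size here are made up):

```python
def cooccurrence(tokens, vocab, window=2):
    """Symmetric co-occurrence counts: for each token, count vocab words
    appearing within +/- `window` positions, in one pass over the corpus."""
    idx = {w: i for i, w in enumerate(vocab)}
    mat = [[0] * len(vocab) for _ in vocab]
    for pos, word in enumerate(tokens):
        if word not in idx:
            continue
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for nb in tokens[lo:pos] + tokens[pos + 1:hi]:
            if nb in idx:
                mat[idx[word]][idx[nb]] += 1
    return mat

toks = "good tea good coffee".split()
print(cooccurrence(toks, ["good", "tea", "coffee"], window=1))
# -> [[0, 2, 1], [2, 0, 0], [1, 0, 0]]
```

The resulting matrix is symmetric, as in the notebook's version, since neighbourhood membership is mutual.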
[
[
"### [5.3] Finding optimal value for number of components (n) to be retained.",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\nk = np.arange(2,100,3)\n\nvariance =[]\nfor i in k:\n svd = TruncatedSVD(n_components=i)\n svd.fit_transform(X_new)\n score = svd.explained_variance_ratio_.sum()\n variance.append(score)\n \nplt.plot(k, variance)\nplt.xlabel('Number of Components')\nplt.ylabel('Explained Variance')\nplt.title('n_components VS Explained variance')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### [5.4] Applying k-means clustering",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\n\nerrors = []\nk = [2, 5, 10, 15, 25, 30, 50, 100]\n\nfor i in k:\n kmeans = KMeans(n_clusters=i, random_state=0)\n kmeans.fit(X_new)\n errors.append(kmeans.inertia_)\n \nplt.plot(k, errors)\nplt.xlabel('K')\nplt.ylabel('Error')\nplt.title('K VS Error Plot')\nplt.show()",
"_____no_output_____"
],
[
"svd = TruncatedSVD(n_components = 20)\nsvd.fit(X_new)\n\nscore = svd.explained_variance_ratio_",
"_____no_output_____"
]
],
[
[
"### [5.5] Wordclouds of clusters obtained in the above section",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\n\nindices = np.argsort(tf_idf.idf_[::-1])\ntop_feat = [tfidf_feat[i] for i in indices[0:3000]]\ntop_indices = indices[0:3000]\ntop_n = np.argsort(top_feat[::-1])\n\nfeature_importances = pd.DataFrame(top_n, index = top_feat, columns=['importance']).sort_values('importance',ascending=False)\ntop = feature_importances.iloc[0:30]\ncomment_words = ' '\nfor val in top.index: \n val = str(val) \n tokens = val.split() \n \n # Converts each token into lowercase \n for i in range(len(tokens)): \n tokens[i] = tokens[i].lower() \n \n for words in tokens: \n comment_words = comment_words + words + ' '\n stopwords = set(STOPWORDS)\n \n \n \nwordcloud = WordCloud(width = 600, height = 600, \n background_color ='black', \n stopwords = stopwords, \n min_font_size = 10).generate(comment_words) \n \nplt.figure(figsize = (10, 10), facecolor = None) \nplt.imshow(wordcloud) \nplt.axis(\"off\") \nplt.tight_layout(pad = 0) \n \nplt.show()",
"_____no_output_____"
]
],
[
[
"### [5.6] Function that returns most similar words for a given word.",
"_____no_output_____"
]
],
[
[
"# Please write all the code with proper documentation\ndef similarity(word):\n similarity = cosine_similarity(X_new)\n word_vect = similarity[top_feat.index(word)]\n index = word_vect.argsort()[::-1][1:5]\n for i in range(len(index)):\n print((i+1),top_feat[index[i]] ,\"\\n\")",
"_____no_output_____"
],
[
" similarity('sugary')\n",
"1 extra \n\n2 needs \n\n3 craving \n\n4 care \n\n"
],
[
" similarity('notlike')\n",
"1 maytag \n\n2 slip \n\n3 gibbles \n\n4 farriage \n\n"
]
],
[
[
"# [6] Conclusions",
"_____no_output_____"
]
],
[
[
"# Please write down few lines about what you observed from this assignment. \n# Also please do mention the optimal values that you obtained for number of components & number of clusters.",
"_____no_output_____"
],
[
"from prettytable import PrettyTable\n \nx = PrettyTable()\n\nx.field_names = [\"Algorithm\",\"Best Hyperparameter\"]\n\nx.add_row([\"T-SVD\", 20])\nx.add_row([\"K-Means\", 20])\n\n\n\nprint(x)",
"+-----------+---------------------+\n| Algorithm | Best Hyperparameter |\n+-----------+---------------------+\n| T-SVD | 20 |\n| K-Means | 20 |\n+-----------+---------------------+\n"
]
],
[
[
"\n\n* It can be obseverd that just 20 components preserve about 99.9% of the variance in the data.\n* The co occurence matrix used is to find the correlation of one word with respect to the other in the dataset.\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d08cc76c9745a0ca9adbe50ad92d9687defb213b | 9,916 | ipynb | Jupyter Notebook | week_1/week_1_unit_5_ifstatement_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 8 | 2021-10-09T14:55:01.000Z | 2022-02-16T15:55:53.000Z | week_1/week_1_unit_5_ifstatement_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 11 | 2021-10-01T12:50:04.000Z | 2022-03-30T10:16:52.000Z | week_1/week_1_unit_5_ifstatement_notebook.ipynb | ceedee666/opensap_python_intro | d4b94eca1172c2e86d7711ed0d23c37250142b88 | [
"CC0-1.0"
] | 3 | 2021-09-30T07:04:28.000Z | 2021-12-16T09:52:04.000Z | 38.734375 | 803 | 0.590359 | [
[
[
"# Conditional statements - part 1\n## Motivation\n\nAll the previous programs are based on a pure sequence of statements. After the start of the program the statements are\nexecuted step by step and the program ends afterwards. However, it is often necessary that parts of a program are\nonly executed under certain conditions. For example, think of the following sentence and how it \nwould be converted into a [pseudo code](https://de.wikipedia.org/wiki/Pseudocode) program:\n\n> If it rains tomorrow, I will clean up the basement. Then I will tidy the cupboards and sort the photos. Otherwise, I\n> will go swimming. In the evening I will go to the cinema with my wife.\n\nThe textual description of the task is not precise. It is not exactly clear what is to be done.\nThis is common for description in natural language. Often addition information is conveyed through the \ncontext of e.g. a conversation. What is probably meant in the previous example is the following:\n```\n If it rains tomorrow, I will:\n - clean up the basement\n - tidy the cupboards\n - sort the photos\n Otherwise (so if it doesn't rain), I will:\n go swimming.\n\n In the evening I will go to the cinema with my wife.\n```\nSo, depending on the weather either one or the other path of the pseudo code program is executed. This\nis illustrated in the following graphic:\n\n\n\nTo enable this more complex workflow two things are required:\n\n- First, a construction that allows to split the workflow in different paths depending on a given condition.\n- Second, a specification of conditions.",
"_____no_output_____"
],
[
"## Conditions\nSo, what is a condition? In the end, it is something that is either `True` or `False`, in other word, a condition always results in a boolean value. In principal, you could use `True` or `False`, when a condition is required. However, this not flexible, i.e. `True` is always true. More sophisticated conditions can be expressed by comparing the content of variables with a given value. For example, there is an integer variable `age`. Then the value can be either equal to 18 or not equal. So checking for *is the value of age equal to 18* can either be `True` or `False`. There are a number of comparison operators, which can be used for both numerical datatypes and string datatypes. In the former case, the usual order of numbers is used, in the latter case, the alphabetic order is taken.\n\n## Comparison Operators\n\nIn order to use decisions in programs a way to specify conditions is needed. To formulate condition the comparison\noperators can be used. The following table shows a selection of comparison operators available in Python. The result of\na comparison using these operators is always a `Boolean` value. As already explained, the only possible `Boolean` values\nare `True` and `False`. For each comparison operator the table contain two example expressions that result in `True`\nand `False` respectively. \n\n| Operator | Explanation | Example True | Example False |\n| -------- | ------------------------------------ | ------------ | ------------- |\n| == | Check for equality | 2 == 2 | 2 == 3 |\n| != | Check for inequality | 2 != 3 | 2 != 2 |\n| < | Check for \"smaller\" | 2 < 3 | 2 < 1 |\n| > | Check for \"larger\" | 3 > 2 | 2 > 3 |\n| <= | Check for \"less than or equal to\" | 3 <= 3 | 3 <= 2 |\n| >= | Check for \"greater than or equal to\" | 2 >= 2 | 2 >= 3 |\n\n## `=` vs. `==`\nIt is important to emphasize the difference between `=` and `==`. If there is one equal sign, the statement is an *assignment*. A value is assigned to a variable. 
The assignment has no return value, it is neither true or false. If there are two equal signs, it is a comparison. The values on both sides of the `\"\"` are unchanged. However, the comparison leads to a value, namely `True` or `False`.\n\n## Complex Conditions\nWhat happens, if you want to check, if the variable `age` is greater than 18 but smaller than 30? In this case, you can build complex conditions using the boolean operators `and`, `or` and `not` (cf. the notebook about datatypes).",
"_____no_output_____"
],
[
"## Exercise\nFamiliarize yourself with the comparison operators. Also test more complex comparisons, such as:\n\n```python\n\"abc\" < \"abd\"\n\"abcd\" > \"abc\"\n2 == 2.0\n1 == True\n0 != True\n```",
"_____no_output_____"
]
],
[
[
"1 == True",
"_____no_output_____"
]
],
[
[
"# Conditional statements\nUsing the conditional operators it is now possible to formulate conditional statements in Python.\nThe syntax for conditional statements in Python is:\n\n```python\nif condition:\n statement_a1\n ...\n statement_an\nelse:\n statement_b1\n ...\n statement_bm\n```\n\nThe result of the condition can be either `True` or `False`. If the condition is `True` the statements `a1` to `an` are executed.\nIf the condition is `False` the statements `b1` to `bm` are executed.\nNote, that the `else` branch is optional, i.e. an\n`if` condition can also be specified without an `else` alternative. If the condition then is not true (i.e. `false`),\nthe statements of the `if` block are simply skipped.",
"_____no_output_____"
]
],
[
[
"number = int(input(\"Please type a number: \"))\nif number > 100:\n print(number, \"is greater than 100!\")",
"_____no_output_____"
],
[
"number = int(input(\"Please type a number: \"))\nif number > 100:\n print(number, \"is greater than 100!\")\nelse:\n print(number, \"is smaller or equals 100!\")",
"_____no_output_____"
]
],
[
[
"### Indentations mark the boundaries of code blocks\n\nStatements that belong together are called *code blocks*.\nAs can be seen in the previous examples, Python does not use special characters or keywords to mark the\nbeginning and the end of code blocks. Instead, indentation is used in Python. \n\nSo indentation and spaces have a meaning in Python! Therefore, you must not indent arbitrarily within a program. Execute the code in the following two cells to see what happens.",
"_____no_output_____"
]
],
[
[
"a = 3\n b = 4\nprint(a + b)",
"_____no_output_____"
],
[
"number = 100\nif number > 0:\n print(\"Number is greater than 0\")",
"_____no_output_____"
]
],
[
[
"Let us challenge your understanding of code blocks in Python. Take a look at the following program. The last statement \n`print(\"Done\")` is not indented. What does this mean for the execution of the\nprogram? Change the program and indent the `print(\"Done\")`. How does the execution of the\nprogram change?",
"_____no_output_____"
]
],
[
[
"number = int(input(\"Please insert a number: \"))\nif number > 100:\n print(number, \"is greater than 100!\")\nelse:\n print(number, \"is smaller oder equals 100!\")\nprint(\"Done\")",
"_____no_output_____"
]
],
[
[
"### Exercise\nWrite a conditional statement that asks for the user's name. Use the `input()` function. If his name is Harry or Harry Potter, then output \"Welcome to Gryffindor, Mr. Potter!\". Otherwise output \"Sorry, Hogwarts is full.\". ",
"_____no_output_____"
]
],
[
[
"name = ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08cc97538441613a58abad5545e677cb7343f85 | 19,754 | ipynb | Jupyter Notebook | Election_in_the_run_with_correlation.ipynb | microprediction/microblog | 8f0495f1867b94b6189349bbcedf22d351198f27 | [
"MIT"
] | null | null | null | Election_in_the_run_with_correlation.ipynb | microprediction/microblog | 8f0495f1867b94b6189349bbcedf22d351198f27 | [
"MIT"
] | null | null | null | Election_in_the_run_with_correlation.ipynb | microprediction/microblog | 8f0495f1867b94b6189349bbcedf22d351198f27 | [
"MIT"
] | null | null | null | 53.825613 | 6,150 | 0.649236 | [
[
[
"<a href=\"https://colab.research.google.com/github/microprediction/microblog/blob/main/Election_in_the_run_with_correlation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Greetings! \n\nYou might be here because you think\n\n\n* Betting markets are far more efficient then Nate Silver or G. Elliott Morris. I really can't help you if you insist otherwise - perhaps G. Elliott will offer you 19/1 on Trump LOL. \n* Betting markets still requires some interpretation, because many punters are so lazy they don't even run simulations, or because they involve heterogeneous groups and some markets are products of others, approximately, so we get a convexity effect. \n\nSee this post https://www.linkedin.com/posts/petercotton_is-bidens-chance-of-winning-90-percent-or-activity-6730191890530095104-njhk and if you like it, please react on linked-in so the marketting dollar for the open source prediction network goes further. Because it really is a dollar. \n\n## Okay then...\n\nThis notebook provides you with a simple interpretation of market implied state electoral college probabilities, nothing more. It can be used to compute things like the market implied correlation between states, using a very simple correlation model. That may, or may not, provide you with a new perspective on the markets or a lens as to their degree of internal consistency.\n\nIn using this, rather than the groovy graphics at 538, you are taking a stand against the ridiculous celebritization of statistics and journalistic group-think. \n\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom pprint import pprint\nimport math\nfrom scipy.stats import norm\n \n\n# Current prices for Biden, expressed as inverse probabilities, and electoral votes\nstates = [ ('arizona',1.23,11), ('michigan',1.01,16), ('pennsylvania',1.03,20),\n ('georgia',1.12,16),('nevada',1.035,6), ('north carolina',6.5,15), ('alaska',50,3),\n ('wisconsin',1.03,10)]\n\n# Maybe you want to add Wisconsin. \n# Okay, let's see if this foreignor can get the basic electoral calculus right. \n# You might want to re-introduce some other states, but if so change the existing totals below: \nbiden = 227\ntrump = 214 # Does not include Alaska\n\n\n# Sanity check. \nundecided = sum([a[2] for a in states])\nprint(undecided)\ntotal = biden + trump + undecided \nassert total==538\n\n",
"97\n"
],
[
"# Next ... let's write a little guy that simulated from modified state probabilities. Just ignore this if you\n# don't think there is any correlation between results at this late stage of the race. \n\n# Perhaps, however, there is some latent correlation still in the results - related to legal moves or military voting patterns or\n# consistent bias across state markets. I will merely remark that some correlation is required to make the betting markets coherent, but\n# also that this implied correlation will not necessarily be justified. \n\ndef conditional(p:float,rho=None,z=None):\n \"\"\" Simulate binary event conditioned on common factor, leaving unconditional probability alone\n p Unconditional probability\n z Gaussian common factor\n rho Correlation\n (this is a Normal Copula with common off-diagonal entries)\n \"\"\"\n if p<1e-8:\n return 0\n elif p>1-1e-8:\n return 1\n else:\n x1 = math.sqrt(1-rho)*np.random.randn() + math.sqrt(rho)*z if z is not None else np.random.randn()\n return x1<norm.ppf(p)\n\n\nexamples = {'p_z=0':conditional(p=0.5,rho=0.5,z=0),\n 'p_z=1':conditional(p=0.5,rho=0.5,z=1)}\npprint(examples)\n\n ",
"{'p_z=0': True, 'p_z=1': True}\n"
],
[
"# A quick sanity check. The mean of the conditional draws should be the same as the original probability\np_unconditional = 0.22\nzs = np.random.randn(10000)\np_mean = np.mean([ conditional(p=p_unconditional, rho=.7, z=z) for z in zs])\npprint( {'p_unconditional':p_unconditional,'mean of p_conditional':p_mean})",
"{'mean of p_conditional': 0.2225, 'p_unconditional': 0.22}\n"
],
[
"# Jolly good. Now let's use this model. \n# I've added a simple translational bias as well, if you'd rather use that to introduce correlation. \n\nBIAS = 0    # If you want to systematically translate all state probs (this is not mean preserving)\nRHO  = 0.4  # If you want correlation introduced via a Normal Copula with constant off-diagonal terms\n\n\ndef biden_sim() -> int:\n    \"\"\"\n        Simulate, once, the number of electoral college votes for Joe Biden\n    \"\"\"\n    votes = biden\n    bias = BIAS*np.random.randn()   # Apply the same translation to all states\n    z = np.random.randn()           # Common latent factor capturing ... you tell me\n    for s in states:\n        p = 1/s[1] \n        conditional_p = conditional(p=p,rho=RHO,z=z)\n        shifted_p = conditional_p + BIAS\n        if np.random.rand()<shifted_p:\n            votes = votes + s[2]\n    return votes\nbiden_sim()",
"_____no_output_____"
],
[
"# Simulate it many times\nbs = [ biden_sim() for _ in range(50000) ]\nts = [538-b for b in bs] # Trump electoral votes \nb_win = np.mean([b>=270 for b in bs])\nprint('Biden win probability is '+str(b_win))\nimport matplotlib.pyplot as plt\nplt.hist(bs,bins=200)\n\nt_win = np.mean([b<=268 for b in bs ])\ntie = np.mean([b==269 for b in bs ])\nprint('Trump win probability is '+str(t_win))\nprint('Tie probability is '+ str(tie))\nb270 = np.mean([b==270 for b in bs])\nprint('Biden=270 probability is '+str(b270))",
"Biden win probability is 0.98328\nTrump win probability is 0.01248\nTie probability is 0.00424\nBiden=270 probability is 0.00276\n"
],
[
"# Compute inverse probabilities (European quoting convention) for range outcomes\nprices = {'trump_270_299':1./np.mean([t>=270 and t<=299 for t in ts]),\n 'trump_300_329':1./np.mean([t>=300 and t<=329 for t in ts]),\n 'biden_270_299':1./np.mean([b>=270 and b<=299 for b in bs]),\n 'biden_300_329':1./np.mean([b>=300 and b<=329 for b in bs]),\n 'biden_330_359':1./np.mean([b>=330 and b<=359 for b in bs]),\n 'biden_m_100.5':1./np.mean([b-t-100.5>0 for b,t in zip(bs,ts)]),\n 'biden_m_48.5':1./np.mean([b-t-48.5>0 for b,t in zip(bs,ts)])}\npprint(prices)",
"{'biden_270_299': 3.837298541826554,\n 'biden_300_329': 1.37211855104281,\n 'biden_330_359': inf,\n 'biden_m_100.5': 7.788161993769471,\n 'biden_m_48.5': 1.1452392404773357,\n 'trump_270_299': 136.6120218579235,\n 'trump_300_329': 8333.333333333334}\n"
],
[
"# American quoting conventions\ndef pm(p):\n if p>0.5:\n return '-'+str(round(100*(p/(1-p)),0))\n else:\n return '+'+str(round(100/p - 100,0))\nexamples = {'p=0.33333':pm(0.333333),\n 'p=0.75':pm(0.75)}\n#pprint(examples)\n\nprices = {'trump_270_or_more':pm(t_win),\n 'biden_270_or_more':pm(b_win),\n 'trump_270_299':pm(np.mean([t>=270 and t<=299 for t in ts])),\n 'trump_300_329':pm(np.mean([t>=300 and t<=329 for t in ts])),\n 'biden_270_299':pm(np.mean([b>=270 and b<=299 for b in bs])),\n 'biden_300_329':pm(np.mean([b>=300 and b<=329 for b in bs]))}\npprint(prices)",
"{'biden_270_299': '+290.0',\n 'biden_270_or_more': '-4019.0',\n 'biden_300_329': '-257.0',\n 'trump_270_299': '+5407.0',\n 'trump_270_or_more': '+5169.0',\n 'trump_300_329': '+121851.0'}\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08ccd875b83a906b01d5539e0ad5ef604653c96 | 119,909 | ipynb | Jupyter Notebook | demographicModelSelectionExample.ipynb | kern-lab/popGenMachineLearningExamples | 3c659ea7b6a46e26fc8c0aa2230962dd4686bc89 | [
"MIT"
] | 23 | 2017-10-23T20:33:40.000Z | 2018-07-03T12:22:42.000Z | demographicModelSelectionExample.ipynb | attrna/popGenMachineLearningExamples | 3c659ea7b6a46e26fc8c0aa2230962dd4686bc89 | [
"MIT"
] | null | null | null | demographicModelSelectionExample.ipynb | attrna/popGenMachineLearningExamples | 3c659ea7b6a46e26fc8c0aa2230962dd4686bc89 | [
"MIT"
] | 4 | 2019-05-19T15:32:45.000Z | 2021-11-07T08:32:32.000Z | 299.7725 | 55,538 | 0.912926 | [
[
[
"# Using a random forest for demographic model selection\nIn Schrider and Kern (2017) we give a toy example of demographic model selection via supervised machine learning in Figure Box 1. Following a discussion on twitter, Vince Buffalo had the great idea of our providing a simple example of supervised ML in population genetics using a jupyter notebook; this notebook aims to serve that purpose by showing you exactly how we produced that figure in our paper",
"_____no_output_____"
],
[
"## Preliminaries\nThe road map here will be to 1) do some simulation of three demographic models, 2) to train a classifier to distinguish among those models, 3) test that classifier with new simulation data, and 4) to graphically present how well our trained classifier works. \n\nTo do this we will use coalescent simulations as implemented in Dick Hudson's well known `ms` software and for the ML side of things we will use the `scikit-learn` package. Let's start by installing these dependencies (if you don't have them installed already)",
"_____no_output_____"
],
[
"### Install, and compile `ms`\nWe have put a copy of the `ms` tarball in this repo, so the following should work upon cloning",
"_____no_output_____"
]
],
[
[
"#untar and compile ms and sample_stats\n!tar zxf ms.tar.gz; cd msdir; gcc -o ms ms.c streec.c rand1.c -lm; gcc -o sample_stats sample_stats.c tajd.c -lm\n#I get three compiler warnings from ms, but everything should be fine\n#now I'll just move the programs into the current working dir\n!mv msdir/ms . ; mv msdir/sample_stats .;",
"_____no_output_____"
]
],
[
[
"### Install `scikit-learn`\nIf you use anaconda, you may already have these modules installed, but if not you can install with either of the following",
"_____no_output_____"
]
],
[
[
"!conda install scikit-learn --yes",
"_____no_output_____"
]
],
[
[
"or if you don't use `conda`, you can use `pip` to install scikit-learn with",
"_____no_output_____"
]
],
[
[
"!pip install -U scikit-learn",
"_____no_output_____"
]
],
[
[
"# Step 1: create a training set and a testing set\nWe will create a training set using simulations from three different demographic models: equilibrium population size, instantaneous population growth, and instantaneous population contraction. As you'll see this is really just a toy example because we will perform classification based on data from a single locus; in practice this would be ill-advised and you would want to use data from many loci simulataneously. \n\nSo lets do some simulation using `ms` and summarize those simulations using the `sample_stats` program that Hudson provides. Ultimately we will only use two summary stats for classification, but one could use many more. Each of these simulations should take a few seconds to run.",
"_____no_output_____"
]
],
[
[
"#simulate under the equilibrium model\n!./ms 20 2000 -t 100 -r 100 10000 | ./sample_stats > equilibrium.msOut.stats",
"_____no_output_____"
],
[
"#simulate under the contraction model\n!./ms 20 2000 -t 100 -r 100 10000 -en 0 1 0.5 -en 0.2 1 1 | ./sample_stats > contraction.msOut.stats",
"_____no_output_____"
],
[
"#simulate under the growth model\n!./ms 20 2000 -t 100 -r 100 10000 -en 0.2 1 0.5 | ./sample_stats > growth.msOut.stats",
"_____no_output_____"
],
[
"#now lets suck up the data columns we want for each of these files, and create one big training set; we will use numpy for this\n# note that we are only using two columns of the data- these correspond to segSites and Fay & Wu's H\nimport numpy as np\nX1 = np.loadtxt(\"equilibrium.msOut.stats\",usecols=(3,9))\nX2 = np.loadtxt(\"contraction.msOut.stats\",usecols=(3,9))\nX3 = np.loadtxt(\"growth.msOut.stats\",usecols=(3,9))\nX = np.concatenate((X1,X2,X3))\n\n#create associated 'labels' -- these will be the targets for training\ny = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)\nY = np.array(y)\n",
"_____no_output_____"
],
[
"#the last step in this process will be to shuffle the data, and then split it into a training set and a testing set\n#the testing set will NOT be used during training, and will allow us to check how well the classifier is doing\n#scikit-learn has a very convenient function for doing this shuffle and split operation\n#\n# will will keep out 10% of the data for testing\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)",
"_____no_output_____"
]
],
[
[
"# Step 2: train our classifier and visualize decision surface\nNow that we have a training and testing set ready to go, we can move on to training our classifier. For this example we will use a random forest classifier (Breiman 2001). This is all implemented in `scikit-learn` and so the code is very brief. ",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\nrfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)\nclf = rfClf.fit(X_train, Y_train)\n",
"_____no_output_____"
]
],
[
[
"That's it! The classifier is trained. This Random Forest classifer used 100 decision trees in its ensemble, a pretty large number considering that we are only using two summary stats to represent our data. Nevertheless it trains on the data very, very quickly.\n\nConfession: the real reason we are using only two summary statistics right here is because it makes it really easy to visualize that classifier's decision surface: which regions of the feature space would be assigned to which class? Let's have a look!\n\n(Note: I have increased the h argument for the call to `make_meshgrid` below, coarsening the contour plot in the interest of efficiency. Decreasing this will yield a smoother plot, but may take a while and use up a lot more memory. Adjust at your own risk!)",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import normalize\n\n#These two functions (taken from scikit-learn.org) plot the decision boundaries for a classifier.\ndef plot_contours(ax, clf, xx, yy, **params):\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n out = ax.contourf(xx, yy, Z, **params)\n return out\n\ndef make_meshgrid(x, y, h=.05):\n x_min, x_max = x.min() - 1, x.max() + 1\n y_min, y_max = y.min() - 1, y.max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n return xx, yy\n\n#Let's do the plotting\nimport matplotlib.pyplot as plt\nfig,ax= plt.subplots(1,1)\nX0, X1 = X[:, 0], X[:, 1]\nxx, yy = make_meshgrid(X0, X1, h=0.2)\nplot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)\n# plotting only a subset of our data to keep things from getting too cluttered\nax.scatter(X_test[:200, 0], X_test[:200, 1], c=Y_test[:200], cmap=plt.cm.coolwarm, edgecolors='k')\nax.set_xlabel(r\"$\\theta_{w}$\", fontsize=14)\nax.set_ylabel(r\"Fay and Wu's $H$\", fontsize=14)\nax.set_xticks(())\nax.set_yticks(())\nax.set_title(\"Classifier decision surface\", fontsize=14)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Above we can see which regions of our feature space are assigned to each class: dark blue shaded areas will be classified as Equilibrium, faint blue as Contraction, and red as Growth. Note the non-linear decision surface. Looks pretty cool! And also illustrates how this type of classifier might be useful for discriminating among classes that are difficult to linearly separate. Also plotted are a subset of our test examples, as dots colored according to their true class. Looks like we are doing pretty well but have a few misclassifications. Would be nice to quantify this somehow, which brings us to...\n\n# Step 3: benchmark our classifier\nThe last step of the process is to use our trained classifier to predict which demographic models our test data are drawn from. Recall that the classifier hasn't seen these test data so this should be a fair test of how well the classifier will perform on any new data we throw at it in the future. We will visualize performance using a confusion matrix. ",
"_____no_output_____"
]
],
[
[
"#here's the confusion matrix function\ndef makeConfusionMatrixHeatmap(data, title, trueClassOrderLs, predictedClassOrderLs, ax):\n data = np.array(data)\n data = normalize(data, axis=1, norm='l1')\n heatmap = ax.pcolor(data, cmap=plt.cm.Blues, vmin=0.0, vmax=1.0)\n\n for i in range(len(predictedClassOrderLs)):\n for j in reversed(range(len(trueClassOrderLs))):\n val = 100*data[j, i]\n if val > 50:\n c = '0.9'\n else:\n c = 'black'\n ax.text(i + 0.5, j + 0.5, '%.2f%%' % val, horizontalalignment='center', verticalalignment='center', color=c, fontsize=9)\n\n cbar = plt.colorbar(heatmap, cmap=plt.cm.Blues, ax=ax)\n cbar.set_label(\"Fraction of simulations assigned to class\", rotation=270, labelpad=20, fontsize=11)\n\n # put the major ticks at the middle of each cell\n ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)\n ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)\n ax.axis('tight')\n ax.set_title(title)\n\n #labels\n ax.set_xticklabels(predictedClassOrderLs, minor=False, fontsize=9, rotation=45)\n ax.set_yticklabels(reversed(trueClassOrderLs), minor=False, fontsize=9)\n ax.set_xlabel(\"Predicted class\")\n ax.set_ylabel(\"True class\")\n \n#now the actual work\n#first get the predictions\npreds=clf.predict(X_test)\n\ncounts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]\nfor i in range(len(Y_test)):\n counts[Y_test[i]][preds[i]] += 1\ncounts.reverse()\nclassOrderLs=['equil','contraction','growth']\n\n#now do the plotting\nfig,ax= plt.subplots(1,1)\nmakeConfusionMatrixHeatmap(counts, \"Confusion matrix\", classOrderLs, classOrderLs, ax)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Looks pretty good. But can we make it better? Well a simple way might be to increase the number of features (i.e. summary statistics) we use as input. Let's give that a whirl using all of the output from Hudson's `sample_stats`",
"_____no_output_____"
]
],
[
[
"X1 = np.loadtxt(\"equilibrium.msOut.stats\",usecols=(1,3,5,7,9))\nX2 = np.loadtxt(\"contraction.msOut.stats\",usecols=(1,3,5,7,9))\nX3 = np.loadtxt(\"growth.msOut.stats\",usecols=(1,3,5,7,9))\nX = np.concatenate((X1,X2,X3))\n#create associated 'labels' -- these will be the targets for training\ny = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)\nY = np.array(y)\nX_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)\nrfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)\nclf = rfClf.fit(X_train, Y_train)\npreds=clf.predict(X_test)\ncounts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]\nfor i in range(len(Y_test)):\n counts[Y_test[i]][preds[i]] += 1\ncounts.reverse()\nfig,ax= plt.subplots(1,1)\nmakeConfusionMatrixHeatmap(counts, \"Confusion matrix\", classOrderLs, classOrderLs, ax)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Even better!\n\nHopefully this simple example gives you the gist of how supervised ML can be used. In the future we will populate this GitHub repository with further examples that might be illustrative. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d08cf2cd050f05ec49422c492bbff6102836b844 | 96,685 | ipynb | Jupyter Notebook | 2_Ejercicios/Modulo2/5.DeciTrees_Kgboost_gridserarch/exercises/4.crime.ipynb | JaunIgciona/DataBootcamp_Nov2020 | 14429a53a2e0001f661c71648646dbe53efdeec7 | [
"Apache-2.0"
] | 1 | 2021-01-18T13:15:08.000Z | 2021-01-18T13:15:08.000Z | 2_Ejercicios/Modulo2/5.DeciTrees_Kgboost_gridserarch/exercises/4.crime.ipynb | JaunIgciona/DataBootcamp_Nov2020 | 14429a53a2e0001f661c71648646dbe53efdeec7 | [
"Apache-2.0"
] | null | null | null | 2_Ejercicios/Modulo2/5.DeciTrees_Kgboost_gridserarch/exercises/4.crime.ipynb | JaunIgciona/DataBootcamp_Nov2020 | 14429a53a2e0001f661c71648646dbe53efdeec7 | [
"Apache-2.0"
] | 1 | 2021-05-24T21:49:24.000Z | 2021-05-24T21:49:24.000Z | 1,223.860759 | 66,023 | 0.748379 | [
[
[
"# 1 \n\nA partir del fichero \"US_Crime_Rates_1960_2014\", se pide:\n\n1. Tratar el dataset como una serie temporal a partir de la columna Year. Siempre el eje X será el nuevo índice Year.\n2. Dibujar todas las columnas numéricas.\n3. Como se puede ver en el punto 2, la columna \"Population\" tiene una magnitud diferente a la de las demás columnas: Dibuja la misma gráfica que antes pero con dos resoluciones diferentes para que se puedan observar con mejor detalle todas las columnas. \n4. A partir de ahora y para el resto del ejercicio, borra las columnas \"Population\" y \"Total\" ¿Qué columnas tienen mejor correlación? \n5. A partir de las cinco columnas que tengan mejor correlación con la columna \"Murder\", entrena un modelo de regresión no lineal a partir de polinomio de grado 6. Este modelo ha de entrenarse siguiendo el esquema recomendado de validación cruzada y almacenando los errores de entrenamiento y validación en cada iteración del entrenamiento.\n6. Representa la evolución de los errores de validación y entrenamiento en una gráfica. ¿Ha habido sobreaprendizaje?. Utiliza n_iteraciones=23 y n_repeats=7.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv(\"../data/US_Crime_Rates_1960_2014.csv\")\n\ndf.set_index(\"Year\", inplace=True)\ndf.plot()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
d08cf3d211e7b56134f38e0f3a9a6fe3d68063d4 | 1,426 | ipynb | Jupyter Notebook | 10 days of statistics/day 6 the central limit theorem 3.ipynb | faisalsanto007/Hakcerrank-problem-solving | eaf6404e8896fe3448df8a3cb4c86585fd7bebcc | [
"MIT"
] | null | null | null | 10 days of statistics/day 6 the central limit theorem 3.ipynb | faisalsanto007/Hakcerrank-problem-solving | eaf6404e8896fe3448df8a3cb4c86585fd7bebcc | [
"MIT"
] | null | null | null | 10 days of statistics/day 6 the central limit theorem 3.ipynb | faisalsanto007/Hakcerrank-problem-solving | eaf6404e8896fe3448df8a3cb4c86585fd7bebcc | [
"MIT"
] | null | null | null | 20.084507 | 49 | 0.483871 | [
[
[
"import math\n\n# Set data\nn = float(input())\nmean = float(input())\nstd = float(input())\npercent_ci = float(input())\nvalue_ci = float(input())\n\n# Formula CI\nci = value_ci * (std / math.sqrt(n))\n\n# Gets the result and show on the screen\nprint(round(mean - ci, 2))\nprint(round(mean + ci, 2))",
"_____no_output_____"
],
[
"# direct approach\n\nmean = 500\nstd = 80\nn = 100\nz = 1.96\n\n# characteristics of sample\nmean = mean\nstd = std / n**(1/2)\n\n# Find the 95% interval\nprint(round(mean - std * z, 2))\nprint(round(mean + std * z, 2))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d08cf9330279ef12d2c522e86e6e530ba8e2cf59 | 97,213 | ipynb | Jupyter Notebook | notebooks/train_single_class_ae.ipynb | sinanbayraktar/latent_3d_points | 3b07edbea5298fa5daa6d04a9cd62041d30d4e86 | [
"MIT"
] | null | null | null | notebooks/train_single_class_ae.ipynb | sinanbayraktar/latent_3d_points | 3b07edbea5298fa5daa6d04a9cd62041d30d4e86 | [
"MIT"
] | null | null | null | notebooks/train_single_class_ae.ipynb | sinanbayraktar/latent_3d_points | 3b07edbea5298fa5daa6d04a9cd62041d30d4e86 | [
"MIT"
] | null | null | null | 285.920588 | 45,024 | 0.919558 | [
[
[
"## This notebook will help you train a vanilla Point-Cloud AE with the basic architecture we used in our paper.\n (it assumes latent_3d_points is in the PYTHONPATH and the structural losses have been compiled)",
"_____no_output_____"
]
],
[
[
"import os.path as osp\n\nfrom latent_3d_points.src.ae_templates import mlp_architecture_ala_iclr_18, default_train_params\nfrom latent_3d_points.src.autoencoder import Configuration as Conf\nfrom latent_3d_points.src.point_net_ae import PointNetAutoEncoder\n\nfrom latent_3d_points.src.in_out import snc_category_to_synth_id, create_dir, PointCloudDataSet, \\\n load_all_point_clouds_under_folder\n\nfrom latent_3d_points.src.tf_utils import reset_tf_graph\nfrom latent_3d_points.src.general_utils import plot_3d_point_cloud",
"_____no_output_____"
],
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"Define Basic Parameters",
"_____no_output_____"
]
],
[
[
"top_out_dir = '../data/' # Use to save Neural-Net check-points etc.\ntop_in_dir = '../data/shape_net_core_uniform_samples_2048/' # Top-dir of where point-clouds are stored.\n\nexperiment_name = 'single_class_ae'\nn_pc_points = 2048 # Number of points per model.\nbneck_size = 128 # Bottleneck-AE size\nae_loss = 'chamfer' # Loss to optimize: 'emd' or 'chamfer'\nclass_name = raw_input('Give me the class name (e.g. \"chair\"): ').lower()",
"Give me the class name (e.g. \"chair\"): chair\n"
]
],
[
[
"Load Point-Clouds",
"_____no_output_____"
]
],
[
[
"syn_id = snc_category_to_synth_id()[class_name]\nclass_dir = osp.join(top_in_dir , syn_id)\nall_pc_data = load_all_point_clouds_under_folder(class_dir, n_threads=8, file_ending='.ply', verbose=True)",
"6778 pclouds were loaded. They belong in 1 shape-classes.\n"
]
],
[
[
"Load default training parameters (some of which are listed beloq). For more details please print the configuration object.\n\n 'batch_size': 50 \n \n 'denoising': False (# by default AE is not denoising)\n\n 'learning_rate': 0.0005\n\n 'z_rotate': False (# randomly rotate models of each batch)\n \n 'loss_display_step': 1 (# display loss at end of these many epochs)\n 'saver_step': 10 (# over how many epochs to save neural-network)",
"_____no_output_____"
]
],
[
[
"train_params = default_train_params()",
"_____no_output_____"
],
[
"encoder, decoder, enc_args, dec_args = mlp_architecture_ala_iclr_18(n_pc_points, bneck_size)\ntrain_dir = create_dir(osp.join(top_out_dir, experiment_name))",
"_____no_output_____"
],
[
"conf = Conf(n_input = [n_pc_points, 3],\n loss = ae_loss,\n training_epochs = train_params['training_epochs'],\n batch_size = train_params['batch_size'],\n denoising = train_params['denoising'],\n learning_rate = train_params['learning_rate'],\n train_dir = train_dir,\n loss_display_step = train_params['loss_display_step'],\n saver_step = train_params['saver_step'],\n z_rotate = train_params['z_rotate'],\n encoder = encoder,\n decoder = decoder,\n encoder_args = enc_args,\n decoder_args = dec_args\n )\nconf.experiment_name = experiment_name\nconf.held_out_step = 5 # How often to evaluate/print out loss on \n # held_out data (if they are provided in ae.train() ).\nconf.save(osp.join(train_dir, 'configuration'))",
"_____no_output_____"
]
],
[
[
"If you ran the above lines, you can reload a saved model like this:",
"_____no_output_____"
]
],
[
[
"load_pre_trained_ae = False\nrestore_epoch = 500\nif load_pre_trained_ae:\n conf = Conf.load(train_dir + '/configuration')\n reset_tf_graph()\n ae = PointNetAutoEncoder(conf.experiment_name, conf)\n ae.restore_model(conf.train_dir, epoch=restore_epoch)",
"_____no_output_____"
]
],
[
[
"Build AE Model.",
"_____no_output_____"
]
],
[
[
"reset_tf_graph()\nae = PointNetAutoEncoder(conf.experiment_name, conf)",
"_____no_output_____"
]
],
[
[
"Train the AE (save output to train_stats.txt) ",
"_____no_output_____"
]
],
[
[
"buf_size = 1 # Make 'training_stats' file to flush each output line regarding training.\nfout = open(osp.join(conf.train_dir, 'train_stats.txt'), 'a', buf_size)\ntrain_stats = ae.train(all_pc_data, conf, log_file=fout)\nfout.close()",
"_____no_output_____"
]
],
[
[
"## Evaluation",
"_____no_output_____"
],
[
"Get a batch of reconstuctions and their latent-codes.",
"_____no_output_____"
]
],
[
[
"feed_pc, feed_model_names, _ = all_pc_data.next_batch(10)\nreconstructions = ae.reconstruct(feed_pc)[0]\nlatent_codes = ae.transform(feed_pc)",
"_____no_output_____"
]
],
[
[
"Use any plotting mechanism such as matplotlib to visualize the results.",
"_____no_output_____"
]
],
[
[
"i = 2\nplot_3d_point_cloud(reconstructions[i][:, 0], \n reconstructions[i][:, 1], \n reconstructions[i][:, 2], in_u_sphere=True);\n\ni = 4\nplot_3d_point_cloud(reconstructions[i][:, 0], \n reconstructions[i][:, 1], \n reconstructions[i][:, 2], in_u_sphere=True);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d08cfb30bcbd5149689a8dc4333db11fabdd4d1b | 672,892 | ipynb | Jupyter Notebook | 02_body/chapter3/images/trajctory analysis/.ipynb_checkpoints/graph_ploting-checkpoint.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 02_body/chapter3/images/trajctory analysis/.ipynb_checkpoints/graph_ploting-checkpoint.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 02_body/chapter3/images/trajctory analysis/.ipynb_checkpoints/graph_ploting-checkpoint.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 251.736626 | 72,224 | 0.891185 | [
[
[
"## Figs for the measurement force paper",
"_____no_output_____"
]
],
[
[
"from scipy.io import loadmat\nfrom scipy.optimize import curve_fit\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom numpy import trapz\ndef cm2inch(value):\n return value/2.54\n\n#axes.xaxis.set_tick_params(direction='in', which='both')\n#axes.yaxis.set_tick_params(direction='in', which='both')\n\n\n\nmpl.rcParams[\"xtick.direction\"] = \"in\"\nmpl.rcParams[\"ytick.direction\"] = \"in\"\nmpl.rcParams[\"lines.markeredgecolor\"] = \"k\"\nmpl.rcParams[\"lines.markeredgewidth\"] = 1.5\nmpl.rcParams[\"figure.dpi\"] = 200\nfrom matplotlib import rc\nrc('font', family='serif')\nrc('text', usetex=True)\nrc('xtick', labelsize='medium')\nrc('ytick', labelsize='medium')\nrc(\"axes\", labelsize = \"large\")\ndef cm2inch(value):\n return value/2.54\ndef cm2inch(value):\n return value/2.54\ndef gauss_function(x, a, x0, sigma):\n return a*np.exp(-(x-x0)**2/(2*sigma**2))\ndef pdf(data, bins = 10, density = True):\n \n pdf, bins_edge = np.histogram(data, bins = bins, density = density)\n bins_center = (bins_edge[0:-1] + bins_edge[1:]) / 2\n \n return pdf, bins_center ",
"_____no_output_____"
],
[
"#import the plots data\ndataset = loadmat(\"data_graphs.mat\")\nfor i in dataset.keys():\n try:\n dataset[i] = np.squeeze(dataset[i])\n except:\n continue\n \nfit_data = loadmat(\"data_fit_2705.mat\")\nfor i in fit_data.keys():\n try:\n fit_data[i] = np.squeeze(fit_data[i])\n except:\n continue\n \n\n ",
"_____no_output_____"
],
[
"def movmin(z, window):\n result = np.empty_like(z)\n start_pt = 0\n end_pt = int(np.ceil(window / 2))\n\n for i in range(len(z)):\n if i < int(np.ceil(window / 2)):\n start_pt = 0\n if i > len(z) - int(np.ceil(window / 2)):\n end_pt = len(z)\n result[i] = np.min(z[start_pt:end_pt])\n start_pt += 1\n end_pt += 1\n\n return result",
"_____no_output_____"
],
[
"plt.figure(figsize=( cm2inch(16),cm2inch(8)))\nplt.plot(dataset[\"time\"], dataset[\"z\"], label=\"raw\")\nplt.plot(dataset[\"time\"], dataset[\"z\"] - movmin(dataset[\"z\"], 10000), label=\"rescaled\")\nplt.xlabel(\"time (s)\")\nplt.ylabel(\"$z$ ($\\mu$m)\")\nplt.legend(frameon=False)\nplt.savefig(\"traj_rescaled.pdf\")",
"_____no_output_____"
],
[
"dataset",
"_____no_output_____"
],
[
"color = ['tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan']\nplt.figure()\n\nfor n,i in enumerate(['pdf_Dz_short_t_1', 'pdf_Dz_short_t_2', 'pdf_Dz_short_t_3', 'pdf_Dz_short_t_4', 'pdf_Dz_short_t_5']):\n plt.semilogy(dataset[i][0,:],dataset[i][1,:], color = color[n], marker = \"o\", linestyle = \"\")\n\n \nplt.plot(dataset[\"pdf_Dz_short_th_t_5\"][0,:],dataset[\"pdf_Dz_short_th_t_5\"][1,:], color = color[4])\nplt.plot(dataset[\"gaussian_short_timetheory_z\"][0,:],dataset[\"gaussian_short_timetheory_z\"][1,:], color = \"gray\",linestyle = \"--\")\n\nax = plt.gca()\nax.set_ylim([1e-5,1])\nax.set_xlim([-7,7])\nplt.xlabel(\"$\\Delta z / \\sigma$\")\nplt.ylabel(\"$P(\\Delta z / \\sigma)$\")",
"_____no_output_____"
],
[
"#dataset",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(cm2inch(8.6), cm2inch(8.6)/1.68*1.3),constrained_layout=False)\ngs = fig.add_gridspec(2,3)\n\n##### MSD\n\n\nfig.add_subplot(gs[0,:])\n\n\n\nplt.loglog(dataset[\"MSD_time_tot\"], dataset[\"MSD_fit_x\"], color = \"k\")\nplt.loglog(dataset[\"MSD_time_tot\"], dataset[\"MSD_fit_z\"], color = \"k\")\n\nplt.loglog(dataset[\"MSD_time_tot\"],dataset[\"MSD_x_tot\"],\"o\", label = \"x\", markersize = 5)\nplt.loglog(dataset[\"MSD_time_tot\"][::2],dataset[\"MSD_y_tot\"][::2],\"o\", label = \"y\", markersize = 5)\nplt.loglog(dataset[\"MSD_time_tot\"],dataset[\"MSD_z_tot\"],\"o\", label = \"z\", markersize = 5)\n\n\n\n\n# plateau\n\nplateau = [dataset[\"fitted_MSD_Plateau\"] for i in range(len(dataset[\"MSD_time_tot\"]))]\nplt.loglog(dataset[\"MSD_time_tot\"][-60:], plateau[-60:], color = \"black\", linewidth = 1,zorder = 10, linestyle = \"--\")\n \n \n##\nax = plt.gca()\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.xaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.xaxis.set_minor_locator(locmin)\nax.xaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\nax.set_xlim([1e-2,1e3])\nax.set_ylim([None,1e-10])\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\nplt.text(0.45*xmax,2.5*ymin,'a)')\n\n\nplt.ylabel(\"$\\mathrm{MSD}$ ($\\mathrm{m^2}$)\",fontsize = \"small\", labelpad=0.5)\nplt.xlabel(\"$\\Delta t$ (s)\",fontsize = \"small\",labelpad=0.5)\n\nplt.legend(frameon = False,fontsize = \"x-small\",loc = \"upper left\")\n\n####### SHORT TIME X\n\n\n\nfig.add_subplot(gs[1,0])\n\nfor n,i in enumerate(['pdf_Dx_short_t_1', 'pdf_Dx_short_t_2', 
'pdf_Dx_short_t_3', 'pdf_Dx_short_t_4', 'pdf_Dx_short_t_5']):\n plt.semilogy(dataset[i][0,:],dataset[i][1,:], color = color[n], marker = \"o\", linestyle = \"\",markersize = 3)\n\n \nplt.plot(dataset[\"pdf_Dx_short_th_t_5\"][0,:],dataset[\"pdf_Dx_short_th_t_5\"][1,:], color = \"k\",zorder=6,linewidth=1)\nplt.plot(dataset[\"gaussianx_short_timetheory\"][0,:],dataset[\"gaussianx_short_timetheory\"][1,:], color = \"gray\",zorder=-1,linestyle = \"--\",)\n\nax = plt.gca()\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\n\nax.set_ylim([1e-5,1])\nax.set_xlim([-7,7])\nplt.xlabel(\"$\\Delta x / \\sigma$\",fontsize = \"small\", labelpad=0.5)\nplt.ylabel(\"$P_{x} \\sigma$\",fontsize = \"small\", labelpad=0.5)\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\nplt.text(0.54*xmax,0.25*ymax,'b)')\n\n\n####### SHORT TIME Z\n\n\n\nfig.add_subplot(gs[1,1])\n\nfor n,i in enumerate(['pdf_Dz_short_t_1', 'pdf_Dz_short_t_2', 'pdf_Dz_short_t_3', 'pdf_Dz_short_t_4', 'pdf_Dz_short_t_5']):\n plt.semilogy(dataset[i][0,:],dataset[i][1,:], color = color[n], marker = \"o\", linestyle = \"\",markersize = 3)\n\n \nplt.plot(dataset[\"pdf_Dz_short_th_t_5\"][0,:],dataset[\"pdf_Dz_short_th_t_5\"][1,:], color = \"k\",zorder=6,linewidth=1)\nplt.plot(dataset[\"gaussian_short_timetheory_z\"][0,:],dataset[\"gaussian_short_timetheory_z\"][1,:], color = \"gray\",zorder=-1,linestyle = \"--\",)\n\nax = plt.gca()\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n 
numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\n\nax.set_ylim([1e-5,1])\nax.set_xlim([-7,7])\nplt.xlabel(\"$\\Delta z / \\sigma$\",fontsize = \"small\",labelpad=0.5)\nplt.ylabel(\"$P_{z} \\sigma$\",fontsize = \"small\",labelpad=0.5)\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\nplt.text(0.58*xmax,0.25*ymax,'c)')\n###### LONG TIME PDF\\\n\n\nfig.add_subplot(gs[1,2])\n\nplt.errorbar(dataset[\"x_pdf_longtime\"]*1e6,dataset[\"pdf_longtime\"],yerr=dataset[\"err_long_t\"],ecolor = \"k\",barsabove=False,linewidth = 0.8, label = \"experimental pdf\",marker=\"o\", markersize=3,capsize = 1,linestyle=\"\")\n#plt.fill_between(bins_centers_long_t, pdf_long_t-err_long_t, pdf_long_t+err_long_t, alpha = 0.3)\nplt.semilogy(dataset[\"bins_centers_long_t\"],dataset[\"Pdeltaz_long_th\"],color=\"black\", linewidth = 1, zorder=10)\n\n\nplt.ylabel(\"$P_z$ ($\\mathrm{\\mu m^{-1}})$\",fontsize = \"small\", labelpad=0.5)\nplt.xlabel(\"$\\Delta z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\n\nax = plt.gca()\n\nax = plt.gca()\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\nax.set_ylim([1e-3,1])\n#ax.set_xlim([None,1e-10])\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\nplt.text(0.5*xmax,0.4*ymax,'d)')\n\n\n\nplt.tight_layout(pad = 0.1,h_pad=0.1, w_pad=0.3)\n\nplt.savefig(\"MSD_displacements.svg\")",
"_____no_output_____"
],
[
"#dataset",
"_____no_output_____"
],
[
"\ndef P_b_off(z,z_off, B, ld, lb):\n z_off = z_off * 1e-6 \n lb = lb * 1e-9\n ld = ld * 1e-9\n z = z - z_off\n P_b = np.exp(-B * np.exp(-z / (ld)) - z / lb)\n P_b[z < 0] = 0\n \n # Normalization of P_b\n \n A = trapz(P_b,z * 1e6)\n P_b = P_b / A\n \n \n return P_b",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(cm2inch(8.6), 0.75*cm2inch(8.6)/1.68),constrained_layout=False)\ngs = fig.add_gridspec(1,2)\n\nfig.add_subplot(gs[0,0])\n\n\n\n#########\n\ndef pdf(data, bins = 10, density = True):\n \n pdf, bins_edge = np.histogram(data, bins = bins, density = density)\n bins_center = (bins_edge[0:-1] + bins_edge[1:]) / 2\n \n return pdf, bins_center \n\n\npdf_z,bins_center = pdf(dataset[\"z\"]- np.min(dataset[\"z\"]),bins = 150)\n\n\ndef logarithmic_hist(data,begin,stop,num = 50,base = 2):\n \n if begin == 0:\n beg = stop/num\n bins = np.logspace(np.log(beg)/np.log(base), np.log(stop)/np.log(base), num-1, base=base)\n widths = (bins[1:] - bins[:-1])\n #bins = np.cumsum(widths[::-1])\n bins = np.concatenate(([0],bins))\n #widths = (bins[1:] - bins[:-1])\n \n else:\n bins = np.logspace(np.log(begin)/np.log(base), np.log(stop)/np.log(base), num, base=base)\n widths = (bins[1:] - bins[:-1])\n \n hist,a= np.histogram(data, bins=bins,density=True)\n # normalize by bin width\n bins_center = (bins[1:] + bins[:-1])/2\n \n return bins_center,widths, hist\n\n\n\n#bins_center_pdf_z,widths,hist = logarithmic_hist(z_0offset, 0.000001, 3, num = 31,base=2)\n\n\n#pdf_z, bins_center_pdf_z = pdf(z_dedrift[z_dedrift < 3], bins = 100)\n#bins_center,widths, pdf_z = logarithmic_hist(dataset[\"z\"]-np.mean(dataset[\"z\"]),0.0001,4,num = 10,base = 10)\n\nP_b_th = P_b_off(bins_center*1e-6, 0, dataset[\"B\"], dataset[\"ld\"], dataset[\"lb\"])\n\n\n\nfig.add_subplot(gs[0,1])\n\nplt.plot(bins_center,P_b_th/trapz(P_b_th,bins_center),color = \"k\",linewidth=1)\nplt.semilogy(bins_center - dataset[\"offset_B\"],pdf_z, \"o\", markersize = 2.5)\n\nplt.xlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\nplt.ylabel(\"$P_{\\mathrm{eq}}$ ($\\mathrm{\\mu m ^{-1}}$)\",fontsize = \"small\", labelpad=0.5)\n\nax = plt.gca()\nax.set_ylim([1e-4,3])\nax.set_xlim([-0.2,4.5])\n#plt.xticks([0,1,2,3,4])\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), 
numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\n\n\nplt.text(0.8*xmax,1.2*ymin,'b)')\n\nplt.tight_layout(pad = 0.01,h_pad=0.001, w_pad=0.1)\n\nplt.savefig(\"viscosityxpdfz.svg\")",
"_____no_output_____"
],
[
"#fig = plt.figure(figsize=(cm2inch(8.6), cm2inch(8.6)/1.68),constrained_layout=False)\n\nplt.errorbar(dataset[\"z_Force\"]*1e6, dataset[\"Force\"]*1e15,yerr=2*np.sqrt(2)*dataset[\"err_Force\"]*1e15,xerr=dataset[\"x_err_Force\"],ecolor = \"k\", linestyle=\"\", marker=\"o\", markersize = 4,linewidth = 0.8, capsize=1,zorder=3)\nplt.semilogx(dataset[\"z_Force_th\"]*1e6,dataset[\"Force_th\"]*1e15)\n\nplt.plot(np.linspace(1e-2,2,10), np.ones(10) * np.mean(dataset[\"Force\"][-10:]*1e15),zorder=-4,linewidth=1)\nax = plt.gca()\nax.set_ylim([-100,1200])\nax.set_xlim([0.1e-1,3])\nplt.ylabel(\"$F_z$ $\\\\mathrm{(fN)}$\",fontsize = \"small\", labelpad=0.5)\nplt.xlabel(\"$z$ $(\\\\mathrm{\\mu m})$\",fontsize = \"small\", labelpad=0.5)\n\n\nplt.text(1.2e-2,100, \"$F_g = -7 ~ \\mathrm{fN}$ \",fontsize=\"x-small\")\n\nplt.tight_layout()\nplt.savefig(\"Force.pdf\")\n",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(cm2inch(8.6), 0.75*cm2inch(8.6)/1.68),constrained_layout=False)\n\ngs = fig.add_gridspec(1,5)\n\nfig.add_subplot(gs[0,:2])\n\nz_th = np.linspace(10e-9,10e-6,100)\n\n#plt.errorbar(z_D_para_fit, D_para_fit/Do, yerr = err_d_para_fit/Do, linewidth = 3, marker = \"x\", linestyle = \"\",color = \"tab:red\", label = \"$D_ \\\\parallel$\")\nplt.loglog(z_th*1e6, dataset[\"D_x_th\"], color = \"k\")\nplt.plot(dataset[\"z_D_yacine\"]*1e6 - dataset[\"offset_diffusion\"], dataset[\"z_D_x_yacine\"] / dataset[\"Do\"], marker = \"o\", linestyle = \"\",color = \"tab:blue\",label = \"$D_\\\\parallel$\", markersize = 4)\n\n\n\n#plt.errorbar(bins_center_pdf_z[:-1], Dz[:]/Do, yerr=err[:]/Do, linewidth = 3, marker = \"o\", linestyle = \"\",color = \"tab:red\",label = \"$D_ \\\\bot$\")\nplt.semilogx(z_th*1e6, dataset[\"D_z_th\"],color = \"k\")\nplt.plot(dataset[\"z_D_yacine\"]*1e6 - dataset[\"offset_diffusion\"], dataset[\"z_D_z_yacine\"] / dataset[\"Do\"], marker = \"o\", linestyle = \"\",color = \"tab:green\",label = \"$D_z$\", markersize = 4)\n\n\nax = plt.gca()\nax.set_ylim([None,1.01])\nax.set_xlim([None,10])\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.xaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.xaxis.set_minor_locator(locmin)\nax.xaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\nplt.text(0.3*xmax,1.5*ymin,'a)')\n\n\n\n\n\n\nplt.legend(frameon = False,fontsize = \"x-small\",loc=\"lower center\")\nplt.xlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\nplt.ylabel(\"$D_i/ D_\\mathrm{0}$\",fontsize = \"small\", labelpad=0.5)\n\n\n\n#########\n\n\nfig.add_subplot(gs[0,2:])\n\nplt.errorbar(dataset[\"z_Force\"]*1e6, dataset[\"Force\"]*1e15,yerr=2*np.sqrt(2)*dataset[\"err_Force\"]*1e15,xerr=dataset[\"x_err_Force\"],ecolor = \"k\", linestyle=\"\", 
marker=\"o\", markersize = 4,linewidth = 0.8, capsize=1,zorder=3)\nplt.semilogx(dataset[\"z_Force_th\"]*1e6,dataset[\"Force_th\"]*1e15,zorder = 9, color = \"k\",linewidth = 1)\n\nplt.plot(np.linspace(1e-2,5,100), np.ones(100) * np.mean(dataset[\"Force\"][-10:]*1e15),zorder=10, linewidth = 1, linestyle=\"--\", color = \"tab:red\")\nax = plt.gca()\nax.set_ylim([-100,1500])\nax.set_xlim([0.1e-1,3])\nplt.ylabel(\"$F_z$ $\\\\mathrm{(fN)}$\",fontsize = \"small\", labelpad=0.5)\nplt.xlabel(\"$z$ $(\\\\mathrm{\\mu m})$\",fontsize = \"small\", labelpad=0.5)\n\n\nplt.text(1.6e-1,100, \"$F_\\mathrm{g} = -7 ~ \\mathrm{fN}$ \",fontsize=\"x-small\", color = \"tab:red\")\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\nplt.yticks([0,250,500,750,1000,1250,1500])\n\nplt.text(0.5*xmax,0.85*ymax,'b)')\n\n#inset\n\n\nplt.tight_layout(pad = 0.01)\n\nplt.savefig(\"viscosityxforce.svg\")",
"_____no_output_____"
],
[
"plt.semilogx(dataset[\"z_Force_th\"][500:1000]*1e6,dataset[\"Force_th\"][500:1000]*1e15,zorder = 10, color = \"k\",linewidth = 1)\n",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(cm2inch(8.6), cm2inch(8.6)/1.68),constrained_layout=False)\ngs = fig.add_gridspec(6,8)\nI_radius = fit_data[\"I_radius\"]\nI_r_exp = fit_data[\"I_r_exp\"] \nI_radius = fit_data[\"I_radius\"] \ntheo_exp = fit_data[\"theo_exp\"]\nerr = fit_data[\"I_errr_exp\"]\n\n#fig.add_subplot(gs[0:2,0:2])\n\nfig.add_subplot(gs[0:3,5:])\n\nplt.imshow(fit_data[\"exp_image\"], cmap = \"gray\")\nplt.yticks([0,125,250])\n\nfig.add_subplot(gs[3:6,5:])\n\nplt.imshow(fit_data[\"th_image\"], cmap = \"gray\")\n#plt.xticks([], [])\nplt.xticks([0,125,250])\nplt.yticks([0,125,250])\n\nfig.add_subplot(gs[3:6,0:5])\n\nplt.plot(I_radius* 0.532,I_r_exp,label = \"Experiment\", linewidth = 0.8)\n#plt.fill_between(I_radius* 0.532,I_r_exp-err,I_r_exp+err, alpha = 0.7)\nplt.plot(I_radius* 0.532,theo_exp,label = \"Theory\",linewidth = 0.8)\nplt.ylabel(\"$I/I_0$ \", fontsize = \"x-small\", labelpad=0.5)\nplt.xlabel(\"radial distance ($\\mathrm{\\mu m}$)\", fontsize = \"x-small\", labelpad=0.5)\nplt.legend(fontsize = 5,frameon = False, loc = \"lower right\")\n\n\n\nplt.tight_layout(pad = 0.01)\n\nplt.savefig(\"exp.svg\")",
"_____no_output_____"
],
[
"x = dataset[\"x\"] \ny = dataset[\"y\"]\nz = dataset[\"z\"]- np.min(dataset[\"z\"])",
"_____no_output_____"
],
[
"import matplotlib as mpl\ndef axisEqual3D(ax):\n extents = np.array([getattr(ax, 'get_{}lim'.format(dim))() for dim in 'xyz'])\n sz = extents[:,1] - extents[:,0]\n centers = np.mean(extents, axis=1)\n maxsize = max(abs(sz))\n r = maxsize/2\n for ctr, dim in zip(centers, 'xyz'):\n getattr(ax, 'set_{}lim'.format(dim))(ctr - r, ctr + r)\n \n \n",
"_____no_output_____"
],
[
"from matplotlib.ticker import MultipleLocator\nN = 200\ncmap = plt.get_cmap('jet')\nfig = plt.figure(figsize=(cm2inch(8.6)/1.5, 0.75*cm2inch(8.6)/1.68))\n#plt.figaspect(0.21)*1.5\nax = fig.gca(projection='3d')\nax.pbaspect = [1, 20/25, 3/25*4]\nax.ticklabel_format(style = \"sci\")\nfor i in range(N-1):\n ax.plot(x[i*360:i*360+360], y[i*360:i*360+360], z[i*360:i*360+360], color=plt.cm.jet(1*i/N), linewidth = 0.2)\n\n\nnorm = mpl.colors.Normalize(vmin=0,vmax=1)\nsm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)\nsm.set_array([])\nax = plt.gca()\nax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\nax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\n#ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\nplt.rcParams['grid.color'] = \"gray\"\nax.grid(False)\n#ax.w_xaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n#ax.w_yaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n#ax.w_zaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n\n\nax.set_ylim([25,45])\nax.set_xlim([15,40])\n\n#plt.xticks([20,30,40])\n#plt.yticks([30,35,40])\nax.set_zticks([0,1.5,3])\n\n\nplt.xlabel(\"$x$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\nplt.ylabel(\"$y$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\nax.set_zlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\", labelpad=0.5)\n\nax.view_init(10,45)\n\nax.grid(False)\nax.xaxis.pane.set_edgecolor('black')\nax.yaxis.pane.set_edgecolor('black')\nax.xaxis.pane.fill = False\nax.yaxis.pane.fill = False\nax.zaxis.pane.fill = False\n\n\n\n[t.set_va('center') for t in ax.get_yticklabels()]\n[t.set_ha('left') for t in ax.get_yticklabels()]\n[t.set_va('center') for t in ax.get_xticklabels()]\n[t.set_ha('right') for t in ax.get_xticklabels()]\n[t.set_va('center') for t in ax.get_zticklabels()]\n[t.set_ha('left') for t in ax.get_zticklabels()]\n\n\n\nax.xaxis._axinfo['tick']['inward_factor'] = 0\nax.xaxis._axinfo['tick']['outward_factor'] = 0.4\nax.yaxis._axinfo['tick']['inward_factor'] = 
0\nax.yaxis._axinfo['tick']['outward_factor'] = 0.4\nax.zaxis._axinfo['tick']['inward_factor'] = 0\nax.zaxis._axinfo['tick']['outward_factor'] = 0.4\nax.zaxis._axinfo['tick']['outward_factor'] = 0.4\n\nax.view_init(elev=5, azim=135)\n#ax.xaxis.set_major_locator(MultipleLocator(1))\n#ax.yaxis.set_major_locator(MultipleLocator(5))\n#ax.zaxis.set_major_locator(MultipleLocator())\n\n\n\nticks_c = []\nfor i in np.linspace(0,1,5):\n ticks_c.append(\"{:.0f}\".format(N*360*i/60/60))\ncbar = plt.colorbar(sm, ticks=np.linspace(0,1,5), format = \"%.1f\",shrink = 0.4,orientation='horizontal')\ncbar.set_ticklabels(ticks_c)\ncbar.set_label(\"$t$ (min)\", labelpad=0.5)\nplt.tight_layout(h_pad=0.1)\nplt.savefig(\"traj.svg\")",
"_____no_output_____"
],
[
"dir(ax)",
"_____no_output_____"
],
[
"20/25*0.55",
"_____no_output_____"
],
[
"ticks_c = []\nfor i in np.linspace(0,1,10):\n ticks_c.append(\"{:.0f} m\".format(N*500*i/60/60))\nticks_c",
"_____no_output_____"
],
[
"200*360",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(cm2inch(8.6), 1*cm2inch(8.6)/1.68),constrained_layout=False)\ngs = fig.add_gridspec(1,10)\n\nfig.add_subplot(gs[0,0:5], projection='3d')\n\nN = 200\ncmap = plt.get_cmap('jet')\nax = plt.gca()\nax.ticklabel_format(style = \"sci\")\n\nax.pbaspect = [1, 15/25, 0.25/25*4]\n\n\nfor i in range(N-1):\n ax.plot(x[i*500:i*500+500], y[i*500:i*500+500], z[i*500:i*500+500], color=plt.cm.jet(1*i/N), linewidth = 0.2)\n \nnorm = mpl.colors.Normalize(vmin=0,vmax=1)\nsm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)\nsm.set_array([])\nax = plt.gca()\nax.pbaspect = [1, 20/25, 3/25*4]\n\nplt.xlabel(\"x [$\\mathrm{\\mu m}$]\")\nplt.ylabel(\"y [$\\mathrm{\\mu m}$]\")\nax.set_zlabel(\"z [$\\mathrm{\\mu m}$]\")\nax.grid(False)\n#ax.view_init(30, -10)\n#ax.view_init(20, -1)\n\n\n\nticks_c = []\nfor i in np.linspace(0,1,10):\n ticks_c.append(\"{:.0f} min\".format(N*500*i/60/60))\ncbar = plt.colorbar(sm, ticks=np.linspace(0,1,10), format = \"%.1f\",orientation='horizontal')\ncbar.set_ticklabels(ticks_c)\n\n\n\n#########\n\n\nfig.add_subplot(gs[0,7:])\n\nplt.plot(dataset[\"x_pdf_z\"] * 1e6,dataset[\"Pb_th\"])\nplt.semilogy(dataset[\"x_pdf_z\"] * 1e6 - dataset[\"offset_B\"],dataset[\"pdf_z\"], \"o\", markersize = 4)\n\nplt.xlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\")\nplt.ylabel(\"$P(z)$ (a.u.)\",fontsize = \"small\")\n\nax = plt.gca()\nax.set_ylim([1e-2,3])\nax.set_xlim([-0.2,1])\nplt.xticks([0,1,2])\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\n\n\nplt.text(0.8*xmax,1.2*ymin,'b)')\n\nplt.tight_layout(pad = 0.01)\nplt.savefig(\"viscosityxpdfz.pdf\")",
"/home/expensia/miniconda3/envs/analysis/lib/python3.7/site-packages/ipykernel_launcher.py:117: UserWarning: Tight layout not applied. tight_layout cannot make axes width small enough to accommodate all axes decorations\n"
],
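The cell above builds its colorbar from a standalone `ScalarMappable` and then relabels the normalized `[0, 1]` ticks as elapsed minutes (`N` segments of 500 samples at 60 Hz). A minimal self-contained sketch of that pattern — `N` and the segment length are taken from the cell, the figure itself is a placeholder:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

N = 200    # number of trajectory segments (as in the cell above)
SEG = 500  # samples per segment, recorded at 60 Hz (assumed from N*500/60/60)

fig, ax = plt.subplots()

# A ScalarMappable not attached to any artist: it only carries the
# colormap + normalization so the colorbar can be drawn from it.
cmap = plt.get_cmap("jet")
norm = mpl.colors.Normalize(vmin=0, vmax=1)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])

# Map each normalized tick t to N*SEG*t samples, then /60/60 -> minutes.
ticks = np.linspace(0, 1, 10)
labels = ["{:.0f} min".format(N * SEG * t / 60 / 60) for t in ticks]

cbar = fig.colorbar(sm, ax=ax, ticks=ticks, orientation="horizontal")
cbar.set_ticklabels(labels)
```

The same trick appears again later in the notebook with 360-sample segments; only `SEG` and the number of ticks change.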
[
"fig = plt.figure(figsize=(cm2inch(8.6), 0.75*cm2inch(8.6)/1.68),constrained_layout=False)\ngs = fig.add_gridspec(10,1)\n\nfig.add_subplot(gs[0:2,0])\n\nplt.plot(np.arange(len(z))/60,z)\nplt.xlabel(\"time (s)\")\nplt.ylabel(\"$z$ ($\\mathrm{\\mu m}$)\")\n\n#########\n\n\nfig.add_subplot(gs[5:,0])\n\nplt.plot(dataset[\"x_pdf_z\"] * 1e6,dataset[\"Pb_th\"])\nplt.semilogy(dataset[\"x_pdf_z\"] * 1e6 - dataset[\"offset_B\"],dataset[\"pdf_z\"], \"o\", markersize = 4)\n\nplt.xlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\")\nplt.ylabel(\"$P(z)$ (a.u.)\",fontsize = \"small\")\n\nax = plt.gca()\nax.set_ylim([1e-2,3])\nax.set_xlim([-0.2,1])\nplt.xticks([0,1,2])\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n\n\n\nplt.text(0.8*xmax,1.2*ymin,'b)')\n\nplt.tight_layout(pad = 0.01,h_pad=0.001, w_pad=0.1)\n\nplt.savefig(\"viscosityxpdfz.pdf\")",
"/home/expensia/miniconda3/envs/analysis/lib/python3.7/site-packages/ipykernel_launcher.py:42: UserWarning: Tight layout not applied. tight_layout cannot make axes height small enough to accommodate all axes decorations\n"
],
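Both figure cells size themselves with a `cm2inch` helper that is defined elsewhere in the notebook and does not appear in this chunk. A plausible one-liner, assuming the standard 2.54 cm-per-inch conversion (matplotlib's `figsize` expects inches):

```python
def cm2inch(value):
    """Convert centimetres to inches (1 in = 2.54 cm), e.g. for figsize."""
    return value / 2.54
```

With this definition, `cm2inch(8.6)` gives roughly 3.39 in, the single-column width (8.6 cm) common in two-column journal templates.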
[
"bins_center",
"_____no_output_____"
],
[
"dataset[\"B\"]",
"_____no_output_____"
],
[
"t = np.arange(len(z))/60",
"_____no_output_____"
],
[
"for n,i in enumerate(['pdf_Dz_short_t_1', 'pdf_Dz_short_t_2', 'pdf_Dz_short_t_3', 'pdf_Dz_short_t_4', 'pdf_Dz_short_t_5']):\n plt.semilogy(dataset[i][0,:],dataset[i][1,:], color = color[n], marker = \"o\", linestyle = \"\",markersize = 6)\n\n \nplt.plot(dataset[\"pdf_Dz_short_th_t_5\"][0,:],dataset[\"pdf_Dz_short_th_t_5\"][1,:], color = color[4])\nplt.plot(dataset[\"gaussia_short_timetheory\"][0,:],dataset[\"gaussia_short_timetheory\"][1,:], color = \"gray\",linestyle = \"--\")\n\nax = plt.gca()\n\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\n\n\n\n\nax.set_ylim([1e-5,3])\nax.set_xlim([-7,7])\nplt.xlabel(\"$\\Delta z / \\sigma$\",fontsize = \"small\")\nplt.ylabel(\"$P(\\Delta z / \\sigma)$\",fontsize = \"small\")\n\nymin, ymax = fig.gca().get_ylim()\nxmin, xmax = fig.gca().get_xlim()\n",
"_____no_output_____"
],
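Several cells above repeat the same `LogLocator` recipe for log-scale probability axes: major ticks at each decade only (`subs=(1.0,)`), minor ticks at 2x–9x within each decade, and a `NullFormatter` so the minor ticks stay unlabeled. A standalone sketch of just that axis setup, using the same limits as the PDF panels:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_yscale("log")
ax.set_ylim(1e-5, 3)

# Major ticks: one per decade. numticks=100 keeps the locator from
# thinning ticks out on tall axes.
locmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0,), numticks=100)
ax.yaxis.set_major_locator(locmaj)

# Minor ticks at 0.2, 0.3, ..., 0.9 of each decade, labels suppressed.
locmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * 0.1,
                               numticks=100)
ax.yaxis.set_minor_locator(locmin)
ax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())

fig.canvas.draw()
major = ax.yaxis.get_majorticklocs()
```

Without the explicit locators, matplotlib may label only every other decade (or drop minor ticks entirely) once the axis spans many orders of magnitude.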
[
"from matplotlib.ticker import MultipleLocator\nN = 200\ncmap = plt.get_cmap('jet')\nfig = plt.figure(figsize=(cm2inch(8.6)/1.5, 1.2*cm2inch(8.6)/1.68))\n#plt.figaspect(0.21)*1.5\nax = fig.gca(projection='3d')\nax.pbaspect = [1, 20/25, 3/25*4]\nax.ticklabel_format(style = \"sci\")\nfor i in range(N-1):\n ax.plot(x[i*360:i*360+360], y[i*360:i*360+360], z[i*360:i*360+360], color=plt.cm.jet(1*i/N), linewidth = 0.2)\n\n\nnorm = mpl.colors.Normalize(vmin=0,vmax=1)\nsm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)\nsm.set_array([])\nax = plt.gca()\nax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\nax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\n#ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))\nplt.rcParams['grid.color'] = \"gray\"\nax.grid(False)\n#ax.w_xaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n#ax.w_yaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n#ax.w_zaxis._axinfo.update({'grid' : {'color': (0, 0, 0, 1)}})\n\n\nax.set_ylim([25,45])\nax.set_xlim([15,40])\n\n#plt.xticks([20,30,40])\n#plt.yticks([30,35,40])\n#ax.set_zticks([0,1.5,3])\n\n\nplt.xlabel(\"$x$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\")\nplt.ylabel(\"$y$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\")\nax.set_zlabel(\"$z$ ($\\mathrm{\\mu m}$)\",fontsize = \"small\")\n#ax.view_init(10,45)\n\nax.grid(False)\nax.xaxis.pane.set_edgecolor('black')\nax.yaxis.pane.set_edgecolor('black')\nax.xaxis.pane.fill = False\nax.yaxis.pane.fill = False\nax.zaxis.pane.fill = False\n\n\n\n[t.set_va('center') for t in ax.get_yticklabels()]\n[t.set_ha('left') for t in ax.get_yticklabels()]\n[t.set_va('center') for t in ax.get_xticklabels()]\n[t.set_ha('right') for t in ax.get_xticklabels()]\n[t.set_va('center') for t in ax.get_zticklabels()]\n[t.set_ha('left') for t in ax.get_zticklabels()]\n\n\n\nax.xaxis._axinfo['tick']['inward_factor'] = 0\nax.xaxis._axinfo['tick']['outward_factor'] = 0.4\nax.yaxis._axinfo['tick']['inward_factor'] = 0\nax.yaxis._axinfo['tick']['outward_factor'] = 0.4\nax.zaxis._axinfo['tick']['inward_factor'] = 0\nax.zaxis._axinfo['tick']['outward_factor'] = 0.4\nax.zaxis._axinfo['tick']['outward_factor'] = 0.4\n\nax.view_init(elev=10, azim=135)\n#ax.xaxis.set_major_locator(MultipleLocator(1))\n#ax.yaxis.set_major_locator(MultipleLocator(5))\n#ax.zaxis.set_major_locator(MultipleLocator())\n\n\n\nticks_c = []\nfor i in np.linspace(0,1,5):\n ticks_c.append(\"{:.0f}\".format(N*360*i/60/60))\ncbar = plt.colorbar(sm, ticks=np.linspace(0,1,5), format = \"%.1f\",shrink = 0.4,orientation='horizontal')\ncbar.set_ticklabels(ticks_c)\nplt.tight_layout(h_pad=0.1)\nplt.savefig(\"traj.svg\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d08d2fe14cae7c0eaed26df96049f993b046950f | 79,833 | ipynb | Jupyter Notebook | 04- Simple Lists.ipynb | HaoRay/cookie-python | e5892bc346a4731f8c5165b27e2fe6cd48b6a2e2 | [
"CNRI-Python"
] | 21 | 2015-02-07T05:00:19.000Z | 2022-01-19T10:36:07.000Z | 04- Simple Lists.ipynb | HaoRay/cookie-python | e5892bc346a4731f8c5165b27e2fe6cd48b6a2e2 | [
"CNRI-Python"
] | 1 | 2016-04-17T08:49:19.000Z | 2016-04-17T08:49:19.000Z | 04- Simple Lists.ipynb | howardabrams/cookie-python | e5892bc346a4731f8c5165b27e2fe6cd48b6a2e2 | [
"CNRI-Python"
] | 16 | 2015-09-15T09:57:48.000Z | 2020-11-22T17:30:44.000Z | 131.08867 | 64,343 | 0.863164 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |