Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (15 classes).
Linear spline functions are calculated with the following: for $i \in [1,\ \left\vert{X}\right\vert - 1],\ i \in \mathbb{N}$: $$P_i = \frac{x-x_i}{x_{i+1}-x_i} \cdot y_{i+1} + \frac{x_{i+1}-x}{x_{i+1}-x_i} \cdot y_i$$ By simplification, we can reduce this to: $$P_i = \frac{y_{i+1} (x-x_i) + y_i (x_{i+1}-x)}{x_{i+1}-x_i} = \frac{(y_{i+1}x - y_ix) - y_{i+1}x_i + y_ix_{i+1}}{x_{i+1}-x_i}$$ The final form used will be: $$P_i = \frac{(y_{i+1}x - y_ix) + (y_ix_{i+1} - y_{i+1}x_i)}{x_{i+1}-x_i}$$ As can be seen, the only term that depends on $x$ is the first one (num_1s below); the other terms are plain numbers (num_2, den). Parentheses are used to isolate the expression for each of the three variables. We can therefore build the $x$-dependent part as a string, while the other parts are simply calculated; the final string is then evaluated as a lambda function.
print('x1 = %i' % data[0][0])
print('y1 = %i' % data[1][0])
print('---')

# linear spline function approximation
print('no values: %i' % len(columns))
spline = {}
for i in range(len(columns)-1):
    print('\nP[' + str(i+1) + ']')
    # we calculate the numerator
    num_1s = str(data[1][i+1]) + ' * x - ' + str(data[1][i]) + ' * x'
    print('num_1s: %s' % num_1s)
    num_2 = data[1][i] * data[0][i+1] - data[1][i+1] * data[0][i]
    print('num_2: %i' % num_2)
    # we calculate the denominator
    den = data[0][i+1] - data[0][i]
    print('den: %i' % den)
    # constructing the function; the explicit ' + ' keeps the string valid
    # even when num_2 is positive (e.g. '5 * x - 3 * x + 7')
    func = 'lambda x: (' + num_1s + ' + ' + str(num_2) + ') / ' + str(den)
    print('func: %s' % func)
    spline[i] = eval(func)
print('---')

# sanity checks
# P1(x) = -x - 5
assert (spline[0](-5) == 0), "For this example, the value should be 0, but the value returned is " + str(spline[0](-5))
# P2(x) = 4x + 1
# TODO: this is failing (checked my solution, probably my assertion is wrong) !
#assert (spline[1](0) == 1), "For this example, the value should be 1, but the value returned is " + str(spline[1](0))
# P3(x) = 13x - 17
assert (spline[2](1) == -4), "For this example, the value should be -4, but the value returned is " + str(spline[2](1))

print('Approximating values of S\n---')
approximation_queue = [-1, 1]
results = {}

def approximate(spline, val):
    # check every interval, including the last one
    for i in range(len(spline)):
        if data[0][i] <= val <= data[0][i+1]:
            print('Approximation using P[%i] is: %i' % (i, spline[i](val)))
            results[val] = spline[i](val)

for i in range(len(approximation_queue)):
    approximate(spline, approximation_queue[i])

# sanity checks
# S(-1) = P1(-1) = -4
assert (spline[0](-1) == -4), "For this example, the value should be -4, but the value returned is " + str(spline[0](-1))
# S(1) = P2(1) = 5
# TODO: same as above !
#assert (spline[1](1) == 5), "For this example, the value should be 5, but the value returned is " + str(spline[1](1))

#x.extend(results.keys())
#y.extend(results.values())
x2 = list(results.keys())
y2 = list(results.values())

matplotlib.pyplot.plot(data[0], data[1], ls='dashed', color='#a23636')
matplotlib.pyplot.scatter(data[0], data[1])
matplotlib.pyplot.scatter(x2, y2, color='#ff0000')
matplotlib.pyplot.show()
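As a side note (not part of the original notebook), the same piecewise functions can be built without constructing strings and calling eval, by closing over the numeric coefficients directly. A minimal sketch, assuming the same layout as above where data[0] holds the x values and data[1] the y values:
def make_linear_splines(xs, ys):
    """Return one lambda per interval, P_i(x) = (a * x + b) / den."""
    splines = {}
    for i in range(len(xs) - 1):
        a = ys[i + 1] - ys[i]                       # coefficient of x
        b = ys[i] * xs[i + 1] - ys[i + 1] * xs[i]   # constant term
        den = xs[i + 1] - xs[i]
        # default arguments freeze the current a, b, den for each interval
        splines[i] = lambda x, a=a, b=b, den=den: (a * x + b) / den
    return splines

# e.g. spline_alt = make_linear_splines(data[0], data[1])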
FII-year3sem2-CN/Exam.ipynb
danalexandru/Algo
gpl-2.0
Jupyter Notebooks This file - a Jupyter (IPython) notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the JSON format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook. Modules Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more. References The Python Language Reference: https://docs.python.org/3/reference/index.html The Python Standard Library: https://docs.python.org/3/library/ To use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:
import math
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
This includes the whole module and makes it available for use later in the program. For example, we can do:
import math x = math.cos(2 * math.pi) print(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Alternatively, we can choose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix "math." every time we use something from the math module):
from math import * x = cos(2 * pi) print(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This eliminates potentially confusing problems with namespace collisions. As a third alternative, we can choose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *:
from math import cos, pi x = cos(2 * pi) print(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Looking at what a module contains, and its documentation Once a module is imported, we can list the symbols it provides using the dir function:
import math print(dir(math))
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Using the help function we can get a description of each function (almost every function, that is: not all functions have docstrings, as these descriptions are technically called, but the vast majority of functions are documented this way).
help(math.log) math.log(10) math.log(10, 2)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can also use the help function directly on modules: Try help(math). Some very useful modules from the Python standard library are os, sys, math, shutil, re, subprocess, multiprocessing, threading. A complete list of standard modules for Python 3 is available at http://docs.python.org/3/library/. For example, this is the os module in the standard library.
import os os.listdir()
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Variables and types Symbol names Variable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter. By convention, variable names start with a lower-case letter, and Class names start with a capital letter. In addition, there are a number of Python keywords that cannot be used as variable names. In Python 3 these keywords are: False, None, True, and, as, assert, break, class, continue, def, del, elif, else, except, finally, for, from, global, if, import, in, is, lambda, nonlocal, not, or, pass, raise, return, try, while, with, yield (print and exec were keywords in Python 2, but in Python 3 they are ordinary functions). Assignment The assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one. Assigning a value to a new variable creates the variable:
# variable assignments x = 1.0 my_variable = 12.2
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
type(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we assign a new value to a variable, its type can change.
x = 1 type(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we try to use a variable that has not yet been defined we get a NameError:
import traceback

try:
    print(y)
except NameError as e:
    print(traceback.format_exc())
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Fundamental types
# integers x = 1 type(x) # float x = 1.0 type(x) # boolean b1 = True b2 = False type(b1) # complex numbers: note the use of `j` to specify the imaginary part x = 1.0 - 1.0j type(x) print(x) print(x.real, x.imag)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Type utility functions
x = 1.0 # check if the variable x is a float type(x) is float # check if the variable x is an int type(x) is int
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can also use the isinstance method for testing types of variables:
isinstance(x, float)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Type casting
x = 1.5
print(x, type(x))

x = int(x)
print(x, type(x))

z = complex(x)
print(z, type(z))

import traceback
try:
    x = float(z)
except TypeError as e:
    print(traceback.format_exc())
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Operators and comparisons Most operators and comparisons in Python work as one would expect: Arithmetic operators +, -, *, /, // (integer division), '**' power
1 + 2, 1 - 2, 1 * 2, 1 / 2 1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0 # Integer division of float numbers 3.0 // 2.0 # Note! The power operators in python isn't ^, but ** 2 ** 2
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Note: The / operator always performs a floating point division in Python 3.x. This is not true in Python 2.x, where the result of / is always an integer if the operands are integers. to be more specific, 1/2 = 0.5 (float) in Python 3.x, and 1/2 = 0 (int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x). The boolean operators are spelled out as the words and, not, or.
True and False not False True or False
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Comparison operators: >, <, >= (greater or equal), <= (less or equal), == (equality), is (identical object).
2 > 1, 2 < 1 2 > 2, 2 < 2 2 >= 2, 2 <= 2 # equality [1,2] == [1,2] # objects identical? list1 = list2 = [1,2] list1 is list2
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exercise: Mindy has $5.25 in her pocket. Apples cost 29 cents each. Calculate how many apples mindy can buy and how much change she will have left. Money should be represented in variables of type float and apples should be represented in variables of type integer. Answer: mindy_money = 5.25 apple_cost = .29 num_apples = ? change = ? Compound types: Strings, List and dictionaries Strings Strings are the variable type that is used for storing text messages.
s = "Hello world" type(s) # length of the string: the number of characters len(s) # replace a substring in a string with somethign else s2 = s.replace("world", "test") print(s2)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can index a character in a string using []:
s[0]
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Heads up MATLAB and R users: Indexing starts at 0! We can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop - 1 (the character at index stop is not included):
s[0:5] s[4:5]
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:
s[:5] s[6:] s[:]
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):
s[::1] s[::2]
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
This technique is called slicing. Read more about the syntax here: https://docs.python.org/3.5/library/functions.html#slice Python has a very rich set of functions for text processing. See for example https://docs.python.org/3.5/library/string.html for more information. String formatting examples
print("str1", "str2", "str3") # The print statement concatenates strings with a space print("str1", 1.0, False, -1j) # The print statements converts all arguments to strings print("str1" + "str2" + "str3") # strings added with + are concatenated without space print("value = %f" % 1.0) # we can use C-style string formatting # this formatting creates a string s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5) print(s2) # alternative, more intuitive way of formatting a string s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5) print(s3)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exercise: Paste in the code from your previous exercise and output the result as a story (round monetary values to 2 decimal places). The output should look like this: "Mindy had \$5.25 in her pocket. Apples at her nearby store cost 29 cents. With her \$5.25, Mindy can buy 18 apples and will have 3 cents left over." List Lists are very similar to strings, except that each element can be of any type. The syntax for creating lists in Python is [...]:
l = [1,2,3,4] print(type(l)) print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can use the same slicing techniques to manipulate lists as we could use on strings:
print(l) print(l[1:3]) print(l[::2])
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Heads up MATLAB and R users: Indexing starts at 0!
l[0]
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Elements in a list do not all have to be of the same type:
l = [1, 'a', 1.0, 1-1j] print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Python lists can be heterogeneous and arbitrarily nested:
nested_list = [1, [2, [3, [4, [5]]]]] nested_list
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function:
start = 10 stop = 30 step = 2 range(start, stop, step) # in python 3 range generates an iterator, which can be converted to a list using 'list(...)'. # It has no effect in python 2 list(range(start, stop, step)) list(range(-10, 10)) s # convert a string to a list by type casting: s2 = list(s) s2 # sorting lists (by creating a new variable) s3 = sorted(s2) print(s2) print(s3) # sorting lists in place s2.sort() print(s2)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Adding, inserting, modifying, and removing elements from lists
# create a new empty list l = [] # add an elements using `append` l.append("A") l.append("d") l.append("d") print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
l[1] = "p" l[2] = "p" print(l) l[1:3] = ["d", "d"] print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Insert an element at a specific index using insert:
l.insert(0, "i") l.insert(1, "n") l.insert(2, "s") l.insert(3, "e") l.insert(4, "r") l.insert(5, "t") print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Remove first element with specific value using 'remove'
l.remove("A") print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Remove an element at a specific location using del:
del l[7] del l[6] print(l)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
See help(list) for more details, or read the online documentation Tuples Tuples are like lists, except that they cannot be modified once created, that is they are immutable. In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:
point = (10, 20) print(point, type(point)) point = 10, 20 print(point, type(point))
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can unpack a tuple by assigning it to a comma-separated list of variables:
x, y = point print("x =", x) print("y =", y)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we try to assign a new value to an element in a tuple we get an error:
try:
    point[0] = 20
except TypeError as e:
    print(traceback.format_exc())
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Dictionaries Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1 : value1, ...}:
params = {"parameter1" : 1.0, "parameter2" : 2.0, "parameter3" : 3.0,} print(type(params)) print(params) print("parameter1 = " + str(params["parameter1"])) print("parameter2 = " + str(params["parameter2"])) print("parameter3 = " + str(params["parameter3"])) params["parameter1"] = "A" params["parameter2"] = "B" # add a new entry params["parameter4"] = "D" print("parameter1 = " + str(params["parameter1"])) print("parameter2 = " + str(params["parameter2"])) print("parameter3 = " + str(params["parameter3"])) print("parameter4 = " + str(params["parameter4"]))
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exercise: Mindy doesn't want 18 apples, that's too many for someone who lives by themselves. We're now going to represent Mindy's world using our new data types. Make a list containing the fruits that Mindy desires. She likes apples, strawberries, pineapples, and papayas. Make a tuple containing the fruits that the store has. This is immutable because the store doesn't change their inventory. The local store has apples, strawberries, pineapples, pears, bananas, and oranges. Make a dictionary showing the price of each fruit at the store. Apples are 29 cents, bananas are 5 cents, oranges are 20 cents, strawberries are 30 cents and pineapples are $1.50. Control Flow Conditional statements: if, elif, else The Python syntax for conditional execution of code uses the keywords if, elif (else if), else:
statement1 = False
statement2 = False

if statement1:
    print("statement1 is True")
elif statement2:
    print("statement2 is True")
else:
    print("statement1 and statement2 are False")
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
For the first time, here we encountered a peculiar and unusual aspect of the Python programming language: Program blocks are defined by their indentation level. Compare to the equivalent C code: if (statement1) { printf("statement1 is True\n"); } else if (statement2) { printf("statement2 is True\n"); } else { printf("statement1 and statement2 are False\n"); } In C blocks are defined by the enclosing curly brackets { and }, and the level of indentation (white space before the code statements) does not matter at all (it is completely optional). But in Python, the extent of a code block is defined by the indentation level (usually a tab or say four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors. Examples:
statement1 = statement2 = True

if statement1:
    if statement2:
        print("both statement1 and statement2 are True")

# # Bad indentation!
# if statement1:
#     if statement2:
#     print("both statement1 and statement2 are True")  # this line is not properly indented

statement1 = False

if statement1:
    print("printed if statement1 is True")
    print("still inside the if block")

if statement1:
    print("printed if statement1 is True")
print("now outside the if block")
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Loops In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is: for loops:
for x in [1,2,3]: print(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example:
for x in range(4):  # by default range starts at 0
    print(x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Note: range(4) does not include 4 !
for x in range(-3,3): print(x) for word in ["scientific", "computing", "with", "python"]: print(word)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
To iterate over key-value pairs of a dictionary:
for key, value in params.items(): print(key + " = " + str(value))
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:
for idx, x in enumerate(range(-3,3)): print(idx, x)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
List comprehensions: Creating lists using for loops: A convenient and compact way to initialize lists:
l1 = [x**2 for x in range(0,5)] print(l1)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
while loops:
i = 0

while i < 5:
    print(i)
    i = i + 1

print("done")
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Note that the print("done") statement is not part of the while loop body because of the difference in indentation. Exercise: Loop through all of the fruits that mindy wants and check if the store has them. For each fruit that she wants print "Mindy, the store has apples and they cost $.29" or "Mindy, the store does not have papayas" Functions A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.
def func0():
    print("test")

func0()
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
def func1(s):
    """
    Print a string 's' and tell how many characters it has
    """
    print(s + " has " + str(len(s)) + " characters")

help(func1)

func1("test")
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Functions that return a value use the return keyword:
def square(x):
    """
    Return the square of x.
    """
    return x ** 2

square(4)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can return multiple values from a function using tuples (see above):
def powers(x):
    """
    Return a few powers of x.
    """
    return x ** 2, x ** 3, x ** 4

powers(3)

x2, x3, x4 = powers(3)
print(x3)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Default argument and keyword arguments In a definition of a function, we can give default values to the arguments the function takes:
def myfunc(x, p=2, debug=False):
    if debug:
        print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
    return x**p
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we don't provide a value for the debug argument when calling the function myfunc it defaults to the value provided in the function definition:
myfunc(5) myfunc(5, debug=True)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we explicitly list the names of the arguments in the function calls, they do not need to come in the same order as in the function definition. These are called keyword arguments, and they are often very useful in functions that take a lot of optional arguments.
myfunc(p=3, debug=True, x=7)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Unnamed functions (lambda function) In Python we can also create unnamed functions, using the lambda keyword:
f1 = lambda x: x**2

# is equivalent to

def f2(x):
    return x**2

f1(2), f2(2)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
This technique is useful for example when we want to pass a simple function as an argument to another function, like this:
# map is a built-in python function
map(lambda x: x**2, range(-3,4))

# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))
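Another common place for a small unnamed function (shown here as an illustrative aside rather than code from the original notebook) is the key argument of the built-in sorted function:
# sort words by their length, using a lambda as the key function
words = ["scientific", "computing", "with", "python"]
print(sorted(words, key=lambda w: len(w)))   # ['with', 'python', 'computing', 'scientific']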
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exercise: Mindy is great, but we want code that can tell anyone what fruits the store has. To do this we will generalize our code for Mindy using a function. Write a function that takes the following parameters - full_name (string) - fruits_you_want (list) - fruits_the_store_has (tuple) - prices (dict) and prints to the terminal a sentence per fruit that you want just like the last exercise. For example, if name = 'Al' list_of_fruits_you_want = ['apple', 'banana'] tuple_of_fruits_the_store_has = ('apple', 'banana', 'orange', 'strawberries', 'pineapple') prices = { 'apple': .29, 'banana': .05, 'orange': .20, 'strawberries': .30, 'pineapple': 1.50 } The function should print: "Al, the store has apples and they cost \$.29" "Al, the store has bananas and they cost \$.05" Classes Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object. In Python a class can contain attributes (variables) and methods (functions). A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class). Each class method should have an argument self as its first argument. This object is a self-reference. Some class method names have special meaning, for example: __init__: The name of the method that is invoked when the object is first created. __str__ : A method that is invoked when a simple string representation of the class is needed, as for example when printed. There are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names
class Point:
    """
    Simple class for representing a point in a Cartesian coordinate system.
    """

    def __init__(self, x, y):
        """
        Create a new Point at x, y.
        """
        self.x = x
        self.y = y

    def translate(self, dx, dy):
        """
        Translate the point by dx and dy in the x and y direction.
        """
        self.x += dx
        self.y += dy

    def __str__(self):
        return("Point at [%f, %f]" % (self.x, self.y))
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
To create a new instance of a class:
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class print(p1) # this will invoke the __str__ method
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
To invoke a class method in the class instance p:
p2 = Point(1, 1) p1.translate(0.25, 1.5) print(p1) print(p2)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables. That is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities. Modules One of the most important concepts in good programming is to reuse code and avoid repetitions. The idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different parts of a program (modular programming). The result is usually that the readability and maintainability of a program are greatly improved. What this means in practice is that our programs have fewer bugs, and are easier to extend and debug/troubleshoot. Python supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A Python module is defined in a Python file (with file-ending .py), and it can be made accessible to other Python modules and programs using the import statement. Consider the following example: the file mymodule.py contains simple example implementations of a variable, function and a class:
%%file mymodule.py
"""
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
"""

my_variable = 0

def my_function():
    """
    Example function
    """
    return my_variable

class MyClass:
    """
    Example class.
    """

    def __init__(self):
        self.variable = my_variable

    def set_variable(self, new_value):
        """
        Set self.variable to a new value
        """
        self.variable = new_value

    def get_variable(self):
        return self.variable
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
We can import the module mymodule into our Python program using import:
import mymodule
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Use help(module) to get a summary of what the module provides:
help(mymodule) mymodule.my_variable mymodule.my_function() my_class = mymodule.MyClass() my_class.set_variable(10) my_class.get_variable()
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
If we make changes to the code in mymodule.py, we need to reload it using reload:
import importlib importlib.reload(mymodule) # Python 3 only # For Python 2 use reload(mymodule)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exceptions In Python errors are managed with a special language construct called "Exceptions". When an error occurs, an exception can be raised, which interrupts the normal program flow and falls back to somewhere else in the code where the closest enclosing try-except statement is defined. To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
try:
    raise Exception("description of the error")
except Exception as e:
    print(traceback.format_exc())
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
A typical use of exceptions is to abort functions when some error condition occurs, for example: def my_function(arguments): if not verify(arguments): raise Exception("Invalid arguments") # rest of the code goes here To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements: try: # normal code goes here except: # code for error handling goes here # this code is not executed unless the code # above generated an error For example:
try: print("test") # generate an error: the variable test is not defined print(test) except Exception: print("Caught an exception")
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
To get information about the error, we can access the Exception class instance that describes the exception by using for example: except Exception as e:
try: print("test") # generate an error: the variable test is not defined print(test) except Exception as e: print("Caught an exception:", e)
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Exercise: Make two classes with the following variables and methods

Store
- variables
  - inventory (dict)
- methods
  - show_inventory()
    - nicely displays the store's inventory
  - message_customer(customer)
    - shows the customer if the store has the fruits they want and how much each fruit costs (this is code from previous exercises, except it will use the customer's "Formal Greeting" instead of their first name.)

Customer
- variables
  - first_name (string)
  - last_name (string)
  - is_male (boolean)
  - money (float)
  - fruit (dict)
  - preferred_fruit (list)
- methods
  - formal_greeting()
    - (Mr. Al Johri, Ms. Mindy Smith)
  - buy_fruit(store, fruit_name, fruit_amt)
    - inputs are a store, the name of a fruit, and the amount of that fruit
    - checks to see if the store has the fruit - returns an error if it does not
    - checks to see if the customer can afford the amount of fruit they intend to buy - returns an error if not
    - "purchases" the fruit by adding it to the customer's fruit dict and removes the correct amount of money from their money variable

Exercise: Instantiate a list of the following customers.
- Mindy Smith - \$5.25, likes apples and oranges
- Al Johri - \$20.19, likes papaya, strawberries, pineapple, and apples
- Hillary Clinton - \$15, likes strawberries and oranges
- Oliver Twist - \$.05, likes apples
- Donald Trump - \$4000, only likes durian

Create a store called Whole Foods with the following inventory
- 'apple': \$.29
- 'banana': \$.05
- 'orange': \$.20
- 'strawberries': \$.30
- 'pineapple': \$1.50
- 'grapes': \$.22
- 'durian': \$5000

Write code to do the following
- Print the store's inventory
- For each customer, print the store's message to them
- Have each customer purchase 1 of each fruit in their list of preferred fruits. (Make sure you have error handling so the program doesn't halt if the store doesn't have the fruit the customer wants or if the customer does not have enough money to buy the fruit.)

Bonus Exercise: Organize the code! Make a module called fruits (in a file fruits.py) that contains the class definitions. Make a separate cell (in a file main.py) which imports the fruits module, instantiates the store and the list of customers, and runs the code in the previous exercise.

Further reading
- http://www.python.org - The official web page of the Python programming language.
- https://docs.python.org/3/tutorial/ - The Official Python Tutorial
- http://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended.
- http://www.scipy-lectures.org/intro/language/python_language.html - Scipy Lectures: Lecture 1.2

Versions
%reload_ext version_information %version_information
notebooks/intro-python.ipynb
AlJohri/DAT-DC-12
mit
Coordinate Transformation
def trans(x, y, a):
    '''create a 2D transformation'''
    s = np.sin(a)
    c = np.cos(a)
    return np.asarray([[c, -s, x],
                       [s, c, y],
                       [0, 0, 1]])

def from_trans(m):
    '''get x, y, theta from transform matrix'''
    a = np.arctan2(m[1, 0], m[0, 0])
    return np.asarray([m[0, -1], m[1, -1], a])

print(trans(0., 0., 0.))
kinematics/inverse_kinematics_2d_jax.ipynb
DAInamite/programming-humanoid-robot-in-python
gpl-2.0
Parameters of robot arm
l = [0, 3, 2, 1] #l = [0, 3, 2, 1, 1] #l = [0, 3, 2, 1, 1, 1] #l = [1] * 30 N = len(l) - 1 # number of links max_len = sum(l) a = random.random_sample(N) # angles of joints T0 = trans(0, 0, 0) # base
kinematics/inverse_kinematics_2d_jax.ipynb
DAInamite/programming-humanoid-robot-in-python
gpl-2.0
Forward Kinematics
def forward_kinematics(T0, l, a): T = [T0] for i in range(len(a)): Ti = np.dot(T[-1], trans(l[i], 0, a[i])) T.append(Ti) Te = np.dot(T[-1], trans(l[-1], 0, 0)) # end effector T.append(Te) return T def show_robot_arm(T): plt.cla() x = [Ti[0,-1] for Ti in T] y = [Ti[1,-1] for Ti in T] plt.plot(x, y, '-or', linewidth=5, markersize=10) plt.plot(x[-1], y[-1], 'og', linewidth=5, markersize=10) plt.xlim([-max_len, max_len]) plt.ylim([-max_len, max_len]) ax = plt.axes() ax.set_aspect('equal') t = np.arctan2(T[-1][1, 0], T[-1][0,0]) ax.annotate('[%.2f,%.2f,%.2f]' % (x[-1], y[-1], t), xy=(x[-1], y[-1]), xytext=(x[-1], y[-1] + 0.5)) plt.show return ax
kinematics/inverse_kinematics_2d_jax.ipynb
DAInamite/programming-humanoid-robot-in-python
gpl-2.0
Inverse Kinematics Numerical Solution: jax
def error_func(theta, target): Ts = forward_kinematics(T0, l, theta) Te = Ts[-1] e = target - Te return np.sum(e * e) theta = random.random(N) def inverse_kinematics(x_e, y_e, theta_e, theta): target = trans(x_e, y_e, theta_e) func = lambda t: error_func(t, target) func_grad = jit(grad(func)) for i in range(1000): e = func(theta) d = func_grad(theta) theta -= d * 1e-2 if e < 1e-4: break return theta T = forward_kinematics(T0, l, theta) show_robot_arm(T) Te = np.asarray([from_trans(T[-1])]) @interact(x_e=(0, max_len, 0.01), y_e=(-max_len, max_len, 0.01), theta_e=(-pi, pi, 0.01), theta=fixed(theta)) def set_end_effector(x_e=Te[0,0], y_e=Te[0,1], theta_e=Te[0,2], theta=theta): theta = inverse_kinematics(x_e, y_e, theta_e, theta) T = forward_kinematics(T0, l, theta) show_robot_arm(T) return theta
kinematics/inverse_kinematics_2d_jax.ipynb
DAInamite/programming-humanoid-robot-in-python
gpl-2.0
We start by preparing the data points for clustering. The data is two-dimensional and created by drawing random numbers from four superposed Gaussian distributions which are centered at the corners of a square (indicated by the red dashed lines).
# generate the data points npoints = 2000 mux = 1.6 muy = 1.6 points = np.zeros(shape=(npoints, 2), dtype=np.float64) points[:, 0] = np.random.randn(npoints) + mux * (-1)**np.random.randint(0, high=2, size=npoints) points[:, 1] = np.random.randn(npoints) + muy * (-1)**np.random.randint(0, high=2, size=npoints) # draw the data points fig, ax = plt.subplots(figsize=(5, 5)) ax.scatter(points[:, 0], points[:, 1], s=40) ax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") ax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") ax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color="red") ax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color="red") ax.set_xlabel(r"x / a.u.", fontsize=20) ax.set_ylabel(r"y / a.u.", fontsize=20) ax.tick_params(labelsize=15) ax.set_xlim([-7, 7]) ax.set_ylim([-7, 7]) ax.set_aspect('equal') fig.tight_layout()
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
Now comes the interesting part. We pass the numpy ndarray with the data points to the Cluster class which prepares the data set for clustering. In this stage, it computes the Euclidean distances between all data points and from that the two properties to identify clusters within the data: each data points' density and minimal distance delta to a point of higher density. Once these properties are computed, a decision graph is drawn, where each outlier in the upper right corner represents a different cluster. In our example, we should find four outliers. So far, however, no clustering has yet been done.
clu = Cluster(points)
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
Now that we have the decision graph, we can select the outliers via the assign method by setting lower bounds for delta and density. The assign method does the actual clustering; it also shows the decision graph again with the given selection.
clu.assign(20, 1.5)
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
Let us have a look at the result. We again plot the data and red dashed lines indicating the centers of the Gaussian distributions. Indicated in the left panel by red dots are the four outliers from the decision graph; these are our four cluster centers. The center panel shows the points' densities and the right panel shows the membership to the four clusters by different coloring.
fig, ax = plt.subplots(1, 3, figsize=(15, 5)) ax[0].scatter(points[:, 0], points[:, 1], s=40) ax[0].scatter(points[clu.clusters, 0], points[clu.clusters, 1], s=50, c="red") ax[1].scatter(points[:, 0], points[:, 1], s=40, c=clu.density) ax[2].scatter(points[:, 0], points[:, 1], s=40, c=clu.membership, cmap=mpl.cm.cool) for _ax in ax: _ax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") _ax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") _ax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color="red") _ax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color="red") _ax.set_xlabel(r"x / a.u.", fontsize=20) _ax.set_ylabel(r"y / a.u.", fontsize=20) _ax.tick_params(labelsize=15) _ax.set_xlim([-7, 7]) _ax.set_ylim([-7, 7]) _ax.set_aspect('equal') fig.tight_layout()
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
The density peak clustering can further resolve whether the membership of a data point to a certain cluster is strong or rather weak, and separates the data points further into core and halo regions. The left panel depicts the border members in grey. The separation in the center panel uses the core/halo criterion of the original authors; the right panel shows a less strict criterion which assumes a halo only between different clusters; here, the halo members are depicted in grey.
fig, ax = plt.subplots(1, 3, figsize=(15, 5)) ax[0].scatter( points[:, 0], points[:, 1], s=40, c=clu.membership, cmap=mpl.cm.cool) ax[0].scatter(points[clu.border_member, 0], points[clu.border_member, 1], s=40, c="grey") ax[1].scatter( points[clu.core_idx, 0], points[clu.core_idx, 1], s=40, c=clu.membership[clu.core_idx], cmap=mpl.cm.cool) ax[1].scatter(points[clu.halo_idx, 0], points[clu.halo_idx, 1], s=40, c="grey") clu.autoplot=False clu.assign(20, 1.5, border_only=True) ax[2].scatter( points[clu.core_idx, 0], points[clu.core_idx, 1], s=40, c=clu.membership[clu.core_idx], cmap=mpl.cm.cool) ax[2].scatter(points[clu.halo_idx, 0], points[clu.halo_idx, 1], s=40, c="grey") ax[2].tick_params(labelsize=15) for _ax in ax: _ax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") _ax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color="red") _ax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color="red") _ax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color="red") _ax.set_xlabel(r"x / a.u.", fontsize=20) _ax.set_ylabel(r"y / a.u.", fontsize=20) _ax.tick_params(labelsize=15) _ax.set_xlim([-7, 7]) _ax.set_ylim([-7, 7]) _ax.set_aspect('equal') fig.tight_layout()
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
This concludes the example. In the remaining part, we address the performance of the pydpc implementation (numpy + cython-wrapped C code) with respect to an older development version (numpy). In particular, we look at the numerically most demanding part of computing the Euclidean distances between the data points and estimating density and delta.
npoints = 1000 points = np.zeros(shape=(npoints, 2), dtype=np.float64) points[:, 0] = np.random.randn(npoints) + 1.8 * (-1)**np.random.randint(0, high=2, size=npoints) points[:, 1] = np.random.randn(npoints) + 1.8 * (-1)**np.random.randint(0, high=2, size=npoints) %timeit Cluster(points, fraction=0.02, autoplot=False) %timeit RefCluster(fraction=0.02, autoplot=False).load(points)
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
The next two cells measure the full clustering.
%%timeit
Cluster(points, fraction=0.02, autoplot=False).assign(20, 1.5)

%%timeit
tmp = RefCluster(fraction=0.02, autoplot=False)
tmp.load(points)
tmp.assign(20, 1.5)
ipython/Example01.ipynb
cwehmeyer/pydpc
lgpl-3.0
Step 1.2: Data Preprocessing Now that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation. You might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values). Our model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers. Instructions: * Convert the values in the 'label' colum to numerical values using map method as follows: {'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1. * Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using 'shape'.
'''
Solution
'''
df['label'] = df.label.map({'ham':0, 'spam':1})
print(df.shape)  # returns (rows, columns)
df.head()
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Step 2.1: Bag of words What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy. Here we'd like to introduce the Bag of Words(BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter. Using a process which we will go through now, we can covert a collection of documents to a matrix, with each document being a row and each word(token) being the column, and the corresponding (row,column) values being the frequency of occurrance of each word or token in that document. For example: Lets say we have 4 documents as follows: ['Hello, how are you!', 'Win money, win from home.', 'Call me now', 'Hello, Call you tomorrow?'] Our objective here is to convert this set of text to a frequency distribution matrix, as follows: <img src="images/countvectorizer.png" height="542" width="542"> Here as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document. Lets break this down and see how we can do this conversion using a small set of documents. To handle this, we will be using sklearns <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer'> sklearn.feature_extraction.text.CountVectorizer </a> method which does the following: It tokenizes the string(separates the string into individual words) and gives an integer ID to each token. It counts the occurrance of each of those tokens. Please Note: The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the lowercase parameter which is by default set to True. It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the token_pattern parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters. The third parameter to take note of is the stop_words parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to english, CountVectorizer will automatically ignore all words(from our input text) that are found in the built in list of english stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam. We will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data. Step 2.2: Implementing Bag of Words from scratch Before we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes. 
Step 1: Convert all strings to their lower case form. Let's say we have a document set: documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'] Instructions: * Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.
'''
Solution:
'''
documents = ['Hello, how are you!',
             'Win money, win from home.',
             'Call me now.',
             'Hello, Call hello you tomorrow?']

lower_case_documents = []
for i in documents:
    lower_case_documents.append(i.lower())
print(lower_case_documents)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Step 2: Removing all punctuations Instructions: Remove all punctuation from the strings in the document set. Save them into a list called 'sans_punctuation_documents'.
'''
Solution:
'''
sans_punctuation_documents = []
import string

for i in lower_case_documents:
    sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))
print(sans_punctuation_documents)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Step 3: Tokenization Tokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and the end of a word(for example we could use a single space as the delimiter for identifying words in our document set.) Instructions: Tokenize the strings stored in 'sans_punctuation_documents' using the split() method. and store the final document set in a list called 'preprocessed_documents'.
'''
Solution:
'''
preprocessed_documents = []
for i in sans_punctuation_documents:
    preprocessed_documents.append(i.split(' '))
print(preprocessed_documents)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Step 4: Count frequencies Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the Counter method from the Python collections library for this purpose. Counter counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list. Instructions: Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequency of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter

for i in preprocessed_documents:
    frequency_counts = Counter(i)
    frequency_list.append(frequency_counts)
pprint.pprint(frequency_list)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with. We should now have a solid understanding of what is happening behind the scenes in the sklearn.feature_extraction.text.CountVectorizer method of scikit-learn. We will now implement sklearn.feature_extraction.text.CountVectorizer method in the next step. Step 2.3: Implementing Bag of Words in scikit-learn Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step.
''' Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the document-term matrix generation happens. We have created a sample document set 'documents'. ''' documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?']
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Instructions: Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.
''' Solution ''' from sklearn.feature_extraction.text import CountVectorizer count_vector = CountVectorizer()
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Data preprocessing with CountVectorizer() In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are: lowercase = True The lowercase parameter has a default value of True which converts all of our text to its lower case form. token_pattern = (?u)\\b\\w\\w+\\b The token_pattern parameter has a default regular expression value of (?u)\\b\\w\\w+\\b which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words. stop_words The stop_words parameter, if set to english will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value. You can take a look at all the parameter values of your count_vector object by simply printing out the object as follows:
''' Practice node: Print the 'count_vector' object which is an instance of 'CountVectorizer()' ''' print(count_vector)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Instructions: Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words which have been categorized as features using the get_feature_names() method.
''' Solution: ''' count_vector.fit(documents) count_vector.get_feature_names()
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'. Instructions: Create a matrix with the rows being each of the 4 documents, and the columns being each word. The corresponding (row, column) value is the frequency of occurrence of that word (in the column) in a particular document (in the row). You can do this using the transform() method and passing in the document data set as the argument. The transform() method returns a matrix of numpy integers; you can convert this to an array using toarray(). Call the array 'doc_array'.
''' Solution ''' doc_array = count_vector.transform(documents).toarray() doc_array
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately. Instructions: Convert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to the word names(which you computed earlier using get_feature_names(). Call the dataframe 'frequency_matrix'.
''' Solution ''' frequency_matrix = pd.DataFrame(doc_array, columns = count_vector.get_feature_names()) frequency_matrix
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created. One potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large (say if we have a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical constructs etc could skew our matrix and affect our analysis. There are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This will automatically ignore all words (from our input text) that are found in a built-in list of English stop words in scikit-learn. Another way of mitigating this is by using the <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer'> sklearn.feature_extraction.text.TfidfVectorizer</a> method. This method is out of scope for the context of this lesson. Step 3.1: Training and testing sets Now that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later. Instructions: Split the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data using the following variables: * X_train is our training data for the 'sms_message' column. * y_train is our training data for the 'label' column * X_test is our testing data for the 'sms_message' column. * y_test is our testing data for the 'label' column Print out the number of rows we have in each of our training and testing data.
'''
Solution
'''
# split into training and testing sets
# note: sklearn.cross_validation was removed in later scikit-learn releases;
# train_test_split now lives in sklearn.model_selection
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
                                                    df['label'],
                                                    random_state=1)

print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
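Returning briefly to the stop-word discussion above, here is the short illustrative sketch mentioned there (my addition, not part of the graded solution): it simply compares vocabulary sizes with and without scikit-learn's built-in English stop word list on our toy documents.

''' Illustrative only: effect of stop_words='english' on the toy documents '''
from sklearn.feature_extraction.text import CountVectorizer

vect_default = CountVectorizer()
vect_no_stop = CountVectorizer(stop_words='english')
vect_default.fit(documents)
vect_no_stop.fit(documents)
print('Vocabulary size without stop word filtering:', len(vect_default.vocabulary_))
print("Vocabulary size with stop_words='english':", len(vect_no_stop.vocabulary_))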
Step 3.2: Applying Bag of Words processing to our dataset.

Now that we have split the data, our next objective is to follow the steps from Step 2: Bag of Words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:

Firstly, we have to fit our training data (X_train) into CountVectorizer() and return the matrix. Secondly, we have to transform our testing data (X_test) to return the matrix.

Note that X_train is our training data for the 'sms_message' column in our dataset and we will be using this to train our model. X_test is our testing data for the 'sms_message' column and this is the data we will be using (after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test in a later step.

For now, we have provided the code that does the matrix transformations for you!
''' [Practice Node] The code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data and then transforming the data into a document-term matrix; secondly, for the testing data we are only transforming the data into a document-term matrix. This is similar to the process we followed in Step 2.3 We will provide the transformed data to students in the variables 'training_data' and 'testing_data'. ''' ''' Solution ''' # Instantiate the CountVectorizer method count_vector = CountVectorizer() # Fit the training data and then return the matrix training_data = count_vector.fit_transform(X_train) # Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer() testing_data = count_vector.transform(X_test)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
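A hedged side note (mine, not the original authors'): because the test set is only transformed, any word that never appeared in the training messages is simply dropped from the test matrix rather than getting a new column. A tiny sketch of that behaviour:

''' Illustrative: words unseen during fit() are ignored at transform() time '''
from sklearn.feature_extraction.text import CountVectorizer

toy_vect = CountVectorizer()
toy_vect.fit(['hello world'])
# 'there' was never seen during fit, so the transformed row still has only the columns 'hello' and 'world'
print(toy_vect.transform(['hello there world']).toarray())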
Step 4.1: Bayes Theorem implementation from scratch

Now that we have our dataset in the format that we need, we can move on to the next portion of our mission: the algorithm we will use to make our predictions and classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem, but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring based on certain other probabilities that are related to the event in question. It is composed of a prior (the probabilities that we are aware of, or that are given to us) and the posterior (the probabilities we are looking to compute using the priors).

Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. In the medical field, such probabilities play a very important role as they usually deal with life and death situations.

We assume the following:

P(D) is the probability of a person having diabetes. Its value is 0.01, or in other words, 1% of the general population has diabetes (Disclaimer: these values are assumptions and are not reflective of any medical study).

P(Pos) is the probability of getting a positive test result.

P(Neg) is the probability of getting a negative test result.

P(Pos|D) is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value of 0.9. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.

P(Neg|~D) is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of 0.9 and is therefore correct 90% of the time. This is also called the Specificity or True Negative Rate.

The Bayes formula is as follows:

<img src="images/bayes_formula.png" height="242" width="242">

P(A) is the prior probability of A occurring independently. In our example this is P(D). This value is given to us.

P(B) is the prior probability of B occurring independently. In our example this is P(Pos).

P(A|B) is the posterior probability that A occurs given B. In our example this is P(D|Pos). That is, the probability of an individual having diabetes, given that that individual got a positive test result. This is the value that we are looking to calculate.

P(B|A) is the likelihood probability of B occurring, given A. In our example this is P(Pos|D). This value is given to us.

Putting our values into the formula for Bayes theorem we get: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)

The probability of getting a positive test result, P(Pos), can be calculated using the Sensitivity and Specificity as follows: P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1 - Specificity)]
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
''' Solution (skeleton code will be provided) '''
# P(D)
p_diabetes = 0.01

# P(~D)
p_no_diabetes = 0.99

# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9

# Specificity or P(Neg|~D)
p_neg_no_diabetes = 0.9

# P(Pos) = P(D) * P(Pos|D) + P(~D) * P(Pos|~D)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
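To make the prior calculation concrete, here is an equivalent count-based view (my addition; the population size is an arbitrary assumption used only for illustration):

''' Illustrative frequency view of P(Pos), assuming a population of 100,000 people '''
population = 100000
diabetic = population * p_diabetes                         # 1,000 people with diabetes
non_diabetic = population * p_no_diabetes                  # 99,000 people without diabetes
true_positives = diabetic * p_pos_diabetes                 # 900 of the diabetics test positive
false_positives = non_diabetic * (1 - p_neg_no_diabetes)   # 9,900 of the non-diabetics also test positive
print('P(Pos) =', (true_positives + false_positives) / population)   # 0.108, matching p_pos above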
Using all of this information we can calculate our posteriors as follows:

The probability of an individual having diabetes, given that that individual got a positive test result: P(D|Pos) = (P(D) * Sensitivity) / P(Pos)

The probability of an individual not having diabetes, given that that individual got a positive test result: P(~D|Pos) = (P(~D) * (1 - Specificity)) / P(Pos)

The sum of our posteriors will always equal 1.
'''
Instructions:
Compute the probability of an individual having diabetes, given that that individual got a positive test result.
In other words, compute P(D|Pos).

The formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
'''
''' Solution '''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is: {}'.format(p_diabetes_pos))

'''
Instructions:
Compute the probability of an individual not having diabetes, given that that individual got a positive test result.
In other words, compute P(~D|Pos).

The formula is: P(~D|Pos) = (P(~D) * P(Pos|~D)) / P(Pos)

Note that P(Pos|~D) can be computed as 1 - P(Neg|~D). Therefore:
P(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
''' Solution '''
# P(Pos|~D)
p_pos_no_diabetes = 0.1

# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is: {}'.format(p_no_diabetes_pos))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
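A quick optional check (my addition): the two posteriors cover the only two possibilities (diabetes or no diabetes), so they should sum to 1.

''' Optional check: posteriors over the two hypotheses sum to 1 '''
assert abs(p_diabetes_pos + p_no_diabetes_pos - 1.0) < 1e-9
print('Sum of posteriors:', p_diabetes_pos + p_no_diabetes_pos)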
Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.7% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes, which is only an assumption.

What does the term 'Naive' in 'Naive Bayes' mean?

The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of 0 and 1, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, the plain Bayes theorem as used above would not suffice. Naive Bayes is an extension of Bayes' theorem that assumes that all the features are independent of each other.

Step 4.2: Naive Bayes implementation from scratch

Now that you have understood the ins and outs of Bayes theorem, we will extend it to consider cases where we have more than one feature. Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party, and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:

Probability that Jill Stein says 'freedom': 0.1 ---------> P(F|J)
Probability that Jill Stein says 'immigration': 0.1 -----> P(I|J)
Probability that Jill Stein says 'environment': 0.8 -----> P(E|J)

Probability that Gary Johnson says 'freedom': 0.7 -------> P(F|G)
Probability that Gary Johnson says 'immigration': 0.2 ---> P(I|G)
Probability that Gary Johnson says 'environment': 0.1 ---> P(E|G)

And let us also assume that the probability of Jill Stein giving a speech, P(J), is 0.5 and the same for Gary Johnson, P(G) = 0.5.

Given this, what if we had to find the probability that it was Jill Stein who gave a speech containing the words 'freedom' and 'immigration'? This is where Naive Bayes comes into play, as we are considering two features, 'freedom' and 'immigration'.

Now we are at a place where we can define the formula for the Naive Bayes theorem:

<img src="images/naivebayes.png" height="342" width="342">

Here, y is the class variable (in our case the name of the candidate) and x1 through xn are the feature vectors (in our case the individual words). The theorem makes the assumption that each of the features (xi) is independent of the others.

To break this down, we have to compute the following posterior probabilities:

P(J|F,I): Probability that the speaker is Jill Stein, given that the words 'freedom' and 'immigration' were said. Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I). Here P(F,I) is the probability of the words 'freedom' and 'immigration' being said in a speech.

P(G|F,I): Probability that the speaker is Gary Johnson, given that the words 'freedom' and 'immigration' were said. Using the formula, we can compute this as follows: P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I)
'''
Instructions:
Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or P(F,I).

The first step is multiplying the probability of Jill Stein giving a speech with her individual probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text.

The second step is multiplying the probability of Gary Johnson giving a speech with his individual probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text.

The third step is to add both of these probabilities and you will get P(F,I).
'''
''' Solution: Step 1 '''
# P(J)
p_j = 0.5

# P(F|J)
p_j_f = 0.1

# P(I|J)
p_j_i = 0.1

p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)

''' Solution: Step 2 '''
# P(G)
p_g = 0.5

# P(F|G)
p_g_f = 0.7

# P(I|G)
p_g_i = 0.2

p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)

''' Solution: Step 3: Compute P(F,I) and store in p_f_i '''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
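A tiny optional check (my addition): with the numbers above, P(F,I) should come out to 0.075.

''' Optional check: 0.5*0.1*0.1 + 0.5*0.7*0.2 = 0.005 + 0.07 = 0.075 '''
assert abs(p_f_i - 0.075) < 1e-9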
Now we can compute P(J|F,I), the probability that the speaker is Jill Stein given that the words 'freedom' and 'immigration' were said, and P(G|F,I), the probability that the speaker is Gary Johnson given the same words.
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi
'''
''' Solution '''
p_j_fi = p_j_text / p_f_i
print('The probability of the speaker being Jill Stein, given the words freedom and immigration: ', format(p_j_fi))

'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi
'''
''' Solution '''
p_g_fi = p_g_text / p_f_i
print('The probability of the speaker being Gary Johnson, given the words freedom and immigration: ', format(p_g_fi))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
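The same arithmetic generalizes to any number of classes and (assumed independent) features. The helper below is a sketch of my own, not part of the original tutorial; the function and argument names are illustrative.

''' Illustrative helper: Naive Bayes posteriors for arbitrary classes and features '''
def naive_bayes_posteriors(priors, likelihoods):
    # priors: {class: P(class)}; likelihoods: {class: [P(feature_1 | class), P(feature_2 | class), ...]}
    scores = {}
    for c in priors:
        score = priors[c]
        for p in likelihoods[c]:
            score *= p           # multiply the prior by each per-feature likelihood
        scores[c] = score
    total = sum(scores.values()) # this plays the role of P(F,I) above
    return {c: scores[c] / total for c in scores}

print(naive_bayes_posteriors({'Jill': 0.5, 'Gary': 0.5},
                             {'Jill': [0.1, 0.1], 'Gary': [0.7, 0.2]}))
# Should reproduce roughly 0.067 for Jill and 0.933 for Gary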
And as we can see, just as in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes theorem from scratch. Our analysis shows that, given the words 'freedom' and 'immigration', there is only about a 6.7% chance that the speaker was Jill Stein of the Green Party, compared to a 93.3% chance for Gary Johnson of the Libertarian Party.

Another, more generic, example of Naive Bayes in action is when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually. If it treated them individually, we would get results of images tagged with 'Sacramento', like pictures of city landscapes, and images of 'Kings', which could be pictures of crowns or kings from history, when what we are looking for are images of the basketball team. This is a classic case of a search engine treating the words as independent entities and hence being 'naive' in its approach.

Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm looks at each word individually and not as associated entities with any kind of link between them. In the case of spam detectors this usually works, as there are certain red flag words which can almost guarantee classification as spam; for example, emails with words like 'viagra' are usually classified as spam.

Step 5: Naive Bayes implementation using scikit-learn

Thankfully, sklearn has several Naive Bayes implementations that we can use, so we do not have to do the math from scratch. We will be using sklearn's sklearn.naive_bayes module to make predictions on our dataset. Specifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as, in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand, Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian (normal) distribution.
''' Instructions: We have loaded the training data into the variable 'training_data' and the testing data into the variable 'testing_data'. Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier 'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier. ''' ''' Solution ''' from sklearn.naive_bayes import MultinomialNB naive_bayes = MultinomialNB() naive_bayes.fit(training_data, y_train) ''' Instructions: Now that our algorithm has been trained using the training data set we can now make some predictions on the test data stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable. ''' ''' Solution ''' predictions = naive_bayes.predict(testing_data)
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
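As an optional illustration (my addition, not one of the tutorial's steps), the trained classifier can also score a brand-new message, as long as the text is pushed through the same fitted CountVectorizer first. The message below is made up:

''' Illustrative: classify a made-up message with the fitted vectorizer and trained model '''
new_message = ['WINNER!! You have been selected for a free prize, reply now to claim']
new_counts = count_vector.transform(new_message)   # reuse the vocabulary learned from the training data
print(naive_bayes.predict(new_counts))             # prints the predicted label for the new message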
Now that predictions have been made on our test set, we need to check the accuracy of our predictions.

Step 6: Evaluating our model

Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do a quick recap of them.

Accuracy measures how often the classifier makes the correct prediction. It's the ratio of the number of correct predictions to the total number of predictions (the number of test data points).

Precision tells us what proportion of the messages we classified as spam actually were spam. It is the ratio of true positives (messages classified as spam that are actually spam) to all positives (all messages classified as spam, irrespective of whether that was the correct classification); in other words, it is the ratio [True Positives/(True Positives + False Positives)].

Recall (sensitivity) tells us what proportion of the messages that actually were spam were classified by us as spam. It is the ratio of true positives (messages classified as spam that are actually spam) to all the messages that were actually spam; in other words, it is the ratio [True Positives/(True Positives + False Negatives)].

For classification problems that are skewed in their classification distributions, like ours (for example, if we had 100 text messages and only 2 were spam while the other 98 weren't), accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam, which would then be false negatives) and 10 as spam (all 10 being false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the harmonic mean of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score.

We will be using all 4 metrics to make sure our model does well. For all 4 metrics, whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.
''' Instructions: Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions you made earlier stored in the 'predictions' variable. ''' ''' Solution ''' from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score print('Accuracy score: ', format(accuracy_score(y_test, predictions))) print('Precision score: ', format(precision_score(y_test, predictions))) print('Recall score: ', format(recall_score(y_test, predictions))) print('F1 score: ', format(f1_score(y_test, predictions)))
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
littlewizardLI/Udacity-ML-nanodegrees
apache-2.0
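Beyond the four summary scores, a confusion matrix (my addition, not required by the tutorial) shows exactly where the misclassifications occur: the diagonal holds correct predictions and the off-diagonal cells hold false positives and false negatives.

''' Optional: confusion matrix for the test-set predictions '''
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, predictions))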
We will be using the re library to parse our lines of text. We will import the InteractiveRunner class for executing our pipeline in the notebook environment and the interactive_beam module for exploring the PCollections. Finally, we will import two functions from the Dataframe API, to_dataframe and to_pcollection. to_dataframe converts your (schema-aware) PCollection into a dataframe and to_pcollection goes back in the other direction to a PCollection of type beam.Row.

We will first create a composite PTransform ReadWordsFromText that takes a file pattern (file_pattern), uses the ReadFromText source to read in the files, and then applies FlatMap with a lambda to parse each line into individual words.
class ReadWordsFromText(beam.PTransform):
    # Composite transform: read all text files matching the pattern and emit one element per word.

    def __init__(self, file_pattern):
        self._file_pattern = file_pattern

    def expand(self, pcoll):
        return (pcoll.pipeline
                | beam.io.ReadFromText(self._file_pattern)  # one element per line of text
                | beam.FlatMap(lambda line: re.findall(r'[\w\']+', line.strip(), re.UNICODE)))  # split each line into words
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
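A minimal usage sketch of the transform above (my addition): it assumes the imports described earlier (apache_beam as beam, InteractiveRunner, and interactive_beam as ib) have already been executed, and the file path is a placeholder, not one from the original demo.

# Assumed sketch: build an interactive pipeline, apply the composite transform, and peek at the words
p = beam.Pipeline(InteractiveRunner())
words = p | 'ReadWords' >> ReadWordsFromText('gs://your-bucket/kinglear.txt')   # placeholder path
ib.show(words)   # interactively materialize and display the PCollection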