Columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (string, 15 classes)
When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data. Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
from ipywidgets import interact

def plot_fit(degree=1, Npts=50):
    X, y = make_data(Npts, error=1)
    X_test = np.linspace(-0.1, 1.1, 500)[:, None]

    model = PolynomialRegression(degree=degree)
    model.fit(X, y)
    y_test = model.predict(X_test)

    plt.scatter(X.ravel(), y)
    plt.plot(X_test.ravel(), y_test)
    plt.ylim(-4, 14)
    plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))

interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Detecting Over-fitting with Validation Curves Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working. Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
X, y = make_data(120, error=1.0)
plt.scatter(X, y);

from sklearn.model_selection import validation_curve

def rms_error(model, X, y):
    y_pred = model.predict(X)
    return np.sqrt(np.mean((y - y_pred) ** 2))

degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
                                       'polynomialfeatures__degree', degree,
                                       cv=7, scoring=rms_error)
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Now let's plot the validation curves:
def plot_with_err(x, data, **kwargs):
    mu, std = data.mean(1), data.std(1)
    lines = plt.plot(x, mu, '-', **kwargs)
    plt.fill_between(x, mu - std, mu + std, edgecolor='none',
                     facecolor=lines[0].get_color(), alpha=0.2)

plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Notice the trend here, which is common for this type of plot. For small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model. As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model. Note that the training score (nearly) always improves with model complexity, simply because a more complicated model can fit the noise better. The validation score, on the other hand, generally has a sweet spot, which here is around 5 terms (a degree-4 polynomial). Here's our best-fit model according to the cross-validation:
# X_test was previously defined only inside plot_fit, so re-create it here
X_test = np.linspace(-0.1, 1.1, 500)[:, None]

model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
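The best degree can also be read directly off the validation curve computed above; a minimal sketch, re-using the `degree` and `val_test` arrays from the `validation_curve` cell:

```python
# Pick the degree whose mean cross-validated RMS error is lowest.
best_degree = degree[np.argmin(val_test.mean(axis=1))]
print("best degree according to cross-validation:", best_degree)
```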
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Detecting Data Sufficiency with Learning Curves As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property. The idea is to plot the training and validation error (here, the RMS error) as a function of the number of training points.
from sklearn.model_selection import learning_curve

def plot_learning_curve(degree=3):
    train_sizes = np.linspace(0.05, 1, 120)
    N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
                                                  X, y, train_sizes, cv=5,
                                                  scoring=rms_error)
    plot_with_err(N_train, val_train, label='training scores')
    plot_with_err(N_train, val_test, label='validation scores')
    plt.xlabel('Training Set Size'); plt.ylabel('rms error')
    plt.ylim(0, 3)
    plt.xlim(5, 80)
    plt.legend()
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Let's see what the learning curves look like for a linear model:
plot_learning_curve(1)
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting. As you add more data points, the training error will never decrease, and the testing error will never increase (why do you think this is?). It is easy to see in this plot that if you'd like to reduce the error down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
plot_learning_curve(3)
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0! What if we get even more complex?
plot_learning_curve(10)
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb
csaladenes/csaladenes.github.io
mit
Try out Environment
BeraterEnv.showStep = True
BeraterEnv.showDone = True

env = BeraterEnv()
print(env)

observation = env.reset()
print(observation)

for t in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.close()
print(observation)
notebooks/rl/berater-v4.ipynb
DJCordhose/ai
mit
Train model

0.73 would be the perfect total reward.
!rm -r logs
!mkdir logs
!mkdir logs/berater

# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'

import gym
from baselines import deepq
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2

BeraterEnv.showStep = False
BeraterEnv.showDone = False

env = BeraterEnv()

wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
model = ppo2.learn(network='mlp', env=monitored_env, total_timesteps=50000)

# monitored_env = bench.Monitor(env, log_dir)
# https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables
# %time model = deepq.learn(\
#     monitored_env,\
#     seed=42,\
#     network='mlp',\
#     lr=1e-3,\
#     gamma=0.99,\
#     total_timesteps=30000,\
#     buffer_size=50000,\
#     exploration_fraction=0.5,\
#     exploration_final_eps=0.02,\
#     print_freq=1000)

model.save('berater-ppo-v4.pkl')
monitored_env.close()
notebooks/rl/berater-v4.ipynb
DJCordhose/ai
mit
Visualizing Results https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb
!ls -l $log_dir

from baselines.common import plot_util as pu
results = pu.load_results(log_dir)

import matplotlib.pyplot as plt
import numpy as np

r = results[0]
# plt.ylim(-1, 1)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
notebooks/rl/berater-v4.ipynb
DJCordhose/ai
mit
Enjoy model
import numpy as np

observation = env.reset()
state = np.zeros((1, 2*128))
dones = np.zeros((1))

BeraterEnv.showStep = True
BeraterEnv.showDone = False

for t in range(1000):
    actions, _, state, _ = model.step(observation, S=state, M=dones)
    observation, reward, done, info = env.step(actions[0])
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.close()
notebooks/rl/berater-v4.ipynb
DJCordhose/ai
mit
To fix this problem, you can modify the pattern string to add support for newlines. For example:
comment = re.compile(r'/\*((?:.|\n)*?)\*/')
comment.findall(text2)
02 strings and text/02.08 regexp for multiline partterns.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
In this pattern, (?:.|\n) specifies a non-capturing group (that is, a group that is used only for matching and cannot be captured separately or referenced by number). Discussion: the re.compile() function accepts a flag argument called re.DOTALL, which is very useful here. It makes the dot (.) in a regular expression match any character, including newlines. For example:
comment = re.compile(r'/\*(.*?)\*/', re.DOTALL)
comment.findall(text2)
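The cells above assume a variable text2 defined earlier in the notebook. A self-contained sketch of the same idea, using a hypothetical multi-line C-style comment in place of text2:

```python
import re

# Hypothetical sample text standing in for the notebook's `text2`
text2 = '''/* this is a
multiline comment */'''

# Without re.DOTALL, `.` does not match newlines, so the comment is not found
print(re.findall(r'/\*(.*?)\*/', text2))             # -> []

# With re.DOTALL, `.` matches newlines as well
print(re.findall(r'/\*(.*?)\*/', text2, re.DOTALL))  # -> [' this is a\nmultiline comment ']
```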
02 strings and text/02.08 regexp for multiline partterns.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
Python

Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. An interpreted language, Python has a design philosophy which emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax which allows programmers to express concepts in fewer lines of code than might be needed in languages such as C++ or Java. The language provides constructs intended to enable writing clear programs on both a small and large scale.

<img src="rossum.png" width="300px">

Python's Benevolent Dictator For Life!

"Python is an experiment in how much freedom programmers need. Too much freedom and nobody can read another's code; too little and expressiveness is endangered." - Guido van Rossum

Why Use It?

- Simple, easy to use, and very efficient: what you can do in 100 lines of Python could take you 1000 in C++ ... this is the reason many startups (e.g., Instagram) use Python and keep using it.
- 90% of robotics uses either C++ or Python. Although C++ is faster at run-time, development (write, compile, link, etc.) is much slower due to complex syntax, memory management, pointers (they can be fun!), and difficulty in debugging any sort of real program.
- Java is dying (or dead).
- Microsoft is still struggling to get people outside of the Windows OS to embrace C#.
- Apple's Swift is too new and constantly making major changes ... maybe some day.

Who Uses It?

- Industrial Light & Magic (the Star Wars people): used in post-production scripting to tie together outputs from other C++ programs.
- Eve Online (big MMORPG game): used for both client and server aspects of the game.
- Instagram, Spotify, SurveyMonkey, The Onion, Bitbucket, Pinterest, and more use Django (a Python website template framework) to create/serve millions of users.
- Dropbox, PayPal, Walmart and Google (YouTube). Note: Guido van Rossum worked for Google and now works for Dropbox.

Running Programs on UNIX (or your robot)

- Call a Python program via the Python interpreter: python my_program.py. This is kind of the stupid way.
- Make a Python file directly executable:
  - Add a shebang (it's a Unix thing) to the top of your program: #!/usr/bin/env python
  - Make the file executable: chmod a+x my_program.py
  - Invoke the file from the Unix command line: ./my_program.py

Enough to Understand Code (Short Version)

- Indentation matters for functions, loops, classes, etc.
- The first assignment to a variable creates it; variable types (int, float, etc.) don't need to be declared.
- Assignment is = and comparison is ==.
- For numbers, + - * % work as expected; modulus (%) returns the remainder: 5%3 => 2.
- Logical operators are words (and, or, not), not symbols.
- We are using __future__ for Python 2 / 3 compatibility.
- The basic printing command is print('hello').
- Division works as expected: float division: 5/2 = 2.5; integer division: 5//2 = 2.
- Comments start with #; the rest of the line is ignored.
- You can include a "documentation string" as the first line of a new function or class you define:

```python
def my_function(n):
    """
    my_function(n) takes a positive integer and returns n + 5
    """
    # assert ... remember this from ECE281?
    assert n > 0, "crap, n is 0 or negative!"
    return n + 5
```

Printing

Again, to have Python 3 compatibility and to help you in the future, we are going to print things using the print function. Python 2 by default uses a print statement. Also, it is good form to use the newer format() function on strings rather than the old C style %s for a string or %d for an integer.

There are lots of cool things you can do with format(), but we won't dive too far into it ... just the basics. WARNING: Your homework with Code Academy uses the old way to print; just do it that way there and get through it. For this class we are doing it this way!
from __future__ import division        # fix division
from __future__ import print_function  # use print function

print('hello world')   # single quotes
print("hello world")   # double quotes
print('3/4 is', 3/4)   # this prints 0.75
print('I am {} ... for {} yrs I have been training Jedi'.format("Yoda", 853))
print('float: {:5.1f}'.format(3.1424567))  # prints float:   3.1
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Unicode

Unicode sucks in Python 2.7, but if you want to use it: alphabets, arrows, emoji.
print(u'\u21e6 \u21e7 \u21e8 \u21e9')
print(u'\u2620')

# this is a dictionary, we will talk about it next ... sorry for the out-of-order
uni = {
    'left': u'\u21e6',
    'up': u'\u21e7',
    'right': u'\u21e8',
    'down': u'\u21e9',
}

print(u'\nYou must go {}'.format(uni['up']))

# notice all strings have u on the front
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Data Types

Python is dynamically typed, so you don't really need to keep track of variables and declare them as ints, floats, doubles, unsigned, etc. There are a few places where this isn't true, but we will deal with those as we encounter them.
# bool
z = True  # or False

# integers (default)
z = 3

# floats
z = 3.124
z = 5/2
print('z =', z)

# dictionary or hash tables
bob = {'a': 5, 'b': 6}
print('bob["a"]:', bob['a'])

# you can assign a new key/value pair
bob['c'] = 'this is a string!!'
print(bob)
print('len(bob) =', len(bob))

# you can also access what keys are in a dict
print('bob.keys() =', bob.keys())

# let's get crazy and do different types and have a key that is an int
bob = {'a': True, 11: [1,2,3]}
print('bob = ', bob)
print('bob[11] = ', bob[11])  # don't do this, it is confusing!!

# arrays or lists are mutable (changeable)
# the first element is 0 like all good programming languages
bob = [1,2,3,4,5]
bob[2] = 'tom'
print('bob list', bob)
print('bob list[3]:', bob[3])  # remember it is zero indexed

# or ... a tuple will do this too
bob = [1]*5
print('bob one-liner version 2:', bob)
print('len(bob) =', len(bob))

# strings
z = 'hello world!!'
z = 'hello' + ' world'           # concatenate
z = 'hhhello world!@#$'[2:13]    # strings are just an array of letters
print('my crazy string:', z)
print('{}: {} {:.2f}'.format('formatting', 3.1234, 6.6666))
print('len(z) =', len(z))

# tuples are immutable (not changeable, which makes them faster/smaller)
bob = (1,2,3,4)
print('bob tuple', bob)
print('bob tuple*3', bob*3)  # repeats tuple 3x
print('len(bob) =', len(bob))

# since tuples are immutable, this will throw an error
bob[1] = 'tom'

# assign multiple variables at once
bob = (4,5,)
x,y = bob
print(x,y)

# wait, I changed my mind ... easy to swap
x,y = y,x
print(x,y)
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Flow Control

Logic Operators: flow control is generally done via some math operator or boolean logic operator.

For Loop
# range(start, stop, step)
# this only works for integer values
range(3,10)  # a jupyter cell will always print the last thing

# iterates from start (default 0) to less than the highest number
for i in range(5):
    print(i)

# you can also create simple arrays like this:
bob = [2*x+3 for x in range(4)]
print('bob one-liner:', bob)

# start=2, stop<8, step=2, so notice the last value is 6 NOT 8
for i in range(2,8,2):
    print(i)

# I have a list of things ... maybe images or something else.
# A for loop can iterate through the list. Here, each time
# through, ltr is set to the next letter in my array 'dfec'
things = ['d', 'e', 'f', 'c']
for ltr in things:
    print(ltr)

# enumerate()
# sometimes you need a counter in your for loop, use enumerate
things = ['d', 'e', 'f', 3.14]  # LOOK! the last element is a float not a letter ... that's OK
for i, ltr in enumerate(things):
    print('things[{}]: {}'.format(i, ltr))

# zip()
# sometimes you have a couple of arrays that you want to work on at the same time, use zip
# to combine them together
# NOTE: all arrays have to have the SAME LENGTH
a = ['bob', 'tom', 'sally']
b = ['good', 'evil', 'nice']
c = [10, 20, 15]

for name, age, status in zip(a, c, b):  # notice I mixed up a, b, c
    status = status.upper()
    name = name[0].upper() + name[1:]  # strings are immutable
    print('{} is {} yrs old and totally {}'.format(name, age, status))
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
if / elif / else
# classic if/then statements work the same as other languages. # if the statement is True, then do something, if it is False, then skip over it. if False: print('should not get here') elif True: print('this should print') else: print('this is the default if all else fails') n = 5 n = 3 if n==1 else n-1 # one line if/then statement print(n)
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
While
x = 3
while True:      # while loop runs while its condition is True
    if not x:    # I will enter this if statement when x = False or 0
        break    # breaks me out of the loop
    else:
        print(x)
        x -= 1
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Exception Handling When you write code you should think about how you could break it, then design it so you can't. Now, you don't necessary need to write bullet proof code ... that takes a lot of time (and time is money), but you should make an effort to reduce your debug time. A list of Python 2.7 exceptions is here. KeyboardInterrupt: is a common one when a user pressed ctl-C to quit the program. Some others: BaseException +-- SystemExit +-- KeyboardInterrupt +-- GeneratorExit +-- Exception +-- StopIteration +-- StandardError | +-- BufferError | +-- ArithmeticError | | +-- FloatingPointError | | +-- OverflowError | | +-- ZeroDivisionError | +-- AssertionError | +-- AttributeError | +-- EnvironmentError | | +-- IOError | | +-- OSError | | +-- WindowsError (Windows) | | +-- VMSError (VMS) | +-- EOFError | +-- ImportError | +-- LookupError | | +-- IndexError | | +-- KeyError | +-- MemoryError | +-- NameError | | +-- UnboundLocalError | +-- ReferenceError | +-- RuntimeError | | +-- NotImplementedError | +-- SyntaxError | | +-- IndentationError | | +-- TabError | +-- SystemError | +-- TypeError | +-- ValueError | +-- UnicodeError | +-- UnicodeDecodeError | +-- UnicodeEncodeError | +-- UnicodeTranslateError +-- Warning +-- DeprecationWarning +-- PendingDeprecationWarning +-- RuntimeWarning +-- SyntaxWarning +-- UserWarning +-- FutureWarning +-- ImportWarning +-- UnicodeWarning +-- BytesWarning
# exception handling ... use it in your code in smart places
try:
    a = (1,2,)  # tuple ... notice the extra comma after the 2
    a[0] = 1    # this won't work!
except:         # this catches any exception thrown
    print('you idiot ... you cannot modify a tuple!!')

# error
5/0

try:
    5/0
except ZeroDivisionError as e:
    print(e)
    # raise  # this raises the error to the next
    #          level so I don't have to handle it here

try:
    5/0
except ZeroDivisionError as e:
    print(e)
    raise  # this raises the error to the next level
           # (in this case, the Jupyter GUI handles it)
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
When would you want to use raise? Why not always handle the error here? What is different when the raise command is used?
# Honestly, I generally just use Exception, from which most other exceptions
# are derived, but I am lazy and it works fine for what I do
try:
    5/0
except Exception as e:
    print(e)

# all is right with the world ... these will work, nothing will print
assert True
assert 3 > 1

# this will fail ... and we can add a message if we want to
assert 3 < 1, 'hello ... this should fail'
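As a partial answer to the questions above, here is a small, hypothetical sketch of when re-raising is useful: a low-level helper that cannot decide how to recover logs the problem and re-raises, and the caller decides what to do. (The function and file name here are made up for illustration.)

```python
def read_config(path):
    """Hypothetical helper: it can log the problem, but only the caller
    knows whether a missing config file is fatal or not."""
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        print('read_config failed:', e)
        raise  # pass the exception up to whoever called us

try:
    cfg = read_config('does_not_exist.cfg')
except IOError:
    cfg = ''  # the caller decides: fall back to an empty config
print('config length:', len(cfg))
```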
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Libraries We will need to import math to have access to trig and other functions. There will be other libraries, like numpy, cv2, etc., that you will need too.
import math
print('messy', math.cos(math.pi/4))

# that looks clumsy ... let's do this instead
from math import cos, pi
print('simpler math:', cos(pi/4))

# or we just want to shorten the name to reduce typing ... good programmers are lazy!
import numpy as np

# well, what is in the math library I might want to use????
dir(math)

# what is tanh???
help(math.tanh)

# print the doc string for the library ... what does it do?
print(math.__doc__)
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Functions There isn't too much that is special about python functions, just the format.
def my_cool_function(x):
    """
    This is my cool function which takes an argument x
    and returns a value
    """
    return 2*x/3

my_cool_function(6)  # 2*6/3 = 4
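One Python-specific feature worth knowing that the cell above doesn't show is default and keyword arguments; a small sketch (function and parameter names are made up):

```python
def scale(x, gain=2.0, offset=0.0):
    """Multiply x by gain and add offset; gain and offset have default values."""
    return gain * x + offset

print(scale(3))                      # uses the defaults: 6.0
print(scale(3, gain=10))             # override one argument by name: 30.0
print(scale(3, offset=1, gain=0.5))  # keyword arguments can come in any order: 2.5
```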
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
Classes and Object Oriented Programming (OOP) Ok, we don't have time to really teach you how to do this. It would be better if your real programming classes did this. So we are just going to kludge this together here, because these could be useful in this class. In fact, I (and 99% of the world) generally do OOP. Classes are awesome for a few reasons. First, they help you reuse code instead of duplicating it in other places all over your program. Classes will save your life when you realize you want to change a function and you only have to change it in one spot instead of 10 different spots with slightly different code. You can also put a bunch of related functions together because they make sense. Another important part of classes is that they allow you to create more flexible functions. We are going to keep it simple and basically show you how to do OOP in Python very simply. This will be a little familiar from ECE382 with structs (sort of).
class ClassName(object):
    """ So this is my cool class """

    def __init__(self, x):
        """
        This is called a constructor in OOP. When I make an object
        this function is called.
        self = contains all of the object's values
        x = an argument to pass something into the constructor
        """
        self.x = x
        print('> Constructor called', x)

    def my_cool_function(self, y):
        """
        This is called a method (function) that works on the class.
        It always needs self to access class values, but can also have
        as many arguments as you want. I only have 1 arg called y.
        """
        self.x = y
        print('> called function: {}'.format(self.x))

    def __del__(self):
        """
        Destructor. This is called when the object goes out of scope
        and is destroyed. It takes NO arguments other than self.
        Note, this is hard to see in jupyter, because it will probably
        get called when the program (notebook) ends (shuts down).
        """
        pass

a = ClassName('bob')
a.my_cool_function(3.14)

b = ClassName(28)
b.my_cool_function('tom')

for i in range(3):
    a = ClassName('bob')
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
There are tons of things you can do with objects. Here is one example. Say we have a ball class and for some reason we want to be able to add balls together.
class Ball(object):
    def __init__(self, color, radius):
        # this ball always has the color and radius set below
        self.radius = radius
        self.color = color

    def __str__(self):
        """
        When something tries to turn this object into a string,
        this function gets called.
        """
        s = 'Ball {}, radius: {:.1f}'.format(self.color, self.radius)
        return s

    def __add__(self, a):
        c = Ball('gray', a.radius + self.radius)
        return c

r = Ball('red', 3)
g = Ball('green', radius=4)
b = Ball(radius=5, color='blue')

print(r)
print(g)
print(b)

print('total size:', r.radius + b.radius + g.radius)
print('Add method:', r + b + g)

# the base class of all objects in Python should be
# object. It comes with these methods already defined.
dir(object)
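Dunder methods go beyond __str__ and __add__. As a small sketch building on the Ball class above (the subclass name here is made up), you can also make objects comparable so that lists of them can be sorted:

```python
class SortableBall(Ball):
    """A hypothetical subclass of Ball that adds comparison by radius."""
    def __eq__(self, other):
        return self.radius == other.radius

    def __lt__(self, other):
        return self.radius < other.radius

balls = [SortableBall('red', 3), SortableBall('blue', 1), SortableBall('green', 2)]
for ball in sorted(balls):  # sorted() uses __lt__ under the hood
    print(ball)             # print() uses __str__ from Ball
```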
website/block_1_basics/lsn3/lsn3.ipynb
MarsUniversity/ece387
mit
We import the required packages:
import numpy as np
import matplotlib.pyplot as plt
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
The matplotlib library is gigantic, and it is hard to get a global picture of all its possibilities on first contact. It is a good idea to keep the documentation and the gallery at hand (http://matplotlib.org/gallery.html#pylab_examples): The pyplot interface The pyplot interface provides a series of functions that operate on a global state - that is, we do not specify which figure or axes we are acting on. It is a quick and convenient way to create plots, but we give up some control. The plot function The pyplot package is usually imported under the alias plt, so that all of its functions are accessed through plt.&lt;function&gt;. The most basic function is plot:
plt plt.plot([0.0, 0.1, 0.2, 0.7, 0.9], [1, -2, 3, 4, 1])
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
The plot function takes a single list (if we only want to specify the y values) or two lists (if we specify x and y). Naturally, if we pass two lists they must have the same length. The most common task when working with matplotlib is plotting a function. What we have to do is define a domain and evaluate the function on it. For example: $$ f(x) = e^{-x^2} $$
def f(x):
    return np.exp(-x ** 2)
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
We define the domain with the np.linspace function, which creates a vector of evenly spaced points:
x = np.linspace(-1, 3, 100)
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
And we plot the function:
plt.plot(x, f(x), label="Función f(x)")
plt.xlabel("Eje $x$")
plt.ylabel("$f(x)$")
plt.legend()
plt.title("Función $f(x)$")
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
We notice several things:

- Successive calls to functions in plt. update the current figure. That is how the pyplot interface works.
- We can add labels, and write $\LaTeX$ in them. We just have to enclose it between dollar signs $$.
- By adding the label argument we can define a legend.

Customization

The plot function accepts a series of arguments to customize the appearance of the plot. With a letter we can specify the color, and with a symbol the line style.
plt.plot(x, f(x), 'ro')
plt.plot(x, 1 - f(x), 'g--')
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
These are actually shorthand codes that correspond to arguments of the plot function:
plt.plot(x, f(x), color='red', linestyle='', marker='o')
plt.plot(x, 1 - f(x), c='g', ls='--')
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
The list of possible arguments and abbreviations is available in the documentation of the plot function http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot. More customization, gone wild Since matplotlib 1.4 the appearance of a plot can easily be manipulated using styles. To see which styles are available, we would type plt.style.available.
plt.style.available
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
There aren't many, but we can create our own. To activate one of them, we use plt.style.use. Here is the one I use! https://gist.github.com/Juanlu001/edb2bf7b583e7d56468a
#plt.style.use("ggplot") # Afecta a todos los plots
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
<div class="alert alert-warning">No he sido capaz de encontrar una manera fácil de volver a la apariencia por defecto en el notebook. A ver qué dicen los desarrolladores (https://github.com/ipython/ipython/issues/6707) ¡pero de momento si quieres volver a como estaba antes toca reiniciar el notebook!</div> Para emplear un estilo solo a una porción del código, creamos un bloque with plt.style.context("STYLE"):
with plt.style.context('ggplot'):
    plt.plot(x, f(x))
    plt.plot(x, 1 - f(x))
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
And there is an even crazier kind of customization:
with plt.xkcd():
    plt.plot(x, f(x))
    plt.plot(x, 1 - f(x))
    plt.xlabel("Eje x")
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
Imitating XKCD was never so easy! http://xkcd.com/353/ Other types of plots The scatter function shows a cloud of points, with the option of varying the size and color as well.
N = 100
x = np.random.randn(N)
y = np.random.randn(N)

plt.scatter(x, y)
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
With s and c we can modify the size and the color, respectively. For the color, each numeric value is assigned a color through a colormap; that map can be changed with the cmap argument. The correspondence can be visualized by calling the colorbar function.
s = np.abs(50 + 50 * np.random.randn(N))
c = np.random.randn(N)

plt.scatter(x, y, s=s, c=c, cmap=plt.cm.Blues)
plt.colorbar()

plt.scatter(x, y, s=s, c=c, cmap=plt.cm.Oranges)
plt.colorbar()
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
matplotlib ships with many colormaps by default. The SciPy Lecture Notes give a list of all of them (http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html#colormaps) The contour function is used to visualize the level curves of functions of two variables and is closely tied to the np.meshgrid function. Let's look at an example: $$f(x) = x^2 - y^2$$
def f(x, y):
    return x ** 2 - y ** 2

x = np.linspace(-2, 2)
y = np.linspace(-2, 2)
xx, yy = np.meshgrid(x, y)
zz = f(xx, yy)

plt.contour(xx, yy, zz)
plt.colorbar()
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
The contourf function is almost identical but fills the space between levels. We can specify these levels manually using the fourth argument:
plt.contourf(xx, yy, zz, np.linspace(-4, 4, 100))
plt.colorbar()
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
To save the plots to separate files we can use the plt.savefig function. matplotlib will pick the appropriate file type based on the extension we specify. We will look at this in more detail when we talk about the object-oriented interface. Multiple figures We can create figures with several sets of axes by passing the number of rows and columns to subplot.
x = np.linspace(-1, 7, 1000)

fig = plt.figure()

plt.subplot(211)
plt.plot(x, np.sin(x))
plt.grid(False)
plt.title("Función seno")

plt.subplot(212)
plt.plot(x, np.cos(x))
plt.grid(False)
plt.title("Función coseno")
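The savefig call mentioned above is not shown in the original cells; a minimal sketch, re-using the `fig` object from the cell above (the file name is made up):

```python
# Save the current figure; matplotlib infers the format (PNG here) from the extension.
fig.savefig("dos_paneles.png", dpi=150, bbox_inches="tight")
```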
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
<div class="alert alert-info">¿Cómo se ajusta el espacio entre gráficas para que no se solapen los textos? Buscamos en Google "plt.subplot adjust" en el primer resultado tenemos la respuesta http://stackoverflow.com/a/9827848</div> Como hemos guardado la figura en una variable, puedo recuperarla más adelate y seguir editándola.
fig.tight_layout()
fig
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
<div class="alert alert-warning">Si queremos manipular la figura una vez hemos abandonado la celda donde la hemos definido, tendríamos que utilizar la interfaz orientada a objetos de matplotlib. Es un poco lioso porque algunas funciones cambian de nombre, así que en este curso no la vamos a ver. Si te interesa puedes ver los notebooks de la primera edición, donde sí la introdujimos. https://github.com/AeroPython/Curso_AeroPython/releases/tag/v1.0</div> Ejercicio Crear una función que represente gráficamente esta expresión: $$\sin(2 \pi f_1 t) + \sin(2 \pi f_2 t)$$ Siendo $f_1$ y $f_2$ argumentos de entrada (por defecto $10$ y $100$) y $t \in [0, 0.5]$. Además, debe mostrar: leyenda, título "Dos frecuencias", eje x "Tiempo ($t$)" y usar algún estilo de los disponibles.
def frecuencias(f1=10.0, f2=100.0):
    max_time = 0.5
    times = np.linspace(0, max_time, 1000)
    signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
    with plt.style.context("ggplot"):
        plt.plot(signal, label="Señal")
        plt.xlabel("Tiempo ($t$)")
        plt.title("Dos frecuencias")
        plt.legend()

frecuencias()
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
Exercise Plot the level curves of this function: $$g(x, y) = \cos{x} + \sin^2{y}$$ To obtain this result:
def g(x, y):
    return np.cos(x) + np.sin(y) ** 2

# We need many points in the grid so that no irregularities
# show up where the lines cross
x = np.linspace(-2, 3, 1000)
y = np.linspace(-2, 3, 1000)
xx, yy = np.meshgrid(x, y)
zz = g(xx, yy)

# We can adjust the figure size with figsize
fig = plt.figure(figsize=(6, 6))

# Use 13 levels and the Spectral colormap
# We have to assign the output to the variable cs to create the colorbar later
cs = plt.contourf(xx, yy, zz, np.linspace(-1, 2, 13), cmap=plt.cm.Spectral)

# Create the colorbar
plt.colorbar()

# With `colors='k'` all the lines are drawn in black
# Assign the output to a variable to create the labels
cs = plt.contour(xx, yy, zz, np.linspace(-1, 2, 13), colors='k')

# Create the labels on the lines
plt.clabel(cs)

# Set the axis labels
plt.xlabel("Eje x")
plt.ylabel("Eje y")
plt.title(r"Función $g(x, y) = \cos{x} + \sin^2{y}$")
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
The final trick: interactive widgets We don't have much time, but let's look at something interesting that was recently introduced in the notebook: interactive widgets.
# note: on newer installations this import lives in ipywidgets (from ipywidgets import interactive)
from IPython.html.widgets import interactive

interactive(frecuencias, f1=(10.0, 200.0), f2=(10.0, 200.0))
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
References matplotlib beginner's guide http://matplotlib.org/users/beginner.html matplotlib tutorial in Spanish http://pybonacci.org/tag/tutorial-matplotlib-pyplot/ matplotlib quick reference http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html#quick-references
# This cell styles the notebook
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
4-matplotlib.ipynb
eduardojvieira/Curso-Python-MEC-UCV
mit
This example is an itinerary choice model built using the example itinerary choice dataset included with Larch. We'll begin by loading that example data.
import pandas as pd
import larch

from larch.data_warehouse import example_file

itin = pd.read_csv(example_file("arc"), index_col=['id_case', 'id_alt'])
d = larch.DataFrames(itin, ch='choice', crack=True, autoscale_weights=True)
book/example/legacy/301_itin_mnl.ipynb
jpn--/larch
gpl-3.0
Now let's make our model. We'll use a few variables to define our linear-in-parameters utility function.
m = larch.Model(dataservice=d)

v = [
    "timeperiod==2",
    "timeperiod==3",
    "timeperiod==4",
    "timeperiod==5",
    "timeperiod==6",
    "timeperiod==7",
    "timeperiod==8",
    "timeperiod==9",
    "carrier==2",
    "carrier==3",
    "carrier==4",
    "carrier==5",
    "equipment==2",
    "fare_hy",
    "fare_ly",
    "elapsed_time",
    "nb_cnxs",
]
book/example/legacy/301_itin_mnl.ipynb
jpn--/larch
gpl-3.0
The larch.roles module defines a few convenient classes for declaring data and parameters. One we will use here is PX, which creates a linear-in-parameter term that represents one data element (a column from our data, or an expression that can be evaluated on the data alone) multiplied by a parameter with the same name.
from larch.roles import PX

m.utility_ca = sum(PX(i) for i in v)
m.choice_ca_var = 'choice'
book/example/legacy/301_itin_mnl.ipynb
jpn--/larch
gpl-3.0
Since we are estimating just an MNL model in this example, this is all we need to do to build our model, and we're ready to go. To estimate the likelihood-maximizing parameters, we run:
m.load_data()
m.maximize_loglike()

# TEST
result = _
from pytest import approx
assert result.loglike == approx(-777770.0688722526)
assert result.x['carrier==2'] == approx(0.11720047917232307)
assert result.logloss == approx(3.306873650593341)
book/example/legacy/301_itin_mnl.ipynb
jpn--/larch
gpl-3.0
TimeSide API

The TimeSide API is based on different core processing units called processors:

- Decoders (timeside.api.IDecoder), which decode a given audio source and split it up into frames for further processing
- Analyzers (timeside.api.IAnalyzer), which provide signal processing modules to analyze incoming audio frames
- Encoders (timeside.api.IEncoder), which can encode incoming frames back into an audio object
- Graphers (timeside.api.IGrapher), which can display representations of the signal or of the corresponding extracted features

Decoders
import timeside.core
from timeside.core import list_processors

list_processors(timeside.core.api.IDecoder)
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Analyzers
list_processors(timeside.core.api.IAnalyzer)
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Encoders
list_processors(timeside.core.api.IEncoder)
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Graphers
list_processors(timeside.core.api.IGrapher)
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Processors pipeline All these processors can be chained to form a processing pipeline. Let's first define a decoder that reads and decodes audio from a file
from timeside.core import get_processor
from timeside.core.tools.test_samples import samples

file_decoder = get_processor('file_decoder')(samples['C4_scale.wav'])
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
And then some other processors
# analyzers
pitch = get_processor('aubio_pitch')()
level = get_processor('level')()

# encoder
mp3 = get_processor('mp3_encoder')('/tmp/guitar.mp3', overwrite=True)

# graphers
specgram = get_processor('spectrogram_lin')()
waveform = get_processor('waveform_simple')()
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Let's now define a process pipeline with all these processors and run it
pipe = (file_decoder | pitch | level | mp3 | specgram | waveform)
pipe.run()
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Analyzers results are available through the pipe:
pipe.results.keys()
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
or from the analyzer:
pitch.results.keys()
pitch.results['aubio_pitch.pitch'].keys()
pitch.results['aubio_pitch.pitch']
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
Grapher results can also be displayed or saved to a file
imshow(specgram.render(), origin='lower')
imshow(waveform.render(), origin='lower')
waveform.render('/tmp/waveform.png')
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
And TimeSide can be embedded into a web page dynamically. For example, in Telemeta:
from IPython.display import HTML HTML('<iframe width=1300 height=260 frameborder=0 scrolling=no marginheight=0 marginwidth=0 src=http://demo.telemeta.org/archives/items/6/player/1200x170></iframe>')
docs/ipynb/01_Timeside_API.ipynb
Parisson/TimeSide
agpl-3.0
# TensorFlow Programming Concepts

Learning objectives:
* Learn the basics of the TensorFlow programming model, focusing on the following concepts:
  * tensors
  * operations
  * graphs
  * sessions
* Build a simple TensorFlow program that creates a default graph, and a session that runs the graph

Note: please read this tutorial carefully. The TensorFlow programming model is likely different from others you have encountered, so it may not be as intuitive as you'd expect.

## Overview of Concepts

TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. Using TensorFlow, you can manipulate tensors with a very large number of dimensions. That said, most of the time you will work with one or more of the following low-dimensional tensors:

A scalar is a 0-dimensional array (a 0th-order tensor). For example, 'Howdy' or 5
A vector is a 1-dimensional array (a 1st-order tensor). For example, [2, 3, 5, 7, 11] or [5]
A matrix is a 2-dimensional array (a 2nd-order tensor). For example, [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]

TensorFlow operations create, destroy, and manipulate tensors. Most of the lines of code in a typical TensorFlow program are operations.

A TensorFlow graph (also known as a computational graph or a dataflow graph) is a graph data structure. Many TensorFlow programs consist of a single graph, but a TensorFlow program may optionally create multiple graphs. A graph's nodes are operations; a graph's edges are tensors. Tensors flow through the graph, manipulated at each node by an operation. The output tensor of one operation often becomes the input tensor of a subsequent operation. TensorFlow implements a lazy execution model, meaning that nodes are only computed when needed, based on the needs of associated nodes.

Tensors can be stored in the graph as constants or variables. As you might guess, constants hold tensors whose values can't change, while variables hold tensors whose values can change. However, what you may not have guessed is that constants and variables are just more operations in the graph. A constant is an operation that always returns the same tensor value. A variable is an operation that returns whichever tensor has been assigned to it.

To define a constant, use the tf.constant operator and pass in its value. For example:

x = tf.constant([5.2])

Similarly, you can create a variable like this:

y = tf.Variable([5])

Or you can create the variable first and then assign a value like this (note that you always have to specify a default value):

y = tf.Variable([0])
y = y.assign([5])

Once you've defined some constants or variables, you can combine them with other operations like tf.add. When you evaluate the tf.add operation, it will call your tf.constant or tf.Variable operations to get their values and then return a new tensor with the sum of those values.

Graphs must run within a TensorFlow session, which holds the state for the graph(s) it runs:

with tf.Session() as sess:
  initialization = tf.global_variables_initializer()
  print(y.eval())

When working with tf.Variable, you must explicitly initialize the variables by calling tf.global_variables_initializer at the start of your session, as shown above.

Note: a session can distribute graph execution across multiple machines (assuming the program is run on a distributed computation framework). For more information, see Distributed TensorFlow.

Summary: TensorFlow programming is essentially a two-step process:

1. Assemble constants, variables, and operations into a graph.
2. Evaluate those constants, variables, and operations within a session.

## Creating a Simple TensorFlow Program

Let's look at how to code a simple TensorFlow program that adds two constants.

### Provide import statements

As with nearly all Python programs, you'll begin by adding some import statements. The set of import statements required to run a TensorFlow program depends, of course, on the features your program will access. At a minimum, you must add the import tensorflow statement in all TensorFlow programs:
import tensorflow as tf
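The text above describes creating variables, assigning to them, and initializing them in a session, but the cells that follow only use constants. Here is a minimal sketch of the variable workflow, using the same TensorFlow 1.x-style API as the rest of this tutorial:

```python
g = tf.Graph()
with g.as_default():
    y = tf.Variable([0], name="y")  # variable with an initial (default) value
    assign_op = y.assign([5])       # an operation that re-assigns the variable

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())  # required before using variables
        print(sess.run(y))   # -> [0]
        sess.run(assign_op)
        print(sess.run(y))   # -> [5]
```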
ml/cc/prework/zh-CN/tensorflow_programming_concepts.ipynb
google/eng-edu
apache-2.0
Don't forget to execute the preceding code block (the import statements).

Other common import statements include:

import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np              # Low-level numerical Python library.
import pandas as pd             # Higher-level numerical Python library.

TensorFlow provides a default graph. However, we recommend explicitly creating your own Graph so you can track its state (e.g., you may wish to work with a different Graph in each cell).
from __future__ import print_function

import tensorflow as tf

# Create a graph.
g = tf.Graph()

# Establish the graph as the "default" graph.
with g.as_default():
    # Assemble a graph consisting of the following three operations:
    #   * Two tf.constant operations to create the operands.
    #   * One tf.add operation to add the two operands.
    x = tf.constant(8, name="x_const")
    y = tf.constant(5, name="y_const")
    sum = tf.add(x, y, name="x_y_sum")

    # Now create a session.
    # The session will run the default graph.
    with tf.Session() as sess:
        print(sum.eval())
ml/cc/prework/zh-CN/tensorflow_programming_concepts.ipynb
google/eng-edu
apache-2.0
## Exercise: Introduce a Third Operand

Modify the code listing above to add three integers (instead of two):

1. Define a third scalar integer constant, z, and assign it a value of 4.
2. Add z to `sum` to yield a new sum.

Hint: see the API documentation for tf.add() for more details on its function signature.

Re-run the modified code block. Does the program generate the correct grand total?

### Solution

Click below to view the solution.
# Create a graph.
g = tf.Graph()

# Establish our graph as the "default" graph.
with g.as_default():
    # Assemble a graph consisting of three operations.
    # (Creating a tensor is an operation.)
    x = tf.constant(8, name="x_const")
    y = tf.constant(5, name="y_const")
    sum = tf.add(x, y, name="x_y_sum")

    # Task 1: Define a third scalar integer constant z.
    z = tf.constant(4, name="z_const")

    # Task 2: Add z to `sum` to yield a new sum.
    new_sum = tf.add(sum, z, name="x_y_z_sum")

    # Now create a session.
    # The session will run the default graph.
    with tf.Session() as sess:
        # Task 3: Ensure the program yields the correct grand total.
        print(new_sum.eval())
ml/cc/prework/zh-CN/tensorflow_programming_concepts.ipynb
google/eng-edu
apache-2.0
2. Load the data
data = pd.read_csv("loan.csv", low_memory=False)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
a. Data reduction for computation Previous attempts to create the model matrix below crashed the kernel, so I'm going to reduce the dataset size to make it computationally tractable by selecting a small random sample (5%, per the code below) from the original dataset.
# 5% of the data without replacement
data = data.sample(frac=0.05, replace=False, random_state=123)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
3. Explore the data visually and with descriptive methods
data.shape
data.head(n=5)
data.columns
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
The loan_status column is the target! a. How many classes are there?
pd.unique(data['loan_status'].values.ravel())
print("Amount of Classes: ", len(pd.unique(data['loan_status'].values.ravel())))

len(pd.unique(data['zip_code'].values.ravel()))  # want to make sure this was not too unique
len(pd.unique(data['url'].values.ravel()))       # drop url
len(pd.unique(data['last_pymnt_d'].values.ravel()))
len(pd.unique(data['next_pymnt_d'].values.ravel()))

for col in data.select_dtypes(include=['object']).columns:
    print("Column {} has {} unique instances".format(col, len(data[col].unique())))
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
b. Are there unique customers in the data or repeats?
len(pd.unique(data['member_id'].values.ravel())) == data.shape[0]
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
c. Drop some of the junk variables (id, member_id, ...) Reasons: High Cardinality pre-pre-processing 😃
data = data.drop('id', 1)                #
data = data.drop('member_id', 1)         #
data = data.drop('url', 1)               #
data = data.drop('purpose', 1)
data = data.drop('title', 1)             #
data = data.drop('zip_code', 1)          #
data = data.drop('emp_title', 1)         #
data = data.drop('earliest_cr_line', 1)  #
data = data.drop('term', 1)
data = data.drop('sub_grade', 1)         #
data = data.drop('last_pymnt_d', 1)      #
data = data.drop('next_pymnt_d', 1)      #
data = data.drop('last_credit_pull_d', 1)
data = data.drop('issue_d', 1)           ##
data = data.drop('desc', 1)              ##
data = data.drop('addr_state', 1)        ##

data.shape  # yay this is better

for col in data.select_dtypes(include=['object']).columns:
    print("Column {} has {} unique instances".format(col, len(data[col].unique())))
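An equivalent, arguably tidier form of the cell above would pass the whole list to a single drop call; a sketch (the column list mirrors the one above, so it is an alternative to, not a follow-up of, that cell):

```python
cols_to_drop = ['id', 'member_id', 'url', 'purpose', 'title', 'zip_code',
                'emp_title', 'earliest_cr_line', 'term', 'sub_grade',
                'last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d',
                'issue_d', 'desc', 'addr_state']
data = data.drop(columns=cols_to_drop)
```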
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
d. Exploratory Data Analysis: What is the distribution of the loan amount? In general the loan amount was usually under $15,000
data['loan_amnt'].plot(kind="hist", bins=10)
data['grade'].value_counts().plot(kind='bar')
data['emp_length'].value_counts().plot(kind='bar')
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
e. What is the distribution of target class? Most of this dataset the loans are in a current state (in-payment?), or Fully paid off Looks like a Poisson Distribution?!
data['loan_status'].value_counts().plot(kind='bar')
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
f. What are the numeric columns? For pre-processing and scaling
data._get_numeric_data().columns "There are {} numeric columns in the data set".format(len(data._get_numeric_data().columns) )
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
g. What are the character columns? For one-hot encoding into a model matrix
data.select_dtypes(include=['object']).columns "There are {} Character columns in the data set (minus the target)".format(len(data.select_dtypes(include=['object']).columns) -1)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
4. Pre-processing the data a. Remove the target from the entire dataset
X = data.drop("loan_status", axis=1, inplace = False) y = data.loan_status y.head()
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
b. Transform the data into a model matrix with one-hot encoding isolate the variables of char class
def model_matrix(df, columns):
    dummified_cols = pd.get_dummies(df[columns])
    df = df.drop(columns, axis=1, inplace=False)
    df_new = df.join(dummified_cols)
    return df_new

X = model_matrix(X, ['grade', 'emp_length', 'home_ownership', 'verification_status',
                     'pymnt_plan', 'initial_list_status', 'application_type',
                     'verification_status_joint'])
# 'issue_d' 'desc' 'addr_state'

X.head()
X.shape
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
c. Scale the continuous variables use min max calculation
# impute rows with NaN with a 0 for now
X2 = X.fillna(value=0)
X2.head()

from sklearn.preprocessing import MinMaxScaler

Scaler = MinMaxScaler()

num_cols = ['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'int_rate', 'installment',
            'annual_inc', 'dti', 'delinq_2yrs', 'inq_last_6mths', 'mths_since_last_delinq',
            'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util',
            'total_acc', 'out_prncp', 'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv',
            'total_rec_prncp', 'total_rec_int', 'total_rec_late_fee', 'recoveries',
            'collection_recovery_fee', 'last_pymnt_amnt', 'collections_12_mths_ex_med',
            'mths_since_last_major_derog', 'policy_code', 'annual_inc_joint', 'dti_joint',
            'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m', 'open_il_6m',
            'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'il_util',
            'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim',
            'inq_fi', 'total_cu_tl', 'inq_last_12m']

X2[num_cols] = Scaler.fit_transform(X2[num_cols])

X2.head()
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
d. Partition the data into train and testing
x_train, x_test, y_train, y_test = train_test_split(X2, y, test_size=.3, random_state=123)

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
5. Building the k Nearest Neighbor Classifier experiment with different values for neighbors
# start out with the number of classes for neighbors
data_knn = KNeighborsClassifier(n_neighbors=10, metric='euclidean')
data_knn

data_knn.fit(x_train, y_train)
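The heading above suggests experimenting with the number of neighbors; a small sketch of that experiment, re-using the train/test split from above (the candidate k values are arbitrary):

```python
# Try a few values of k and compare train/test accuracy to pick a reasonable one.
for k in [3, 5, 10, 25, 50]:
    knn_k = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
    knn_k.fit(x_train, y_train)
    print("k={:2d}  train acc={:.3f}  test acc={:.3f}".format(
        k, knn_k.score(x_train, y_train), knn_k.score(x_test, y_test)))
```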
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
a. predict on the test data using the knn model created above
data_knn.predict(x_test)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
b. Evaluating the classifier model using the score method (note: for a classifier, .score() returns mean accuracy, not R-squared)
# scores from training and test data
# (note: for a classifier, .score() returns mean accuracy, not R-squared)
rsquared_train = data_knn.score(x_train, y_train)
rsquared_test = data_knn.score(x_test, y_test)

print('Training data R-squared:')
print(rsquared_train)
print('Test data R-squared:')
print(rsquared_test)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
c. Confusion Matrix
# confusion matrix
from sklearn.metrics import confusion_matrix

knn_confusion_matrix = confusion_matrix(y_true=y_test, y_pred=data_knn.predict(x_test))
print("The Confusion matrix:\n", knn_confusion_matrix)

# visualize the confusion matrix
# http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
plt.matshow(knn_confusion_matrix, cmap=plt.cm.Blues)
plt.title("KNN Confusion Matrix\n")
#plt.xticks([0,1], ['No', 'Yes'])
#plt.yticks([0,1], ['No', 'Yes'])
plt.ylabel('True label')
plt.xlabel('Predicted label')

for y in range(knn_confusion_matrix.shape[0]):
    for x in range(knn_confusion_matrix.shape[1]):
        plt.text(x, y, '{}'.format(knn_confusion_matrix[y, x]),
                 horizontalalignment='center',
                 verticalalignment='center',)
plt.show()
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
d. Classification Report
# Generate the classification report
from sklearn.metrics import classification_report

knn_classify_report = classification_report(y_true=y_test, y_pred=data_knn.predict(x_test))
print(knn_classify_report)
post_data/final_project_jasmine_dumas.ipynb
jasdumas/jasdumas.github.io
mit
semantic_version 2.4.2 : Python Package Index
list(apply_to_repos(repo_version,kwargs={'version_type':'patch'},repos=all_repos))
rebuild_travis_on_repos.ipynb
rdhyee/nypl50
apache-2.0
templates

template path? variables to fill:
- epub_title
- encrypted_key
- repo_name
def new_travis_template(repo, template, write_template=False): """ compute (and optionally write) .travis.yml based on the template and current metadata.yaml """ template_written = False sh.cd(os.path.join(GITENBERG_DIR, repo)) metadata_path = os.path.join(GITENBERG_DIR, repo, "metadata.yaml") travis_path = os.path.join(GITENBERG_DIR, repo, ".travis.yml") travis_api_key_path = os.path.join(GITENBERG_DIR, repo, ".travis.deploy.api_key.txt") md = metadata.pandata.Pandata(metadata_path) epub_title = slugify(md.metadata.get("title")) encrypted_key = open(travis_api_key_path).read().strip() repo_name = md.metadata.get("_repo") template_vars = { 'epub_title': epub_title, 'encrypted_key': encrypted_key, 'repo_name': repo_name } template_result = template.render(**template_vars) if write_template: with open(travis_path, "w") as f: f.write(template_result) template_written = True return (template_result, template_written) from itertools import izip template = template = travis_template() results = list(izip(all_repos, apply_to_repos(new_travis_template, kwargs={'template':template}, repos=all_repos))) [result for result in results if isinstance(result[1], Exception) ] import os import yaml import pdb def commit_travis_api_key_and_update_travis(repo, template, write_updates=False): """ create .travis.deploy.api_key.txt and update .travis.yml; do git commit """ sh.cd(os.path.join(GITENBERG_DIR, repo)) metadata_path = os.path.join(GITENBERG_DIR, repo, "metadata.yaml") travis_path = os.path.join(GITENBERG_DIR, repo, ".travis.yml") travis_api_key_path = os.path.join(GITENBERG_DIR, repo, ".travis.deploy.api_key.txt") # git add .travis.deploy.api_key.txt if write_updates: sh.git.add(travis_api_key_path) # read the current metadata file and replace current_ver with next_ver (v0, v1, v_updated) = repo_version(repo, version_type='patch', write_version=write_updates) if v_updated: sh.git.add(metadata_path) # write new .travis.yml (new_template, template_written) = new_travis_template(repo, template, write_template=write_updates) if template_written: sh.git.add(travis_path) if write_updates: sh.git.commit("-m", "add .travis.deploy.api_key.txt; updated .travis.yml") # add tag if v_updated: sh.git.tag(v1) sh.git.push("origin", "master", "--tags") return True else: return False problem_repos = ('The-Picture-of-Dorian-Gray_174', 'The-Hunchback-of-Notre-Dame_6539', 'Divine-Comedy-Longfellow-s-Translation-Hell_1001', 'The-Works-of-Edgar-Allan-Poe-The-Raven-EditionTable-Of-Contents-And-Index-Of-The-Five-Volumes_25525' ) repos = all_repos[36:][0:] repos = [repo for repo in repos if repo not in problem_repos] repos template = travis_template() # I wish there would be a way to figure out variables in a template from jinja2...but I don't see a way. results = list(apply_to_repos(commit_travis_api_key_and_update_travis, kwargs={'template':template, 'write_updates':True}, repos=repos)) results import requests def url_status(url): r = requests.get(url, allow_redirects=True, stream=True) return r.status_code def repo_epub_status(repo): return url_status(latest_epub(repo)) list(izip(repos, apply_to_repos(repo_epub_status, repos=repos))) results = list(izip(all_repos, apply_to_repos(repo_epub_status, repos=all_repos))) results ok_repos = [result[0] for result in results if result[1] == 200 ] not_ok_repos = [result[0] for result in results if result[1] <> 200 ] len(ok_repos), len(not_ok_repos) for (i, repo) in enumerate(ok_repos): print (i+1, "\t", repo, "\t", latest_epub(repo)) not_ok_repos
rebuild_travis_on_repos.ipynb
rdhyee/nypl50
apache-2.0
Divine Comedy

Divine-Comedy-Longfellow-s-Translation-Hell_1001 / /Users/raymondyee/C/src/gitenberg/Divine-Comedy-Longfellow-s-Translation-Hell_1001: there is a book.asciidoc but no .travis.yml. Let's do this by hand and document the process...

template
from second_folio import TRAVIS_TEMPLATE_URL

repo = "Divine-Comedy-Longfellow-s-Translation-Hell_1001"
title = "Divine Comedy, Longfellow's Translation, Hell"

slugify(title)
rebuild_travis_on_repos.ipynb
rdhyee/nypl50
apache-2.0
Problem 1) Introduction to scikit-learn At the most basic level, scikit-learn makes machine learning extremely easy within python. By way of example, here is a short piece of code that builds a complex, non-linear model to classify sources in the Iris data set that we learned about earlier: from sklearn import datasets from sklearn.ensemble import RandomForestClassifier iris = datasets.load_iris() RFclf = RandomForestClassifier().fit(iris.data, iris.target) Those 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space. This can be fast as well: execute the dummy code in the cell below to see how "easy" machine-learning is with scikit-learn.
# execute dummy code here from sklearn import datasets from sklearn.ensemble import RandomForestClassifier iris = datasets.load_iris() RFclf = RandomForestClassifier().fit(iris.data, iris.target)
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Generally speaking, the procedure for scikit-learn is uniform across all machine-learning algorithms. Models are accessed via the various modules (ensemble, SVM, neighbors, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, X, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, X, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array y. Unsupervised models are fit by calling .fit(X) and supervised models are fit by calling .fit(X, y). In both cases, predictions for new observations, Xnew, can be obtained by calling .predict(Xnew). Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within scikit-learn is excellent, so read the docs. To further develop our intuition, we will now explore the Iris dataset a little further. Problem 1a What is the pythonic type of iris?
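To make the fit/predict pattern described above concrete before the exercises, here is a minimal sketch using the iris data already loaded (the new observation is made up):

```python
from sklearn.neighbors import KNeighborsClassifier

# Fit a supervised model on the features X = iris.data and labels y = iris.target ...
clf = KNeighborsClassifier(n_neighbors=5).fit(iris.data, iris.target)

# ... then predict the class of a new observation (sepal/petal measurements in cm).
Xnew = [[5.0, 3.5, 1.4, 0.2]]
print(clf.predict(Xnew))
```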
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
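One possible answer to Problem 1a, together with a small sketch of the uniform fit/predict pattern described above (the choice of estimator here is arbitrary and purely illustrative):

```python
# Problem 1a: iris is a scikit-learn Bunch (the exact module path depends on the sklearn version)
print(type(iris))

# The generic supervised pattern: .fit(X, y) then .predict(Xnew)
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5).fit(iris.data, iris.target)
Xnew = iris.data[:2]          # stand-in for "new" observations
print(knn.predict(Xnew))
```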
You likely haven't encountered a scikit-learn Bunch before. Its functionality is essentially the same as that of a dictionary.

Problem 1b

What are the keys of iris?
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
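A possible solution sketch for Problem 1b:

```python
# the Bunch behaves like a dictionary, so .keys() works
print(iris.keys())
# typically includes 'data', 'target', 'target_names', 'feature_names', and a description
```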
Most importantly, iris contains data and target values. These are all you need for scikit-learn, though the feature and target names and description are useful.

Problem 1c

What is the shape and content of the iris data?
print( # complete

# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
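A possible solution sketch for Problem 1c:

```python
print(iris.data.shape)   # (150, 4): 150 sources, each with 4 measured features
print(iris.data[:5])     # a peek at the first few rows of the feature array
```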
Problem 1d

What is the shape and content of the iris target?
print( # complete

# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
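A possible solution sketch for Problem 1d (assumes numpy is imported as np, as it is elsewhere in this notebook):

```python
print(iris.target.shape)        # (150,): one integer label per source
print(np.unique(iris.target))   # [0 1 2] -> the three iris classes
```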
Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms.

Problem 1e

Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.
print(iris.feature_names)  # shows that sepal length is the first feature and sepal width is the second

plt.scatter( # complete
# complete
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
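A possible solution sketch for Problem 1e (assumes matplotlib.pyplot is imported as plt, as it is elsewhere in this notebook):

```python
plt.scatter(iris.data[:, 0], iris.data[:, 1],
            c=iris.target, s=30, edgecolor="None", cmap="viridis")
plt.xlabel("sepal length (cm)")
plt.ylabel("sepal width (cm)")
```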
Problem 2) Unsupervised Machine Learning

Unsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The "unsupervised" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data "on its own." The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible).

For this reason [note - this is my (AAM) opinion and there may be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system, and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era.

To begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user-defined number. And herein lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following:

    initiate search by identifying k points (i.e. the cluster centers)
    loop
        assign each point in the data set to the closest cluster center
        calculate new cluster centers based on mean position of all points within cluster
        if diff(new center - old center) < threshold:
            stop (i.e. clusters are defined)

The threshold is defined by the user, though in some cases the maximum number of iterations is as well. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e. it is difficult to capture complex geometry, and the curse of dimensionality.

In scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module.

Problem 2a

Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications?
from sklearn.cluster import KMeans

Kcluster = KMeans( # complete
Kcluster.fit( # complete

plt.figure()
plt.scatter( # complete
# complete
# complete
# complete
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
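A possible solution sketch for Problem 2a (again assuming plt as above):

```python
from sklearn.cluster import KMeans

# fit and plot a 2-cluster and a 3-cluster model
for n_clusters in (2, 3):
    km = KMeans(n_clusters=n_clusters).fit(iris.data)
    plt.figure()
    plt.scatter(iris.data[:, 0], iris.data[:, 1],
                c=km.labels_, s=30, edgecolor="None", cmap="viridis")
    plt.title("k-means with {} clusters".format(n_clusters))
    plt.xlabel("sepal length")
    plt.ylabel("sepal width")
```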
With 3 clusters the algorithm does a good job of separating the three classes. However, without a priori knowledge that there are 3 different types of iris, the 2-cluster solution would appear to be superior.

Problem 2b

How do the results change if the 3-cluster model is called with the n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point].

*Note - the defaults for these two parameters are 10 and k-means++, respectively. Read the docs to see why those defaults are, likely, better than the choices made in 2b.
rs = 14

Kcluster1 = KMeans( # complete
# complete
# complete
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
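A possible solution sketch for Problem 2b (the exact clusters obtained depend on the random state):

```python
Kcluster1 = KMeans(n_clusters=3, n_init=1, init="random", random_state=rs).fit(iris.data)

plt.figure()
plt.scatter(iris.data[:, 0], iris.data[:, 1],
            c=Kcluster1.labels_, s=30, edgecolor="None", cmap="viridis")
plt.xlabel("sepal length")
plt.ylabel("sepal width")
```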
A random aside that is not particularly relevant here

$k$-means evaluates the Euclidean distance between individual sources and cluster centers; thus, the magnitude of the individual features has a strong effect on the final clustering outcome.

Problem 2c

Calculate the mean, standard deviation, min, and max of each feature in the iris data set. Based on these summaries, which feature is most important for clustering?
print("feature\t\t\tmean\tstd\tmin\tmax") for featnum, feat in enumerate(iris.feature_names): print("{:s}\t{:.2f}\t{:.2f}\t{:.2f}\t{:.2f}".format(feat, np.mean(iris.data[:,featnum]), np.std(iris.data[:,featnum]), np.min(iris.data[:,featnum]), np.max(iris.data[:,featnum])))
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Petal length has the largest range and standard deviation, and thus it will have the most "weight" when determining the $k$ clusters.

The truth is that the iris data set is fairly small and straightforward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\sim 0.1 \; \mathrm{d}$ and Mira variables with periods of $\gg 100 \; \mathrm{d}$. Without re-scaling, this feature, which covers 4 orders of magnitude, may dominate all others in the final model projections.

The two most common forms of re-scaling are to rescale to a Gaussian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently; this would result in meaningless final classifications/predictions.

Problem 2d

Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier?

Hint - you may find StandardScaler() within the sklearn.preprocessing module useful.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit( # complete

# complete
# complete
# complete
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
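A possible solution sketch for Problem 2d:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# fit the scaler on the features, then cluster the re-scaled data
scaler = StandardScaler().fit(iris.data)
Kcluster_scaled = KMeans(n_clusters=3).fit(scaler.transform(iris.data))

plt.figure()
plt.scatter(iris.data[:, 0], iris.data[:, 1],
            c=Kcluster_scaled.labels_, s=30, edgecolor="None", cmap="viridis")
plt.xlabel("sepal length")
plt.ylabel("sepal width")
```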
These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set.

How do I test the accuracy of my clusters?

Essentially - you don't. There are some methods available, but they essentially compare clusters to labeled samples, and if the samples are labeled it is likely that supervised learning is more useful anyway. If you are curious, scikit-learn does provide some built-in functions for analyzing clustering, but again, it is difficult to evaluate the validity of any newly discovered clusters.

What if I don't know how many clusters are present in the data?

An excellent question, as you will almost never know this a priori. Many algorithms, like $k$-means, do require the number of clusters to be specified, but some other methods do not. One example is DBSCAN. In brief, DBSCAN requires two parameters: minPts, the minimum number of points necessary for a cluster, and $\epsilon$, a distance measure. Clusters are grown by identifying core points, objects that have at least minPts located within a distance $\epsilon$. Reachable points are those within a distance $\epsilon$ of at least one core point but fewer than minPts core points; effectively, these points define the outskirts of the clusters. Finally, there are also outliers, which are points that are $> \epsilon$ away from any core points. Thus, DBSCAN naturally identifies clusters, does not assume clusters are convex, and even provides a notion of outliers. The downsides to the algorithm are that the results are highly dependent on the two tuning parameters, and that clusters of highly different densities can be difficult to recover (because a single $\epsilon$ and minPts is specified for all clusters).

In scikit-learn the DBSCAN algorithm is part of the sklearn.cluster module. $\epsilon$ and minPts are set by eps and min_samples, respectively.

Problem 2e

Cluster the iris data using DBSCAN. Play around with the tuning parameters to see how they affect the final clustering results. How does the use of DBSCAN compare to $k$-means? Can you obtain 3 clusters with DBSCAN? If not, given the knowledge that the iris dataset has 3 classes, does this invalidate DBSCAN as a viable algorithm?

Note - DBSCAN labels outliers as $-1$, and thus plt.scatter() will plot all these points as the same color.
# execute this cell

from sklearn.cluster import DBSCAN

dbs = DBSCAN(eps = 0.7, min_samples = 7)
dbs.fit(scaler.transform(iris.data))  # best to use re-scaled data since eps is in absolute units

dbs_outliers = dbs.labels_ == -1

plt.figure()
plt.scatter(iris.data[:,0], iris.data[:,1],
            c = dbs.labels_, s = 30, edgecolor = "None", cmap = "viridis")
plt.scatter(iris.data[:,0][dbs_outliers], iris.data[:,1][dbs_outliers],
            s = 30, c = 'k')

plt.xlabel('sepal length')
plt.ylabel('sepal width')
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
I was unable to obtain 3 clusters with DBSCAN. While these results are, on the surface, worse than what we got with $k$-means, my suspicion is that the 4 features do not adequately separate the 3 classes. [See - a naysayer can always make that argument.] This is not a problem for DBSCAN as an algorithm, but rather evidence that no single algorithm works well in all cases.

Challenge Problem) Cluster SDSS Galaxy Data

The following query will select 10k likely galaxies from the SDSS database and return the results of that query into an astropy.Table object. (For now, if you are not familiar with the SDSS DB schema, don't worry about this query; just know that it returns a bunch of photometric features.)
from astroquery.sdss import SDSS  # enables direct queries to the SDSS database

GALquery = """SELECT TOP 10000
             p.dered_u - p.dered_g as ug, p.dered_g - p.dered_r as gr,
             p.dered_g - p.dered_i as gi, p.dered_g - p.dered_z as gz,
             p.petroRad_i, p.petroR50_i, p.deVAB_i
             FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
             WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND p.type = 3
          """
SDSSgals = SDSS.query_sql(GALquery)
SDSSgals
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful, feel free to add them to the query.

One nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example:

    X = np.array(SDSSgals.to_pandas())

And you are ready to go.

Challenge Problem

Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended; if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution?

Hint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified.
# complete
Sessions/Session04/Day0/TooBriefMachLearn.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
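One possible starting point for the challenge problem, offered as a hedged sketch rather than a definitive analysis: rescale the features and run $k$-means with a small number of clusters. The choice of 2 clusters and the color-color projection below are guesses for illustration, not results from the original notebook; plt is assumed to be matplotlib.pyplot, as above.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# turn the astropy Table into a plain feature array, as suggested above
Xgal = np.array(SDSSgals.to_pandas())
Xgal_scaled = StandardScaler().fit_transform(Xgal)

# 2 clusters is a guess (perhaps red-sequence vs. blue-cloud galaxies?)
gal_km = KMeans(n_clusters=2).fit(Xgal_scaled)

# project onto the first two colors (columns 0 and 1 are u-g and g-r in the query above)
plt.figure()
plt.scatter(Xgal[:, 1], Xgal[:, 0], c=gal_km.labels_,
            s=10, edgecolor="None", cmap="viridis", alpha=0.5)
plt.xlabel("g - r")
plt.ylabel("u - g")
```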