A multiline string (docstring) can be used to describe a function; it can be accessed through the __doc__ attribute.
print(my_func.__doc__)
this is power function
A function can return more than one value.
def my_function(a):
    x = a*2
    y = a+2
    return x, y

variable1 = my_function(5)
print(variable1)
variable1, variable2 = my_function(5)  # the returned tuple can also be unpacked into separate variables
print(variable2)
(10, 7) 7
A function can have anywhere from zero to many arguments.
def my_formula(a, b, c):
    y = (a*b) + c
    return y
Positional arguments
my_formula(2,3,4)
Keyword arguments
my_formula(c=4, a=2, b=3)
You can pass both positional and keyword arguments to a function, but the positional arguments must always come first.
my_formula(4, c=4, b=3)
Default arguments. These are arguments that are assigned a value when the function is declared; if they are not specified in the call, they take the default value.
def my_formula(a, b, c=3):
    y = (a*b) + c
    return y

my_formula(2, 3, c=6)
Arbitrary arguments, \*args: If you do not know how many arguments will be passed into your function, add a * before the argument name in the function definition. The function will receive a tuple of arguments, which can be accessed accordingly:
def greeting(*args):
    greeting = f'Hi to {", ".join(args[:-1])} and {args[-1]}'
    print(greeting)

greeting('Joe', 'Ben', 'Bobby')
Hi to Joe, Ben and Bobby
Arbitrary keyword arguments, \**kwargs: If you do not know how many keyword arguments will be passed into your function, add two asterisks (**) before the parameter name in the function definition. This way the function will receive a dictionary of arguments and can access the items accordingly.
def list_names(**kwargs):
    for key, value in kwargs.items():
        print(f'{key} is: {value}')

list_names(first_name='Jonny', family_name='Walker')
list_names(primer_nombre='Jose', segundo_nombre='Maria', primer_apellido='Peréz', segundo_apellido='García')
primer_nombre is: Jose segundo_nombre is: Maria primer_apellido is: Peréz segundo_apellido is: García
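The two mechanisms can be combined in a single function signature; a minimal sketch (not from the original notebook): positional arguments land in the args tuple and keyword arguments in the kwargs dictionary.

def describe(*args, **kwargs):
    # args collects the positional arguments as a tuple
    print('positional:', args)
    # kwargs collects the keyword arguments as a dictionary
    print('keyword:', kwargs)

describe(1, 2, name='Ada', year=1843)
# positional: (1, 2)
# keyword: {'name': 'Ada', 'year': 1843}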
Scope of the function. The scope of a function is what the function can see and use. A function can use all global variables if no local variable with the same name is assigned.
a = 'Hello'

def my_function():
    print(a)

my_function()
Hello
If there is a local variable with the same name, the function will use the local one.
a = 'Hello'

def my_function():
    a = 'Hi'
    print(a)

my_function()

a = 'Hello'

def my_function():
    print(a)  # raises UnboundLocalError: 'a' is assigned later in this function, so Python treats it as local
    a = 'Hi'

my_function()
This is important because it prevents us from accidentally changing global variables inside a function:
a = 'Hello'

def change_a():
    a = a + 'Hi'  # raises UnboundLocalError: 'a' is treated as local because it is assigned here

change_a()
print(a)
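If you genuinely need to change a global variable inside a function, Python provides the global statement; a minimal sketch:

a = 'Hello'

def change_a():
    global a  # declare that 'a' refers to the global variable, not a new local one
    a = a + ' Hi'

change_a()
print(a)  # Hello Hi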
A function cannot access local variables from another function.
def my_function():
    b = 'Hi'
    print(a)

def my_other_function():
    print(b)  # raises NameError: 'b' is local to my_function

my_other_function()
Local variables cannot be accessed from the global environment.
print(b)
Similar to variables, you can use functions from the global environment or define them inside a parent function.
def add_function(a, b):
    result = a + b
    return result

def formula_function(a, b, c):
    result = add_function(a, b) * c
    return result

print(formula_function(2, 3, 4))
20
We can use the result of one function as an argument for another.
print(formula_function(add_function(4,5), 3, 2))
24
We can pass a function as an argument to another function or return a function from another function; Python also has anonymous (lambda) functions (a sketch follows the next cell). Recursive functions: a recursive function is a function that uses (calls) itself.
def factorial(x):
    """This is a recursive function to find the factorial of an integer (factorial(4) = 4*3*2*1)"""
    if x == 1:
        return 1
    else:
        result = x * factorial(x-1)
        return result

factorial(5)

# Pseudocode sketch: a recursive retry that calls itself until a request succeeds
# (request() is a placeholder, not a real function)
def extract(url):
    result = request(url)
    if result is None:
        time.sleep(360)
        result = extract(url)
    return result
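Since lambda (anonymous) functions were mentioned above without an example, here is a minimal sketch:

square = lambda x: x ** 2  # a lambda is an anonymous, single-expression function
print(square(4))  # 16

# lambdas are most useful as short throwaway arguments, e.g. as a sort key:
names = ['Bobby', 'Jo', 'Ben']
print(sorted(names, key=lambda name: len(name)))  # ['Jo', 'Ben', 'Bobby']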
Special functions (range, enumerate, zip). The range() function creates a sequence of numbers.
my_range = range(5)
print(my_range)

my_list = list(range(2, 10, 2))
my_list

my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
for i in range(3, len(my_list), 2):
    print(my_list[i])

range_list = list(range(10))
print(range_list)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
The enumerate() function creates an index for iterables.
import time

my_list = list(range(10))
my_second_list = []
for index, value in enumerate(my_list):
    time.sleep(1)
    my_second_list.append(value+2)
    print(f'{index+1} from {len(my_list)}')

print(my_second_list)
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
The zip() function aggregates items from several iterables into tuples.
list1 = [2, 4, 6, 7, 8]
list2 = ['a', 'b', 'c', 'd', 'e']
for item1, item2 in zip(list1, list2):
    print(f'item1 is:{item1} and item2 is: {item2}')
item1 is:2 and item2 is: a item1 is:4 and item2 is: b item1 is:6 and item2 is: c item1 is:7 and item2 is: d item1 is:8 and item2 is: e
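Worth knowing: zip() stops at the shortest input. If you instead want the longer iterable padded, the standard library provides itertools.zip_longest; a minimal sketch:

from itertools import zip_longest

list1 = [2, 4, 6]
list2 = ['a', 'b', 'c', 'd', 'e']
for item1, item2 in zip_longest(list1, list2, fillvalue=None):
    print(item1, item2)  # prints None for item1 once list1 is exhausted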
Iterator objects
string = 'abc'
it = iter(string)
it
next(it)
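Each call to next() returns the following item; when the iterator is exhausted it raises StopIteration, which is exactly the signal a for loop uses to stop. A minimal sketch:

it = iter('ab')
print(next(it))  # a
print(next(it))  # b
try:
    next(it)
except StopIteration:
    print('iterator exhausted')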
I/O: working with files, the working directory, and projects. I/O = Input/Output: loading data into Python and getting data out of Python. Keyboard input: the input() function.
user_input = input("Enter your input: ")  # renamed from 'str' to avoid shadowing the built-in str type
print("Received input is : " + user_input)
Enter your input: Hi! Received input is : Hi!
Console output: the print() function.
print('Console output')
Console output
Working with text files: the open() function.

open(file_name[, access_mode][, buffering])

file_name = a string such as 'C:/temp/my_file.txt'. access_mode = a string such as 'r', 'rb', 'w', etc.:
1. r = Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode.
2. rb = Opens a file for reading only in binary format.
3. r+ = Opens a file for both reading and writing.
4. rb+ = Opens a file for both reading and writing in binary format.
5. w = Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
6. wb = Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
7. w+ = Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.
8. wb+ = Same as w+, in binary format.
9. a = Opens a file for appending. The file pointer is at the end of the file if the file exists. If the file does not exist, it creates a new file for writing.
10. ab = Same as a, in binary format.
11. a+ = Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. If the file does not exist, it creates a new file for reading and writing.
12. ab+ = Same as a+, in binary format.
txt_file = open('C:/temp/python test/txt_file.txt', 'w')
txt_file.write('some text')
txt_file.close()

txt_file = open('C:/temp/python test/txt_file.txt', 'r')
text = txt_file.read()
txt_file.close()
print(text)

txt_file = open('C:/temp/python test/txt_file.txt', 'a')
txt_file.write('\nsome more text')
txt_file.close()

txt_file = open('C:/temp/python test/txt_file.txt', 'r')
txt_lines = txt_file.readlines()
print(type(txt_lines))
txt_file.close()
print(txt_lines)

txt_file = open('C:/temp/python test/txt_file.txt', 'r')
txt_line = txt_file.readline()
print(txt_line)
txt_line2 = txt_file.readline()
print(txt_line2)
some text some more text
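As an aside, the idiomatic way to guarantee that a file is closed, even if an error occurs while working with it, is the with statement; a minimal sketch using the same file as above:

with open('C:/temp/python test/txt_file.txt', 'r') as txt_file:
    text = txt_file.read()  # the file is closed automatically when the block ends
print(text)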
Deleting files requires the os library. This library is part of Python but is not loaded by default, so to use it we must import it.
import os

os.remove('C:/temp/python test/txt_file.txt')

if os.path.exists('C:/temp/python test/txt_file.txt'):
    os.remove('C:/temp/python test/txt_file.txt')
else:
    print('The file does not exist')
The file does not exist
Removing directories with os.rmdir(). To delete a directory with os.rmdir(), the directory must be empty; we can check what is inside a directory with os.listdir() or os.walk().
os.listdir('C:/temp/python test/')

os.walk('C:/temp/python test/')
for item in os.walk('C:/temp/python test/'):
    print(item[0])
    print(item[1])
    print(item[2])
C:/temp/python test/ ['test dir'] ['test file.txt', 'txt_file.txt'] C:/temp/python test/test dir [] []
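The inverse operations exist as well: os.mkdir() creates a single directory and os.makedirs() creates intermediate directories as needed; a minimal sketch (the directory names are only examples):

os.mkdir('C:/temp/python test/new dir')                  # the parent directory must already exist
os.makedirs('C:/temp/python test/a/b/c', exist_ok=True)  # creates the whole chain; no error if it already exists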
Rename file or directory
os.rename('C:/temp/python test/test file.txt', 'C:/temp/python test/test file renamed.txt')
os.listdir('C:/temp/python test/')
Open folder or file in Windows with the associated program
os.startfile('C:/temp/python test/test file renamed.txt')
Working directory
import os

os.getcwd()
os.chdir('C:/temp/python test/')
os.getcwd()
os.listdir()
Projects

A project is a folder organising your files; the top level is your working directory. Good practices for organising your projects:
1. Create a separate folder for your Python (.py) files; name this folder without spaces (e.g. py_files or python_files).
2. Add to your py_files folder a file called \_\_init\_\_.py; this is an empty Python file that allows you to import all files in this folder as packages.
3. It is a good idea to make your project folder a git repository so you can track your changes.
4. Put all your source files and result files in your project directory.

Packages

Packages (or libraries) are Python files with objects and functions that you can use. Some of them are installed with Python and are part of the programming language; others have to be installed separately.

Package managers

Package managers help you install, update and uninstall packages.

pip package manager. This is the default Python package manager.
* pip install package_name==version - install a package
* pip freeze - get the list of installed packages
* pip freeze > requirements.txt - save the list of installed packages as a requirements.txt file
* pip install -r requirements.txt - install all packages from a requirements.txt file

conda package manager. This is used by Anaconda distributions of Python.

The Python Standard Library - packages included in Python. [Full list](https://docs.python.org/3/library/)
* os - Miscellaneous operating system interfaces
* time — Time access and conversions
* datetime — Basic date and time types
* math — Mathematical functions
* random — Generate pseudo-random numbers
* statistics — Mathematical statistics functions
* shutil — High-level file operations
* pickle — Python object serialization
* logging — Logging facility for Python
* tkinter — Python interface to Tcl/Tk (creating UIs)
* venv — Creation of virtual environments
* re - Regular expression operations

time package examples
import time

print('start')
time.sleep(3)
print('stop')

time_now = time.localtime()
print(time_now)
time.struct_time(tm_year=2020, tm_mon=10, tm_mday=6, tm_hour=9, tm_min=45, tm_sec=26, tm_wday=1, tm_yday=280, tm_isdst=1)
Convert time to a string with the format dd-mm-yyyy.
date = time.strftime('%d-%m-%Y', time_now)
print(date)

month = time.strftime('%B', time_now)
print(f'month is {month}')
06-10-2020 month is October
Convert a string to time.
as_time = time.strptime("30 Nov 2020", "%d %b %Y")
print(as_time)
time.struct_time(tm_year=2020, tm_mon=11, tm_mday=30, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=0, tm_yday=335, tm_isdst=-1)
datetime package examples
import datetime

today = datetime.date.today()
print(today)
print(type(today))

week_ago = today - datetime.timedelta(days=7)
print(week_ago)

today_string = today.strftime('%Y/%m/%d')
print(today_string)
print(type(today_string))
2020-10-06 <class 'datetime.date'> 2020-09-29 2020/10/06 <class 'str'>
shutil package examples

Functions for file copying and removal:
* shutil.copy(src, dst)
* shutil.copytree(src, dst)
* shutil.rmtree(path)
* shutil.move(src, dst)

How to import packages and functions from packages

* Import the whole package - in this case you can use all the functions of the package, including the functions in the modules of the package; you can rename the package when importing.
import datetime

today = datetime.date.today()
print(today)

import datetime as dt

today = dt.date.today()
print(today)
2020-10-06 2020-10-06
* Import individual modules or individual functions - in this case you can use the functions directly, as if they were defined in your script. Important: be aware of function shadowing - when you import functions with the same name from different packages, or define a function with the same name yourself!
from datetime import date  # importing the date class

today = date.today()
print(today)

# Warning: this replaces the date class with a string!!!
date = '25/06/2012'
today = date.today()  # raises AttributeError: 'str' object has no attribute 'today'
print(today)
2020-10-06
When importing individual functions or classes from the same package you can import them together
from datetime import date, time, timedelta
Selected external packages

If you are using the pip package manager, all the packages available are installed from [PyPI](https://pypi.org/).
* [Biopython](https://biopython.org/) - contains parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank, ...), access to online services (NCBI, Expasy, ...) and more
* [SQLAlchemy](https://docs.sqlalchemy.org/en/13/) - connect to an SQL database and query the database
* [cx_Oracle](https://oracle.github.io/python-cx_Oracle/) - connect to an Oracle database
* [xmltodict](https://github.com/martinblech/xmltodict) - convert XML to a Python dictionary with XML tags as keys and the information inside the tags as values
import xmltodict

xml = """
<root xmlns="http://defaultns.com/" xmlns:a="http://a.com/" xmlns:b="http://b.com/">
    <x>1</x>
    <a:y>2</a:y>
    <b:z>3</b:z>
</root>"""

xml_dict = xmltodict.parse(xml)
print(xml_dict.keys())
print(xml_dict['root'].keys())
print(xml_dict['root'].values())
odict_keys(['root']) odict_keys(['@xmlns', '@xmlns:a', '@xmlns:b', 'x', 'a:y', 'b:z']) odict_values(['http://defaultns.com/', 'http://a.com/', 'http://b.com/', '1', '2', '3'])
Pyautogui

[PyAutoGUI](https://pyautogui.readthedocs.io/en/latest/index.html) lets your Python scripts control the mouse and keyboard to automate interactions with other applications.
import pyautogui as pa

screen_width, screen_height = pa.size()  # Get the size of the primary monitor.
print(f'screen size is {screen_width} x {screen_height}')

mouse_x, mouse_y = pa.position()  # Get the XY position of the mouse.
print(f'mouse position is: {mouse_x}, {mouse_y}')

pa.moveTo(600, 500, duration=5)  # Move the mouse to XY coordinates.

import time
time.sleep(3)
pa.moveTo(600, 500)
pa.click()
pa.write('Hello world!', interval=0.25)
pa.alert('Script finished!')

pa.screenshot('C:/temp/python test/my_screenshot.png', region=(0, 0, 300, 400))

location = pa.locateOnScreen('C:/temp/python test/python.PNG')
print(location)
image_center = pa.center(location)
print(image_center)
pa.moveTo(image_center, duration=3)
Box(left=1669, top=131, width=59, height=54) Point(x=1698, y=158)
Pandas

[Pandas](https://pandas.pydata.org/docs/user_guide/index.html) provides high-performance, easy-to-use data structures and data analysis tools for Python. [Pandas cheat sheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)

It provides two new data structures:
1. Series - a one-dimensional labeled (indexed) array capable of holding any data type
2. DataFrame - a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table.
import pandas as pd

d = {'b': 1, 'a': 0, 'c': 2}
my_serie = pd.Series(d)
print(my_serie['a'])
print(type(my_serie))

list1 = [1, 2, 3]
list2 = [5, 6, 8]
list3 = [10, 12, 13]
df = pd.DataFrame({'b': list1, 'a': list2, 'c': list3})
df

print(df.index)
print(df.columns)
print(df.shape)

df.columns = ['column1', 'column2', 'column3']  # alternative: df.rename(columns={'a': 'column1'}) in case you don't want to rename all the columns
df

df.index = ['a', 'b', 'c']
df
Selecting values from a dataframe
* select a column
df['column1']
* select multiple columns
df[['column3', 'column2']]
* selecting row
row1 = df.iloc[1]
row1

df.loc['a']
df.loc[['a', 'c']]
* selecting the value of a single cell
df['column1'][2]
df.iloc[1:2, 0:2]
* selecting only the rows that meet a criterion on a column (filtering the table)
df[df['column1'] > 1]
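Conditions can be combined, but note that pandas needs the bitwise operators & and | (with parentheses around each condition) rather than Python's and/or; a minimal sketch on the same df:

df[(df['column1'] > 1) & (df['column2'] < 8)]    # rows where column1 > 1 AND column2 < 8
df[(df['column1'] > 1) | (df['column3'] == 13)]  # rows where column1 > 1 OR column3 == 13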
* select random rows, by number (n) or as a fraction (frac)
df.sample(n=2)
Adding new data to a DataFrame
* add a new column
df['column4'] = [24, 12, 16]
df

df['column5'] = df['column1'] + df['column2']
df

df['column6'] = 7
df
* add new row
# note: DataFrame.append was deprecated and later removed in pandas 2.0; in recent versions use pd.concat instead
df = df.append({'column1': 4, 'column2': 8, 'column3': 5, 'column4': 7, 'column5': 8, 'column6': 11}, ignore_index=True)
df
* append another dataframe at the bottom (the columns should have the same names in both dataframes)
new_df = df.append(df, ignore_index=True)
new_df
* merging data frames (similar to joins in SQL), default ‘inner’
df2 = pd.DataFrame({'c1': [2, 3, 4, 5], 'c2': [4, 7, 11, 3]})
df2

merged_df = df.merge(df2, left_on='column1', right_on='c1', how='left')
merged_df

merged_df = pd.merge(df, df2, left_on='column1', right_on='c1')
merged_df
* copy data frames - this is important to prevent warnings and artefacts
df1 = pd.DataFrame({'a': [1, 2, 3, 4, 5], 'b': [6, 7, 8, 9, 10]})
df2 = df1[df1['a'] > 2].copy()
df2.iloc[0, 0] = 56
df2
* change the data type in a column
print(type(df1['a'][0]))
df1['a'] = df1['a'].astype('str')
print(type(df1['a'][0]))
df1
<class 'str'> <class 'str'>
* value counts - counts the number of appearances of a value in a column
df1.iloc[0, 0] = '5'
df1
df1['a'].value_counts()
* drop duplicates - removes duplicated rows in a data frame
df1.iloc[0, 1] = 10
df1
df1.drop_duplicates(inplace=True)
df1
Pandas I/O
* from / to an Excel file
excel_sheet = pd.read_excel('C:/temp/python test/example.xlsx', sheet_name='Sheet1')
excel_sheet.head()
print(excel_sheet.shape)
print(excel_sheet['issue'][0])

excel_sheet = excel_sheet[~excel_sheet['keywords'].isna()]
print(excel_sheet.shape)

excel_sheet.to_excel('C:/temp/python test/example_1.xlsx', index=False)
To create an Excel file with multiple sheets, the pandas ExcelWriter should be used, with the sheets assigned to it.
writer = pd.ExcelWriter('C:/temp/python test/example_2.xlsx')
df1.to_excel(writer, 'Sheet1', index=False)
excel_sheet.to_excel(writer, 'Sheet2', index=False)
writer.save()
* from an HTML page. The pandas read_html method reads the whole page and creates a list of dataframes, one for every HTML table in the webpage.
codons = pd.read_html('https://en.wikipedia.org/wiki/DNA_codon_table')
codons[2]
* from SQL database
my_data = pd.read_sql('select column1, column2 from table1', connection)  # 'connection' is an already-open database connection, e.g. from SQLAlchemy or cx_Oracle
* from CSV file
my_data = pd.read_csv('data.csv')
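The companion to_csv method writes a dataframe back out; a minimal sketch (the file name is only an example):

my_data.to_csv('data_out.csv', index=False)  # index=False leaves out the row index column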
xlwings

Working with Excel files. [Documentation](https://docs.xlwings.org/en/stable/)
import xlwings as xw

workbook = xw.Book()
new_sht = workbook.sheets.add('new_sheet')
new_sht.range('A1').value = 'Hi from Python'
new_sht.range('A1').column_width = 30
new_sht.range('A1').color = (0, 255, 255)

a2_value = new_sht.range('A2').value
print(a2_value)

workbook.save('C:/temp/python test/new_file.xlsx')
workbook.close()
Errors and debugging. Escaping (handling) errors in Python with try/except.
a = 7/0  # raises ZeroDivisionError

import sys

try:
    a = 7/0
except:
    print(f'a cannot be calculated, {sys.exc_info()[0]}!')
    a = None

try:
    'something'
except:
    try:
        'something else'
    except:
        'and another try'
finally:
    print('Nothing is working :(')
Nothing is working :(
AMATH 515 Homework 2

**Due Date: 02/08/2019**

* Name: Tyler Chen
* Student Number:

*Homework Instruction*: Please follow the order of this notebook and fill in the code where commented as `TODO`.
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
Please complete the solvers in `solvers.py`
import sys
sys.path.append('./')
from solvers import *
Problem 3: Compressive Sensing

Consider the optimization problem,
$$\min_x~~\frac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1$$
In the following, please specify the $f$ and $g$ and use the proximal gradient descent solver to obtain the solution.
# create the data
np.random.seed(123)
m = 100   # number of measurements
n = 500   # number of variables
k = 10    # number of nonzero variables
s = 0.05  # measurement noise level
#
from numpy.linalg import norm  # 'norm' is used below; assuming numpy.linalg.norm (it may also come in via the solvers import)

A_cs = np.random.randn(m, n)
x_cs = np.zeros(n)
x_cs[np.random.choice(range(n), k, replace=False)] = np.random.choice([-1.0, 1.0], k)
b_cs = A_cs.dot(x_cs) + s*np.random.randn(m)
#
lam_cs = 0.1*norm(A_cs.T.dot(b_cs), np.inf)

# define the function, prox and the beta constant
def func_f_cs(x):
    # smooth part: least-squares misfit
    return norm(A_cs@x - b_cs)**2/2

def func_g_cs(x):
    # nonsmooth part: weighted 1-norm
    return lam_cs*norm(x, ord=1)

def grad_f_cs(x):
    # gradient of the smooth part
    return A_cs.T@(A_cs@x - b_cs)

def prox_g_cs(x, t):
    # prox of the 1-norm (soft thresholding)
    leq = x <= -lam_cs*t  # boolean array of coordinates where x_i <= -lam_cs * t
    geq = x >= lam_cs*t   # boolean array of coordinates where x_i >= lam_cs * t
    # (leq + geq) gives the components where x is not in [-1, 1]*lam_cs*t
    return (leq+geq) * x + leq * lam_cs*t - geq * lam_cs*t

# beta value (Lipschitz constant of the gradient) for the smooth part
beta_f_cs = norm(A_cs, ord=2)**2
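For reference, the prox implemented above is the componentwise soft-thresholding operator; this is the standard closed form (not stated in the notebook itself):
$$\operatorname{prox}_{t\lambda\|\cdot\|_1}(x)_i = \arg\min_z \tfrac{1}{2}(z - x_i)^2 + t\lambda|z| = \operatorname{sign}(x_i)\,\max(|x_i| - t\lambda,\, 0)$$
which matches the three branches in prox_g_cs (shift up, shift down, or zero out).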
Proximal gradient descent on compressive sensing
# apply the proximal gradient descent solver
x0_cs_pgd = np.zeros(x_cs.size)
x_cs_pgd, obj_his_cs_pgd, err_his_cs_pgd, exit_flag_cs_pgd = \
    optimizeWithPGD(x0_cs_pgd, func_f_cs, func_g_cs, grad_f_cs, prox_g_cs, beta_f_cs)

# plot signal result
plt.plot(x_cs)
plt.plot(x_cs_pgd, '.')
plt.legend(['true signal', 'recovered'])
plt.title('Compressive Sensing Signal')
plt.show()

# plot result
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].plot(obj_his_cs_pgd)
ax[0].set_title('function value')
ax[1].semilogy(err_his_cs_pgd)
ax[1].set_title('optimality condition')
fig.suptitle('Proximal Gradient Descent on Compressive Sensing')
plt.show()

# plot result
fig, ax = plt.subplots(1, 3, figsize=(18, 5))
ax[0].plot(x_cs)
ax[0].plot(x_cs_pgd, '.')
ax[0].legend(['true signal', 'recovered'])
ax[0].set_title('Compressive Sensing Signal')
ax[1].plot(obj_his_cs_pgd)
ax[1].set_title('function value')
ax[2].semilogy(err_his_cs_pgd)
ax[2].set_title('optimality condition')
#fig.suptitle('Proximal Gradient Descent on Compressive Sensing')
plt.savefig('img/cs_pgd.pdf', bbox_inches="tight")
Accelerated proximal gradient descent on compressive sensing
# apply the accelerated proximal gradient descent solver
x0_cs_apgd = np.zeros(x_cs.size)
x_cs_apgd, obj_his_cs_apgd, err_his_cs_apgd, exit_flag_cs_apgd = \
    optimizeWithAPGD(x0_cs_apgd, func_f_cs, func_g_cs, grad_f_cs, prox_g_cs, beta_f_cs)

# plot signal result
plt.plot(x_cs)
plt.plot(x_cs_apgd, '.')
plt.legend(['true signal', 'recovered'])
plt.title('Compressive Sensing Signal')
plt.show()

# plot result
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].plot(obj_his_cs_apgd)
ax[0].set_title('function value')
ax[1].semilogy(err_his_cs_apgd)
ax[1].set_title('optimality condition')
fig.suptitle('Accelerated Proximal Gradient Descent on Compressive Sensing')
plt.show()

# plot result
fig, ax = plt.subplots(1, 3, figsize=(18, 5))
ax[0].plot(x_cs)
ax[0].plot(x_cs_apgd, '.')
ax[0].legend(['true signal', 'recovered'])
ax[0].set_title('Compressive Sensing Signal')
ax[1].plot(obj_his_cs_apgd)
ax[1].set_title('function value')
ax[2].semilogy(err_his_cs_apgd)
ax[2].set_title('optimality condition')
#fig.suptitle('Proximal Gradient Descent on Compressive Sensing')
plt.savefig('img/cs_apgd.pdf', bbox_inches="tight")
Problem 4: Logistic Regression on MNIST Data

Now let's play with some real data. Recall the logistic regression problem,
$$\min_x~~\sum_{i=1}^m\left\{\log(1 + \exp(\langle a_i,x \rangle)) - b_i\langle a_i,x \rangle\right\} + \frac{\lambda}{2}\|x\|^2.$$
Here, for each data pair $\{a_i, b_i\}$, $a_i$ is the image and $b_i$ is the label. In this homework problem, let's consider the binary classification problem, where $b_i \in \{0, 1\}$.
# import data
mnist_data = np.load('mnist01.npy')
#
A_lgt = mnist_data[0]
b_lgt = mnist_data[1]
A_lgt_test = mnist_data[2]
b_lgt_test = mnist_data[3]
#
# set regularizer parameter
lam_lgt = 0.1
#
# beta constant of the function
beta_lgt = 0.25*norm(A_lgt, 2)**2 + lam_lgt

# plot the images
fig, ax = plt.subplots(1, 2)
ax[0].imshow(A_lgt[0].reshape(28, 28))
ax[1].imshow(A_lgt[7].reshape(28, 28))
plt.show()

# define function, gradient and Hessian
def lgt_func(x):
    # logistic regression objective
    return np.sum(np.log(1 + np.exp(A_lgt@x))) - b_lgt@A_lgt@x + lam_lgt*x@x/2

def lgt_grad(x):
    # gradient of the logistic regression objective
    return A_lgt.T@((np.exp(A_lgt@x)/(1 + np.exp(A_lgt@x))) - b_lgt) + lam_lgt*x

def lgt_hess(x):
    # Hessian of the logistic regression objective
    return A_lgt.T @ np.diag(np.exp(A_lgt@x)/(1 + np.exp(A_lgt@x))**2) @ A_lgt + lam_lgt * np.eye(len(x))
Gradient descent on logistic regression
# apply the gradient descent
x0_lgt_gd = np.zeros(A_lgt.shape[1])
x_lgt_gd, obj_his_lgt_gd, err_his_lgt_gd, exit_flag_lgt_gd = \
    optimizeWithGD(x0_lgt_gd, lgt_func, lgt_grad, beta_lgt)

# plot result
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].plot(obj_his_lgt_gd)
ax[0].set_title('function value')
ax[1].semilogy(err_his_lgt_gd)
ax[1].set_title('optimality condition')
fig.suptitle('Gradient Descent on Logistic Regression')
plt.savefig('img/lr_gd.pdf', bbox_inches="tight")
Accelerated gradient descent on logistic regression
# apply the accelerated gradient descent
x0_lgt_agd = np.zeros(A_lgt.shape[1])
x_lgt_agd, obj_his_lgt_agd, err_his_lgt_agd, exit_flag_lgt_agd = \
    optimizeWithAGD(x0_lgt_agd, lgt_func, lgt_grad, beta_lgt)

# plot result
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].plot(obj_his_lgt_agd)
ax[0].set_title('function value')
ax[1].semilogy(err_his_lgt_agd)
ax[1].set_title('optimality condition')
fig.suptitle('Accelerated Gradient Descent on Logistic Regression')
plt.savefig('img/lr_agd.pdf', bbox_inches="tight")
plt.show()
Newton's method on logistic regression
# apply Newton's method
x0_lgt_nt = np.zeros(A_lgt.shape[1])
x_lgt_nt, obj_his_lgt_nt, err_his_lgt_nt, exit_flag_lgt_nt = \
    optimizeWithNT(x0_lgt_nt, lgt_func, lgt_grad, lgt_hess)

# plot result
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].plot(obj_his_lgt_nt)
ax[0].set_title('function value')
ax[1].semilogy(err_his_lgt_nt)
ax[1].set_title('optimality condition')
fig.suptitle('Newton\'s Method on Logistic Regression')
plt.savefig('img/lr_nm.pdf', bbox_inches="tight")
plt.show()
Test Logistic Regression
# define accuracy function
def accuracy(x, A_test, b_test):
    r = A_test.dot(x)
    b_test[b_test == 0.0] = -1.0
    correct_count = np.sum((r*b_test) > 0.0)
    return correct_count/b_test.size

print('accuracy of the result is %0.3f' % accuracy(x_lgt_nt, A_lgt_test, b_lgt_test))
accuracy of the result is 1.000
Start with the simplest problem

I feel like classification is the easiest problem category to start with, so we will begin with a simple classification problem: predicting survivors of the Titanic (https://www.kaggle.com/c/titanic).

Contents
1. [Basic pipeline for a predictive modeling problem](1)
1. [Exploratory Data Analysis (EDA)](2)
    * [Overall survival stats](2_1)
    * [Analysis features](2_2)
        1. [Sex](2_2_1)
        1. [Pclass](2_2_2)
        1. [Age](2_2_3)
        1. [Embarked](2_2_4)
        1. [SibSp & Parch](2_2_5)
        1. [Fare](2_2_6)
    * [Observations Summary](2_3)
    * [Correlation Between The Features](2_4)
1. [Feature Engineering and Data Cleaning](4)
    * [Converting String Values into Numeric](4_1)
    * [Convert Age into a categorical feature by binning](4_2)
    * [Convert Fare into a categorical feature by binning](4_3)
    * [Dropping Unwanted Features](4_4)
1. [Predictive Modeling](5)
    * [Cross Validation](5_1)
    * [Confusion Matrix](5_2)
    * [Hyper-Parameters Tuning](5_3)
    * [Ensembling](5_4)
    * [Prediction](5_5)
1. [Feature Importance](6)

**Basic Pipeline for a predictive modeling problem**[^](1)

**Exploratory Data Analysis -> Feature Engineering and Data Preparation -> Predictive Modeling.**

1. First we need to see what the data can tell us: we call this **Exploratory Data Analysis (EDA)**. Here we look at the data, hidden in rows-and-columns format, and try to visualize, summarize and interpret it, looking for information.
1. Next we can **leverage domain knowledge** to boost machine learning model performance. We call this step **Feature Engineering and Data Cleaning**. In this step we might add a few features, remove redundant features, and convert features into a form suitable for modeling.
1. Then we can move on to **Predictive Modeling**. Here we try basic ML algorithms, cross-validate, ensemble, and extract the important features.

---

Exploratory Data Analysis (EDA)[^](2)

With the objective in mind that this kernel aims to explain the workflow of a predictive modeling problem for beginners, I will try to use simple, easy-to-understand visualizations in the EDA section. Kernels with more advanced EDA sections will be mentioned at the end for you to learn more.
# Python 3 environment comes with many helpful analytics libraries installed
# For example, here's several helpful packages to load in
import numpy as np               # linear algebra
import pandas as pd              # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import os

# Read data to a pandas data frame
data = pd.read_csv('../input/train.csv')

# lets have a look on first few rows
display(data.head())

# Checking shape of our data set
print('Shape of Data : ', data.shape)
* We have 891 data points (rows); each data point has 12 columns.
# checking for null value counts in each column
data.isnull().sum()
* The Age, Cabin and Embarked columns have null values.

Let's look at overall survival stats[^](2_1)
f, ax = plt.subplots(1, 2, figsize=(13, 5))
data['Survived'].value_counts().plot.pie(explode=[0, 0.05], autopct='%1.1f%%', ax=ax[0], shadow=True)
ax[0].set_title('Survived')
ax[0].set_ylabel('')
sns.countplot('Survived', data=data, ax=ax[1])
ax[1].set_title('Survived')
plt.show()
* Sad story! Only 38% survived. That is roughly 340 out of 891.

---

Analyse features[^](2_2)

Feature: Sex[^](2_2_1)
f, ax = plt.subplots(1, 3, figsize=(18, 5))
data[['Sex', 'Survived']].groupby(['Sex']).mean().plot.bar(ax=ax[0])
ax[0].set_title('Fraction of Survival with respect to Sex')
sns.countplot('Sex', hue='Survived', data=data, ax=ax[1])
ax[1].set_title('Survived vs Dead counts with respect to Sex')
sns.barplot(x="Sex", y="Survived", data=data, ax=ax[2])
ax[2].set_title('Survival by Gender')
plt.show()
* While the survival rate for females is around 75%, for men it is about 20%.
* It looks like they gave priority to female passengers in the rescue.
* **Looks like Sex is a good predictor of survival.**

---

Feature: Pclass[^](2_2_2)

**Meaning:** Ticket class: 1 = 1st, 2 = 2nd, 3 = 3rd
f, ax = plt.subplots(1, 3, figsize=(18, 5))
data['Pclass'].value_counts().plot.bar(color=['#BC8F8F', '#F4A460', '#DAA520'], ax=ax[0])
ax[0].set_title('Number Of Passengers with respect to Pclass')
ax[0].set_ylabel('Count')
sns.countplot('Pclass', hue='Survived', data=data, ax=ax[1])
ax[1].set_title('Survived vs Dead counts with respect to Pclass')
sns.barplot(x="Pclass", y="Survived", data=data, ax=ax[2])
ax[2].set_title('Survival by Pclass')
plt.show()
* For Pclass 1 the % survived is around 63%, for Pclass 2 around 48%, and for Pclass 3 around 25%.
* **So it is clear that higher classes had higher priority during the rescue.**
* **Looks like Pclass is also an important feature.**

---

Feature: Age[^](2_2_3)

**Meaning:** Age in years
# Plot
plt.figure(figsize=(25, 6))
sns.barplot(data['Age'], data['Survived'], ci=None)
plt.xticks(rotation=90);
* The survival rate for passengers below age 14 (i.e. children) looks better than for the others.
* So Age seems to be an important feature too.
* Remember we had 177 null values in the Age feature. How are we going to fill them?

Filling Age NaN

Well, there are many ways to do this. One can use the mean value or the median, etc. But can we do better? It seems so. [EDA To Prediction(DieTanic)](https://www.kaggle.com/ash316/eda-to-prediction-dietanicEDA-To-Prediction-(DieTanic)) used a wonderful method which I will use here too. There is a Name feature; first let's extract the initials.
data['Initial'] = 0
for i in data:
    data['Initial'] = data.Name.str.extract('([A-Za-z]+)\.')  # lets extract the Salutations

pd.crosstab(data.Initial, data.Sex).T.style.background_gradient(cmap='summer_r')  # Checking the Initials with the Sex
Okay, so there are some misspelled initials like Mlle or Mme that stand for Miss. Let's replace them.
data['Initial'].replace(['Mlle', 'Mme', 'Ms', 'Dr', 'Major', 'Lady', 'Countess', 'Jonkheer', 'Col', 'Rev', 'Capt', 'Sir', 'Don'],
                        ['Miss', 'Miss', 'Miss', 'Mr', 'Mr', 'Mrs', 'Mrs', 'Other', 'Other', 'Other', 'Mr', 'Mr', 'Mr'], inplace=True)

data.groupby('Initial')['Age'].mean()  # lets check the average age by Initials

## Assigning the NaN Values with the Ceil values of the mean ages
data.loc[(data.Age.isnull()) & (data.Initial=='Mr'), 'Age'] = 33
data.loc[(data.Age.isnull()) & (data.Initial=='Mrs'), 'Age'] = 36
data.loc[(data.Age.isnull()) & (data.Initial=='Master'), 'Age'] = 5
data.loc[(data.Age.isnull()) & (data.Initial=='Miss'), 'Age'] = 22
data.loc[(data.Age.isnull()) & (data.Initial=='Other'), 'Age'] = 46

data.Age.isnull().any()  # So no null values left finally
---

Feature: Embarked[^](2_2_4)

**Meaning:** Port of Embarkation. C = Cherbourg, Q = Queenstown, S = Southampton
f, ax = plt.subplots(1, 2, figsize=(12, 5))
sns.countplot('Embarked', data=data, ax=ax[0])
ax[0].set_title('No. Of Passengers Boarded')
sns.countplot('Embarked', hue='Survived', data=data, ax=ax[1])
ax[1].set_title('Embarked vs Survived')
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
* The majority of passengers boarded from Southampton.
* Survival counts look better at C. Why? Could there be an influence from the Sex and Pclass features we already studied? Let's find out.
f, ax = plt.subplots(1, 2, figsize=(12, 5))
sns.countplot('Embarked', hue='Sex', data=data, ax=ax[0])
ax[0].set_title('Male-Female Split for Embarked')
sns.countplot('Embarked', hue='Pclass', data=data, ax=ax[1])
ax[1].set_title('Embarked vs Pclass')
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
* We guessed correctly: the higher % of 1st class passengers boarding from C might be the reason.

Filling Embarked NaN
f, ax = plt.subplots(1, 1, figsize=(6, 5))
data['Embarked'].value_counts().plot.pie(explode=[0, 0, 0], autopct='%1.1f%%', ax=ax)
plt.show()
* Since 72.5% of the passengers are from Southampton, let's fill the 2 missing values with S (Southampton).
data['Embarked'].fillna('S', inplace=True)
data.Embarked.isnull().any()
---

Features: SibSp & Parch[^](2_2_5)

**Meaning:** SibSp -> number of siblings / spouses aboard the Titanic. Parch -> number of parents / children aboard the Titanic. SibSp + Parch -> family size.
f, ax = plt.subplots(2, 2, figsize=(15, 10))
sns.countplot('SibSp', hue='Survived', data=data, ax=ax[0, 0])
ax[0, 0].set_title('SibSp vs Survived')
sns.barplot('SibSp', 'Survived', data=data, ax=ax[0, 1])
ax[0, 1].set_title('SibSp vs Survived')
sns.countplot('Parch', hue='Survived', data=data, ax=ax[1, 0])
ax[1, 0].set_title('Parch vs Survived')
sns.barplot('Parch', 'Survived', data=data, ax=ax[1, 1])
ax[1, 1].set_title('Parch vs Survived')
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
* The barplot and factorplot show that a passenger who is alone onboard, with no siblings, has a 34.5% survival rate. The rate roughly decreases as the number of siblings increases. Let's combine the above and analyse family size.
data['FamilySize'] = data['Parch'] + data['SibSp']

f, ax = plt.subplots(1, 2, figsize=(15, 4.5))
sns.countplot('FamilySize', hue='Survived', data=data, ax=ax[0])
ax[0].set_title('FamilySize vs Survived')
sns.barplot('FamilySize', 'Survived', data=data, ax=ax[1])
ax[1].set_title('FamilySize vs Survived')
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()
* This looks interesting! It looks like family sizes of 1-3 have better survival rates than others.

---

Fare[^](2_2_6)

**Meaning:** Passenger fare
f, ax = plt.subplots(1, 1, figsize=(20, 5))
sns.distplot(data.Fare, ax=ax)
ax.set_title('Distribution of Fares')
plt.show()

print('Highest Fare:', data['Fare'].max(), ' Lowest Fare:', data['Fare'].min(), ' Average Fare:', data['Fare'].mean())

data['Fare_Bin'] = pd.qcut(data['Fare'], 6)
data.groupby(['Fare_Bin'])['Survived'].mean().to_frame().style.background_gradient(cmap='summer_r')
Highest Fare: 512.3292 Lowest Fare: 0.0 Average Fare: 32.2042079685746
* It is clear that as the Fare bins increase, the chances of survival increase too.

Observations Summary[^](2_3)

**Sex:** The survival chance for females is better than for males.
**Pclass:** Being a 1st class passenger gives you better chances of survival.
**Age:** The age range 5-10 years has a high chance of survival.
**Embarked:** The majority of passengers boarded from Southampton. The chances of survival at C look better even though the majority of Pclass 1 passengers embarked at S. Almost all passengers at Q were from Pclass 3.
**Family Size:** Family sizes of 1-3 have better survival rates than others.
**Fare:** As the Fare bins increase, the chances of survival increase.

Correlation Between The Features[^](2_4)
sns.heatmap(data.corr(), annot=True, cmap='RdYlGn', linewidths=0.2)  # data.corr() --> correlation matrix
fig = plt.gcf()
fig.set_size_inches(10, 8)
plt.show()
---

Feature Engineering and Data Cleaning[^](4)

Now what is feature engineering? Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. In this section we will be:
1. Converting string values into numeric
1. Converting Age into a categorical feature by binning
1. Converting Fare into a categorical feature by binning
1. Dropping unwanted features

Converting String Values into Numeric[^](4_1)

Since we cannot pass strings to a machine learning model, we need to convert features such as Sex and Embarked into numeric values.
data['Sex'].replace(['male', 'female'], [0, 1], inplace=True)
data['Embarked'].replace(['S', 'C', 'Q'], [0, 1, 2], inplace=True)
data['Initial'].replace(['Mr', 'Mrs', 'Miss', 'Master', 'Other'], [0, 1, 2, 3, 4], inplace=True)
Convert Age into a categorical feature by binning[^](4_2)
print('Highest Age:', data['Age'].max(), ' Lowest Age:', data['Age'].min())

data['Age_cat'] = 0
data.loc[data['Age'] <= 16, 'Age_cat'] = 0
data.loc[(data['Age'] > 16) & (data['Age'] <= 32), 'Age_cat'] = 1
data.loc[(data['Age'] > 32) & (data['Age'] <= 48), 'Age_cat'] = 2
data.loc[(data['Age'] > 48) & (data['Age'] <= 64), 'Age_cat'] = 3
data.loc[data['Age'] > 64, 'Age_cat'] = 4
Convert Fare into a categorical feature by binning[^](4_3)
data['Fare_cat'] = 0
data.loc[data['Fare'] <= 7.775, 'Fare_cat'] = 0
data.loc[(data['Fare'] > 7.775) & (data['Fare'] <= 8.662), 'Fare_cat'] = 1
data.loc[(data['Fare'] > 8.662) & (data['Fare'] <= 14.454), 'Fare_cat'] = 2
data.loc[(data['Fare'] > 14.454) & (data['Fare'] <= 26.0), 'Fare_cat'] = 3
data.loc[(data['Fare'] > 26.0) & (data['Fare'] <= 52.369), 'Fare_cat'] = 4
data.loc[data['Fare'] > 52.369, 'Fare_cat'] = 5
Dropping Unwanted Features[^](4_4)

Name --> We don't need the Name feature, as it cannot be converted into a categorical value.
Age --> We have the Age_cat feature, so this is not needed.
Ticket --> A random string that cannot be categorised.
Fare --> We have the Fare_cat feature, so this is not needed.
Cabin --> A lot of NaN values, and many passengers have multiple cabins, so this is a useless feature.
Fare_Bin --> We have the Fare_cat feature.
PassengerId --> Cannot be categorised.
SibSp & Parch --> We got the FamilySize feature.
# data.drop(['Name','Age','Ticket','Fare','Cabin','Fare_Range','PassengerId'], axis=1, inplace=True)
data.drop(['Name', 'Age', 'Fare', 'Ticket', 'Cabin', 'Fare_Bin', 'SibSp', 'Parch', 'PassengerId'], axis=1, inplace=True)
data.head(2)

sns.heatmap(data.corr(), annot=True, cmap='RdYlGn', linewidths=0.2)  # data.corr() --> correlation matrix
fig = plt.gcf()
fig.set_size_inches(10, 8)
plt.show()
---

Predictive Modeling[^](5)

After data cleaning and feature engineering, we are ready to train some classification algorithms that will make predictions for unseen data. We will first train a few classification algorithms and see how they perform. Then we will look at how an ensemble of classification algorithms performs on this data set. The following machine learning algorithms will be used in this kernel:
* Logistic Regression Classifier
* Naive Bayes Classifier
* Decision Tree Classifier
* Random Forest Classifier
# importing all the required ML packages
from sklearn.linear_model import LogisticRegression    # logistic regression
from sklearn.ensemble import RandomForestClassifier    # Random Forest
from sklearn.naive_bayes import GaussianNB             # Naive Bayes
from sklearn.tree import DecisionTreeClassifier        # Decision Tree
from sklearn.model_selection import train_test_split   # training and testing data split
from sklearn import metrics                            # accuracy measure
from sklearn.metrics import confusion_matrix           # for confusion matrix

# Lets prepare data sets for training.
train, test = train_test_split(data, test_size=0.3, random_state=0, stratify=data['Survived'])
train_X = train[train.columns[1:]]
train_Y = train[train.columns[:1]]
test_X = test[test.columns[1:]]
test_Y = test[test.columns[:1]]
X = data[data.columns[1:]]
Y = data['Survived']
data.head(2)

# Logistic Regression
model = LogisticRegression(C=0.05, solver='liblinear')
model.fit(train_X, train_Y.values.ravel())
LR_prediction = model.predict(test_X)
print('The accuracy of the Logistic Regression model is \t', metrics.accuracy_score(LR_prediction, test_Y))

# Naive Bayes
model = GaussianNB()
model.fit(train_X, train_Y.values.ravel())
NB_prediction = model.predict(test_X)
print('The accuracy of the NaiveBayes model is\t\t\t', metrics.accuracy_score(NB_prediction, test_Y))

# Decision Tree
model = DecisionTreeClassifier()
model.fit(train_X, train_Y)
DT_prediction = model.predict(test_X)
print('The accuracy of the Decision Tree is \t\t\t', metrics.accuracy_score(DT_prediction, test_Y))

# Random Forest
model = RandomForestClassifier(n_estimators=100)
model.fit(train_X, train_Y.values.ravel())
RF_prediction = model.predict(test_X)
print('The accuracy of the Random Forests model is \t\t', metrics.accuracy_score(RF_prediction, test_Y))
The accuracy of the Logistic Regression model is 0.8134328358208955 The accuracy of the NaiveBayes model is 0.8134328358208955 The accuracy of the Decision Tree is 0.8134328358208955 The accuracy of the Random Forests model is 0.8171641791044776
Cross Validation[^](5_1)

The accuracy we get here highly depends on the train & test split of the original data set. We can use cross validation to avoid such problems arising from how the dataset is split. I am using K-fold cross validation here. Watch this short [video](https://www.youtube.com/watch?v=TIgfjmp-4BA) to understand what it is.
from sklearn.model_selection import KFold              # for K-fold cross validation
from sklearn.model_selection import cross_val_score    # score evaluation
from sklearn.model_selection import cross_val_predict  # prediction

kfold = KFold(n_splits=10, random_state=22)  # k=10, split the data into 10 equal parts (newer scikit-learn versions also require shuffle=True when random_state is set)
xyz = []
accuracy = []
std = []
classifiers = ['Logistic Regression', 'Decision Tree', 'Naive Bayes', 'Random Forest']
models = [LogisticRegression(solver='liblinear'), DecisionTreeClassifier(), GaussianNB(), RandomForestClassifier(n_estimators=100)]
for i in models:
    model = i
    cv_result = cross_val_score(model, X, Y, cv=kfold, scoring="accuracy")
    xyz.append(cv_result.mean())
    std.append(cv_result.std())
    accuracy.append(cv_result)

new_models_dataframe2 = pd.DataFrame({'CV Mean': xyz, 'Std': std}, index=classifiers)
new_models_dataframe2
Now we have looked at the cross-validation accuracies to get an idea of how those models work. There is more we can do to understand the performance of the models we tried; let's have a look at the confusion matrix for each model.

Confusion Matrix[^](5_2)

A confusion matrix is a table that is often used to describe the performance of a classification model. Read more [here](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/).
f, ax = plt.subplots(2, 2, figsize=(10, 8))
y_pred = cross_val_predict(LogisticRegression(C=0.05, solver='liblinear'), X, Y, cv=10)
sns.heatmap(confusion_matrix(Y, y_pred), ax=ax[0, 0], annot=True, fmt='2.0f')
ax[0, 0].set_title('Matrix for Logistic Regression')
y_pred = cross_val_predict(DecisionTreeClassifier(), X, Y, cv=10)
sns.heatmap(confusion_matrix(Y, y_pred), ax=ax[0, 1], annot=True, fmt='2.0f')
ax[0, 1].set_title('Matrix for Decision Tree')
y_pred = cross_val_predict(GaussianNB(), X, Y, cv=10)
sns.heatmap(confusion_matrix(Y, y_pred), ax=ax[1, 0], annot=True, fmt='2.0f')
ax[1, 0].set_title('Matrix for Naive Bayes')
y_pred = cross_val_predict(RandomForestClassifier(n_estimators=100), X, Y, cv=10)
sns.heatmap(confusion_matrix(Y, y_pred), ax=ax[1, 1], annot=True, fmt='2.0f')
ax[1, 1].set_title('Matrix for Random-Forests')
plt.subplots_adjust(hspace=0.2, wspace=0.2)
plt.show()
* Looking at the matrices above, we can say that if we are more concerned about not predicting survived passengers as dead, the Naive Bayes model does better.
* If we are more concerned about not predicting dead passengers as survived, the Decision Tree model does better.

Hyper-Parameters Tuning[^](5_3)

You might have noticed that each model has a few parameters that define how it learns; we call these hyperparameters. These hyperparameters can be tuned to improve performance. Let's try this for the Random Forest classifier.
from sklearn.model_selection import GridSearchCV

n_estimators = range(100, 1000, 100)
hyper = {'n_estimators': n_estimators}
gd = GridSearchCV(estimator=RandomForestClassifier(random_state=0), param_grid=hyper, verbose=True, cv=10)
gd.fit(X, Y)
print(gd.best_score_)
print(gd.best_estimator_)
Fitting 10 folds for each of 9 candidates, totalling 90 fits
* The best score for Random Forest is with n_estimators=100.

Ensembling[^](5_4)

Ensembling is a way to increase the performance of a model by combining several simple models into a single powerful model. Read more about ensembling [here](https://www.analyticsvidhya.com/blog/2018/06/comprehensive-guide-for-ensemble-models/). Ensembling can be done in ways like: voting classifier, bagging, boosting. I will use the voting method in this kernel.
from sklearn.ensemble import VotingClassifier

estimators = [('RFor', RandomForestClassifier(n_estimators=100, random_state=0)),
              ('LR', LogisticRegression(C=0.05, solver='liblinear')),
              ('DT', DecisionTreeClassifier()),
              ('NB', GaussianNB())]
ensemble = VotingClassifier(estimators=estimators, voting='soft')
ensemble.fit(train_X, train_Y.values.ravel())
print('The accuracy for ensembled model is:', ensemble.score(test_X, test_Y))
cross = cross_val_score(ensemble, X, Y, cv=10, scoring="accuracy")
print('The cross validated score is', cross.mean())
The accuracy for ensembled model is: 0.8059701492537313 The cross validated score is 0.803603166496425
Prediction[^](5_5)

We can see that the ensemble model does better than the individual models; let's use it for the predictions.
Ensemble_Model_For_Prediction = VotingClassifier(estimators=[
    ('RFor', RandomForestClassifier(n_estimators=200, random_state=0)),
    ('LR', LogisticRegression(C=0.05, solver='liblinear')),
    ('DT', DecisionTreeClassifier(random_state=0)),
    ('NB', GaussianNB())
], voting='soft')
Ensemble_Model_For_Prediction.fit(X, Y)
We need to do some preprocessing on this test data set before we can feed it to the trained model.
test = pd.read_csv('../input/test.csv')
IDtest = test["PassengerId"]
test.head(2)
test.isnull().sum()

# Prepare Test Data set for feeding
# Construct feature Initial
test['Initial'] = 0
for i in test:
    test['Initial'] = test.Name.str.extract('([A-Za-z]+)\.')  # lets extract the Salutations
test['Initial'].replace(['Mlle', 'Mme', 'Ms', 'Dr', 'Major', 'Lady', 'Countess', 'Jonkheer', 'Col', 'Rev', 'Capt', 'Sir', 'Don', 'Dona'],
                        ['Miss', 'Miss', 'Miss', 'Mr', 'Mr', 'Mrs', 'Mrs', 'Other', 'Other', 'Other', 'Mr', 'Mr', 'Mr', 'Other'], inplace=True)

# Fill Null values in Age Column
test.loc[(test.Age.isnull()) & (test.Initial=='Mr'), 'Age'] = 33
test.loc[(test.Age.isnull()) & (test.Initial=='Mrs'), 'Age'] = 36
test.loc[(test.Age.isnull()) & (test.Initial=='Master'), 'Age'] = 5
test.loc[(test.Age.isnull()) & (test.Initial=='Miss'), 'Age'] = 22
test.loc[(test.Age.isnull()) & (test.Initial=='Other'), 'Age'] = 46

# Fill Null values in Fare Column
test.loc[(test.Fare.isnull()) & (test['Pclass']==3), 'Fare'] = 12.45

# Construct feature Age_cat
test['Age_cat'] = 0
test.loc[test['Age'] <= 16, 'Age_cat'] = 0
test.loc[(test['Age'] > 16) & (test['Age'] <= 32), 'Age_cat'] = 1
test.loc[(test['Age'] > 32) & (test['Age'] <= 48), 'Age_cat'] = 2
test.loc[(test['Age'] > 48) & (test['Age'] <= 64), 'Age_cat'] = 3
test.loc[test['Age'] > 64, 'Age_cat'] = 4

# Construct feature Fare_cat
test['Fare_cat'] = 0
test.loc[test['Fare'] <= 7.775, 'Fare_cat'] = 0
test.loc[(test['Fare'] > 7.775) & (test['Fare'] <= 8.662), 'Fare_cat'] = 1
test.loc[(test['Fare'] > 8.662) & (test['Fare'] <= 14.454), 'Fare_cat'] = 2
test.loc[(test['Fare'] > 14.454) & (test['Fare'] <= 26.0), 'Fare_cat'] = 3
test.loc[(test['Fare'] > 26.0) & (test['Fare'] <= 52.369), 'Fare_cat'] = 4
test.loc[test['Fare'] > 52.369, 'Fare_cat'] = 5

# Construct feature FamilySize
test['FamilySize'] = test['Parch'] + test['SibSp']

# Drop unwanted features
test.drop(['Name', 'Age', 'Ticket', 'Cabin', 'SibSp', 'Parch', 'Fare', 'PassengerId'], axis=1, inplace=True)

# Converting String Values into Numeric
test['Sex'].replace(['male', 'female'], [0, 1], inplace=True)
test['Embarked'].replace(['S', 'C', 'Q'], [0, 1, 2], inplace=True)
test['Initial'].replace(['Mr', 'Mrs', 'Miss', 'Master', 'Other'], [0, 1, 2, 3, 4], inplace=True)
test.head(2)

# Predict
test_Survived = pd.Series(ensemble.predict(test), name="Survived")
results = pd.concat([IDtest, test_Survived], axis=1)
results.to_csv("predictions.csv", index=False)
Feature Importance[^](6)

After training a model to make predictions for us, we naturally feel curious about how it works: which features does the model weight most when making a prediction? Looking at the feature importances of a trained model is one way to explain the decisions it makes. Let's visualize the feature importances of the Random Forest model we used inside the ensemble above.
f, ax = plt.subplots(1, 1, figsize=(6, 6))
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, Y)
pd.Series(model.feature_importances_, X.columns).sort_values(ascending=True).plot.barh(width=0.8, ax=ax)
ax.set_title('Feature Importance in Random Forests')
plt.show()
Mathematics needed for deep learning, and NumPy operations

1. NumPy basics

Importing NumPy
import numpy as np
Example of a one-dimensional array using ndarray
a1 = np.array([1, 2, 3])  # create a one-dimensional array
print('variable type:', type(a1))
print('data type (dtype):', a1.dtype)
print('number of elements (size):', a1.size)
print('shape:', a1.shape)
print('number of dimensions (ndim):', a1.ndim)
print('contents:', a1)
variable type: <class 'numpy.ndarray'> data type (dtype): int64 number of elements (size): 3 shape: (3,) number of dimensions (ndim): 1 contents: [1 2 3]