If you're uploading many queries at a time, you can upload in batches. This saves API calls and allows you to just pass in a list rather than iterating over the upload function.
queries.upload_all([ {"name":"Pets", "includedTerms":"dogs OR cats", "backfill_date":"2016-01-01T05:00:00"}, {"name":"ice cream cake", "includedTerms":"(\"ice cream\" OR icecream) AND (cake)"}, {"name": "Test1", "includedTerms": "akdnvaoifg;anf"}, {"name": "Test2", "includedTerms": "anvoapihajkvn"}, {"name": "Test3", "includedTerms": "nviuphabaveh"}, ])
DEMO.ipynb
anthonybu/api_sdk
mit
Channels will be shown as queries and can be deleted as queries, but must be uploaded differently. You must be authenticated in the app to upload channels. In order to upload a channel you must pass in the name of the channel, the handle you'd like to track and the type of channel. As with keyword queries, we can upload channels individually or in batches. Note: Currently we can only support uploading Twitter channels through the API.
queries.upload_channel(name = "Brandwatch", handle = "brandwatch", channel_type = "twitter") queries.upload_all_channel([{"name": "BWReact", "handle": "BW_React", "channel_type": "twitter"}, {"name": "Brandwatch Careers", "handle": "BrandwatchJobs", "channel_type": "twitter"}])
DEMO.ipynb
anthonybu/api_sdk
mit
We can delete queries one at a time, or in batches.
queries.delete(name = "Brandwatch Engagement") queries.delete_all(["Pets", "Test3", "Brandwatch", "BWReact", "Brandwatch Careers"])
DEMO.ipynb
anthonybu/api_sdk
mit
Groups You'll notice that a lot of the things that were true for queries are also true for groups. Many of the functions are nearly identical, with any necessary adaptations handled behind the scenes for ease of use. Again (as with queries), we need to create an object with which we can manipulate groups within the account.
groups = BWGroups(project)
DEMO.ipynb
anthonybu/api_sdk
mit
And we can check for existing groups in the same way as before.
groups.names
DEMO.ipynb
anthonybu/api_sdk
mit
Now let's check which queries are in each group in the account
for group in groups.names: print(group) print(groups.get_group_queries(group)) print()
DEMO.ipynb
anthonybu/api_sdk
mit
We can easily create a group from any preexisting queries. (Recall that upload accepts two boolean keyword arguments, "create_only" and "modify_only" (both defaulting to False), which specify which API verbs the function is allowed to use; for instance, if we set "create_only" to True then the function will post a new query if it can and otherwise do nothing.)
groups.upload(name = "group 1", queries = ["Test1", "Test2"])
DEMO.ipynb
anthonybu/api_sdk
mit
Or upload new queries and create a group with them, all in one call
groups.upload_queries_as_group(group_name = "group 2", query_data_list = [{"name": "Test3", "includedTerms": "adcioahnanva"}, {"name": "Test4", "includedTerms": "ioanvauhekanv;"}])
DEMO.ipynb
anthonybu/api_sdk
mit
We can either delete just the group, or delete the group and the queries at the same time.
groups.delete("group 1") print() groups.deep_delete("group 2")
DEMO.ipynb
anthonybu/api_sdk
mit
Downloading Mentions (From a Query or a Group) You can download mentions from a Query or from a Group (the code does not yet support Channels). There is a get_mentions() function in both the BWQueries and BWGroups classes, and they are used the same way. Be careful with time zones, as they affect the date range and alter the results. If you're using the same date range for all your operations, I recommend setting some variables at the start with dates and time zones. Here, today is set to the current day and start is set to 30 days ago; each number is offset by one to make it accurate.
today = (datetime.date.today() + datetime.timedelta(days=1)).isoformat() + "T05:00:00" start = (datetime.date.today() - datetime.timedelta(days=29)).isoformat() + "T05:00:00"
DEMO.ipynb
anthonybu/api_sdk
mit
To use get_mentions(), the minimum parameters needed are name (query name in this case, or group name if downloading mentions from a group), startDate, and endDate
filtered = queries.get_mentions(name = "ice cream cake", startDate = start, endDate = today)
DEMO.ipynb
anthonybu/api_sdk
mit
There are over a hundred filters you can use to download only the mentions that qualify; see the full list in the file filters.py. Here, different filters are used, which take different data types; filters.py details which data type is used with each filter. Some filters, like sentiment and xprofession below, have a limited number of settings to choose from. You can filter many things by inclusion or exclusion - the x in xprofession stands for exclusion, for example.
filtered = queries.get_mentions(name = "ice cream cake", startDate = start, endDate = today, sentiment = "positive", twitterVerified = False, impactMin = 50, xprofession = ["Politician", "Legal"])
DEMO.ipynb
anthonybu/api_sdk
mit
To filter by tags, pass in a list of strings where each string is a tag name. You can filter by categories in two different ways: on a subcategory level or a parent category level. To filter on a subcategory level, use the category keyword and pass in a dictionary, where the keys are the parent categories and the values are lists of the subcategories. To filter on a parent category level, use the parentCategory keyword and pass in a list of parent category names. Note: In the following call the parentCategory filter is redundant, but executed for illustrative purposes.
filtered = queries.get_mentions(name = "ice cream cake", startDate = start, endDate = today, parentCategory = ["Colors", "Days"], category = {"Colors": ["Blue", "Yellow"], "Days": ["Monday"]}, tag = ["Tastes Good"]) filtered[0]
DEMO.ipynb
anthonybu/api_sdk
mit
Categories Instantiate a BWCategories object by passing in your project as a parameter, which loads all of the categories in your project. Print out ids to see which categories are currently in your project.
categories = BWCategories(project) categories.ids
DEMO.ipynb
anthonybu/api_sdk
mit
Upload categories individually with upload(), or in bulk with upload_all(). If you are uploading many categories, it is more efficient to use upload_all(). For upload(), pass in name and children. name is the string which represents the parent category, and children is a list of dictionaries where each dictionary is a child category: its key is "name" and its value is the name of the child category. By default, a category will allow multiple subcategories to be applied, so the keyword argument "multiple" is set to True. You can manually set it to False by passing in multiple=False as another parameter when uploading a category. For upload_all(), pass in a list of dictionaries, where each dictionary corresponds to one category and contains the parameters described above. Let's upload a category and then check what's in the category.
categories.upload(name = "Droids", children = ["r2d2", "c3po"])
DEMO.ipynb
anthonybu/api_sdk
mit
Now let's upload a few categories and then check what parent categories are in the system
categories.upload_all([{"name":"month", "children":["January","February"]}, {"name":"Time of Day", "children":["morning", "evening"]}])
DEMO.ipynb
anthonybu/api_sdk
mit
To add children/subcategories, call upload() and pass in the parent category name and a list of the new subcategories to add. If you'd like to instead overwrite the existing subcategories with new subcategories, call upload() and pass in the parameter overwrite_children = True.
categories.upload(name = "Droids", children = ["bb8"])
DEMO.ipynb
anthonybu/api_sdk
mit
To rename a category, call rename(), with parameters name and new_name.
categories.rename(name = "month", new_name = "Months") categories.ids["Months"]
DEMO.ipynb
anthonybu/api_sdk
mit
You can delete categories either individually with delete(), or in bulk with delete_all(). You also have the option to delete the entire parent category or just some of the subcategories. To delete ALL CATEGORIES in a project, call clear_all_in_project with no parameters. Be careful with this one, and do not use it unless you want to delete all categories in the current project. First let's delete just some subcategories.
categories.delete({"name": "Months", "children":["February"]}) categories.delete_all([{"name": "Droids", "children": ["bb8", "c3po"]}]) categories.delete("Droids") categories.delete_all(["Months", "Time of Day"]) categories.ids
DEMO.ipynb
anthonybu/api_sdk
mit
Tags Instantiate a BWTags object by passing in your project as a parameter, which loads all of the tags in your project. Print out names to see which tags are currently in your project.
tags = BWTags(project) tags.names
DEMO.ipynb
anthonybu/api_sdk
mit
There are two ways to upload tags: individually and in bulk. When uploading many tags, it is more efficient to use upload_all. In upload, pass in the name of the tag. In upload_all, pass in a list of dictionaries, where each dictionary contains "name" as the key and the tag name as its value.
tags.upload(name = "yellow") tags.upload_all([{"name":"green"}, {"name":"blue"}, {"name":"purple"}]) tags.names
DEMO.ipynb
anthonybu/api_sdk
mit
To change the name of a tag but maintain its id, upload it with the keyword arguments name and new_name.
tags.upload(name = "yellow", new_name = "yellow-orange blend") tags.names
DEMO.ipynb
anthonybu/api_sdk
mit
As with categories, there are three ways of deleting tags. Delete one tag by calling delete and passing in a string, the name of the tag to delete. Delete multiple tags by calling delete_all and passing in a list of strings, where each string is the name of a tag to delete. To delete ALL TAGS in a project, call clear_all_in_project with no parameters. Be careful with this one, and do not use it unless you want to delete all tags in the current project.
tags.delete("purple") tags.delete_all(["blue", "green", "yellow-orange blend"]) tags.names
DEMO.ipynb
anthonybu/api_sdk
mit
Brandwatch Lists Note: to avoid ambiguity between the Python data type "list" and a Brandwatch author list, site list, or location list, the latter is referred to in this demo as a "Brandwatch List." BWAuthorLists, BWSiteLists, and BWLocationLists work almost identically. First, instantiate the object which contains the Brandwatch Lists in your project, with your project as the parameter. This will load the data from your project so you can see what's there, upload more Brandwatch Lists, edit existing Brandwatch Lists, and delete Brandwatch Lists from your project. Printing out ids will show you the Brandwatch Lists (by name and ID) that are currently in your project.
authorlists = BWAuthorLists(project) authorlists.names
DEMO.ipynb
anthonybu/api_sdk
mit
To upload a Brandwatch List, pass in a name as a string and the contents of your Brandwatch List as a list of strings. The keyword "authors" is used for BWAuthorLists, shown below; the keyword "domains" is used for BWSiteLists; the keyword "locations" is used for BWLocationLists. To see the contents of a Brandwatch List, call get with the name as the parameter. Uploading is done with either a POST call, for new Brandwatch Lists, or a PUT call, for existing Brandwatch Lists, where the ID of the Brandwatch List is maintained. So if you upload a list and then upload a list with the same name and different contents, the first upload will create a new Brandwatch List and the second will modify the existing list and keep its ID. Similarly, you can change the name of an existing Brandwatch List by passing in both "name" and "new_name".
authorlists.upload(name = "Writers", authors = ["Edward Albee", "Tenessee Williams", "Anna Deavere Smith"]) authorlists.get("Writers")["authors"] authorlists.upload(name = "Writers", new_name = "Playwrights", authors = ["Edward Albee", "Tenessee Williams", "Anna Deavere Smith", "Susan Glaspell"]) authorlists.get("Playwrights")["authors"]
DEMO.ipynb
anthonybu/api_sdk
mit
To add items to a Brandwatch List without reentering all of the existing items, call add_items
authorlists.add_items(name = "Playwrights", items = ["Eugene O'Neill"]) authorlists.get("Playwrights")["authors"]
DEMO.ipynb
anthonybu/api_sdk
mit
To delete a Brandwatch List, pass in its name. Note the ids before the Brandwatch List is deleted, compared to after it is deleted. The BWLists object is updated to reflect the Brandwatch Lists in the project after each upload and each delete
authorlists.names authorlists.delete("Playwrights") authorlists.names
DEMO.ipynb
anthonybu/api_sdk
mit
The only difference between how you use BWAuthorLists and how you use BWSiteLists and BWLocationLists is the parameter which is passed in. BWAuthorLists: authors = ["edward albee", "tenessee williams", "Anna Deavere Smith"] BWSiteLists: domains = ["github.com", "stackoverflow.com", "docs.python.org"] *BWLocationLists: locations = [{"id": "mai4", "name": "Maine", "type": "state", "fullName": "Maine, United States, North America"}, {"id": "verf", "name": "Vermont", "type": "state", "fullName": "Vermont, United States, North America"}, {"id": "rho4", "name": "Rhode Island", "type": "state", "fullName": "Rhode Island, United States, North America"} ] *Requires a dictionary of location data for each item instead of a string Rules Instantiate a BWRules object by passing in your project as a parameter, which loads all of the rules in your project. Print out names and IDs to see which rules are currently in your project.
rules = BWRules(project) rules.names
DEMO.ipynb
anthonybu/api_sdk
mit
Every rule must have a name, an action, and filters. The first step to creating a rule through the API is to prepare filters by calling filters(). If your desired rule applies to a query (or queries), include queryName as a filter and pass in a list of the queries you want to apply it to. There are over a hundred filters you can use to select only the mentions that qualify. See the full list in the file filters.py. Here, different filters are used, which take different data types; filters.py details which data type is used with each filter. Some filters, like sentiment and xprofession below, have a limited number of settings to choose from. You can filter many things by inclusion or exclusion - the x in xprofession stands for exclusion, for example. If you include search terms, be sure to use nested quotes - passing in "cat food" (without inner quotes) will result in a search that says cat food (i.e. cat AND food).
filters = rules.filters(queryName = "ice cream cake", sentiment = "positive", twitterVerified = False, impactMin = 50, xprofession = ["Politician", "Legal"]) filters = rules.filters(queryName = ["Australian Animals", "ice cream cake"], search = '"cat food" OR "dog food"')
DEMO.ipynb
anthonybu/api_sdk
mit
The second step is to prepare the rule action by calling rule_action(). For this function, you must pass in the action and setting. Below I've used examples of adding categories and tags, but you can also set sentiment or workflow (as in the front end). If you pass in a category or tag that does not yet exist, it will be automatically uploaded for you.
action = rules.rule_action(action = "addTag", setting = ["animal food"])
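A hedged sketch of the category version mentioned above, using the same dictionary format (a parent category mapped to a list of subcategories) that appears later in this demo; the variable and category names are illustrative:

# addCategories takes a dict mapping a parent category to a list of subcategories
category_action = rules.rule_action(action = "addCategories", setting = {"animal food": ["dog food", "cat food"]})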
DEMO.ipynb
anthonybu/api_sdk
mit
The last step is to upload! Pass in the name, filters, and action. Scope is optional - it will default to query if queryName is in the filters and otherwise be set to project. Backfill is also optional - it will default to False. The upload() function will automatically check the validity of your search string and give a helpful error message if errors are found.
rules.upload(name = "rule", scope = "query", filter = filters, ruleAction = action, backfill = True)
DEMO.ipynb
anthonybu/api_sdk
mit
You can also upload rules in bulk. Below we prepare a bunch of filters and actions at once.
filters1 = rules.filters(search = "caknvfoga;vnaei") filters2 = rules.filters(queryName = ["Australian Animals"], search = "(bloop NEAR/10 blorp)") filters3 = rules.filters(queryName = ["Australian Animals", "ice cream cake"], search = '"hello world"') action1 = rules.rule_action(action = "addCategories", setting = {"Example": ["One"]}) action2 = rules.rule_action(action = "addTag", setting = ["My Example"])
DEMO.ipynb
anthonybu/api_sdk
mit
When uploading in bulk, it is helpful (but not necessary) to use the rules() function before uploading in order to keep the dictionaries organized.
rule1 = rules.rule(name = "rule1", filter = filters1, action = action1, scope = "project") rule2 = rules.rule(name = "rule2", filter = filters2, action = action2) rule3 = rules.rule(name = "rule3", filter = filters3, action = action1, backfill = True) rules.upload_all([rule1, rule2, rule3])
DEMO.ipynb
anthonybu/api_sdk
mit
As with other resources, we can delete, delete_all or clear_all_in_project
rules.delete(name = "rule") rules.delete_all(names = ["rule1", "rule2", "rule3"]) rules.names
DEMO.ipynb
anthonybu/api_sdk
mit
Signals Instantiate a BWSignals object by passing in your project as a parameter, which loads all of the signals in your project. Print out names to see which signals are currently in your project.
signals = BWSignals(project) signals.names
DEMO.ipynb
anthonybu/api_sdk
mit
Again, we can upload signals individually or in batch. You must pass at least a name, queries (list of queries you'd like the signal to apply to) and subscribers. For each subscriber, you have to pass both an emailAddress and notificationThreshold. The notificationThreshold will be a number 1, 2 or 3 - where 1 means send all notifications and 3 means send only high priority signals. Optionally, you can also pass in categories or tags to filter by. As before, you can filter by an entire category with the keyword parentCategory or just a subcategory (or list of subcategories) with the keyword category. An example of how to pass in each filter is shown below.
signals.upload(name= "New Test", queries= ["ice cream cake"], parentCategory = ["Colors"], subscribers= [{"emailAddress": "[email protected]", "notificationThreshold": 1}]) signals.upload_all([{"name": "Signal Me", "queries": ["ice cream cake"], "category": {"Colors": ["Blue", "Yellow"]}, "subscribers": [{"emailAddress": "[email protected]", "notificationThreshold": 3}]}, {"name": "Signal Test", "queries": ["ice cream cake"], "tag": ["Tastes Good"], "subscribers": [{"emailAddress": "[email protected]", "notificationThreshold": 2}]}]) signals.names
DEMO.ipynb
anthonybu/api_sdk
mit
Signals can be deleted individually or in bulk.
signals.delete("New Test") signals.delete_all(["Signal Me", "Signal Test"]) signals.names
DEMO.ipynb
anthonybu/api_sdk
mit
Patching Mentions To patch the metadata on mentions, whether those mentions come from queries or from groups, you must first instantiate a BWMentions object and pass in your project as a parameter.
mentions = BWMentions(project) filtered = queries.get_mentions(name = "ice cream cake", startDate = start, endDate = today, parentCategory = ["Colors", "Days"], category = {"Colors": ["Blue", "Yellow"], "Days": ["Monday"]}, tag = ["Tastes Good"])
DEMO.ipynb
anthonybu/api_sdk
mit
If you don't want to upload your tags and categories ahead of time, you don't have to! BWMentions will do that for you, but if there are a lot of different tags/categories, it's definitely more efficient to upload them in bulk ahead of time. For this example, I'm arbitrarily patching a few of the mentions, rather than all of them.
mentions.patch_mentions(filtered[0:10], action = "addTag", setting = ["cold"]) mentions.patch_mentions(filtered[5:12], action = "starred", setting = True) mentions.patch_mentions(filtered[6:8], action = "addCategories", setting = {"color":["green", "blue"]})
DEMO.ipynb
anthonybu/api_sdk
mit
Inheritance
# Creating the Animal class - superclass class Animal(): def __init__(self): print("Animal criado") def Identif(self): print("Animal") def comer(self): print("Comendo") # Creating the Cachorro class - subclass class Cachorro(Animal): def __init__(self): Animal.__init__(self) print("Objeto Cachorro criado") def Identif(self): print("Cachorro") def latir(self): print("Au Au!") # Creating an object (instantiating the class) rex = Cachorro() # Calling the Cachorro (subclass) method rex.Identif() # Calling the Animal (superclass) method rex.comer() # Calling the Cachorro (subclass) method rex.latir()
Cap05/Notebooks/DSA-Python-Cap05-04-Heranca.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
1. Introduction In this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem. 1.1. Load Image Several images are provided with this notebook: BinarySeeds.png birds.jpg blood_frog_1.jpg cKyDP.jpg Matricula.jpg Matricula2.jpg Seeds.png Select image birds.jpg from file and plot it in grayscale
name = "birds.jpg" name = "Seeds.jpg" birds = imread("Images/" + name) birdsG = np.sum(birds, axis=2) plt.imshow(birdsG, cmap=plt.get_cmap('gray')) plt.grid(False) plt.axis('off') plt.show()
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
2. Thresholding Select an intensity threshold by manual inspection of the image histogram
plt.hist(birdsG.ravel(), bins=256) plt.show()
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
Plot the binary image after thresholding.
if name == "birds.jpg": th = 256 elif name == "Seeds.jpg": th = 650 birdsBN = birdsG > th # If there are more white than black pixels, reverse the image if np.sum(birdsBN) > float(np.prod(birdsBN.shape)/2): birdsBN = 1-birdsBN plt.imshow(birdsBN, cmap=plt.get_cmap('gray')) plt.grid(False) plt.axis('off') plt.show()
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
3. Dataset generation Extract pixel coordinates dataset from image
(h, w) = birdsBN.shape bW = birdsBN * range(w) bH = birdsBN * np.array(range(h))[:,np.newaxis] pSet = [t for t in zip(bW.ravel(), bH.ravel()) if t!=(0,0)] X = np.array(pSet) print X plt.scatter(X[:, 0], X[:, 1], s=5); plt.axis('equal') plt.show()
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
4. k-means clustering algorithm
from sklearn.cluster import KMeans est = KMeans(50) # 50 clusters est.fit(X) y_kmeans = est.predict(X) plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0) plt.axis('equal') plt.show()
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
5. Spectral clustering algorithm 5.1. Affinity matrix Compute and visualize the affinity matrix
from sklearn.metrics.pairwise import rbf_kernel gamma = 5 sf = 4 Xsub = X[0::sf] print Xsub.shape gamma = 0.001 K = rbf_kernel(Xsub, Xsub, gamma=gamma) plt.imshow(K, cmap='hot') plt.colorbar() plt.title('RBF Affinity Matrix for gamma = ' + str(gamma)) plt.grid('off') plt.show() from sklearn.cluster import SpectralClustering spc = SpectralClustering(n_clusters=50, gamma=gamma, affinity='rbf') y_kmeans = spc.fit_predict(Xsub) plt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0) plt.axis('equal') plt.show() print X[:,1]
U_lab1.Clustering/Lab_ShapeSegmentation_draft/LabSessionClustering.ipynb
ML4DS/ML4all
mit
Example 9-2: "Coefficient (slope) dummies". We set up the following model and run a regression analysis: $Y_{i} = \alpha + \beta X_{i} + \gamma D_{i} + \delta D_{i} X_{i} + u_{i}$
# Load data data = pd.read_csv('example/k0902.csv') data # Set explanatory variables X = data[['X', 'D', 'DX']] X = sm.add_constant(X) X # Set dependent variable Y = data['Y'] Y # Run OLS (Ordinary Least Squares) model = sm.OLS(Y,X) results = model.fit() print(results.summary()) # Data split by dummy value data_d0 = data[data["D"] == 0] data_d1 = data[data["D"] == 1] # Generate plot plt.plot(data["X"], data["Y"], 'o', label="data") plt.plot(data_d0.X, results.fittedvalues[data_d0.index], label="D=0") plt.plot(data_d1.X, results.fittedvalues[data_d1.index], label="D=1") plt.xlim(min(data["X"])-1, max(data["X"])+1) plt.ylim(min(data["Y"])-1, max(data["Y"])+1) plt.title('9-2: Dummy Variable') plt.legend(loc=2) plt.show()
Dummy.ipynb
ogaway/Econometrics
gpl-3.0
Example 9-3: Testing for structural change with a t-test. In Example 9-2, the p-value for $\gamma = 0$ is 0.017 and the p-value for $\delta = 0$ is 0.003, so at any standard significance level both dummy variables are significant. Example 9-4: Testing for structural change with an F-test.
# Build the OLS model without the dummy variables X = data[['X']] X = sm.add_constant(X) model2 = sm.OLS(Y,X) results2 = model2.fit() # anova (Analysis of Variance) print(sm.stats.anova_lm(results2, results))
Dummy.ipynb
ogaway/Econometrics
gpl-3.0
Import non-standard libraries (install as needed)
from osgeo import ogr,osr import folium import simplekml
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Optional directory creation
from os import makedirs, chdir from os.path import exists if not exists('./data'): makedirs('./data') chdir("./data")
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Is the ESRI Shapefile driver available?
driverName = "ESRI Shapefile" drv = ogr.GetDriverByName( driverName ) if drv is None: print "%s driver not available.\n" % driverName else: print "%s driver IS available.\n" % driverName
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Define a function which will create a shapefile from the points input and export it as kml if the option is set to True.
def shpFromPoints(filename, layername, points, save_kml = True): spatialReference = osr.SpatialReference() spatialReference.ImportFromProj4('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs') ds = drv.CreateDataSource(filename) layer=ds.CreateLayer(layername, spatialReference, ogr.wkbPoint) layerDefinition = layer.GetLayerDefn() point = ogr.Geometry(ogr.wkbPoint) feature = ogr.Feature(layerDefinition) kml = simplekml.Kml() for i, value in enumerate(points): point.SetPoint(0,value[0], value[1]) feature.SetGeometry(point) layer.CreateFeature(feature) kml.newpoint(name=str(i), coords = [(value[0],value[1])]) ds.Destroy() if save_kml == True: kml.save("my_points2.kml")
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Define the file and layer name as well as the points to be mapped.
filename = "wageningenpoints.shp" layername = "wagpoints" pts = [(5.665777,51.987398), (5.663133,51.978434)] shpFromPoints(filename, layername, pts)
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Define a function to create a nice map with the points using the folium library.
def mapFromPoints(pts, outname, zoom_level, save = True): mean_long = mean([pt[1] for pt in pts]) mean_lat = mean([pt[0] for pt in pts]) point_map = folium.Map(location=[mean_long, mean_lat], zoom_start = zoom_level) for pt in pts: folium.Marker([pt[1], pt[0]],\ popup = folium.Popup(folium.element.IFrame( html=''' <b>Latitude:</b> {lat}<br> <b>Longitude:</b> {lon}<br> '''.format(lat = pt[1], lon = pt[0]),\ width=150, height=100),\ max_width=150)).add_to(point_map) if save == True: point_map.save("{}.html".format(outname)) return point_map
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Call the function, specifying the list of points, the output map name and its zoom level. If save is not set to False, the map is saved as an HTML file.
mapFromPoints(pts, "SatoshiNakamotoMap", zoom_level = 15)
Lecture 11/.ipynb_checkpoints/Satoshi Nakamoto Lecture 11 Jupyter Notebook-checkpoint.ipynb
SatoshiNakamotoGeoscripting/SatoshiNakamotoGeoscripting
mit
Introduction The Corpus Callosum (CC) is the largest white matter structure in the central nervous system; it connects both brain hemispheres and allows communication between them. The CC has great importance in research studies due to the correlation of its shape and volume with some subject characteristics, such as gender, age, numeric and mathematical skills, and handedness. In addition, some neurodegenerative diseases like Alzheimer's, autism, schizophrenia and dyslexia can cause CC shape deformation. CC segmentation is a necessary step for morphological and physiological feature extraction in order to analyze the structure in image-based clinical and research applications. Magnetic Resonance Imaging (MRI) is the most suitable imaging technique for CC segmentation due to its ability to provide contrast between brain tissues; however, CC segmentation is challenging because of shape and intensity variability between subjects, the partial volume effect in diffusion MRI, fornix proximity and narrow areas in the CC. Among the known MRI modalities, Diffusion-MRI arouses special interest for studying the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself. Some CC segmentation approaches using Diffusion-MRI were found in the literature. Niogi et al. proposed a method based on thresholding, Freitas et al. and Rittner et al. proposed region methods based on the Watershed transform, Nazem-Zadeh et al. implemented a method based on level surfaces, Kong et al. presented a clustering algorithm for segmentation, Herrera et al. segmented the CC directly in diffusion weighted imaging (DWI) using a model based on pixel classification, and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces. With the growth of data and the proliferation of automatic algorithms, segmentation over large databases is affordable. Therefore, automatic error detection is important in order to facilitate and speed up filtering of CC segmentation databases. Proposals for content-based image retrieval (CBIR) using the shape signature of a planar object representation have been presented. In this work, a method for automatic detection of segmentation errors in large datasets is proposed, based on the CC shape signature. The signature offers a shape characterization of the CC, and therefore it is expected that a "typical correct signature" represents well any correct segmentation. The signature is extracted by measuring curvature along the segmentation contour. The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first stage takes 20 correct segmentations and generates one correct signature of reference (the typical correct signature), per resolution, using mean values at each point. The second stage takes 10 correct segmentations and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on the mean correct signature, that allows detection of erroneous segmentations. The third stage labels a new segmentation as correct or erroneous by comparing it with the mean signature using the optimal resolution and threshold. <img src="../figures/workflow.png"> The comparison between signatures is done using root mean square error (RMSE). The true label for each segmentation was assigned visually: a correct segmentation corresponds to a segmentation with at least 50% agreement with the structure.
It is expected that the RMSE for correct segmentations is lower than the RMSE associated with erroneous segmentations when compared with a typical correct segmentation.
#Loading labeled segmentations seg_label = genfromtxt('../../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8') list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels ind_ex_err = list_masks[np.where(list_labels)[0]] ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]] print "Mask List", list_masks print "Label List", list_labels print "Correct List", ind_ex_cor print "Erroneous List", ind_ex_err mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_cor[10])) mask_error = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_err[10])) plt.figure() plt.axis('off') plt.imshow(mask_correct,'gray',interpolation='none') plt.title("Correct segmentation example") plt.show() plt.figure() plt.axis('off') plt.imshow(mask_error,'gray',interpolation='none') plt.title("Erroneous segmentation example") plt.show()
dev/Autoencoderxclass.ipynb
wilomaku/IA369Z
gpl-3.0
Shape signature for comparison The signature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in the figure, the curvature $k$ at the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the equation below. This curvature depicts the angle between the segments $\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments span a distance $ls>0$, starting at the pivot point and ending at the anterior and posterior points, respectively. The signature is obtained by calculating the curvature along the whole segmentation contour. \begin{equation} \label{eq:per1} k(x_p,y_p) = \arctan\left(\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\right)-\arctan\left(\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\right) \end{equation} <img src="../figures/curvature.png"> Signature construction starts from the segmentation contour of the CC. From the contour, a spline is obtained. The spline's purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of the curvature using its parametric representation. The signature is obtained by measuring curvature along the spline. $ls$ is the parametric distance between the pivot point and both the posterior and anterior points, and it determines the signature resolution. For simplicity, $ls$ is measured as a percentage of the reconstructed spline points. In order to achieve a quantitative comparison between two signatures, the root mean square error (RMSE) is introduced. RMSE measures the distance, point to point, between signatures $a$ and $b$ along all points $p$ of the signatures. \begin{equation} \label{eq:per4} RMSE = \sqrt{\frac{1}{P}\sum_{p=1}^{P}(k_{ap}-k_{bp})^2} \end{equation} Frequently, signatures of different segmentations are not aligned along the 'x' axis because the initial point of the spline calculation starts at different relative positions. This makes it impossible to compare two signatures directly, and therefore a prior fitting process must be performed. The fitting process is done by shifting one of the signatures while the other is kept fixed; for each shift, the RMSE between the two signatures is measured, and the shift giving the smallest error is the fitting point. Fitting was done at resolution $ls = 0.35$; this resolution represents the CC's shape globally and eases the fitting. After fitting, the RMSE between signatures can be measured in order to achieve the final quantitative comparison. Signature for segmentation error detection For segmentation error detection, a typical correct signature is obtained by calculating the mean over a group of signatures from correct segmentations. Because this signature can be used at any resolution, $ls$ must be chosen to achieve segmentation error detection. The optimal resolution must return the greatest RMSE difference between correct and erroneous segmentations when compared with the typical correct signature. At the optimal resolution, a threshold must be chosen to separate erroneous from correct segmentations. This threshold lies between the RMSE associated with correct ($RMSE_C$) and erroneous ($RMSE_E$) signatures and is given by the equation below, where N (a percentage) represents the proximity to the correct or erroneous RMSE. If RMSE is calculated over a group of signatures, the mean value is used. \begin{equation} \label{eq:eq3} th = N*(\overline{RMSE_E}-\overline{RMSE_C})+\overline{RMSE_C} \end{equation} Experiments and results In this work, the comparison of signatures through RMSE is used for segmentation error detection in large datasets.
For this, a mean correct signature will be calculated from 20 correct segmentation signatures. This mean correct signature represents a typical correct segmentation. For a new segmentation, the signature is extracted and compared with the mean signature. For the experiments, DWI from 152 subjects at the University of Campinas were acquired on a Philips Achieva 3T scanner in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment were acquired through a project approved by the research ethics committee of the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsagittal slice was used. Three segmentation methods were implemented to obtain binary masks over the 152-subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation, and 10 correct and 10 erroneous segmentations for the signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and the variability in the shape of its erroneous segmentations; these characteristics help improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and two additional segmentation methods: ROQS (152 masks) and pixel-based (152 masks). Mean correct signature generation In this work, segmentations based on the Watershed method were used for the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen. A spline for each one was obtained from the segmentation contour. The contour was obtained using mathematical morphology, applying the xor logical operation, pixel-wise, between the original segmentation and its eroded version by a structuring element b: \begin{equation} \label{eq:per2} G_E = XOR(S,S \ominus b) \end{equation} From the contour, the spline is calculated. The implementation is a B-spline (de Boor's basic spline). This formulation has two parameters: degree, the polynomial degree of the spline, and smoothness, the trade-off between proximity and smoothness in the fit of the spline. The degree was fixed at 5, allowing adequate representation of the contour. Smoothness was fixed at 700; this value is based on the mean number of contour pixels passed to the spline calculation. The curvature was measured at 500 points along the spline to generate the signature for the 20 segmentations. Signatures were fitted to make comparison possible (Fig. signatures). The fitting resolution was fixed at 0.35. In order to get a representative correct signature, a mean signature per resolution is generated using the 20 correct signatures; the mean is calculated at each point. Signature configuration Because the mean signature was extracted for all resolutions, it is necessary to find the resolution at which the difference between the RMSE for correct signatures and the RMSE for erroneous signatures is maximal. So, 20 new segmentations were used to find this optimal resolution, divided into 10 correct and 10 erroneous segmentations. For each segmentation, the signature was extracted for all resolutions.
smoothness = 700 #Smoothness degree = 5 #Spline degree fit_res = 0.35 resols = np.arange(0.01,0.5,0.01) #Signature resolutions resols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting points = 500 #Points of Spline reconstruction prof_vec = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector for ind, mask in enumerate(list_masks): #Loading correct mask mask_pn = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask)) refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction prof_vec[ind] = refer_temp if mask > 0: #Fitting curves using the first one as basis prof_ref = prof_vec[0] prof_vec[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting ind_rel_cor = np.where(np.logical_not(list_labels))[0] ind_rel_err = np.where(list_labels)[0] print "Correct segmentations' vector: ", prof_vec[ind_rel_cor].shape print "Erroneous segmentations' vector: ", prof_vec[ind_rel_err].shape print(ind_rel_cor.shape) print(ind_ex_cor.shape) res_ex = 15 #for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor): # plt.figure() # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5)) # ax1.plot(prof_vec[ind_rel,res_ex,:].T) # ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex])) # # mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex)) # ax2.axis('off') # ax2.imshow(mask_correct,'gray',interpolation='none') # # plt.show() plt.figure() plt.plot(prof_vec[ind_rel_cor,res_ex,:].T) plt.title("Correct signatures for res: %f"%(resols[res_ex])) plt.show() plt.figure() plt.plot(prof_vec[ind_rel_err,res_ex,:].T) plt.title("Erroneous signatures for res: %f"%(resols[res_ex])) plt.show()
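The signature extraction and fitting above rely on the helper functions sign_extract and sign_fit, which are defined elsewhere in this repository. Purely as a hedged sketch of the formulas in the text (not the notebook's actual implementation), the curvature signature, the RMSE comparison and the decision threshold could be written with NumPy/SciPy roughly as follows; the function and argument names are illustrative:

import numpy as np
from scipy import interpolate

def curvature_signature(contour_xy, ls_frac, points=500, degree=5, smooth=700):
    # Hedged sketch: fit a periodic B-spline to the (N, 2) contour and evaluate
    # the curvature k(p) of eq. (per1) at 'points' samples, with ls given as a
    # fraction of the reconstructed spline points (arctan2 is used instead of
    # arctan to keep the angles quadrant-aware).
    tck, _ = interpolate.splprep([contour_xy[:, 0], contour_xy[:, 1]],
                                 k=degree, s=smooth, per=True)
    u = np.linspace(0, 1, points, endpoint=False)
    x, y = interpolate.splev(u, tck)
    ls = max(1, int(ls_frac * points))
    x_post, y_post = np.roll(x, -ls), np.roll(y, -ls)   # posterior point
    x_ant, y_ant = np.roll(x, ls), np.roll(y, ls)       # anterior point
    return np.arctan2(y_post - y, x_post - x) - np.arctan2(y - y_ant, x - x_ant)

def rmse(sig_a, sig_b):
    # Point-to-point root mean square error between two fitted signatures (eq. per4)
    return np.sqrt(np.mean((sig_a - sig_b) ** 2))

def threshold(rmse_correct_mean, rmse_error_mean, N):
    # Decision threshold between mean correct and mean erroneous RMSE (eq. eq3)
    return N * (rmse_error_mean - rmse_correct_mean) + rmse_correct_mean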
dev/Autoencoderxclass.ipynb
wilomaku/IA369Z
gpl-3.0
Autoencoder
def train(model,train_loader,loss_fn,optimizer,epochs=100,patience=5,criteria_stop="loss"): hist_train_loss = hist_val_loss = hist_train_acc = hist_val_acc = np.array([]) best_epoch = patience_count = 0 print("Training starts along %i epoch"%epochs) for e in range(epochs): correct_train = correct_val = total_train = total_val = 0 cont_i = loss_t_e = loss_v_e = 0 for data_train in train_loader: var_inputs = Variable(data_train) predict, encode = model(var_inputs) loss = loss_fn(predict, var_inputs.view(-1, 500)) loss_t_e += loss.data[0] optimizer.zero_grad() loss.backward() optimizer.step() cont_i += 1 #Stacking historical hist_train_loss = np.hstack((hist_train_loss, loss_t_e/(cont_i*1.0))) print('Epoch: ', e, 'train loss: ', hist_train_loss[-1]) if(e == epochs-1): best_epoch = e best_model = copy.deepcopy(model) print("Training stopped") patience_count += 1 return(best_model, hist_train_loss, hist_val_loss) class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() self.fc1 = nn.Linear(500, 200) self.fc21 = nn.Linear(200, 2) self.fc3 = nn.Linear(2, 200) self.fc4 = nn.Linear(200, 500) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def encode(self, x): h1 = self.relu(self.fc1(x)) return self.fc21(h1) def decode(self, z): h3 = self.relu(self.fc3(z)) return self.sigmoid(self.fc4(h3)) def forward(self, x): z = self.encode(x.view(-1, 500)) return self.decode(z), z class decoder(nn.Module): def __init__(self): super(decoder, self).__init__() self.fc3 = nn.Linear(2, 200) self.fc4 = nn.Linear(200, 500) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def decode(self, z): h3 = self.relu(self.fc3(z)) return self.sigmoid(self.fc4(h3)) def forward(self, x): return self.decode(x.view(-1, 2)) net = autoencoder() print(net) res_chs = res_ex trainloader = prof_vec[:,res_chs,:] val_norm = np.amax(trainloader).astype(float) print val_norm trainloader = trainloader / val_norm trainloader = torch.FloatTensor(trainloader) print trainloader.size() loss_fn = torch.nn.MSELoss() optimizer = torch.optim.Adam(net.parameters()) epochs = 20 patience = 5 max_batch = 64 criteria = "loss" best_model, loss, loss_test = train(net, trainloader, loss_fn, optimizer, epochs = epochs, patience = patience, criteria_stop = criteria) plt.title('Loss') plt.xlabel('epochs') plt.ylabel('loss') plt.plot(loss, label='Train') plt.legend() plt.show() decode, encode = net(Variable(trainloader)) out_decod = decode.data.numpy() out_encod = encode.data.numpy() print(out_decod.shape, out_encod.shape, list_labels.shape) plt.figure(figsize=(7, 6)) plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels) plt.show()
dev/Autoencoderxclass.ipynb
wilomaku/IA369Z
gpl-3.0
Testing in new datasets ROQS test
#Loading labeled segmentations seg_label = genfromtxt('../../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8') list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels ind_ex_err = list_masks[np.where(list_labels)[0]] ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]] prof_vec_roqs = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector for ind, mask in enumerate(list_masks): mask_pn = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask)) #Loading mask refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction prof_vec_roqs[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis ind_rel_cor = np.where(np.logical_not(list_labels))[0] ind_rel_err = np.where(list_labels)[0] print "Correct segmentations' vector: ", prof_vec_roqs[ind_rel_cor].shape print "Erroneous segmentations' vector: ", prof_vec_roqs[ind_rel_err].shape #for ind_ex, ind_rel in zip(ind_ex_err, ind_rel_err): # plt.figure() # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5)) # ax1.plot(prof_vec_roqs[ind_rel,res_ex,:].T) # ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex])) # # mask_correct = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(ind_ex)) # ax2.axis('off') # ax2.imshow(mask_correct,'gray',interpolation='none') # # plt.show() plt.figure() plt.plot(prof_vec_roqs[ind_rel_cor,res_ex,:].T) plt.title("Correct signatures for res: %f"%(resols[res_ex])) plt.show() plt.figure() plt.plot(prof_vec_roqs[ind_rel_err,res_ex,:].T) plt.title("Erroneous signatures for res: %f"%(resols[res_ex])) plt.show() trainloader = prof_vec_roqs[:,res_chs,:] trainloader = trainloader / val_norm trainloader = torch.FloatTensor(trainloader) print trainloader.size() decode, encode = net(Variable(trainloader)) out_decod = decode.data.numpy() out_encod = encode.data.numpy() print(out_decod.shape, out_encod.shape, list_labels.shape) plt.figure(figsize=(7, 6)) plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels) plt.show()
dev/Autoencoderxclass.ipynb
wilomaku/IA369Z
gpl-3.0
Pixel-based test
#Loading labeled segmentations seg_label = genfromtxt('../../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8') list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels ind_ex_err = list_masks[np.where(list_labels)[0]] ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]] prof_vec_pixe = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector for ind, mask in enumerate(list_masks): mask_pn = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask)) #Loading mask refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction prof_vec_pixe[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis ind_rel_cor = np.where(np.logical_not(list_labels))[0] ind_rel_err = np.where(list_labels)[0] print "Correct segmentations' vector: ", prof_vec_pixe[ind_rel_cor].shape print "Erroneous segmentations' vector: ", prof_vec_pixe[ind_rel_err].shape #for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor): # plt.figure() # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5)) # ax1.plot(prof_vec_pixe[ind_rel,res_ex,:].T) # ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex])) # # mask_correct = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(ind_ex)) # ax2.axis('off') # ax2.imshow(mask_correct,'gray',interpolation='none') # # plt.show() plt.figure() plt.plot(prof_vec_pixe[ind_rel_cor,res_ex,:].T) plt.title("Correct signatures for res: %f"%(resols[res_ex])) plt.show() plt.figure() plt.plot(prof_vec_pixe[ind_rel_err,res_ex,:].T) plt.title("Erroneous signatures for res: %f"%(resols[res_ex])) plt.show() trainloader = prof_vec_pixe[:,res_chs,:] trainloader = trainloader / val_norm trainloader = torch.FloatTensor(trainloader) print trainloader.size() decode, encode = net(Variable(trainloader)) out_decod = decode.data.numpy() out_encod = encode.data.numpy() print(out_decod.shape, out_encod.shape, list_labels.shape) plt.figure(figsize=(7, 6)) plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels) plt.show()
dev/Autoencoderxclass.ipynb
wilomaku/IA369Z
gpl-3.0
Declaring a pre-processing configuration The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
import crowdtruth from crowdtruth.configuration import DefaultConfig
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type Annotation in Video task: inputColumns: list of input columns from the .csv file with the input data outputColumns: list of output columns from the .csv file with the answers from the workers annotation_separator: string that separates between the crowd annotations in outputColumns open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of person types processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector The complete configuration class is declared below:
class TestConfig(DefaultConfig): inputColumns = ["videolocation", "subtitles", "imagetags", "subtitletags"] outputColumns = ["selected_answer"] # processing of a closed task open_ended_task = False annotation_vector = ["archeologist", "architect", "artist", "astronaut", "athlete", "businessperson","celebrity", "chef", "criminal", "engineer", "farmer", "fictionalcharacter", "journalist", "judge", "lawyer", "militaryperson", "model", "monarch", "philosopher", "politician", "presenter", "producer", "psychologist", "scientist", "sportsmanager", "writer", "none", "other"] def processJudgments(self, judgments): # pre-process output to match the values in annotation_vector for col in self.outputColumns: # transform to lowercase judgments[col] = judgments[col].apply(lambda x: str(x).lower()) # remove square brackets from annotations judgments[col] = judgments[col].apply(lambda x: str(x).replace('[','')) judgments[col] = judgments[col].apply(lambda x: str(x).replace(']','')) # remove the quotes around the annotations judgments[col] = judgments[col].apply(lambda x: str(x).replace('"','')) return judgments
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Pre-processing the input data After declaring the configuration of our input file, we are ready to pre-process the crowd data:
data, config = crowdtruth.load( file = "../data/person-video-multiple-choice.csv", config = TestConfig() ) data['judgments'].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Computing the CrowdTruth metrics The pre-processed data can then be used to calculate the CrowdTruth metrics:
results = crowdtruth.run(data, config)
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Video fragment quality The video fragments metrics are stored in results["units"]. The uqs column in results["units"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment.
results["units"].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Distribution of video fragment quality scores The histogram below shows video fragment quality scores are nicely distributed, with both low and high quality video fragments.
import matplotlib.pyplot as plt %matplotlib inline plt.hist(results["units"]["uqs"]) plt.xlabel("Video Fragment Quality Score") plt.ylabel("Video Fragments")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
results["units"]["unit_annotation_score"].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Ambiguous video fragments A low unit quality score can be used to identify ambiguous video fragments. First, we sort the unit quality metrics stored in results["units"] based on the quality score (uqs), in ascending order. Thus, the most clear video fragments are found at the tail of the new structure:
results["units"].sort_values(by=["uqs"])[["input.videolocation", "uqs", "unit_annotation_score"]].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Below we show an example video fragment with low quality score, where workers couldn't agree on what annotation best describes the person in the video. The role of the person in the video is not directly specified, so the workers made assumptions based on the topic of discussion.
from IPython.display import HTML print(results["units"].sort_values(by=["uqs"])[["uqs"]].iloc[0]) print("\n") print("Person types picked for the video below:") for k, v in results["units"].sort_values(by=["uqs"])[["unit_annotation_score"]].iloc[0]["unit_annotation_score"].items(): if v > 0: print(str(k) + " : " + str(v)) vid_url = list(results["units"].sort_values(by=["uqs"])[["input.videolocation"]].iloc[0]) HTML("<video width='320' height='240' controls><source src=" + vid_url[0] + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Unambiguous video fragments Similarly, a high unit quality score represents lack of ambiguity of the video fragment.
results["units"].sort_values(by=["uqs"], ascending=False)[["input.videolocation", "uqs", "unit_annotation_score"]].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Below we show an example unambiguous video fragment - no person appears in the video, so most workers picked the none option in the crowd task.
print(results["units"].sort_values(by=["uqs"], ascending=False)[["uqs"]].iloc[0]) print("\n") print("Person types picked for the video below:") for k, v in results["units"].sort_values(by=["uqs"], ascending=False)[["unit_annotation_score"]].iloc[0]["unit_annotation_score"].items(): if v > 0: print(str(k) + " : " + str(v)) vid_url = list(results["units"].sort_values(by=["uqs"], ascending=False)[["input.videolocation"]].iloc[0]) HTML("<video width='320' height='240' controls><source src=" + vid_url[0] + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Worker Quality Scores The worker metrics are stored in results["workers"]. The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
results["workers"].head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Distribution of worker quality scores The histogram below shows the worker quality scores are distributed across a wide spectrum, from low to high quality workers.
plt.hist(results["workers"]["wqs"]) plt.xlabel("Worker Quality Score") plt.ylabel("Workers")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Low quality workers Low worker quality scores can be used to identify spam workers, or workers that have misunderstood the annotation task.
results["workers"].sort_values(by=["wqs"]).head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Example annotations from low quality worker 44606916 (with the second lowest quality score) for video fragment 1856509900:
import operator

work_id = results["workers"].sort_values(by=["wqs"]).index[1]
work_units = results["judgments"][results["judgments"]["worker"] == work_id]["unit"]
work_judg = results["judgments"][results["judgments"]["unit"] == work_units.iloc[0]]

print("JUDGMENTS OF LOW QUALITY WORKER %d FOR VIDEO %d:" % (work_id, work_units.iloc[0]))
for k, v in work_judg[work_judg["worker"] == work_id]["output.selected_answer"].iloc[0].items():
    if v > 0:
        print(str(k) + " : " + str(v))

print("\nALL JUDGMENTS FOR VIDEO %d" % work_units.iloc[0])
sorted_judg = sorted(
    results["units"]["output.selected_answer"][work_units.iloc[0]].items(),
    key=operator.itemgetter(1), reverse=True)
for k, v in sorted_judg:
    if v > 0:
        print(str(k) + " : " + str(v))

vid_url = results["units"]["input.videolocation"][work_units.iloc[0]]
HTML("<video width='320' height='240' controls><source src=" + str(vid_url) + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Example annotations from the same low quality worker (44606916) for a second video fragment (1856509903):
work_judg = results["judgments"][results["judgments"]["unit"] == work_units.iloc[1]]

print("JUDGMENTS OF LOW QUALITY WORKER %d FOR VIDEO %d:" % (work_id, work_units.iloc[1]))
for k, v in work_judg[work_judg["worker"] == work_id]["output.selected_answer"].iloc[0].items():
    if v > 0:
        print(str(k) + " : " + str(v))

# Note: the label below now refers to the second video fragment (iloc[1]),
# matching the judgments that are actually printed.
print("\nALL JUDGMENTS FOR VIDEO %d" % work_units.iloc[1])
sorted_judg = sorted(
    results["units"]["output.selected_answer"][work_units.iloc[1]].items(),
    key=operator.itemgetter(1), reverse=True)
for k, v in sorted_judg:
    if v > 0:
        print(str(k) + " : " + str(v))

vid_url = results["units"]["input.videolocation"][work_units.iloc[1]]
HTML("<video width='320' height='240' controls><source src=" + str(vid_url) + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
High quality workers High worker quality scores can be used to identify reliable workers.
results["workers"].sort_values(by=["wqs"], ascending=False).head()
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Example annotations from worker 6432269 (with the highest worker quality score) for video fragment 1856509904:
work_id = results["workers"].sort_values(by=["wqs"], ascending=False).index[0]
work_units = results["judgments"][results["judgments"]["worker"] == work_id]["unit"]
work_judg = results["judgments"][results["judgments"]["unit"] == work_units.iloc[0]]

print("JUDGMENTS OF HIGH QUALITY WORKER %d FOR VIDEO %d:" % (work_id, work_units.iloc[0]))
for k, v in work_judg[work_judg["worker"] == work_id]["output.selected_answer"].iloc[0].items():
    if v > 0:
        print(str(k) + " : " + str(v))

# Note: the label below now refers to the first video fragment (iloc[0]),
# matching the judgments that are actually printed.
print("\nALL JUDGMENTS FOR VIDEO %d" % work_units.iloc[0])
sorted_judg = sorted(
    results["units"]["output.selected_answer"][work_units.iloc[0]].items(),
    key=operator.itemgetter(1), reverse=True)
for k, v in sorted_judg:
    if v > 0:
        print(str(k) + " : " + str(v))

vid_url = results["units"]["input.videolocation"][work_units.iloc[0]]
HTML("<video width='320' height='240' controls><source src=" + str(vid_url) + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Example annotations from worker 6432269 (with the highest worker quality score) for video fragment 1856509908:
work_id = results["workers"].sort_values(by=["wqs"], ascending=False).index[0]
work_units = results["judgments"][results["judgments"]["worker"] == work_id]["unit"]
work_judg = results["judgments"][results["judgments"]["unit"] == work_units.iloc[1]]

print("JUDGMENTS OF HIGH QUALITY WORKER %d FOR VIDEO %d:" % (work_id, work_units.iloc[1]))
for k, v in work_judg[work_judg["worker"] == work_id]["output.selected_answer"].iloc[0].items():
    if v > 0:
        print(str(k) + " : " + str(v))

print("\nALL JUDGMENTS FOR VIDEO %d" % work_units.iloc[1])
sorted_judg = sorted(
    results["units"]["output.selected_answer"][work_units.iloc[1]].items(),
    key=operator.itemgetter(1), reverse=True)
for k, v in sorted_judg:
    if v > 0:
        print(str(k) + " : " + str(v))

vid_url = results["units"]["input.videolocation"][work_units.iloc[1]]
HTML("<video width='320' height='240' controls><source src=" + str(vid_url) + " type='video/mp4'></video>")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Worker Quality vs. # Annotations As we can see from the plot below, there is no clear correlation between worker quality and number of annotations collected from the worker.
plt.scatter(results["workers"]["wqs"], results["workers"]["judgment"])
plt.xlabel("WQS")
plt.ylabel("# Annotations")
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
Annotation Quality Scores The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one annotation. There is a slight correlation between the number of annotations (column output.selected_answer) and the annotation quality score: annotations that are rarely picked (e.g. engineer, farmer) tend to have lower quality scores. This is because these annotations appear less often in the corpus, so the likelihood that they are picked is lower, and when they are picked it is more likely to be a worker mistake. This is not a hard rule, however: some rarely picked annotations (e.g. astronaut) still achieve high quality scores.
results["annotations"]["output.selected_answer"] = 0 for idx in results["judgments"].index: for k,v in results["judgments"]["output.selected_answer"][idx].items(): if v > 0: results["annotations"].loc[k, "output.selected_answer"] += 1 results["annotations"] = results["annotations"].sort_values(by=["aqs"], ascending=False) results["annotations"].round(3)[["output.selected_answer", "aqs"]] rows = [] header = ["unit", "videolocation", "subtitles", "imagetags", "subtitletags", "uqs", "uqs_initial"] annotation_vector = ["archeologist", "architect", "artist", "astronaut", "athlete", "businessperson","celebrity", "chef", "criminal", "engineer", "farmer", "fictionalcharacter", "journalist", "judge", "lawyer", "militaryperson", "model", "monarch", "philosopher", "politician", "presenter", "producer", "psychologist", "scientist", "sportsmanager", "writer", "none", "other"] header.extend(annotation_vector) annotation_vector_in = ["archeologist_initial_initial", "architect_initial", "artist_initial", "astronaut_initial", "athlete_initial", "businessperson_initial","celebrity_initial", "chef_initial", "criminal_initial", "engineer_initial", "farmer_initial", "fictionalcharacter_initial", "journalist_initial", "judge_initial", "lawyer_initial", "militaryperson_initial", "model_initial", "monarch_initial", "philosopher_initial", "politician_initial", "presenter_initial", "producer_initial", "psychologist_initial", "scientist_initial", "sportsmanager_initial", "writer_initial", "none_initial", "other_initial"] header.extend(annotation_vector_in) units = results["units"].reset_index() for i in range(len(units.index)): row = [units["unit"].iloc[i], units["input.videolocation"].iloc[i], units["input.subtitles"].iloc[i], \ units["input.imagetags"].iloc[i], units["input.subtitletags"].iloc[i], units["uqs"].iloc[i], units["uqs_initial"].iloc[i]] for item in annotation_vector: row.append(units["unit_annotation_score"].iloc[i][item]) for item in annotation_vector_in: row.append(units["unit_annotation_score_initial"].iloc[i][item]) rows.append(row) rows = pd.DataFrame(rows, columns=header) rows.to_csv("../data/results/multchoice-people-video-units.csv", index=False) results["workers"].to_csv("../data/results/multchoice-people-video-workers.csv", index=True) results["annotations"].to_csv("../data/results/multchoice-people-video-annotations.csv", index=True)
tutorial/notebooks/Multiple Choice Task - Person Type Annotation in Video.ipynb
CrowdTruth/CrowdTruth-core
apache-2.0
DDL to construct table for SQL transformations:

```sql
CREATE TABLE kaggle_sf_crime (
  dates TIMESTAMP,
  category VARCHAR,
  descript VARCHAR,
  dayofweek VARCHAR,
  pd_district VARCHAR,
  resolution VARCHAR,
  addr VARCHAR,
  X FLOAT,
  Y FLOAT);
```

Getting training data into a locally hosted PostgreSQL database:

```sql
\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;
```

SQL query used for transformations:

```sql
SELECT
  category,
  date_part('hour', dates) AS hour_of_day,
  CASE
    WHEN dayofweek = 'Monday' THEN 1
    WHEN dayofweek = 'Tuesday' THEN 2
    WHEN dayofweek = 'Wednesday' THEN 3
    WHEN dayofweek = 'Thursday' THEN 4
    WHEN dayofweek = 'Friday' THEN 5
    WHEN dayofweek = 'Saturday' THEN 6
    WHEN dayofweek = 'Sunday' THEN 7
  END AS dayofweek_numeric,
  X,
  Y,
  CASE WHEN pd_district = 'BAYVIEW' THEN 1 ELSE 0 END AS bayview_binary,
  CASE WHEN pd_district = 'INGLESIDE' THEN 1 ELSE 0 END AS ingleside_binary,
  CASE WHEN pd_district = 'NORTHERN' THEN 1 ELSE 0 END AS northern_binary,
  CASE WHEN pd_district = 'CENTRAL' THEN 1 ELSE 0 END AS central_binary,
  CASE WHEN pd_district = 'BAYVIEW' THEN 1 ELSE 0 END AS pd_bayview_binary,
  CASE WHEN pd_district = 'MISSION' THEN 1 ELSE 0 END AS mission_binary,
  CASE WHEN pd_district = 'SOUTHERN' THEN 1 ELSE 0 END AS southern_binary,
  CASE WHEN pd_district = 'TENDERLOIN' THEN 1 ELSE 0 END AS tenderloin_binary,
  CASE WHEN pd_district = 'PARK' THEN 1 ELSE 0 END AS park_binary,
  CASE WHEN pd_district = 'RICHMOND' THEN 1 ELSE 0 END AS richmond_binary,
  CASE WHEN pd_district = 'TARAVAL' THEN 1 ELSE 0 END AS taraval_binary
FROM kaggle_sf_crime;
```

Loading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as it would cause file dependency issues if run locally by everyone. Will be run by Isabell in the final notebook with the files she needs.)

We seek to add features to our models that will improve performance with respect to our desired performance metric. There is evidence of a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. Given this evidence that weather influences crime, we see weather as a candidate source of additional features to improve the performance of our classifiers.

Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesized to improve performance. These features included (insert what we eventually include).
#data_path = "./data/train_transformed.csv" #df = pd.read_csv(data_path, header=0) #x_data = df.drop('category', 1) #y = df.category.as_matrix() ########## Adding the date back into the data #import csv #import time #import calendar #data_path = "./data/train.csv" #dataCSV = open(data_path, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allData = csvData[1:] #dataCSV.close() #df2 = pd.DataFrame(allData) #df2.columns = csvFields #dates = df2['Dates'] #dates = dates.apply(time.strptime, args=("%Y-%m-%d %H:%M:%S",)) #dates = dates.apply(calendar.timegm) #print(dates.head()) #x_data['secondsFromEpoch'] = dates #colnames = x_data.columns.tolist() #colnames = colnames[-1:] + colnames[:-1] #x_data = x_data[colnames] ########## ########## Adding the weather data into the original crime data #weatherData1 = "./data/1027175.csv" #weatherData2 = "./data/1027176.csv" #dataCSV = open(weatherData1, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allWeatherData1 = csvData[1:] #dataCSV.close() #dataCSV = open(weatherData2, 'rt') #csvData = list(csv.reader(dataCSV)) #csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y'] #allWeatherData2 = csvData[1:] #dataCSV.close() #weatherDF1 = pd.DataFrame(allWeatherData1) #weatherDF1.columns = csvFields #dates1 = weatherDF1['DATE'] #sunrise1 = weatherDF1['DAILYSunrise'] #sunset1 = weatherDF1['DAILYSunset'] #weatherDF2 = pd.DataFrame(allWeatherData2) #weatherDF2.columns = csvFields #dates2 = weatherDF2['DATE'] #sunrise2 = weatherDF2['DAILYSunrise'] #sunset2 = weatherDF2['DAILYSunset'] #functions for processing the sunrise and sunset times of each day #def get_hour_and_minute(milTime): # hour = int(milTime[:-2]) # minute = int(milTime[-2:]) # return [hour, minute] #def get_date_only(date): # return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]])) #def structure_sun_time(timeSeries, dateSeries): # sunTimes = timeSeries.copy() # for index in range(len(dateSeries)): # sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]])) # return sunTimes #dates1 = dates1.apply(time.strptime, args=("%Y-%m-%d %H:%M",)) #sunrise1 = sunrise1.apply(get_hour_and_minute) #sunrise1 = structure_sun_time(sunrise1, dates1) #sunrise1 = sunrise1.apply(calendar.timegm) #sunset1 = sunset1.apply(get_hour_and_minute) #sunset1 = structure_sun_time(sunset1, dates1) #sunset1 = sunset1.apply(calendar.timegm) #dates1 = dates1.apply(calendar.timegm) #dates2 = dates2.apply(time.strptime, args=("%Y-%m-%d %H:%M",)) #sunrise2 = sunrise2.apply(get_hour_and_minute) #sunrise2 = structure_sun_time(sunrise2, dates2) #sunrise2 = sunrise2.apply(calendar.timegm) #sunset2 = sunset2.apply(get_hour_and_minute) #sunset2 = structure_sun_time(sunset2, dates2) #sunset2 = sunset2.apply(calendar.timegm) #dates2 = dates2.apply(calendar.timegm) #weatherDF1['DATE'] = dates1 #weatherDF1['DAILYSunrise'] = sunrise1 #weatherDF1['DAILYSunset'] = sunset1 #weatherDF2['DATE'] = dates2 #weatherDF2['DAILYSunrise'] = sunrise2 #weatherDF2['DAILYSunset'] = sunset2 #weatherDF = 
pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True) # Starting off with some of the easier features to work with-- more to come here . . . still in beta #weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \ # 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']] #weatherMetrics = weatherMetrics.convert_objects(convert_numeric=True) #weatherDates = weatherMetrics['DATE'] #'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', #'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY' #timeWindow = 10800 #3 hours #hourlyDryBulbTemp = [] #hourlyRelativeHumidity = [] #hourlyWindSpeed = [] #hourlySeaLevelPressure = [] #hourlyVisibility = [] #dailySunrise = [] #dailySunset = [] #daylight = [] #test = 0 #for timePoint in dates:#dates is the epoch time from the kaggle data # relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)] # hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean()) # hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean()) # hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean()) # hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean()) # hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean()) # dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1]) # dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1]) # daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1]))) #if timePoint < relevantWeather['DAILYSunset'][-1]: #daylight.append(1) #else: #daylight.append(0) # if test%100000 == 0: # print(relevantWeather) # test += 1 #hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp)) #hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity)) #hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed)) #hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure)) #hourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility)) #dailySunrise = pd.Series.from_array(np.array(dailySunrise)) #dailySunset = pd.Series.from_array(np.array(dailySunset)) #daylight = pd.Series.from_array(np.array(daylight)) #x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp #x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity #x_data['HOURLYWindSpeed'] = hourlyWindSpeed #x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure #x_data['HOURLYVISIBILITY'] = hourlyVisibility #x_data['DAILYSunrise'] = dailySunrise #x_data['DAILYSunset'] = dailySunset #x_data['Daylight'] = daylight #x_data.to_csv(path_or_buf="C:/MIDS/W207 final project/x_data.csv") ########## # Impute missing values with mean values: #x_complete = x_data.fillna(x_data.mean()) #X_raw = x_complete.as_matrix() # Scale the data between 0 and 1: #X = MinMaxScaler().fit_transform(X_raw) # Shuffle data to remove any underlying pattern that may exist: #shuffle = np.random.permutation(np.arange(X.shape[0])) #X, y = X[shuffle], y[shuffle] # Separate training, dev, and test data: #test_data, test_labels = X[800000:], y[800000:] #dev_data, dev_labels = X[700000:800000], y[700000:800000] #train_data, train_labels = X[:700000], y[:700000] #mini_train_data, mini_train_labels = X[:75000], y[:75000] #mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000] #labels_set = set(mini_dev_labels) #print(labels_set) #print(len(labels_set)) #print(train_data[:10])
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
data_path = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_3.csv" df = pd.read_csv(data_path, header=0) x_data = df.drop('category', 1) y = df.category.as_matrix() # Impute missing values with mean values: x_complete = x_data.fillna(x_data.mean()) X_raw = x_complete.as_matrix() # Scale the data between 0 and 1: X = MinMaxScaler().fit_transform(X_raw) # Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time: np.random.seed(0) shuffle = np.random.permutation(np.arange(X.shape[0])) X, y = X[shuffle], y[shuffle] # Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare # crimes from the data for quality issues. X_minus_trea = X[np.where(y != 'TREA')] y_minus_trea = y[np.where(y != 'TREA')] X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] # Separate training, dev, and test data: test_data, test_labels = X_final[800000:], y_final[800000:] dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000] train_data, train_labels = X_final[100000:700000], y_final[100000:700000] calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000] # Create mini versions of the above sets mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000] mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000] mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000] # Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow crime_labels = list(set(y_final)) crime_labels_mini_train = list(set(mini_train_labels)) crime_labels_mini_dev = list(set(mini_dev_labels)) crime_labels_mini_calibrate = list(set(mini_calibrate_labels)) print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate)) #print(len(train_data),len(train_labels)) #print(len(dev_data),len(dev_labels)) #print(len(mini_train_data),len(mini_train_labels)) #print(len(mini_dev_data),len(mini_dev_labels)) #print(len(test_data),len(test_labels)) #print(len(mini_calibrate_data),len(mini_calibrate_labels)) #print(len(calibrate_data),len(calibrate_labels))
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Sarah's school data, which we may still incorporate as features: (Negated with hashtags for now, as it would cause file dependency issues if run locally by everyone. Will be run by Isabell in the final notebook with the files she needs.)
### Read in zip code data #data_path_zip = "./data/2016_zips.csv" #zips = pd.read_csv(data_path_zip, header=0, sep ='\t', usecols = [0,5,6], names = ["GEOID", "INTPTLAT", "INTPTLONG"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float}) #sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)] ### Mapping longitude/latitude to zipcodes #def dist(lat1, long1, lat2, long2): # return np.sqrt((lat1-lat2)**2+(long1-long2)**2) # return abs(lat1-lat2)+abs(long1-long2) #def find_zipcode(lat, long): # distances = sf_zips.apply(lambda row: dist(lat, long, row["INTPTLAT"], row["INTPTLONG"]), axis=1) # return sf_zips.loc[distances.idxmin(), "GEOID"] #x_data['zipcode'] = 0 #for i in range(0, 1): # x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1) #x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1) ### Read in school data #data_path_schools = "./data/pubschls.csv" #schools = pd.read_csv(data_path_schools,header=0, sep ='\t', usecols = ["CDSCode","StatusType", "School", "EILCode", "EILName", "Zip", "Latitude", "Longitude"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float}) #schools = schools[(schools["StatusType"] == 'Active')] ### Find the closest school #def dist(lat1, long1, lat2, long2): # return np.sqrt((lat1-lat2)**2+(long1-long2)**2) #def find_closest_school(lat, long): # distances = schools.apply(lambda row: dist(lat, long, row["Latitude"], row["Longitude"]), axis=1) # return min(distances) #x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
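The negated cell above finds the nearest school with a row-by-row `apply`, which is slow on a large frame. As a hedged sketch (not part of the original notebook; the toy coordinates below are made up), the same Euclidean-distance idea can be vectorized with NumPy broadcasting:

```python
import numpy as np

# Toy coordinates standing in for crime points and school locations
# (the real frames are built in the negated cell above).
crime_xy = np.array([[37.77, -122.42], [37.75, -122.39]])                  # lat, long per crime
school_xy = np.array([[37.76, -122.41], [37.78, -122.44], [37.74, -122.40]])  # lat, long per school

# Pairwise Euclidean distances via broadcasting: shape (n_crimes, n_schools)
diff = crime_xy[:, None, :] - school_xy[None, :, :]
dists = np.sqrt((diff ** 2).sum(axis=-1))

# Distance from each crime point to its closest school
closest_school_dist = dists.min(axis=1)
print(closest_school_dist)
```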
Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
# The Kaggle submission format requires listing the ID of each example. # This is to remember the order of the IDs after shuffling #allIDs = np.array(list(df.axes[0])) #allIDs = allIDs[shuffle] #testIDs = allIDs[800000:] #devIDs = allIDs[700000:800000] #trainIDs = allIDs[:700000] # Extract the column names for the required submission format #sampleSubmission_path = "./data/sampleSubmission.csv" #sampleDF = pd.read_csv(sampleSubmission_path) #allColumns = list(sampleDF.columns) #featureColumns = allColumns[1:] # Extracting the test data for a baseline submission #real_test_path = "./data/test_transformed.csv" #testDF = pd.read_csv(real_test_path, header=0) #real_test_data = testDF #test_complete = real_test_data.fillna(real_test_data.mean()) #Test_raw = test_complete.as_matrix() #TestData = MinMaxScaler().fit_transform(Test_raw) # Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason #testIDs = list(testDF.axes[0])
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data #def MNB(): # mnb = MultinomialNB(alpha = 0.0000001) # mnb.fit(train_data, train_labels) # print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels)) # return mnb.predict_proba(dev_data) #MNB() #baselinePredictionProbabilities = MNB() # Place the resulting prediction probabilities in a .csv file in the required format # First, turn the prediction probabilties into a data frame #resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns) # Add the IDs as a final column #resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index) # Make the 'Id' column the first column #colnames = resultDF.columns.tolist() #colnames = colnames[-1:] + colnames[:-1] #resultDF = resultDF[colnames] # Output to a .csv file # resultDF.to_csv('result.csv',index=False)
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Note: if the random seed step is not re-run, the code above will shuffle the data differently each time, so model accuracies will vary accordingly.
## Data sub-setting quality check-point
print(train_data[:1])
print(train_labels[:1])

# Modeling quality check-point with MNB--fast model
def MNB():
    mnb = MultinomialNB(alpha = 0.0000001)
    mnb.fit(train_data, train_labels)
    print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))

MNB()
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Defining Performance Criteria

As determined by the Kaggle submission guidelines, the performance metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error. (Describe each performance metric and a domain in which it is preferred. Give pros/cons if able.)

- Multi-class Log Loss: the average negative log-probability assigned to the true class; it heavily penalizes confident wrong predictions, so it rewards well-calibrated probabilities.
- Accuracy: the fraction of correct predictions; intuitive, but misleading on imbalanced classes.
- F-score: the harmonic mean of precision and recall; useful when both false positives and false negatives matter.
- Lift: how much better the model targets positives than random selection; common in marketing and response modeling.
- ROC Area: the probability that a random positive is ranked above a random negative; a threshold-independent measure of ranking quality.
- Average precision: a summary of the precision/recall curve; preferred when positives are rare, e.g. in information retrieval.
- Precision/Recall break-even point: the operating point where precision equals recall; a single-number summary of that trade-off.
- Squared error: the mean squared difference between predicted probabilities and outcomes (the Brier score); another calibration-sensitive measure.

Model Prototyping

We will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:
def model_prototype(train_data, train_labels, eval_data, eval_labels):
    knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)
    bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)
    mnb = MultinomialNB().fit(train_data, train_labels)
    log_reg = LogisticRegression().fit(train_data, train_labels)
    neural_net = MLPClassifier().fit(train_data, train_labels)
    random_forest = RandomForestClassifier().fit(train_data, train_labels)
    decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)
    support_vm_step_one = svm.SVC(probability = True)
    support_vm = support_vm_step_one.fit(train_data, train_labels)

    models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]
    for model in models:
        eval_prediction_probabilities = model.predict_proba(eval_data)
        eval_predictions = model.predict(eval_data)
        print(model, "Multi-class Log Loss:",
              log_loss(y_true = eval_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev),
              "\n\n")

model_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
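To make the evaluation metric concrete, here is a small, self-contained illustration (not from the original notebook; the toy labels and probabilities are made up) of how multi-class log loss is computed, both by hand and with sklearn's log_loss:

```python
import numpy as np
from sklearn.metrics import log_loss

# Three examples, three classes; each row of probs sums to 1.
y_true = ["ASSAULT", "THEFT", "ASSAULT"]
labels = ["ASSAULT", "BURGLARY", "THEFT"]          # alphabetical, matching the column order below
probs = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.1, 0.8, 0.1]])

# Manual computation: average negative log-probability assigned to the true class.
true_idx = [labels.index(c) for c in y_true]
manual = -np.mean(np.log(probs[np.arange(len(y_true)), true_idx]))

print(manual)
print(log_loss(y_true=y_true, y_pred=probs, labels=labels))  # should match the manual value
```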
Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier

Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.

1) Feature addition

We previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For a comparison of how the added features improved our performance with respect to log loss, please refer back to our initial submission. We can have Kalvin expand on exactly what he did here.

2) Hyperparameter tuning

Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier, as detailed below.

3) Model calibration

We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.

Platt Scaling: fits a logistic (sigmoid) function to the classifier's scores on held-out data, mapping raw scores to calibrated probabilities.

Isotonic Regression: fits a non-parametric, non-decreasing step function to the scores; it is more flexible than Platt Scaling but needs more calibration data to avoid overfitting.

For each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (the cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. Thus, in practice the cross-validation generator will not be a modifiable parameter for us.

K-Nearest Neighbors

Hyperparameter tuning:

For the KNN classifier, we can seek to optimize the following classifier parameters: n_neighbors, weights, and the power parameter ('p').
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_log_loss = []

def k_neighbors_tuned(k, w, p):
    tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
    list_for_ks.append(this_k)
    list_for_ws.append(this_w)
    list_for_ps.append(this_p)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with KNN and k,w,p =", k,",",w,",", p, "is:", working_log_loss)

k_value_tuning = [i for i in range(1, 5002, 500)]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1, 2]

start = time.clock()

for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            k_neighbors_tuned(this_k, this_w, this_p)

index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss],
      'with k =', list_for_ks[index_best_logloss],
      'w =', list_for_ws[index_best_logloss],
      'p =', list_for_ps[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Model calibration: Here we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_ms = []
list_for_log_loss = []

def knn_calibrated(k, w, p, m):
    tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
    ccv = CalibratedClassifierCV(tuned_KNN, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ks.append(this_k)
    list_for_ws.append(this_w)
    list_for_ps.append(this_p)
    list_for_ms.append(this_m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with KNN and k,w,p =", k,",",w,",",p,",",m,"is:", working_log_loss)

#k_value_tuning = [i for i in range(1, 5002, 500)]
k_value_tuning = [1]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1, 2]
methods = ['sigmoid', 'isotonic']

start = time.clock()

for this_k in k_value_tuning:
    for this_w in weight_tuning:
        for this_p in power_parameter_tuning:
            for this_m in methods:
                knn_calibrated(this_k, this_w, this_p, this_m)

index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning and calibration is', list_for_log_loss[index_best_logloss],
      'with k =', list_for_ks[index_best_logloss],
      'w =', list_for_ws[index_best_logloss],
      'p =', list_for_ps[index_best_logloss],
      'm =', list_for_ms[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Comments on results for Hyperparameter tuning and Calibration for KNN: We see that the best log loss we achieve for KNN is with _ neighbors, _ weights, and _ power parameter. When we add in calibration, the best log loss we achieve for KNN is with _ neighbors, _ weights, _ power parameter, and _ calibration method. (Further explanation here?)

Multinomial, Bernoulli, and Gaussian Naive Bayes

Hyperparameter tuning: Bernoulli Naive Bayes

For the Bernoulli Naive Bayes classifier, we seek to optimize the alpha parameter (the Laplace smoothing parameter) and the binarize parameter (the threshold for binarizing the sample features). For the binarize parameter, we will create arbitrary thresholds over which our features, which are not binary/boolean features, will be binarized.
list_for_as = []
list_for_bs = []
list_for_log_loss = []

def BNB_tuned(a, b):
    bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)
    list_for_as.append(this_a)
    list_for_bs.append(this_b)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with BNB and a,b =", a,",",b,"is:", working_log_loss)

alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
binarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10,
                              1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5,
                              0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]

start = time.clock()

for this_a in alpha_tuning:
    for this_b in binarize_thresholds_tuning:
        BNB_tuned(this_a, this_b)

index_best_logloss = np.argmin(list_for_log_loss)
print('For BNB the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss],
      'with alpha =', list_for_as[index_best_logloss],
      'binarization threshold =', list_for_bs[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Model calibration: BernoulliNB Here we will calibrate the BNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_as = []
list_for_bs = []
list_for_ms = []
list_for_log_loss = []

def BNB_calibrated(a, b, m):
    bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = bnb_tuned.predict_proba(mini_dev_data)
    ccv = CalibratedClassifierCV(bnb_tuned, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_as.append(this_a)
    list_for_bs.append(this_b)
    list_for_ms.append(this_m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with BNB and a,b,m =", a,",", b,",", m, "is:", working_log_loss)

alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
binarize_thresholds_tuning = [1e-20, 1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10,
                              1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5,
                              0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]
methods = ['sigmoid', 'isotonic']

start = time.clock()

for this_a in alpha_tuning:
    for this_b in binarize_thresholds_tuning:
        for this_m in methods:
            BNB_calibrated(this_a, this_b, this_m)

index_best_logloss = np.argmin(list_for_log_loss)
print('For BNB the best log loss with hyperparameter tuning and calibration is', list_for_log_loss[index_best_logloss],
      'with alpha =', list_for_as[index_best_logloss],
      'binarization threshold =', list_for_bs[index_best_logloss],
      'method = ', list_for_ms[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Hyperparameter tuning: Multinomial Naive Bayes For the Multinomial Naive Bayes classifier, we seek to optimize the alpha parameter (the Laplace smoothing parameter).
list_for_as = []
list_for_log_loss = []

def MNB_tuned(a):
    mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = mnb_tuned.predict_proba(mini_dev_data)
    list_for_as.append(this_a)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with MNB and a =", a, "is:", working_log_loss)

alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]

start = time.clock()

for this_a in alpha_tuning:
    MNB_tuned(this_a)

index_best_logloss = np.argmin(list_for_log_loss)
print('For MNB the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss],
      'with alpha =', list_for_as[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Model calibration: MultinomialNB Here we will calibrate the MNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_as = []
list_for_ms = []
list_for_log_loss = []

def MNB_calibrated(a, m):
    mnb_tuned = MultinomialNB(alpha = a).fit(mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(mnb_tuned, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_as.append(this_a)
    list_for_ms.append(this_m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with MNB and a =", a, "and m =", m, "is:", working_log_loss)

alpha_tuning = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 10.0]
methods = ['sigmoid', 'isotonic']

start = time.clock()

for this_a in alpha_tuning:
    for this_m in methods:
        MNB_calibrated(this_a, this_m)

index_best_logloss = np.argmin(list_for_log_loss)
print('For MNB the best log loss with hyperparameter tuning and calibration is', list_for_log_loss[index_best_logloss],
      'with alpha =', list_for_as[index_best_logloss],
      'and method =', list_for_ms[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Tuning: Gaussian Naive Bayes For the Gaussian Naive Bayes classifier there are no inherent parameters within the classifier function to optimize, but we will look at our log loss before and after adding noise to the data, which is hypothesized to give it a more normal (Gaussian) distribution (the per-feature distribution that the GNB classifier assumes).
def GNB_pre_tune():
    gnb_pre_tuned = GaussianNB().fit(mini_train_data, mini_train_labels)
    dev_prediction_probabilities = gnb_pre_tuned.predict_proba(mini_dev_data)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    print("Multi-class Log Loss with pre-tuned GNB is:", working_log_loss)

GNB_pre_tune()

def GNB_post_tune():
    # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes
    # adding noise can improve performance by making the data more normal:
    mini_train_data_noise = np.random.rand(mini_train_data.shape[0], mini_train_data.shape[1])
    modified_mini_train_data = np.multiply(mini_train_data, mini_train_data_noise)
    gnb_with_noise = GaussianNB().fit(modified_mini_train_data, mini_train_labels)
    dev_prediction_probabilities = gnb_with_noise.predict_proba(mini_dev_data)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
    print("Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution is:", working_log_loss)

GNB_post_tune()
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Model calibration: GaussianNB Here we will calibrate the GNB classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_ms = []
list_for_log_loss = []

def GNB_calibrated(m):
    # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes
    # adding noise can improve performance by making the data more normal:
    mini_train_data_noise = np.random.rand(mini_train_data.shape[0], mini_train_data.shape[1])
    modified_mini_train_data = np.multiply(mini_train_data, mini_train_data_noise)
    gnb_with_noise = GaussianNB().fit(modified_mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(gnb_with_noise, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ms.append(this_m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    #print("Multi-class Log Loss with tuned GNB via addition of noise to normalize the data's distribution and after calibration is:", working_log_loss, 'with calibration method =', m)

methods = ['sigmoid', 'isotonic']

start = time.clock()

for this_m in methods:
    GNB_calibrated(this_m)

index_best_logloss = np.argmin(list_for_log_loss)
print('For GNB the best log loss with tuning and calibration is', list_for_log_loss[index_best_logloss],
      'with method =', list_for_ms[index_best_logloss])

end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
Logistic Regression

Hyperparameter tuning: For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), and solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag').

Model calibration: See above.

Decision Tree (Bryan)

Hyperparameter tuning: For the Decision Tree classifier, we can seek to optimize the following classifier parameters: min_samples_leaf (the minimum number of samples required to be at a leaf node) and max_depth. From readings, setting min_samples_leaf to approximately 1% of the data points can stop the tree from inappropriately classifying outliers, which can help to improve accuracy (unsure if it significantly improves MCLL).

Model calibration: See above.

Support Vector Machines (Kalvin)

Hyperparameter tuning: For the SVM classifier, we can seek to optimize the following classifier parameters: C (the penalty parameter of the error term) and kernel ('linear', 'poly', 'rbf', 'sigmoid', or 'precomputed'). See source [2] for parameter optimization in SVM.

Model calibration: See above.

Neural Nets (Sarah)

Hyperparameter tuning: For the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs', 'sgd', 'adam'), alpha, and learning_rate ('constant', 'invscaling', 'adaptive').
### All the work from Sarah's notebook:
import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams

print(theano.config.device)  # We're using CPUs (for now)
print(theano.config.floatX)  # Should be 64 bit for CPUs

np.random.seed(0)

from IPython.display import display, clear_output

numFeatures = train_data[1].size
numTrainExamples = train_data.shape[0]
numTestExamples = test_data.shape[0]
print('Features = %d' % (numFeatures))
print('Train set = %d' % (numTrainExamples))
print('Test set = %d' % (numTestExamples))

class_labels = list(set(train_labels))
print(class_labels)
numClasses = len(class_labels)

### Binarize the class labels (one-hot encoding over the 39 crime categories)
def binarizeY(data):
    binarized_data = np.zeros((data.size, 39))
    for j in range(0, data.size):
        feature = data[j]
        i = class_labels.index(feature)
        binarized_data[j, i] = 1
    return binarized_data

train_labels_b = binarizeY(train_labels)
test_labels_b = binarizeY(test_labels)
numClasses = train_labels_b[1].size
print('Classes = %d' % (numClasses))
print('\n', train_labels_b[:5, :], '\n')
print(train_labels[:10], '\n')

### (1) Parameters
numFeatures = train_data.shape[1]
numHiddenNodeslayer1 = 50
numHiddenNodeslayer2 = 30
w_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1)) * 0.01)))
w_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2)) * 0.01)))
w_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses)) * 0.01)))
params = [w_1, w_2, w_3]

### (2) Model
X = T.matrix()
Y = T.matrix()
srng = RandomStreams()

def dropout(X, p=0.):
    if p > 0:
        X *= srng.binomial(X.shape, p=1 - p)
        X /= 1 - p
    return X

def model(X, w_1, w_2, w_3, p_1, p_2, p_3):
    # Two sigmoid hidden layers with dropout, followed by a softmax output layer
    return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)), p_2), w_2)), p_3), w_3))

y_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5, 0.5)
y_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)

### (3) Cost function
# A squared-error cost was sketched here originally, but its y_hat variable is never
# defined, so only the cross-entropy cost below is used:
cost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))

### (4) Objective (and solver)
alpha = 0.01

def backprop(cost, w):
    grads = T.grad(cost=cost, wrt=w)
    updates = []
    for wi, grad in zip(w, grads):
        updates.append([wi, wi - grad * alpha])
    return updates

update = backprop(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
y_pred = T.argmax(y_hat_predict, axis=1)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)

miniBatchSize = 10

def gradientDescent(epochs):
    for i in range(epochs):
        for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):
            cc = train(train_data[start:end], train_labels_b[start:end])
        clear_output(wait=True)
        print('%d) accuracy = %.4f' % (i + 1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))))

gradientDescent(50)

### How to decide what number to use for epochs? Epochs in this case are how many passes over the data.
### Plot the cost for each of the 50 iterations and see how much it declines: if it is still decreasing steeply,
### do more iterations; if it looks like it is flattening, you can stop.
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
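The Logistic Regression and Decision Tree sections above list the parameters to tune but do not yet include code. As a rough sketch only (not the team's final implementation), the same loop-and-score pattern used for KNN and Naive Bayes could be applied to Logistic Regression, assuming the mini_train/mini_dev data and crime_labels_mini_dev defined earlier; the parameter values below are illustrative guesses:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

lr_results = []
for penalty in ["l1", "l2"]:
    for C in [0.01, 0.1, 1.0, 10.0]:
        # liblinear supports both l1 and l2 penalties for multi-class one-vs-rest
        lr = LogisticRegression(penalty=penalty, C=C, solver="liblinear")
        lr.fit(mini_train_data, mini_train_labels)
        probs = lr.predict_proba(mini_dev_data)
        loss = log_loss(y_true=mini_dev_labels, y_pred=probs, labels=crime_labels_mini_dev)
        lr_results.append((loss, penalty, C))

best = min(lr_results)
print("For LR the best log loss with hyperparameter tuning is", best[0],
      "with penalty =", best[1], "and C =", best[2])
```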
Model calibration: See above.

Random Forest (Sam, possibly in AWS)

Hyperparameter tuning: For the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy).

Model calibration: See above.

Meta-estimators

AdaBoost Classifier

Hyperparameter tuning: There are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.

Adaboosting each classifier: We will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.

Bagging Classifier

Hyperparameter tuning: For the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when building estimators), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy).

Bagging each classifier: We will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.

Gradient Boosting Classifier

Hyperparameter tuning: For the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages), max_depth, min_samples_leaf, and max_features.

Gradient Boosting each classifier: We will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.

Final evaluation on test data
# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.
# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed
# and the corresponding performance metrics are gathered.
iterations/misc/Cha_Goodgame_Kao_Moore_W207_Final_Project_updated_08_19_2230.ipynb
samgoodgame/sf_crime
mit
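The cell above is only a placeholder; as one hedged possibility (an illustration, not the final implementation), GridSearchCV can optimize the Random Forest parameters listed above directly against multi-class log loss via the built-in neg_log_loss scorer, assuming the mini_train variables defined earlier. The grid values below are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 5, 10],
}

# neg_log_loss is the negated multi-class log loss, so larger is better for the search
grid = GridSearchCV(RandomForestClassifier(), param_grid,
                    scoring="neg_log_loss", cv=3)
grid.fit(mini_train_data, mini_train_labels)

print("Best multi-class log loss:", -grid.best_score_)
print("Best parameters:", grid.best_params_)
```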
Benchmarking GP codes Implemented the right way, GPs can be super fast! Let's compare the time it takes to evaluate our GP likelihood and the time it takes to evaluate the likelihood computed with the snazzy george and celerite packages. We'll learn how to use both along the way. Let's create a large, fake dataset for these tests:
import numpy as np

np.random.seed(0)
t = np.linspace(0, 10, 10000)
y = np.random.randn(10000)
sigma = np.ones(10000)
Sessions/Session13/Day2/02-Fast-GPs.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Our GP
def ExpSquaredCovariance(t, A=1.0, l=1.0, tprime=None):
    """
    Return the ``N x M`` exponential squared covariance matrix.

    """
    if tprime is None:
        tprime = t
    TPrime, T = np.meshgrid(tprime, t)
    return A ** 2 * np.exp(-0.5 * (T - TPrime) ** 2 / l ** 2)


def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Return the log of the GP likelihood for a dataset y(t) with uncertainties
    sigma, modeled with a Squared Exponential Kernel with amplitude A and
    lengthscale l.

    """
    # The covariance and its determinant
    npts = len(t)
    K = ExpSquaredCovariance(t, A=A, l=l) + sigma ** 2 * np.eye(npts)

    # The log marginal likelihood
    log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
    log_like -= 0.5 * np.linalg.slogdet(K)[1]
    log_like -= 0.5 * npts * np.log(2 * np.pi)
    return log_like
Sessions/Session13/Day2/02-Fast-GPs.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
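Before bringing in george and celerite, it helps to have baseline numbers for the naive implementation above. Below is a minimal timing sketch (not part of the original notebook) on growing subsets of the fake dataset; the full N = 10,000 case is left out here because the dense solve becomes slow and memory-hungry:

```python
import time

# Time the O(N^3) likelihood above on subsets of the fake dataset;
# the george/celerite timings can be compared against these numbers.
for n in [500, 1000, 2000]:
    start = time.time()
    ln_gp_likelihood(t[:n], y[:n], sigma=sigma[:n])
    elapsed = time.time() - start
    print("N = %4d: %.3f seconds" % (n, elapsed))

# For comparison, if george is installed, the same likelihood can be computed with
# something like the following (hedged: the exact API depends on the george version):
#   import george
#   from george import kernels
#   gp = george.GP(1.0 * kernels.ExpSquaredKernel(metric=1.0))
#   gp.compute(t[:n], sigma[:n])
#   print(gp.log_likelihood(y[:n]))
```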