markdown (string, 0-37k chars) | code (string, 1-33.3k chars) | path (string, 8-215 chars) | repo_name (string, 6-77 chars) | license (15 classes) |
---|---|---|---|---|
Export as Edge model
You can export an AutoML image object detection model as an Edge model, which you can then deploy to an edge device or download locally. Use the export_model() method to export the model to Cloud Storage. It takes the following parameters:
artifact_destination: The Cloud Storage location in which to store the exported model artifacts.
export_format_id: The format to export the model in. For an AutoML image object detection Edge model, the options are:
tf-saved-model: TensorFlow SavedModel for deployment to a container.
tflite: TensorFlow Lite for deployment to an edge or mobile device.
edgetpu-tflite: TensorFlow Lite for Edge TPU devices.
tf-js: TensorFlow.js for deployment to a web client.
core-ml: Core ML for deployment to iOS devices.
sync: Whether to perform the operation synchronously or asynchronously. | response = model.export_model(
artifact_destination=BUCKET_NAME, export_format_id="tflite", sync=True
)
model_package = response["artifactOutputUri"] | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Download the TFLite model artifacts
Now that you have an exported TFLite version of your model, you can test it locally, after first downloading it from Cloud Storage. | ! gsutil ls $model_package
# Download the model artifacts
! gsutil cp -r $model_package tflite
tflite_path = "tflite/model.tflite" | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Instantiate a TFLite interpreter
The TFLite version of the model is not in the TensorFlow SavedModel format, so you cannot directly use methods like predict(). Instead, you use the TFLite interpreter. You must first set up the interpreter for the TFLite model as follows:
Instantiate a TFLite interpreter for the TFLite model.
Instruct the interpreter to allocate input and output tensors for the model.
Get detailed information about the model's input and output tensors, which you will need to know for prediction. | import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]["shape"]
print("input tensor shape", input_shape) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. | test_items = ! gsutil cat $IMPORT_FILE | head -n1
test_item = test_items[0].split(",")[0]
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
test_image = tf.io.decode_jpeg(content)
print("test image shape", test_image.shape)
test_image = tf.image.resize(test_image, (224, 224))
print("test image shape", test_image.shape, test_image.dtype)
test_image = tf.cast(test_image, dtype=tf.uint8).numpy() | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make a prediction with TFLite model
Finally, you do a prediction using your TFLite model, as follows:
Convert the test image into a batch of a single image (np.expand_dims)
Set the input tensor for the interpreter to your batch of a single image (data).
Invoke the interpreter.
Retrieve the softmax probabilities for the prediction (get_tensor).
Determine which label had the highest probability (np.argmax). | import numpy as np
data = np.expand_dims(test_image, axis=0)
interpreter.set_tensor(input_details[0]["index"], data)
interpreter.invoke()
softmax = interpreter.get_tensor(output_details[0]["index"])
label = np.argmax(softmax)
print(label) | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Classes
Everything is an object in Python including native types. You define class names with camel casing.
You define the constructor with the special name __init__(). Private fields are denoted with a leading underscore (_variable_name), and properties are decorated with the @property decorator.
Fields and properties are accessed within the class using self.name notation. This helps differentiate a class field / property from a local variable or method argument of the same name.
A simple class
class MyClass:
_local_variables = "value"
def __init__(self, args): #constructor
statements
self._local_variables = args # assign values to fields
def func_1(self, args):
statements
You use this method by instantiating an object.
obj1 = MyClass(args_defined_in_constructor) | # Define a class to hold a satellite or aerial imagery file. Its properties give information
# such as location of the ground, area, dimensions, spatial and spectral resolution etc.
class ImageryObject:
_default_gsd = 5.0
def __init__(self, file_path):
self._file_path = file_path
self._gps_location = (3,4)
@property
def bands(self):
#count number of bands
count = 3
return count
@property
def gsd(self):
# logic to calculate the ground sample distance
gsd = 10.0
return gsd
@property
def address(self):
# logic to reverse geocode the self._gps_location to get address
# reverse geocode self._gps_location
address = "123 XYZ Street"
return address
#class methods
def display(self):
#logic to display picture
print("image is displayed")
def shuffle_bands(self):
#logic to shift RGB combination
print("shifting pands")
self.display()
# class instantiation
img1 = ImageryObject(r"user\img\file.img") #pass value to constructor (raw string so backslashes are not treated as escapes)
img1.address
img1._default_gsd
img1._gps_location
img1.shuffle_bands()
# Get help on any object. Only public methods, properties are displayed.
# fields are private, properties are public. Class variables beginning with _ are private fields.
help(img1) | python_crash_course/python_cheat_sheet_2.ipynb | AtmaMani/pyChakras | mit |
Exception handling
Exceptions are classes. You can define your own by inheriting from the Exception class (a minimal example is sketched after the code below).
try:
statements
except Exception_type1 as e1:
handling statements
except Exception_type2 as e2:
specific handling statements
except Exception as generic_ex:
generic handling statements
else:
some more statements
finally:
default statements which will always be executed | try:
img2 = ImageryObject("user\img\file2.img")
img2.display()
except:
print("something bad happened")
try:
img2 = ImageryObject("user\img\file2.img")
img2.display()
except:
print("something bad happened")
else:
print("else block")
finally:
print("finally block")
try:
img2 = ImageryObject()
img2.display()
except:
print("something bad happened")
else:
print("else block")
finally:
print("finally block")
try:
img2 = ImageryObject()
img2.display()
except Exception as ex:
print("something bad happened")
print("exactly what whent bad? : " + str(ex))
try:
img2 = ImageryObject('path')
img2.dddisplay()
except TypeError as terr:
print("looks like you forgot a parameter")
except Exception as ex:
print("nope, it went worng here: " + str(ex)) | python_crash_course/python_cheat_sheet_2.ipynb | AtmaMani/pyChakras | mit |
This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise.
We have two such matrices, a training and a testing set. Both have around 1000 users and 1700 items. We'll train the model on the train matrix but test it on the test matrix. | print(repr(data['train']))
print(repr(data['test'])) | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
We need to import the model class to fit the model: | from lightfm import LightFM | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with are treated as implicit negatives. The goal of the model is to score the implicit positives highly while assigning low scores to the implicit negatives.
Model training is accomplished via SGD (stochastic gradient descent). This means that for every pass through the data --- an epoch --- the model learns to fit the data more and more closely. We'll run it for 30 epochs in this example. We can also run it on multiple cores, so we'll set num_threads to 2. (The dataset in this example is too small for that to make a difference, but it will matter on bigger datasets.) | model = LightFM(loss='warp')
%time model.fit(data['train'], epochs=30, num_threads=2) | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
Done! We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of top k items in the ranking the user has actually interacted with. lightfm implements a number of metrics in the evaluation module. | from lightfm.evaluation import precision_at_k | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
We'll measure precision in both the train and the test set. | print("Train precision: %.2f" % precision_at_k(model, data['train'], k=5).mean())
print("Test precision: %.2f" % precision_at_k(model, data['test'], k=5).mean()) | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
Unsurprisingly, the model fits the train set better than the test set.
For an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for a given user, we pass the id of that user and the ids of all products we want predictions for into the predict method. | import numpy as np
def sample_recommendation(model, data, user_ids):
n_users, n_items = data['train'].shape
for user_id in user_ids:
known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices]
scores = model.predict(user_id, np.arange(n_items))
top_items = data['item_labels'][np.argsort(-scores)]
print("User %s" % user_id)
print(" Known positives:")
for x in known_positives[:3]:
print(" %s" % x)
print(" Recommended:")
for x in top_items[:3]:
print(" %s" % x)
sample_recommendation(model, data, [3, 25, 450]) | examples/quickstart/quickstart.ipynb | paoloRais/lightfm | apache-2.0 |
CONTENTS
Overview
Graph Coloring
N-Queens
AC-3
Backtracking Search
Tree CSP Solver
Graph Coloring Visualization
N-Queens Visualization
OVERVIEW
CSPs are a special kind of search problem. Here we don't treat the state space as a black box; the state has a particular form, and we use that to our advantage to tweak our algorithms to be better suited to the problem. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code. | psource(CSP)
The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict (dictionary datatype) where each key is a variable and each value is that variable's domain. If variables are passed as an empty list, they are extracted from the keys of the domain dictionary. neighbors is a dict of variables that essentially describes the constraint graph: each variable key maps to a list of the variables it shares constraints with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We also have additional attributes like nassigns, which is incremented each time an assignment is made by the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Before jumping to the full graph coloring example, the sketch below shows how a tiny CSP can be constructed directly.
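A minimal sketch of constructing a CSP directly with these parameters; the two-variable not-equal problem here is an illustrative assumption and is not part of the original notebook.

```python
# Illustrative two-variable CSP: X and Y must take different values from {1, 2, 3}.
variables = ['X', 'Y']
domains = {'X': [1, 2, 3], 'Y': [1, 2, 3]}
neighbors = {'X': ['Y'], 'Y': ['X']}

def not_equal(A, a, B, b):
    # constraint function f(A, a, B, b): satisfied when the two values differ
    return a != b

tiny_csp = CSP(variables, domains, neighbors, not_equal)
print(tiny_csp.nconflicts('X', 1, {'Y': 1}))  # 1 conflict: Y already holds the same value
```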
GRAPH COLORING
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map coloring problem is that adjacent nodes (those connected by edges) should not have the same color anywhere in the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to it. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter and returns it as the value for all the keys of the dict. It is very similar to defaultdict in Python, except that it does not support item assignment. | s = UniversalDict(['R','G','B'])
s[5] | csp.ipynb | jo-tez/aima-python | mit |
For our CSP we also need to define a constraint function f(A, a, B, b). In this, we need to ensure that the neighbors don't have the same color. This is defined in the function different_values_constraint of the module. | psource(different_values_constraint) | csp.ipynb | jo-tez/aima-python | mit |
The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows us to take input in the form of strings and return a Dict of a form that is compatible with the CSP Class. | %pdoc parse_neighbors | csp.ipynb | jo-tez/aima-python | mit |
The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. Australia, USA and France are three CSPs that have been created using MapColoringCSP. Australia corresponds to Figure 6.1 in the book. | psource(MapColoringCSP)
australia, usa, france | csp.ipynb | jo-tez/aima-python | mit |
N-QUEENS
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard in a way such that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit this particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal. | psource(queen_constraint) | csp.ipynb | jo-tez/aima-python | mit |
The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the many popular techniques for solving CSPs. Because min_conflicts hill-climbs on the number of conflicts, the CSP assign and unassign methods are modified to record conflicts. More details about the structures rows, downs and ups, which help in recording conflicts, are explained in the docstring. | psource(NQueensCSP)
The _ init _ method takes only one parameter n i.e. the size of the problem. To create an instance, we just pass the required value of n into the constructor. | eight_queens = NQueensCSP(8) | csp.ipynb | jo-tez/aima-python | mit |
We have defined our CSP.
Now, we need to solve this.
Min-conflicts
As stated above, the min_conflicts algorithm is an efficient method to solve such a problem.
<br>
At the start, all the variables of the CSP are randomly initialized.
<br>
The algorithm then randomly selects a variable that has conflicts and violates some constraints of the CSP.
<br>
The selected variable is then assigned a value that minimizes the number of conflicts.
<br>
This is a simple stochastic algorithm which works on a principle similar to Hill-climbing.
The conflicting state is repeatedly changed into a state with fewer conflicts in an attempt to reach an approximate solution.
<br>
This algorithm sometimes benefits from having a good initial assignment.
Using greedy techniques to get a good initial assignment and then using min_conflicts to solve the CSP can speed up the procedure dramatically, especially for CSPs with a large state space. | psource(min_conflicts) | csp.ipynb | jo-tez/aima-python | mit |
Let's use this algorithm to solve the eight_queens CSP. | solution = min_conflicts(eight_queens) | csp.ipynb | jo-tez/aima-python | mit |
This is indeed a valid solution.
<br>
notebook.py has a helper function to visualize the solution space. | plot_NQueens(solution) | csp.ipynb | jo-tez/aima-python | mit |
Lets' see if we can find a different solution. | eight_queens = NQueensCSP(8)
solution = min_conflicts(eight_queens)
plot_NQueens(solution) | csp.ipynb | jo-tez/aima-python | mit |
The solution is a bit different this time.
Running the above cell several times should give you different valid solutions.
<br>
In the search.ipynb notebook, we will see how NQueensProblem can be solved using a heuristic search method such as uniform_cost_search and astar_search.
Helper Functions
We will now implement a few helper functions that will allow us to visualize the Coloring Problem; we'll also make a few modifications to the existing classes and functions for additional record keeping. To begin, we modify the assign and unassign methods in the CSP in order to add a copy of the assignment to the assignment_history. We name this new class InstruCSP; it will allow us to see how the assignment evolves over time. | import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assignment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assignment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assignment_history.append(copy.deepcopy(assignment)) | csp.ipynb | jo-tez/aima-python | mit |
Next, we define make_instru which takes an instance of CSP and returns an instance of InstruCSP. | def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors, csp.constraints) | csp.ipynb | jo-tez/aima-python | mit |
We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and the values are lists of the nodes they are connected to. | neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
} | csp.ipynb | jo-tez/aima-python | mit |
Now we are ready to create an InstruCSP instance for our problem. We do this for an instance created by MapColoringCSP, which returns a CSP object, so our make_instru function will work perfectly for it. | coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem) | csp.ipynb | jo-tez/aima-python | mit |
CONSTRAINT PROPAGATION
Algorithms that solve CSPs have a choice between searching and doing constraint propagation, a specific type of inference.
The constraints can be used to reduce the number of legal values for a variable, which in turn can reduce the legal values for some other variable, and so on.
<br>
Constraint propagation tries to enforce local consistency.
Consider each variable as a node in a graph and each binary constraint as an arc.
Enforcing local consistency causes inconsistent values to be eliminated throughout the graph,
a lot like the GraphPlan algorithm in planning, where mutex links are removed from a planning graph.
There are different types of local consistencies:
1. Node consistency
2. Arc consistency
3. Path consistency
4. K-consistency
5. Global constraints
Refer section 6.2 in the book for details.
<br>
AC-3
Before we dive into AC-3, we need to know what arc-consistency is.
<br>
A variable $X_i$ is arc-consistent with respect to another variable $X_j$ if for every value in the current domain $D_i$ there is some value in the domain $D_j$ that satisfies the binary constraint on the arc $(X_i, X_j)$.
<br>
A network is arc-consistent if every variable is arc-consistent with every other variable.
<br>
AC-3 is an algorithm that enforces arc consistency.
After applying AC-3, either every arc is arc-consistent, or some variable has an empty domain, indicating that the CSP cannot be solved.
Let's see how AC3 is implemented in the module. | psource(AC3) | csp.ipynb | jo-tez/aima-python | mit |
AC3 also employs a helper function revise. | psource(revise) | csp.ipynb | jo-tez/aima-python | mit |
AC3 maintains a queue of arcs to consider which initially contains all the arcs in the CSP.
An arbitrary arc $(X_i, X_j)$ is popped from the queue and $X_i$ is made arc-consistent with respect to $X_j$.
<br>
If in doing so, $D_i$ is left unchanged, the algorithm just moves to the next arc,
but if the domain $D_i$ is revised, then we add all the neighboring arcs $(X_k, X_i)$ to the queue.
<br>
We repeat this process and if at any point, the domain $D_i$ is reduced to nothing, then we know the whole CSP has no consistent solution and AC3 can immediately return failure.
<br>
Otherwise, we keep removing values from the domains of variables until the queue is empty.
We finally get the arc-consistent CSP which is faster to search because the variables have smaller domains.
Let's see how AC3 can be used.
<br>
We'll first define the required variables. | neighbors = parse_neighbors('A: B; B: ')
domains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4]}
constraints = lambda X, x, Y, y: x % 2 == 0 and (x + y) == 4 and y % 2 != 0
removals = [] | csp.ipynb | jo-tez/aima-python | mit |
We'll now define a CSP object. | csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
AC3(csp, removals=removals) | csp.ipynb | jo-tez/aima-python | mit |
This configuration is inconsistent. | constraints = lambda X, x, Y, y: (x % 2) == 0 and (x + y) == 4
removals = []
csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)
AC3(csp,removals=removals) | csp.ipynb | jo-tez/aima-python | mit |
This configuration is consistent.
BACKTRACKING SEARCH
The main issue with using Naive Search Algorithms to solve a CSP is that they can continue to expand obviously wrong paths; whereas, in backtracking search, we check the constraints as we go and we deal with only one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to speed it up further. The function returns the correct assignment if it satisfies the goal. However, we will discuss these later. For now, let us solve our coloring_problem1 with backtracking_search. | result = backtracking_search(coloring_problem1)
result # A dictionary of assignments. | csp.ipynb | jo-tez/aima-python | mit |
Let us also check the number of assignments made. | coloring_problem1.nassigns | csp.ipynb | jo-tez/aima-python | mit |
Now, let us check the total number of assignments and unassignments, which would be the length of our assignment history. We can see it by using the command below. | len(coloring_problem1.assignment_history) | csp.ipynb | jo-tez/aima-python | mit |
Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out the methods in the CSP class that help to make this work.
The first one is select_unassigned_variable. It takes in, as a parameter, a function that helps in deciding the order in which the variables will be selected for assignment. We use a heuristic called Most Restricted Variable, which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv, or most constrained variable, is that it allows us to encounter failure quickly, before going too deep into the search tree after a wrong choice. The mrv implementation makes use of another function, num_legal_values, to sort the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to return such values. | psource(mrv)
psource(num_legal_values)
psource(CSP.nconflicts) | csp.ipynb | jo-tez/aima-python | mit |
Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Combining mrv and lcv makes sense: every variable must eventually be assigned, so we face the hardest (most constrained) variables first, while for each variable we only need one value that works, so we try the least constraining values first. | psource(lcv)
Finally, the third parameter, inference, can make use of one of two techniques, called Arc Consistency and Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect possible failure before it occurs by looking ahead. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can find out more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance 'usa' for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments. | solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac)
solve_simple.nassigns
solve_parameters.nassigns | csp.ipynb | jo-tez/aima-python | mit |
TREE CSP SOLVER
The tree_csp_solver function (Figure 6.11 in the book) can be used to solve problems whose constraint graph is a tree. Given a CSP, with neighbors forming a tree, it returns an assignment that satisfies the given constraints. The algorithm works as follows:
First it finds the topological sort of the tree. This is an ordering of the tree where each variable/node comes after its parent in the tree. The function that accomplishes this is topological_sort; it builds the topological sort using the recursive function build_topological. That function is an augmented DFS (Depth First Search), where each newly visited node of the tree is pushed on a stack. The stack in the end holds the variables topologically sorted.
Then the algorithm makes arcs between each parent and child consistent. Arc-consistency between two variables, a and b, occurs when for every possible value of a there is an assignment in b that satisfies the problem's constraints. If such an assignment cannot be found, the problematic value is removed from a's possible values. This is done with the use of the function make_arc_consistent, which takes as arguments a variable Xj and its parent, and makes the arc between them consistent by removing any values from the parent which do not allow for a consistent assignment in Xj.
If an arc cannot be made consistent, the solver fails. If every arc is made consistent, we move to assigning values.
First we assign a random value to the root from its domain and then we assign values to the rest of the variables. Since the graph is now arc-consistent, we can simply move from variable to variable picking any remaining consistent values. At the end we are left with a valid assignment. If at any point though we find a variable where no consistent value is left in its domain, the solver fails.
Run the cell below to see the implementation of the algorithm: | psource(tree_csp_solver) | csp.ipynb | jo-tez/aima-python | mit |
We will now use the above function to solve a problem. More specifically, we will solve the problem of coloring Australia's map. We have two colors at our disposal: Red and Blue. As a reminder, this is the graph of Australia:
"SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: "
Unfortunately, as you can see, the above is not a tree. However, if we remove SA, which has arcs to WA, NT, Q, NSW and V, we are left with a tree (we also remove T, since it has no in-or-out arcs). We can now solve this using our algorithm. Let's define the map coloring problem at hand: | australia_small = MapColoringCSP(list('RB'),
'NT: WA Q; NSW: Q V') | csp.ipynb | jo-tez/aima-python | mit |
We will input australia_small to the tree_csp_solver and print the given assignment. | assignment = tree_csp_solver(australia_small)
print(assignment) | csp.ipynb | jo-tez/aima-python | mit |
WA, Q and V got painted with the same color and NT and NSW got painted with the other.
GRAPH COLORING VISUALIZATION
Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The readers need not concern themselves with the code that immediately follows, as it is just the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graphs that need to be colored or as constraint graphs for this problem. If interested, you can check out a fairly simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline. | %matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time | csp.ipynb | jo-tez/aima-python | mit |
The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which returns such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. The example below will elaborate it further. If this sounds confusing, don't worry. This is not part of the core material and our only goal is to help you visualize how the process works. | def make_update_step_function(graph, instru_csp):
#define a function to draw the graphs
def draw_graph(graph):
G=nx.Graph(graph)
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assignment_history we want to visualize.
current = instru_csp.assignment_history[iteration]
# We convert the particular assignment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.nodes()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label: label for label in G.nodes()}
# Labels shifted by offset so that nodes don't overlap
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# display the graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
| csp.ipynb | jo-tez/aima-python | mit |
Finally let us plot our problem. We first use the function below to obtain a step function. | step_func = make_update_step_function(neighbors, coloring_problem1) | csp.ipynb | jo-tez/aima-python | mit |
Next, we set the canvas size. | matplotlib.rcParams['figure.figsize'] = (18.0, 18.0) | csp.ipynb | jo-tez/aima-python | mit |
Finally, our plot using an ipywidgets slider and matplotlib. You can move the slider to experiment and see the colors change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds (up to one second) for each time step. | import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assignment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a) | csp.ipynb | jo-tez/aima-python | mit |
N-QUEENS VISUALIZATION
Just like the Graph Coloring Problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. That function also calls label_queen_conflicts, which modifies the grid, placing a 3 in any position where there is a conflict. | def label_queen_conflicts(assignment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assignment.items(): # check each queen for conflict
conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if (temp_row == row and temp_col != col
or (temp_row+temp_col == row+col and temp_col != col)
or (temp_row-temp_col == row-col and temp_col != col)}
# Place a 3 in positions where this is a conflict
for col, row in conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive function supports
single parameter as input. This function
creates and return such a function by taking
in input other parameters.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assignment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step | csp.ipynb | jo-tez/aima-python | mit |
Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps. | twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets | csp.ipynb | jo-tez/aima-python | mit |
Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step. | matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assignment_history)-1, step=1, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a) | csp.ipynb | jo-tez/aima-python | mit |
Now let us finally repeat the above steps for the min_conflicts solution. | conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen) | csp.ipynb | jo-tez/aima-python | mit |
This visualization has the same features as the one above; however, it also highlights conflicts by labeling the conflicted queens with a red background. | iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assignment_history)-1, step=1, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a) | csp.ipynb | jo-tez/aima-python | mit |
Pivot Tables w/ pandas
http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/ | YouTubeVideo("ZbrRrXiWBKc", width=400, height=300)
!conda install pivottablejs -y
df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1")
df.head(10)
from pivottablejs import pivot_ui | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | AstroHackWeek/AstroHackWeek2016 | mit |
Enhanced Pandas Dataframe Display | pivot_ui(df)
# Province, Party, Average, Age, Heatmap | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | AstroHackWeek/AstroHackWeek2016 | mit |
Keyboard shortcuts
For help, ESC + h | # in select mode, shift j/k (to select multiple cells at once)
# split cell with ctrl shift -
first = 1
second = 2
third = 3 | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | AstroHackWeek/AstroHackWeek2016 | mit |
You can also get syntax highlighting if you tell it the language that you're including:
```bash
mkdir toc
cd toc
wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js
wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css
cd ..
jupyter-nbextension install --user toc
jupyter-nbextension enable toc/toc
```
SQL
SELECT *
FROM tablename | %%bash
pwd
for i in *.ipynb
do
wc $i
done
echo
echo "break"
echo
du -h *ipynb
def silly_absolute_value_function(xval):
"""Takes a value and returns the value."""
xval_sq = xval ** 2.0
xval_abs = np.sqrt(xval_sq)
return xval_abs
silly_absolute_value_function?
silly_absolute_value_function??
# shift-tab
silly_absolute_value_function()
# shift-tab-tab
silly_absolute_value_function()
# shift-tab-tab-tab
silly_absolute_value_function()
import numpy as np
np.sin?? | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | AstroHackWeek/AstroHackWeek2016 | mit |
Stop here for now
R
pyRserve
rpy2 | import numpy as np
# !conda install -c r rpy2 -y
import rpy2
%load_ext rpy2.ipython
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
%%R?
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
XYcoef | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | AstroHackWeek/AstroHackWeek2016 | mit |
Table 4 - Low Resolution Analysis | tbl4 = ascii.read("http://iopscience.iop.org/0004-637X/794/1/36/suppdata/apj500669t4_mrt.txt")
tbl4[0:4]
Na_mask = ((tbl4["f_EWNaI"] == "Y") | (tbl4["f_EWNaI"] == "N"))
print "There are {} sources with Na I line detections out of {} sources in the catalog".format(Na_mask.sum(), len(tbl4))
tbl4_late = tbl4[['Name', '2MASS', 'SpType', 'e_SpType','EWHa', 'f_EWHa', 'EWNaI', 'e_EWNaI', 'f_EWNaI']][Na_mask]
tbl4_late.pprint(max_lines=100, ) | notebooks/Hernandez2014.ipynb | BrownDwarf/ApJdataFrames | mit |
Vertex SDK: AutoML training image object detection model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create image object detection models and do online prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.
Objective
In this tutorial, you create an AutoML image object detection model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python. | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job. | dag = aip.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag) | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If False (the default), training may be completed before using the entire budget if the service believes it cannot further improve the model objective measurements; set this to True to always use the full budget.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 60 minutes. | model = dag.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
) | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method. | endpoint = model.deploy() | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Send an online prediction request
Send an online prediction request to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. | test_items = !gsutil cat $IMPORT_FILE | head -n1
cols = str(test_items[0]).split(",")
if len(cols) == 11:
test_item = str(cols[1])
test_label = str(cols[2])
else:
test_item = str(cols[0])
test_label = str(cols[1])
print(test_item, test_label) | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which keeps the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ 'content': { 'b64': base64_encoded_bytes } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
bboxes: The bounding box of each detected object.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions. | import base64
import tensorflow as tf
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{"content": base64.b64encode(content).decode("utf-8")}]
prediction = endpoint.predict(instances=instances)
print(prediction) | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
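A minimal sketch of reading those response fields back out of the prediction object; the field names are the ones described above, and the 0.5 confidence cutoff is an arbitrary illustrative choice, not part of the original notebook.

```python
# Illustrative post-processing of the prediction response (sketch only).
# Assumes the response layout described above; the 0.5 threshold is arbitrary.
results = prediction.predictions[0]
for name, confidence, bbox in zip(
    results["displayNames"], results["confidences"], results["bboxes"]
):
    if confidence >= 0.5:
        print("{}: confidence={:.2f}, bbox={}".format(name, confidence, bbox))
print("deployed model:", prediction.deployed_model_id)
```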
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model. | endpoint.undeploy_all()
Collapse all posts of the same job title into a single document | by_job_title = jobs.groupby('title')
job_title_df = by_job_title.agg({'job_id': lambda x: ','.join(x), 'doc': lambda x: 'next_doc'.join(x)})
job_title_df = job_title_df.add_prefix('agg_').reset_index()
job_title_df.head()
n_job_title = by_job_title.ngroups
print('# job titles: %d' %n_job_title)
reload(cluster_skill_helpers)
from cluster_skill_helpers import *
jd_docs = job_title_df['agg_doc']
# This version of skills still contain stopwords
doc_skill = buildDocSkillMat(jd_docs, skill_df) | .ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb | musketeer191/job_analytics | gpl-3.0 |
Concat matrices doc_unigram, doc_bigram and doc_trigram to get occurrences of all skills: | from scipy.sparse import hstack
jobtitle_skill = hstack([doc_unigram, doc_bigram, doc_trigram])
with(open(SKILL_DIR + 'jobtitle_skill.mtx', 'w')) as f:
mmwrite(f, jobtitle_skill)
jobtitle_skill.shape
jobtitle_skill = jobtitle_skill.toarray() | .ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb | musketeer191/job_analytics | gpl-3.0 |
Most popular skills by job title | job_title_df.head(1)
idx_of_top_skill = np.apply_along_axis(np.argmax, 1, jobtitle_skill)
# skill_df = skills
skills = skill_df['skill']
top_skill_by_job_title = pd.DataFrame({'job_title': job_titles, 'idx_of_top_skill': idx_of_top_skill})
top_skill_by_job_title['top_skill'] = top_skill_by_job_title['idx_of_top_skill'].apply(lambda i: skills[i])
top_skill_by_job_title.head(30)
with(open(SKILL_DIR + 'jobtitle_skill.mtx', 'r')) as f:
jobtitle_skill = mmread(f)
jobtitle_skill = jobtitle_skill.tocsr()
jobtitle_skill.shape
job_titles = job_title_df['title']
# for each row (corresponding to a jobtitle) in matrix jobtitle_skill, get non-zero freqs
global k
k = 3
def getTopK_Skills(idx):
title = job_titles[idx]
print('Finding top-{} skills of job title {}...'.format(k, title))
skill_occur = jobtitle_skill.getrow(idx)
tmp = find(skill_occur)
nz_indices = tmp[1]
values = tmp[2]
res = pd.DataFrame({'job_title': title, 'skill_found_in_jd': skills[nz_indices], 'occur_freq': values})
res.sort_values('occur_freq', ascending=False, inplace=True)
return res.head(k)
# getTopK_Skills(0)
frames = map(getTopK_Skills, range(n_job_title))
res = pd.concat(frames) # concat() is great as it can concat as many df as possible
res.head(30)
res.to_csv(RES_DIR + 'top3_skill_by_jobtitle.csv', index=False) | .ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb | musketeer191/job_analytics | gpl-3.0 |
/...this is where I learned to not use pip install with scikit-learn...
To upgrade scikit-learn:
conda update scikit-learn | import sklearn.cluster
#from sklearn.cluster import KMeans
silAverage = [0.4227, 0.33299, 0.354, 0.3768, 0.3362, 0.3014, 0.3041, 0.307, 0.313, 0.325,
0.3109, 0.2999, 0.293, 0.289, 0.2938, 0.29, 0.288, 0.3, 0.287]
import matplotlib.pyplot as plt
%matplotlib inline | .ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb | halexand/NB_Distribution | mit |
OK...can I get a simple scatter plot? | plt.scatter(range(0,len(silAverage)), silAverage)
plt.grid() #put on a grid
plt.xlim(-1,20)
#get list of column names in pandas data frame
list(my_dataframe.columns.values)
for i in range(0,len(ut)):
if i == 10:
break
p = ut.iloc[i,:]
n = p.name
if n[0] == 'R':
#do the plotting,
#print 'yes'
CO = p.KEGG
kos = CO_withKO[CO]['Related KO']
cos = CO_withKO[CO]['Related CO']
#Tracer()()
for k in kos:
if k in KO_RawData.index:
kData=KO_RawData.loc[kos].dropna()
kData=(kData.T/kData.sum(axis=1)).T
#? why RawData, the output from the K-means will have the normalized data, use that for CO
#bc easier since that is the file I am working with right now.
#cData=CO_RawData.loc[cos].dropna()
#cData=(cData.T/cData.sum(axis=1)).T
cData = pd.DataFrame(p[dayList]).T
#go back and check, but I think this next step is already done
#cData=(cData.T/cData.sum(axis=1)).T
fig, ax=plt.subplots(1)
kData.T.plot(color='r', ax=ax)
cData.T.plot(color='k', ax=ax)
else:
#skip over the KO plotting, so effectively doing nothing
#print 'no'
pass | .ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb | halexand/NB_Distribution | mit |
Write a function to match RI number and cNumbers | def findRInumber(dataIn, KEGGin):
#find possible RI numbers for a given KEGG number; collect and return all matches
matches = []
for i, KEGG in enumerate(dataIn['KEGG']):
if KEGG == KEGGin:
matches.append(dataIn.index[i])
return matches
#For example: this will give back one row, C18028 will be multiple
m = findRInumber(forRelatedness,'C00031')
m
#to copy a matrix I would think this works: NOPE
#forRelatedness = CcoClust# this is NOT making a new copy...
#instead it makes a new pointing to an existing data frame. So you now have two ways to
#reference the same data frame. Make a change with one term and you can see the same change
#using the other name. Odd. No idea why you would want that.
##this is the test that finally let me understand enumerate
# for index, KEGG in enumerate(useSmall['KEGG']):
# print index,KEGG
# Windows
chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
url = "http://www.genome.jp/dbget-bin/www_bget?cpd:C00019"
webbrowser.get(chrome_path).open_new(url)
#while a nice idea, this stays open until you close the web browser window.
from IPython.display import HTML
tList = ['C02265','C00001']
for i in tList:
ml = '<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + i + ' width=700 height=350></iframe>'
print ml
from IPython.display import HTML
CO='C02265'
HTML('<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + CO + ' width=700 height=350></iframe>')
| .ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb | halexand/NB_Distribution | mit |
Also note that this exercise assumes you've already populated a malicious/ and a benign/ directory with samples that you consider malicious and benign, respectively. How many samples? In this notebook, I'm using 50K of each for demonstration purposes. Sadly, you must bring your own. If you don't populate these subdirectories with binaries (each renamed to the sha256 hash of its contents!), the code will bicker and complain incessantly.
Feature extraction for feature-based models
There is a lot of domain knowledge on what malware authors can do, and what malware authors actually do when crafting malicious files. Furthermore, there are some things malware authors seldom do that would indicate that a file is benign. For each file we want to analyze, we're going to encapsulate that domain knowledge about malicious and benign files in a single feature vector. See the source code at classifier/pefeatures.py.
Note that the feature extraction we use here contains many elements from published malware classification papers. Some of those are slightly modified. And there are additional features in this particular feature extraction that are included because, well, they were just sitting there in the LIEF parser patiently waiting for a chair at the feature vector table. Read: there's really no secret sauce in there, and to turn this into something commercially viable would take a bit of work. But, be my guest.
A note about LIEF. What a cool tool with a great mission! It aims to parse and manipulate binary files for Windows (PE), Linux (ELF) and macOS (Mach-O). Of course, we're using only the PE subset here. At the time of this writing, LIEF is still very much a new tool, and I've worked with the authors to help resolve some kinks. It's a growing project with more warts to find and fix. Nevertheless, we're using it as the backbone for features that require one to parse a PE file. | from classifier import common
# this will take a LONG time the first time you run it (and cache features to disk for next time)
# it's also chatty. Parts of feature extraction require LIEF, and LIEF is quite chatty.
# the output you see below is *after* I've already run feature extraction, so that
# X and sample_index are being read from cache on disk
X, y, sha256list = common.extract_features_and_persist()
# split our features, labels and hashes into training and test sets
from sklearn.model_selection import train_test_split
import numpy as np
np.random.seed(123)
X_train, X_test, y_train, y_test, sha256_train, sha256_test = train_test_split( X, y, sha256list, test_size=1000)
# a random train_test split, but for a malware classifier, we should really be holding out *future* malicious and benign
# samples, to better capture how we'll generalize to malware yet to be seen in the wild. ...an exercise left to the reader.. | BSidesLV -- your model isn't that special -- (1) MLP.ipynb | endgameinc/youarespecial | mit |
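As an aside, a minimal sketch of that time-based holdout; it assumes a hypothetical first_seen array (one "first observed" timestamp per sample, aligned with X and y) which this notebook does not actually collect:
import numpy as np

# first_seen is hypothetical: one timestamp per sample, aligned with X, y and sha256list
cutoff = np.percentile(first_seen, 80)        # train on the oldest ~80% of samples
is_train = np.asarray(first_seen) <= cutoff
X_train_time, y_train_time = X[is_train], y[is_train]
X_test_time, y_test_time = X[~is_train], y[~is_train]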
Multilayer perceptron
We'll use the features we extracted to train a multilayer perceptron (MLP). An MLP is an artificial neural network with at least one hidden layer. Is a multilayer perceptron "deep learning"? Well, it's a matter of semantics, but "deep learning" may imply that the features and model are optimized together, end-to-end. So, in that sense, no: since we're using domain knowledge to extract features and then pass them to an artificial neural network, we'll remain conservative and call this an MLP. (As we'll see, don't get fooled just because we're not calling this "deep learning": this MLP is no slouch.) The network architecture is defined in classifier/simple_multilayer.py. | # StandardScaling the data can be important to a multilayer perceptron
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
# Note that we're using scaling info from X_train to transform both
X_train = scaler.transform(X_train) # scale for multilayer perceptron
X_test = scaler.transform(X_test)
from classifier import simple_multilayer
from keras.callbacks import LearningRateScheduler, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
model = simple_multilayer.create_model(
input_shape=(X_train.shape[1], ), # input dimensions
input_dropout=0.05, # this prevents the model becoming a fanboy of (overfitting to) any particular input feature
hidden_dropout=0.1, # same, but for hidden units. Dropping out hidden layers can create a sort of ensemble learner
hidden_layers=[4096, 2048, 1024, 512] # this is "art". making up # of hidden layers and width of each. don't be afraid to change this
)
model.fit(X_train, y_train,
batch_size=128,
epochs=200,
verbose=1,
callbacks=[
EarlyStopping( patience=20 ),
ModelCheckpoint( 'multilayer.h5', save_best_only=True),
ReduceLROnPlateau( patience=5, verbose=1)],
validation_data=(X_test, y_test))
from keras.models import load_model
# we'll load the "best" model (in this case, the penultimate model) that was saved
# by our ModelCheckPoint callback
model = load_model('multilayer.h5')
y_pred = model.predict(X_test)
common.summarize_performance(y_pred, y_test, "Multilayer perceptron")
# The astute reader will note we should be doing this on a separate holdout, since we've explicitly
# saved the model that works best on X_test, y_test...an exercise left for the reader...
| BSidesLV -- your model isn't that special -- (1) MLP.ipynb | endgameinc/youarespecial | mit |
Sanity check: random forest classifier
Alright. Is that good? Let's compare to another model: we'll reach for the simple and reliable random forest classifier.
One nice thing about tree-based classifiers like a random forest classifier is that they are invariant to linear scaling and shifting of the dataset (the model will automatically learn those transformations). Nevertheless, for a sanity check, we're going to use the scaled/transformed features in a random forest classifier. | from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
# you can increase performance by increasing n_estimators, and removing the restriction on max_depth
# I've kept those in there because I want a quick-and-dirty look at how the MLP above compares
rf = RandomForestClassifier(
n_estimators=40,
n_jobs=-1,
max_depth=30
).fit(X_train, y_train)
y_pred = rf.predict_proba(X_test)[:,-1] # get probability of malicious (last class == last column)
_ = common.summarize_performance(y_pred, y_test, "RF Classifier") | BSidesLV -- your model isn't that special -- (1) MLP.ipynb | endgameinc/youarespecial | mit |
The file object is already implemented in Python, just like thousands of other classes, so we do not have to bother with implementing reading and writing of files ourselves. Instead, let's have a look at defining our own classes.
A class can be defined using the <span style="color: green">class</span> statement followed by a class name. This is very similar to <span style="color: green">def</span>. Everything inside the class namespace is part of that class. The shortest possible class defines nothing inside its namespace (and will therefore have no attributes and no functionality). Nevertheless, it can be instantiated and a reference to the class instance can be assigned to a variable. | # define class
class Car:
pass
# create two instances
vw = Car()
audi= Car()
print('vw: ', type(vw), 'audi: ', type(audi))
print('vw: ', vw.__class__, 'audi: ', audi.__class__)
print('vw: ', str(vw), 'audi: ', str(audi)) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
Methods
The <span style='color: blue'>Car</span> class shown above is not really useful yet. But we can define functions inside the class namespace; these functions are called methods. To be precise: they are called instance methods and should not be confused with class methods, which will not be covered here.
Although we did not define any methods ourselves, there are already some methods assigned to <span style='color: blue'>Car</span>, which Python created for us. These very generic methods handle the return value of the <span style="color: green">type</span> or <span style="color: green">str</span> function when invoked on a <span style='color: blue'>Car</span> instance.
We will first focus on a special method, __init__. This method is already defined, but does not do anything yet. We can, however, override it and fill it with our own code. It is called on object instantiation, so this is where we can set default values and define what a <span style='color: blue'>Car</span> instance should look like after creation.
Let's define an actual speed and maximum speed for our car, because this is what a car needs. | # redefine class
class Car:
def __init__(self):
self.speed = 0
self.max_speed = 100
# create two instances
vw = Car()
audi = Car()
print('vw: speed: %d max speed: %d' % (vw.speed, vw.max_speed))
print('audi: speed: %d max speed: %d' % (audi.speed, audi.max_speed))
audi.max_speed = 250
audi.speed = 260
vw.speed = - 50.4
print('vw: speed: %d max speed: %d' % (vw.speed, vw.max_speed))
print('audi: speed: %d max speed: %d' % (audi.speed, audi.max_speed)) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
This is better, but still somehow wrong. A car should not be allowed to drive faster than its maximum possible speed. A Volkswagen might not be the best car in the world, but it can definitely do better than negative speeds. A better approach would be to define some methods for accelerating and decelerating the car.<br>
Define two methods accelerate and decelerate that accept a value and set the new speed for the car. Prevent the car from reaching negative speeds and stick to the maximum speed. One possible solution is sketched after the next cell. | # redefine class
class Car:
pass
vw = Car()
print(vw.speed)
vw.accelerate(60)
print(vw.speed)
vw.accelerate(45)
print(vw.speed)
vw.decelerate(10)
print(vw.speed)
vw.decelerate(2000)
print(vw.speed) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
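One possible solution for the exercise above (just a sketch, not the only reasonable design): clamp the speed between 0 and max_speed inside the two methods.
class Car:
    def __init__(self):
        self.speed = 0
        self.max_speed = 100

    def accelerate(self, value):
        # never exceed the maximum speed
        self.speed = min(self.speed + value, self.max_speed)

    def decelerate(self, value):
        # never drop below a standstill
        self.speed = max(self.speed - value, 0)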
Magic Methods
Maybe you recognized the two underscores in the __init__ method. A defined set of function names following this naming pattern are called magic methods in Python, because they influence the object's behaviour using magic. Besides __init__, two other very important magic methods are __repr__ and __str__. <br>
The return value of __str__ defines the string representation of the object instance. This way you can define the return value whenever <span style="color: green">str</span> is called on an object instance. The __repr__ method is very similar, but returns the object representation. Whenever possible, the object shall be recoverable from this returned string. However, with most custom classes this is not easily possible and __repr__ shall return a one line string that clearly identifies the object instance. This is really useful for debugging your code. | print('str(vw) old:' , str(vw))
class Car:
pass
vw = Car()
vw.accelerate(45)
print('str(vw) new:', str(vw)) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
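A possible way to fill the cell above (again only a sketch): keep the clamped accelerate from the previous sketch and add __str__ and __repr__ so the print calls produce something readable.
class Car:
    def __init__(self):
        self.speed = 0
        self.max_speed = 100

    def accelerate(self, value):
        self.speed = min(self.speed + value, self.max_speed)

    def __str__(self):
        return 'Car driving at %d km/h (max. %d km/h)' % (self.speed, self.max_speed)

    def __repr__(self):
        return 'Car(speed=%d, max_speed=%d)' % (self.speed, self.max_speed)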
Using these functions, almost any behaviour of the <span style='color: blue'>Car</span> instance can be influenced.
Imagine you are using it in a conditional statement and test two instances for equality or if one instance is bigger than the other one.<br>
Are these two variables equal if they reference exactly the same instance?
Are they equal in case they are of the same model
Is one instance bigger in case it's actually faster?
or has the higher maximum speed?
Let's define a new attribute model, which __init__ takes as an argument. Then the magic method __eq__ can be used to compare the models of two instances.<br>
The __eq__ method has the signature __eq__(self, other) and should return either <span style='color: green'>True</span> or <span style='color: green'>False</span>. One possible implementation is sketched after the next cell. | class Car:
pass
vw = Car('vw')
vw2 = Car('vw')
audi = Car('audi')
print('vw equals vw2? ',vw == vw2)
print('vw equals vw? ',vw == vw)
print('vw equals audi? ', vw == audi)
print('is vw exactly 9? ', vw == 9) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
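One way to complete the cell above (a sketch; the equality semantics are a design choice): store the model name and let __eq__ compare models, returning False for anything that is not a Car.
class Car:
    def __init__(self, model):
        self.model = model
        self.speed = 0
        self.max_speed = 100

    def __eq__(self, other):
        # two cars are considered equal if they are of the same model
        if not isinstance(other, Car):
            return False
        return self.model == other.model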
private methods and attributes
The <span style='color: blue'>Car</span> class has two methods which are meant to be used for manipulating the actual speed. Nevertheless, one could directly assign new values, even of types other than integers, to the speed and max_speed attributes. Thus, one would call these attributes public attributes, just like accelerate and decelerate are public methods. This implies to other developers, 'It's ok to directly use these attributes and methods, that's why I put them there.' | vw = Car('audi')
print('Speed: ', vw.speed)
vw.speed = 900
print('Speed: ', vw.speed)
vw.speed = -11023048282
print('Speed: ', vw.speed)
vw.speed = Car('vw')
print('Speed: ', vw.speed) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
Consequently, we want to protect this attribute from access from outside the class itself. Other languages use the keyword <span style="color: blue">private</span> to achieve this. Python is not very explicit here, as it does not define a keyword or statement for this; instead, you prefix the attribute or method name with double underscores. After renaming Car.speed to Car.__speed, direct access as shown above will no longer work.
As the user or other developers cannot access the speed directly anymore, we have to offer a new interface for accessing this attribute. We could either define a method getSpeed returning the actual speed or implement a so-called property, which will be introduced in a later example. One possible solution for the next cell is sketched right after it.
Note: Some Jupyter notebooks allow accessing a protected attribute, but your Python console won't allow this. | class Car:
pass
vw = Car('vw')
vw.accelerate(45)
print(vw)
vw.decelerate(20)
print(vw)
print(vw.getSpeed()) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
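A sketch of how the cell above could be filled in: the speed lives in a name-mangled __speed attribute, the clamping logic moves into accelerate and decelerate, and getSpeed becomes the only public way to read the value.
class Car:
    def __init__(self, model):
        self.model = model
        self.__speed = 0
        self.max_speed = 100

    def accelerate(self, value):
        self.__speed = min(self.__speed + value, self.max_speed)

    def decelerate(self, value):
        self.__speed = max(self.__speed - value, 0)

    def getSpeed(self):
        return self.__speed

    def __str__(self):
        return '%s driving at %d km/h' % (self.model, self.__speed)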
class attributes
All attributes and methods defined so far have one thing in common: they are bound to the instance. That means you can only access or invoke them using a reference to this instance. In most cases this is exactly what you want and would expect, as altering one instance won't influence other class instances. But in some cases shared state is exactly the desired behaviour. A typical example is counting object instances. For our <span style='color: blue'>Car</span> class this would mean an attribute storing the current number of instantiated cars. It is not possible to implement this using instance attributes and methods.<br>
One (bad) solution would be shifting the declaration of <span style='color: blue'>Car</span> from the global namespace into a function returning a new car instance. The function could then increment a global variable. The downside is that destroyed car instances won't decrement this global variable. A function like this would, by the way, be called a ClassFactory in the Python world.<br>
The second (way better) solution is using a class attribute. These attributes are bound to the class, not to an instance of that class. That means all instances operate on the same variable. In the field of data analysis one would implement a counter like this, for example, to count the instances of a class handling large amounts of data, like a raster image; the number of instances could then be limited. One possible implementation is sketched after the next cell. | class Car:
pass
vw = Car('vw')
print(vw.count)
audi = Car('audi')
print(audi.count)
bmw = Car('bmw')
print('BMW:', bmw.max_speed)
print('VW:', vw.max_speed)
print('Audi:', audi.max_speed)
print(vw.count) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
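One possible implementation of the cell above (a sketch; the per-model maximum speeds are made-up example values): count lives on the class, so every call to __init__ increments the same shared counter.
class Car:
    count = 0                                              # class attribute, shared by all instances
    _max_speeds = {'vw': 200, 'audi': 250, 'bmw': 260}     # made-up example values

    def __init__(self, model):
        self.model = model
        self.max_speed = Car._max_speeds.get(model, 100)
        self.speed = 0
        Car.count += 1                                     # increment on the class, not the instance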
Inheritance
As a proper OOP language, Python also implements inheritance. This means that one can define a class which inherits the attributes and methods from another class. You can put other classes into the parentheses of your class signature and the new class will inherit from these classes. One would call this new class a child class and the class it inherits from a parent class. Each of these child classes can of course pass its attributes and methods on to as many children as needed. These children then inherit from their parent and all of their parents.<br>
In case a method or attribute gets re-defined, the child method or attribute will overwrite the parent's methods and attributes.<br>
A real world example of this concept is the definition of a class that can read different file formats and transform the content into an application-specific internal format. You could first write a class that can do the transformation. Next, another class is defined inheriting from this base class; this class can read all text files on a very generic level. From there, different classes can be defined, each one capable of exactly one specific text-based format, like a CSV or JSON reader. Now, each of these specific classes knows all the methods of all parent classes, and the transformation does not have to be redefined on each level. The second advantage is that at a later point in time one could decide to implement a generic database reader as well; then different database-engine-specific readers could be defined and again inherit all the transformation logic. A rough sketch of this idea follows after the next cell.
Here, we will use this concept to write two inheriting classes, VW and Audi, which both just set the model into a protected attribute.<br> How could this concept be extended? | class VW(Car):
def __init__(self):
super(VW, self).__init__('vw')
class Audi(Car):
def __init__(self):
super(Audi, self).__init__('audi')
vw = VW()
audi = Audi()
vw.accelerate(40)
audi.accelerate(400)
print(vw)
print(audi)
print(vw == audi)
print(isinstance(vw, VW))
print(isinstance(vw, Car)) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
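As a rough sketch of the file-reader hierarchy described above (all class and method names are made up for illustration), each level only adds what its parents do not already provide:
import csv

class BaseTransformer:
    def transform(self, records):
        # turn raw records into the application's internal format
        return [dict(r) for r in records]

class TextReader(BaseTransformer):
    def read_text(self, path):
        with open(path) as f:
            return f.read()

class CSVReader(TextReader):
    def parse(self, path):
        # inherits transform() from BaseTransformer and read_text() from TextReader
        with open(path) as f:
            return self.transform(csv.DictReader(f))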
Property
Sometimes it would be really handy if an attribute could be altered or calculated before being returned to the user. Or even better: if one could make a function behave like an attribute. That's exactly what a property does. Properties are methods with no argument other than self and can therefore be executed without parentheses. Using a property like this enables us to reimplement the speed attribute.<br>
The property function is a built-in function that takes a function as its only argument and returns the same function with the added property behaviour. In computer science, a function that expects another function, alters it and returns it back for use is called a decorator (a concept borrowed from Java). Decorating functions is even easier in Python, as you can just use the decorator operator: @. | class MyInt(int):
def as_string(self):
return 'The value is %s' % self
i = MyInt(5)
print(i.as_string())
class MyInt(int):
@property
def as_string(self):
return 'The value is %s' % self
x = MyInt(7)
print(x.as_string)
class Car:
pass
class VW(Car):
def __init__(self):
super(VW, self).__init__('vw')
vw = VW()
vw.accelerate(60)
print(vw.speed) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
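A sketch of how the Car used in the cell above could look once speed becomes a read-only property backed by the protected attribute:
class Car:
    def __init__(self, model):
        self.model = model
        self.__speed = 0
        self.max_speed = 100

    def accelerate(self, value):
        self.__speed = min(self.__speed + value, self.max_speed)

    def decelerate(self, value):
        self.__speed = max(self.__speed - value, 0)

    @property
    def speed(self):
        # read-only access to the protected attribute
        return self.__speed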
Property.setter
Obviously, the protected __speed attribute cannot be changed, and the speed property is a function and thus cannot be set. In the example of the Car this absolutely makes sense, but setting a property is nevertheless possible. For this, the setter method is defined accepting an additional positional argument, which will be filled with the assigned value. The decorator used for this redefinition is the setter attribute of the property itself (here @model.setter). | class Model(object):
def __init__(self, name):
self.__model = self.check_model(name)
def check_model(self, name):
if name.lower() not in ('vw', 'audi'):
return 'VW'
else:
return name.upper()
@property
def model(self):
return self.__model
@model.setter
def model(self, value):
self.__model = self.check_model(value)
car = Model('audi')
print(car.model)
car.model = 'vw'
print(car.model)
car.model = 'mercedes'
print(car.model)
setattr(car, '__model', 'mercedes')
print(car.model) | felis_python1/lectures/06_Classes.ipynb | mmaelicke/felis_python1 | mit |
Grabbing Current Data
data.current()
data.current() can be used to retrieve the most recent value of a given field(s) for a given asset(s). data.current() requires two arguments: the asset or list of assets, and the field or list of fields being queried. Possible fields include 'price', 'open', 'high', 'low', 'close', and 'volume'. The output type will depend on the input types. | def initialize(context):
# Reference to Tech Stocks
context.techies = [sid(16841),
sid(24),
sid(1900)]
def handle_data(context, data):
# Get the most recent closing prices for our tech stocks
tech_close = data.current(context.techies, 'close')
print(type(tech_close)) # Pandas Series
print(tech_close) # Closing Prices | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
Note! You can use data.is_stale(sid(#)) to check if the results of data.current() were generated at the current bar (the timeframe) or were forward-filled from a previous time.
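A short sketch of how that check might be used inside handle_data (it assumes an asset reference such as context.amzn was stored in initialize, as in the example below):
def handle_data(context, data):
    if data.is_stale(context.amzn):
        # the current value was forward-filled from an earlier bar; skip this minute
        return
    price = data.current(context.amzn, 'price')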
Checking for trading
data.can_trade()
data.can_trade() is used to determine if an asset(s) is currently listed on a supported exchange and can be ordered. If data.can_trade() returns True for a particular asset in a given minute bar, we are able to place an order for that asset in that minute. This is an important guard to have in our algorithm if we hand-pick the securities that we want to trade. It requires a single argument: an asset or a list of assets. | def initialize(context):
# Reference to amazn
context.amzn = sid(16841)
def handle_data(context, data):
# This ensures we don't hit an exception!
if data.can_trade(sid(16841)):
order_target_percent(context.amzn, 1.0) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
Checking Historical Data
When your algorithm calls data.history on equities, the returned data is adjusted for splits, mergers, and dividends as of the current simulation date. In other words, when your algorithm asks for a historical window of prices, and there is a split in the middle of that window, the first part of that window will be adjusted for the split. This adjustment is done so that your algorithm can do meaningful calculations using the values in the window.
This code queries the last 20 days of price history for a static set of securities. Specifically, this returns the closing daily price for the last 20 days, including the current price for the current day. Equity prices are split- and dividend-adjusted as of the current date in the simulation: |
def initialize(context):
# AAPL, MSFT, and SPY
context.assets = [sid(24), sid(1900), sid(16841)]
def before_trading_start(context,data):
price_history = data.history(context.assets,
fields = "price",
bar_count = 5,
frequency = "1d")
print(price_history)
| 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
The bar_count field specifies the number of days or minutes to include in the pandas DataFrame returned by the history function. This parameter accepts only integer values.
The frequency field specifies how often the data is sampled: daily or minutely. Acceptable inputs are ‘1d’ or ‘1m’. For other frequencies, use the pandas resample function.
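As a sketch of that resampling idea (aggregating minute bars into 15-minute bars; the '15T' rule and last-price aggregation are just one possible choice):
minute_prices = data.history(context.assets, "price", 390, "1m")
fifteen_minute_prices = minute_prices.resample("15T").last()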
Examples
Below are examples of code along with explanations of the data returned.
Daily History
Use "1d" for the frequency. The dataframe returned is always in daily bars. The bars never span more than one trading day. For US equities, a daily bar captures the trade activity during market hours (usually 9:30am-4:00pm ET). For US futures, a daily bar captures the trade activity from 6pm-6pm ET (24 hours). For example, the Monday daily bar captures trade activity from 6pm the day before (Sunday) to 6pm on the Monday. Tuesday's daily bar will run from 6pm Monday to 6pm Tuesday, etc. For either asset class, the last bar, if partial, is built using the minutes of the current day.
Examples (assuming context.assets exists):
data.history(context.assets, "price", 1, "1d") returns the current price.
data.history(context.assets, "volume", 1, "1d") returns the volume since the current day's open, even if it is partial.
data.history(context.assets, "price", 2, "1d") returns yesterday's close price and the current price.
data.history(context.assets, "price", 6, "1d") returns the prices for the previous 5 days and the current price.
Minute History
Use "1m" for the frequency.
Examples (assuming context.assets exists):
data.history(context.assets, "price", 1, "1m") returns the current price.
data.history(context.assets, "price", 2, "1m") returns the previous minute's close price and the current price.
data.history(context.assets, "volume", 60, "1m") returns the volume for the previous 60 minutes.
Scheduling
Use schedule_function to indicate when you want other functions to occur. The functions passed in must take context and data as parameters. | def initialize(context):
context.appl = sid(49051)
# At the beginning of the trading week
# At Market Open, set 10% of portfolio to be apple
schedule_function(open_positions,
date_rules.week_start(),
time_rules.market_open())
# At end of trading week
# 30 min before market close, dump all apple stock.
schedule_function(close_positions,
date_rules.week_end(),
time_rules.market_close(minutes = 30))
def open_positions(context, data):
order_target_percent(context.appl, 0.10)
def close_positions(context, data):
order_target_percent(context.appl, 0) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
Portfolio Information
You can get portfolio information and record it! | def initialize(context):
context.amzn = sid(16841)
context.ibm = sid(3766)
schedule_function(rebalance,
date_rules.every_day(),
time_rules.market_open())
schedule_function(record_vars,
date_rules.every_day(),
time_rules.market_close())
def rebalance(context, data):
# Half of our portfolio long on amazn
order_target_percent(context.amzn, 0.50)
# Half is shorting IBM
order_target_percent(context.ibm, -0.50)
def record_vars(context, data):
# Plot the counts
record(amzn_close=data.current(context.amzn, 'close'))
record(ibm_close=data.current(context.ibm, 'close')) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
Slippage and Commision
Slippage
Slippage is where a simulation estimates the impact of orders on the fill rate and execution price they receive. When an order is placed for a trade, the market is affected. Buy orders drive prices up, and sell orders drive prices down; this is generally referred to as the price_impact of a trade. Additionally, trade orders do not necessarily fill instantaneously. Fill rates are dependent on the order size and current trading volume of the ordered security. The volume_limit determines the fraction of a security's trading volume that can be used by your algorithm.
In backtesting and non-brokerage paper trading (Quantopian paper trading), a slippage model can be specified in initialize() using set_slippage(). There are different builtin slippage models that can be used, as well as the option to set a custom model. By default (if a slippage model is not specified), the following volume share slippage model is used: | set_slippage(slippage.VolumeShareSlippage(volume_limit = 0.025,
price_impact = 0.1)) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
Using the default model, if an order of 60 shares is placed for a given stock, then 1000 shares of that stock trade in each of the next several minutes and the volume_limit is 0.025, then our trade order will be split into three orders (25 shares, 25 shares, and 10 shares) that execute over the next 3 minutes.
At the end of each day, all open orders are canceled, so trading liquid stocks is generally a good idea. Additionally, orders placed exactly at market close will not have time to fill, and will be canceled.
Commision
To set the cost of trades, we can specify a commission model in initialize() using set_commission(). By default (if a commission model is not specified), the following commission model is used: | set_commission(commission.PerShare(cost = 0.0075,
min_trade_cost = 1)) | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | arcyfelix/Courses | apache-2.0 |
The data were taken from the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets/banknote+authentication.
The dataset was constructed by applying a wavelet transform to grayscale images of forged and genuine banknotes. | df = pd.read_csv( 'data_banknote_authentication.txt', sep = ",", decimal = ".", header = None,
names = [ "variance", "skewness", "curtosis", "entropy", "class" ] )
y = df.xs( "class", axis = 1 )
X = df.drop( "class", axis = 1 ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit |
The data under study contain the following number of points: | print len( X ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit
We split the loaded data into two samples: a training sample ($\text{_train}$) and a test sample ($\text{_test}$), which will not be used during training.
We split the data into training and test sets in a 2:3 ratio. | X_train, X_test, y_train, y_test = cross_validation.train_test_split( X, y, test_size = 0.60,
random_state = random_state ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit |
The training sample contains this many observations: | print len( X_train ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit
Consider the SVM in the linearly non-separable case with an $L^1$ norm on the slack variables $(\xi_i)_{i=1}^n$:
$$ \frac{1}{2} \|\beta\|^2 + C \sum_{i=1}^n \xi_i \to \min_{\beta, \beta_0, (\xi_i)_{i=1}^n} \,, $$
subject to the constraints: for every $i=1,\ldots,n$ we require $\xi_i \geq 0$ and
$$ \bigl( \beta' \phi(x_i) + \beta_0 \bigr) y_i \geq 1 - \xi_i \,.$$ | svm_clf_ = svm.SVC( probability = True, max_iter = 100000 ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit
We will select the kernel type (and, correspondingly, the feature map $\phi:\mathcal{X}\to\mathcal{H}$) and the regularization parameter $C$ by exhaustive grid search with $5$-fold cross-validation on the training sample $\text{X_train}$.
We will consider three kernels: the Gaussian kernel
$$ K( x, y ) = \text{exp}\bigl\{ -\frac{1}{2\gamma^2} \|x-y\|^2 \bigr\} \,,$$ | ## Kernel type: Gaussian (RBF) kernel
grid_rbf_ = grid_search.GridSearchCV( svm_clf_, param_grid = {
## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
"C" : np.logspace( -4, 1, num = 6 ),
"kernel" : [ "rbf" ],
## "Concentration" (bandwidth) parameter of the Gaussian kernel
"gamma" : np.logspace( -2, 2, num = 10 ),
}, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )
df_rbf_ = collect_result( grid_rbf_, names = [ "Ядро", "C", "Параметр" ] ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit |
the polynomial kernel
$$ K( x, y ) = \bigl( 1 + \langle x, y\rangle\bigr)^p \,, $$ | ## Kernel type: polynomial kernel
grid_poly_ = grid_search.GridSearchCV( svm.SVC( probability = True, max_iter = 20000, kernel = "poly" ), param_grid = {
## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
"C" : np.logspace( -4, 1, num = 6 ),
"kernel" : [ "poly" ],
## Degree of the polynomial kernel
"degree" : [ 2, 3, 5, 7 ],
}, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )
df_poly_ = collect_result( grid_poly_, names = [ "Ядро", "C", "Параметр" ] ) | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | ivannz/study_notes | mit |