and the linear kernel (in $\mathbb{R}^d$) $$ K( x, y ) = \langle x, y\rangle \,,$$
## Kernel type: linear kernel
grid_linear_ = grid_search.GridSearchCV(
    svm_clf_,
    param_grid = {
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "linear" ],
        "degree" : [ 0 ] },
    cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )

df_linear_ = collect_result( grid_linear_, names = [ "Ядро", "C", "Параметр" ] )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
The grid-search results are shown below:
pd.concat( [ df_linear_, df_poly_, df_rbf_ ], axis = 0 ).sort_index( )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Let's look at the accuracy of the best model in each kernel class on the test set. Linear kernel
print grid_linear_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_linear_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Gaussian (RBF) kernel
print grid_rbf_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_rbf_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Polynomial kernel
print grid_poly_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_poly_.best_estimator_.score( X_test, y_test ) * 100, )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
Let's plot the ROC curve for the best model in each kernel class.
result_ = { name_: metrics.roc_curve( y_test, estimator_.predict_proba( X_test )[:,1] )
            for name_, estimator_ in { "Linear": grid_linear_.best_estimator_,
                                       "Polynomial": grid_poly_.best_estimator_,
                                       "RBF": grid_rbf_.best_estimator_ }.iteritems( ) }

fig = plt.figure( figsize = ( 16, 9 ) )
ax = fig.add_subplot( 111 )
ax.set_ylim( -0.1, 1.1 ) ; ax.set_xlim( -0.1, 1.1 )
ax.set_xlabel( "FPR" ) ; ax.set_ylabel( u"TPR" )
ax.set_title( u"ROC-AUC" )
for name_, value_ in result_.iteritems( ) :
    fpr, tpr, _ = value_
    ax.plot( fpr, tpr, lw=2, label = name_ )
ax.legend( loc = "lower right" )
year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb
ivannz/study_notes
mit
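Alongside the curves themselves, the area under each ROC curve can be reported as a single number. A minimal sketch (assuming, as in the cell above, that the SVMs were fitted with `probability=True` so that `predict_proba` is available):

```python
for name_, estimator_ in { "Linear": grid_linear_.best_estimator_,
                           "Polynomial": grid_poly_.best_estimator_,
                           "RBF": grid_rbf_.best_estimator_ }.iteritems( ) :
    # area under the ROC curve on the held-out test set
    auc_ = metrics.roc_auc_score( y_test, estimator_.predict_proba( X_test )[:, 1] )
    print "%s AUC: %.3f" % ( name_, auc_ )
```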
Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)
rides.head()
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
rides[:24*10].plot(x='dteday', y='cnt')
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']

# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                        (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
                                                         (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        #### Set this to your implemented sigmoid function ####
        # Activation function is the sigmoid function
        self.activation_function = self.sigmoid

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def train(self, inputs_list, targets_list):
        # Convert inputs list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        #### Implement the forward pass here ####
        ### Forward pass ###
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)      # signals from hidden layer

        # Output layer: identity activation f(x) = x, since this is a regression
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs                                          # signals from final output layer

        #### Implement the backward pass here ####
        ### Backward pass ###
        # Output error: difference between desired target and actual output
        output_errors = targets - final_outputs

        # Backpropagated error: propagate the output error back to the hidden layer
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)  # errors propagated to the hidden layer
        hidden_grad = hidden_outputs * (1 - hidden_outputs)                     # hidden layer gradients

        # Update the weights with a gradient descent step
        self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T

        #### Implement the forward pass here ####
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)      # signals from hidden layer

        # Output layer (identity activation for the regression output)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs                                          # signals from final output layer

        return final_outputs


def MSE(y, Y):
    return np.mean((y-Y)**2)
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.

Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
import sys ### Set the hyperparameters here ### epochs = 1000 learning_rate = 0.05 hidden_nodes = 3 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5)
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45)
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.

Your answer below

Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite)
your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb
xpharry/Udacity-DLFoudation
mit
Plotting the data First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
# Importing matplotlib import matplotlib.pyplot as plt # Function to help us plot def plot_points(data): X = np.array(data[["gre","gpa"]]) y = np.array(data["admit"]) admitted = X[np.argwhere(y==1)] rejected = X[np.argwhere(y==0)] plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k') plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k') plt.xlabel('Test (GRE)') plt.ylabel('Grades (GPA)') # Plotting the points plot_points(data) plt.show()
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Roughly, it looks like the students with high scores on the grades and test were admitted, while the ones with low scores weren't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
# Separating the ranks data_rank1 = data[data["rank"]==1] data_rank2 = data[data["rank"]==2] data_rank3 = data[data["rank"]==3] data_rank4 = data[data["rank"]==4] # Plotting the graphs plot_points(data_rank1) plt.title("Rank 1") plt.show() plot_points(data_rank2) plt.title("Rank 2") plt.show() plot_points(data_rank3) plt.title("Rank 3") plt.show() plot_points(data_rank4) plt.title("Rank 4") plt.show()
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rank Use the get_dummies function in pandas in order to one-hot encode the data.
# TODO: Make dummy variables for rank one_hot_data = pass # TODO: Drop the previous rank column one_hot_data = pass # Print the first 10 rows of our data one_hot_data[:10]
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
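One possible way to fill in the TODO above is sketched below; it assumes the admissions data sits in a DataFrame called `data` with a `rank` column, as in the cells above, and that pandas is imported as `pd` (note that `get_dummies` is a pandas function):

```python
# Make dummy variables for rank and append them to the data
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)

# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)

# Print the first 10 rows of our data
one_hot_data[:10]
```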
TODO: Scaling the data The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
# Making a copy of our data
processed_data = one_hot_data[:]

# TODO: Scale the columns

# Printing the first 10 rows of our processed data
processed_data[:10]
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
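A possible completion of the scaling TODO, assuming the test-score and grade columns are named `gre` and `gpa` as in the plotting code earlier in this notebook:

```python
# Scale grades into 0-1 by dividing by 4.0, and test scores by dividing by 800
processed_data['gre'] = processed_data['gre'] / 800
processed_data['gpa'] = processed_data['gpa'] / 4.0
```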
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)

print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Splitting the data into features and targets (labels) Now, as a final step before the training, we'll split the data into features (X) and targets (y).
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']

print(features[:10])
print(targets[:10])
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Training the 2-layer Neural Network The following function trains the 2-layer neural network. First, we'll write some helper functions.
# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1-sigmoid(x))

def error_formula(y, output):
    return - y*np.log(output) - (1 - y) * np.log(1-output)
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
TODO: Backpropagate the error Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ -(y-\hat{y}) \sigma'(x) $$
# TODO: Write the error term formula def error_term_formula(y, output): pass # Neural Network hyperparameters epochs = 1000 learnrate = 0.5 # Training function def train_nn(features, targets, epochs, learnrate): # Use to same seed to make debugging easier np.random.seed(42) n_records, n_features = features.shape last_loss = None # Initialize weights weights = np.random.normal(scale=1 / n_features**.5, size=n_features) for e in range(epochs): del_w = np.zeros(weights.shape) for x, y in zip(features.values, targets): # Loop through all records, x is the input, y is the target # Activation of the output unit # Notice we multiply the inputs and the weights here # rather than storing h as a separate variable output = sigmoid(np.dot(x, weights)) # The error, the target minus the network output error = error_formula(y, output) # The error term # Notice we calulate f'(h) here instead of defining a separate # sigmoid_prime function. This just makes it faster because we # can re-use the result of the sigmoid function stored in # the output variable error_term = error_term_formula(y, output) # The gradient descent step, the error times the gradient times the inputs del_w += error_term * x # Update the weights here. The learning rate times the # change in weights, divided by the number of records to average weights += learnrate * del_w / n_records # Printing out the mean square error on the training set if e % (epochs / 10) == 0: out = sigmoid(np.dot(features, weights)) loss = np.mean((out - targets) ** 2) print("Epoch:", e) if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss print("=========") print("Finished training!") return weights weights = train_nn(features, targets, epochs, learnrate)
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
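One way to fill in the error-term TODO, following the formula $-(y-\hat{y})\,\sigma'(x)$ quoted above: since the sigmoid of the input is already available as `output`, we can reuse $\sigma'(x) = \sigma(x)(1-\sigma(x))$, with the sign chosen so that the `+=` weight update in the training loop below steps in the descent direction. This is a sketch, not necessarily the only accepted solution:

```python
# (y - y_hat) * sigma'(h), where sigma(h) has already been computed as `output`
def error_term_formula(y, output):
    return (y - output) * output * (1 - output)
```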
Calculating the Accuracy on the Test Data
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
student-admissions/StudentAdmissions.ipynb
ktmud/deep-learning
mit
Parameters
# number of realizations along which to average the psd estimate n_real = 100 # modulation scheme and constellation points constellation = [ -1, 1 ] # number of symbols n_symb = 100 t_symb = 1.0 chips_per_symbol = 8 samples_per_chip = 8 samples_per_symbol = samples_per_chip * chips_per_symbol # parameters for frequency regime N_fft = 512 omega = np.linspace( -np.pi, np.pi, N_fft ) f_vec = omega / ( 2 * np.pi * t_symb / samples_per_symbol )
nt1/vorlesung/extra/dsss.ipynb
kit-cel/wt
gpl-2.0
Real data-modulated Tx-signal
# define rectangular function responses rect = np.ones( samples_per_symbol ) rect /= np.linalg.norm( rect ) # number of realizations along which to average the psd estimate n_real = 10 # initialize two-dimensional field for collecting several realizations along which to average RECT_PSD = np.zeros( (n_real, N_fft ) ) DSSS_PSD = np.zeros( (n_real, N_fft ) ) # get chips and signature # NOTE: looping until number of +-1 chips in | sum ones - 0.5 N_chips | < 0.2 N_chips, # i.e., number of +1,-1 is approximately 1/2 (up to 20 percent) while True: dsss_chips = (-1) ** np.random.randint( 0, 2, size = chips_per_symbol ) if np.abs( np.sum( dsss_chips > 0) - chips_per_symbol/2 ) / chips_per_symbol < .2: break # generate signature out of chips by putting samples_per_symbol samples with chip amplitude # normalize signature to energy 1 dsss_signature = np.ones( samples_per_symbol ) for n in range( chips_per_symbol ): dsss_signature[ n * samples_per_chip : (n+1) * samples_per_chip ] *= dsss_chips[ n ] dsss_signature /= np.linalg.norm( dsss_signature ) # activate switch if chips should be resampled for every simulation # this would average (e.g., for PSD) instead of showing "one reality" new_chips_per_sim = 1 # loop for realizations for k in np.arange( n_real ): if new_chips_per_sim: # resample signature using identical method as above while True: dsss_chips = (-1) ** np.random.randint( 0, 2, size = chips_per_symbol ) if np.abs( np.sum( dsss_chips > 0) - chips_per_symbol/2 ) / chips_per_symbol < .2: break # get signature dsss_signature = np.ones( samples_per_symbol ) for n in range( chips_per_symbol ): dsss_signature[ n * samples_per_chip : (n+1) * samples_per_chip ] *= dsss_chips[ n ] dsss_signature /= np.linalg.norm( dsss_signature ) # generate random binary vector and modulate data = np.random.randint( 2, size = n_symb ) mod = [ constellation[ d ] for d in data ] # get signals by putting symbols and filtering s_up = np.zeros( n_symb * samples_per_symbol ) s_up[ :: samples_per_symbol ] = mod # apply RECTANGULAR and CDMA shaping in time domain s_rect = np.convolve( rect, s_up ) s_dsss = np.convolve( dsss_signature, s_up ) # get spectrum RECT_PSD[ k, :] = np.abs( np.fft.fftshift( np.fft.fft( s_rect, N_fft ) ) )**2 DSSS_PSD[ k, :] = np.abs( np.fft.fftshift( np.fft.fft( s_dsss, N_fft ) ) )**2 # average along realizations RECT_av = np.average( RECT_PSD, axis=0 ) RECT_av /= np.max( RECT_av ) DSSS_av = np.average( DSSS_PSD, axis=0 ) DSSS_av /= np.max( DSSS_av ) # show limited amount of symbols in time domain N_syms_plot = 5 t_plot = np.arange( 0, N_syms_plot * t_symb, t_symb / samples_per_symbol ) # plot plt.figure() plt.subplot(121) plt.plot( t_plot, s_rect[ : N_syms_plot * samples_per_symbol], linewidth=2.0, label='Rect') plt.plot( t_plot, s_dsss[ : N_syms_plot * samples_per_symbol ], linewidth=2.0, label='DS-SS') plt.ylim( (-1.1, 1.1 ) ) plt.grid( True ) plt.legend(loc='upper right') plt.xlabel('$t/T$') plt.title('$s(t)$') plt.subplot(122) np.seterr(divide='ignore') # ignore warning for logarithm of 0 plt.plot( f_vec, 10*np.log10( RECT_av ), linewidth=2.0, label='Rect., sim.' ) plt.plot( f_vec, 10*np.log10( DSSS_av ), linewidth=2.0, label='DS-SS, sim.' ) np.seterr(divide='warn') # enable warning for logarithm of 0 plt.grid(True) plt.legend(loc='lower right') plt.ylim( (-60, 10 ) ) plt.xlabel('$fT$') plt.title('$|S(f)|^2$')
nt1/vorlesung/extra/dsss.ipynb
kit-cel/wt
gpl-2.0
Selecting Asset Data Checkout the QuantConnect docs to learn how to select asset data.
spy = qb.AddEquity("SPY")
eur = qb.AddForex("EURUSD")
btc = qb.AddCrypto("BTCUSD")
fxv = qb.AddData[FxcmVolume]("EURUSD_Vol", Resolution.Hour)
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link.
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution h1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily) # Plot closing prices from "SPY" h1.loc["SPY"]["close"].plot() # Gets historical data from the subscribed assets, from the last 30 days with daily resolution h2 = qb.History(qb.Securities.Keys, timedelta(360), Resolution.Daily) # Plot high prices from "EURUSD" h2.loc["EURUSD"]["high"].plot() # Gets historical data from the subscribed assets, between two dates with daily resolution h3 = qb.History([btc.Symbol], datetime(2014,1,1), datetime.now(), Resolution.Daily) # Plot closing prices from "BTCUSD" h3.loc["BTCUSD"]["close"].plot() # Only fetchs historical data from a desired symbol h4 = qb.History([spy.Symbol], 360, Resolution.Daily) # or qb.History(["SPY"], 360, Resolution.Daily) # Only fetchs historical data from a desired symbol h5 = qb.History([eur.Symbol], timedelta(360), Resolution.Daily) # or qb.History(["EURUSD"], timedelta(30), Resolution.Daily) # Fetchs custom data h6 = qb.History([fxv.Symbol], timedelta(360)) h6.loc[fxv.Symbol.Value]["volume"].plot()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Options Data Requests
- Select the option data
- Set the filter, otherwise the default will be used: SetFilter(-1, 1, timedelta(0), timedelta(35))
- Get the OptionHistory, an object that has information about the historical options data
goog = qb.AddOption("GOOG")
goog.SetFilter(-2, 2, timedelta(0), timedelta(180))

option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))
print (option_history.GetStrikes())
print (option_history.GetExpiryDates())
h7 = option_history.GetAllData()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Historical Future Data Requests
- Select the future data
- Set the filter, otherwise the default will be used: SetFilter(timedelta(0), timedelta(35))
- Get the FutureHistory, an object that has information about the historical future data
es = qb.AddFuture("ES")
es.SetFilter(timedelta(0), timedelta(180))

future_history = qb.GetFutureHistory(es.Symbol, datetime(2017, 1, 4))
print (future_history.GetExpiryDates())
h7 = future_history.GetAllData()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Get Fundamental Data GetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now()) We will get a pandas.DataFrame with fundamental data.
data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio")
data
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please checkout QuantConnect Indicators Reference Table
# Example with BB, it is a datapoint indicator # Define the indicator bb = BollingerBands(30, 2) # Gets historical data of indicator bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily) # drop undesired fields bbdf = bbdf.drop('standarddeviation', 1) # Plot bbdf.plot() # For EURUSD bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily) bbdf = bbdf.drop('standarddeviation', 1) bbdf.plot() # Example with ADX, it is a bar indicator adx = AverageDirectionalIndex("adx", 14) adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily) adxdf.plot() # For EURUSD adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily) adxdf.plot() # Example with ADO, it is a tradebar indicator (requires volume in its calculation) ado = AccumulationDistributionOscillator("ado", 5, 30) adodf = qb.Indicator(ado, "SPY", 360, Resolution.Daily) adodf.plot() # For EURUSD. # Uncomment to check that this SHOULD fail, since Forex is data type is not TradeBar. # adodf = qb.Indicator(ado, "EURUSD", 360, Resolution.Daily) # adodf.plot() # SMA cross: symbol = "EURUSD" # Get History hist = qb.History([symbol], 500, Resolution.Daily) # Get the fast moving average fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily) # Get the fast moving average slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily) # Remove undesired columns and rename others fast = fast.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'fast'}) slow = slow.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'slow'}) # Concatenate the information and plot df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0) df.plot() # Get indicator defining a lookback period in terms of timedelta ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily) # Get indicator defining a start and end date ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily) ema = pd.concat([ema1, ema2], axis=1) ema.plot() rsi = RelativeStrengthIndex(14) # Selects which field we want to use in our indicator (default is Field.Close) rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High) rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low) rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'}) rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'}) rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1) rsi.plot()
Jupyter/KitchenSinkQuantBookTemplate.ipynb
Jay-Jay-D/LeanSTP
apache-2.0
Quickstart The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern. How to sample a multi-dimensional Gaussian We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by: $$ p(\vec{x}) \propto \exp \left [ - \frac{1}{2} (\vec{x} - \vec{\mu})^\mathrm{T} \, \Sigma ^{-1} \, (\vec{x} - \vec{\mu}) \right ] $$ where $\vec{\mu}$ is an $N$-dimensional vector position of the mean of the density and $\Sigma$ is the square N-by-N covariance matrix. The first thing that we need to do is import the necessary modules:
import numpy as np
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:
def log_prob(x, mu, cov):
    diff = x - mu
    return -0.5 * np.dot(diff, np.linalg.solve(cov, diff))
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
It is important that the first argument of the probability function is the position of a single "walker" (an N-dimensional numpy array). The following arguments are going to be constant every time the function is called and the values come from the args parameter of our :class:EnsembleSampler that we'll see soon. Now, we'll set up the specific values of those "hyperparameters" in 5 dimensions:
ndim = 5

np.random.seed(42)
means = np.random.rand(ndim)

cov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
and where cov is $\Sigma$. How about we use 32 walkers? Before we go on, we need to guess a starting point for each of the 32 walkers. This position will be a 5-dimensional vector so the initial guess should be a 32-by-5 array. It's not a very good guess but we'll just guess a random number between 0 and 1 for each component:
nwalkers = 32
p0 = np.random.rand(nwalkers, ndim)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Now that we've gotten past all the bookkeeping stuff, we can move on to the fun stuff. The main interface provided by emcee is the :class:EnsembleSampler object so let's get ourselves one of those:
import emcee

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Remember how our function log_prob required two extra arguments when it was called? By setting up our sampler with the args argument, we're saying that the probability function should be called as:
log_prob(p0[0], means, cov)
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
If we didn't provide any args parameter, the calling sequence would be log_prob(p0[0]) instead. It's generally a good idea to run a few "burn-in" steps in your MCMC chain to let the walkers explore the parameter space a bit and get settled into the maximum of the density. We'll run a burn-in of 100 steps (yep, I just made that number up... it's hard to really know how many steps of burn-in you'll need before you start) starting from our initial guess p0:
state = sampler.run_mcmc(p0, 100)
sampler.reset()
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
You'll notice that I saved the final position of the walkers (after the 100 steps) to a variable called state. You can check out what will be contained in the other output variables by looking at the documentation for the :func:EnsembleSampler.run_mcmc function. The call to the :func:EnsembleSampler.reset method clears all of the important bookkeeping parameters in the sampler so that we get a fresh start. It also clears the current positions of the walkers so it's a good thing that we saved them first. Now, we can do our production run of 10000 steps:
sampler.run_mcmc(state, 10000);
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
The samples can be accessed using the :func:EnsembleSampler.get_chain method. This will return an array with the shape (10000, 32, 5) giving the parameter values for each walker at each step in the chain. Take note of that shape and make sure that you know where each of those numbers comes from. You can make histograms of these samples to get an estimate of the density that you were sampling:
import matplotlib.pyplot as plt

samples = sampler.get_chain(flat=True)
plt.hist(samples[:, 0], 100, color="k", histtype="step")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$p(\theta_1)$")
plt.gca().set_yticks([]);
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Another good test of whether or not the sampling went well is to check the mean acceptance fraction of the ensemble using the :func:EnsembleSampler.acceptance_fraction property:
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
and the integrated autocorrelation time (see the :ref:autocorr tutorial for more details)
print( "Mean autocorrelation time: {0:.3f} steps".format( np.mean(sampler.get_autocorr_time()) ) )
docs/_static/notebooks/quickstart.ipynb
johnbachman/emcee
mit
Load previously saved data In the previous notebook, we had saved the data in a binary format. Let us try and load the data back.
interactions_ts = gl.TimeSeries("data/user_activity_data.ts/")
users = gl.SFrame("data/users.sf/")
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Training a churn predictor We define churn to be no activity within a period of time (called the churn_period). Hence, a user/customer is said to have churned if a period of activity is followed by no activity for a churn_period (for example, 30 days). <img src="https://dato.com/learn/userguide/churn_prediction/images/churn-illustration.png" align="left">
churn_period_oct = datetime.datetime(year = 2011, month = 10, day = 1)
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Making a train-validation split Next, we perform a train-validation split where we randomly split the data such that one split contains data for a fraction of the users while the second split contains all data for the rest of the users.
(train, valid) = gl.churn_predictor.random_split(interactions_ts, user_id = 'CustomerID', fraction = 0.9, seed = 12)

print "Users in the training dataset : %s" % len(train['CustomerID'].unique())
print "Users in the validation dataset : %s" % len(valid['CustomerID'].unique())
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Training a churn predictor model
model = gl.churn_predictor.create(train, user_id='CustomerID',
                                  user_data = users, time_boundaries = [churn_period_oct])
model
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Consuming predictions made by the model Here the question to ask is: will the user churn after a certain period of time? To validate, we can check whether the user was still active after that evaluation period (this is customer churn, not usage churn, and should not be confused with an expiration time).
predictions = model.predict(valid, user_data=users)
predictions

predictions['probability'].show()
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
Evaluating the model
metrics = model.evaluate(valid, user_data=users, time_boundary=churn_period_oct)
metrics

model.save('data/churn_model.mdl')
dss-2016/churn_prediction/churn-tutorial.ipynb
turi-code/tutorials
apache-2.0
The :class:Info <mne.Info> data structure The :class:Info <mne.Info> data object is typically created when data is imported into MNE-Python and contains details such as:

- date, subject information, and other recording details
- the sampling rate
- information about the data channels (name, type, position, etc.)
- digitized points
- sensor–head coordinate transformation matrices

and so forth. See the API reference (:class:mne.Info) for a complete list of all data fields. Once created, this object is passed around throughout the data analysis pipeline.
import mne
import os.path as op
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
:class:mne.Info behaves as a nested Python dictionary:
# Read the info object from an example recording
info = mne.io.read_info(
    op.join(mne.datasets.sample.data_path(), 'MEG', 'sample', 'sample_audvis_raw.fif'),
    verbose=False)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
List all the fields in the info object
print('Keys in info dictionary:\n', info.keys())
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtain the sampling rate of the data
print(info['sfreq'], 'Hz')
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
List all information about the first data channel
print(info['chs'][0])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtaining subsets of channels There are a number of convenience functions to obtain channel indices, given an :class:mne.Info object. Get channel indices by name
channel_indices = mne.pick_channels(info['ch_names'], ['MEG 0312', 'EEG 005'])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get channel indices by regular expression
channel_indices = mne.pick_channels_regexp(info['ch_names'], 'MEG *')
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Channel types MNE supports different channel types:

- eeg : For EEG channels with data stored in Volts (V)
- meg (mag) : For MEG magnetometers channels stored in Tesla (T)
- meg (grad) : For MEG gradiometers channels stored in Tesla/Meter (T/m)
- ecg : For ECG channels stored in Volts (V)
- seeg : For Stereotactic EEG channels in Volts (V).
- ecog : For Electrocorticography (ECoG) channels in Volts (V).
- fnirs (HBO) : Functional near-infrared spectroscopy oxyhemoglobin data.
- fnirs (HBR) : Functional near-infrared spectroscopy deoxyhemoglobin data.
- emg : For EMG channels stored in Volts (V)
- bio : For biological channels (AU).
- stim : For the stimulus (a.k.a. trigger) channels (AU)
- resp : For the response-trigger channel (AU)
- chpi : For HPI coil channels (T).
- exci : Flux excitation channel used to be a stimulus channel.
- ias : For Internal Active Shielding data (maybe on Triux only).
- syst : System status channel information (on Triux systems only).

Get channel indices by type
channel_indices = mne.pick_types(info, meg=True)  # MEG only
channel_indices = mne.pick_types(info, eeg=True)  # EEG only
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
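Any of the other channel types listed above can be selected the same way. A brief sketch using the same `mne.pick_types` call as above, here for stimulus and ECG channels (the result may be empty if the recording contains no such channels):

```python
# Stimulus (trigger) channels only
stim_indices = mne.pick_types(info, meg=False, stim=True)

# ECG channels only
ecg_indices = mne.pick_types(info, meg=False, ecg=True)
```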
MEG gradiometers and EEG channels
channel_indices = mne.pick_types(info, meg='grad', eeg=True)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get a dictionary of channel indices, grouped by channel type
channel_indices_by_type = mne.io.pick.channel_indices_by_type(info)
print('The first three magnetometers:', channel_indices_by_type['mag'][:3])
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Obtaining information about channels
# Channel type of a specific channel
channel_type = mne.io.pick.channel_type(info, 75)
print('Channel #75 is of type:', channel_type)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Channel types of a collection of channels
meg_channels = mne.pick_types(info, meg=True)[:10]
channel_types = [mne.io.pick.channel_type(info, ch) for ch in meg_channels]
print('First 10 MEG channels are of type:\n', channel_types)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Dropping channels from an info structure It is possible to limit the info structure to only include a subset of channels with the :func:mne.pick_info function:
# Only keep EEG channels
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
reduced_info = mne.pick_info(info, eeg_indices)
print(reduced_info)
0.15/_downloads/plot_info.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Webdriver We mainly use Selenium's Webdriver. We can first check which browsers Selenium.Webdriver supports as follows:
from selenium import webdriver

help(webdriver)
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Downloading and setting up Webdriver For Chrome, the required webdriver can be downloaded from http://chromedriver.storage.googleapis.com/index.html The webdriver needs to be placed on the system path:
- make sure Anaconda is on the system path
- put the downloaded webdriver into Anaconda's bin folder

PhantomJS PhantomJS is a WebKit-based, server-side JavaScript API that supports the Web without requiring a browser. It is fast and natively supports various Web standards: DOM handling, CSS selectors, JSON, and so on. PhantomJS can be used for page automation, network monitoring, web-page screenshots, and headless testing.
#browser = webdriver.Firefox()  # open a Firefox browser
browser = webdriver.Chrome()    # open a Chrome browser
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
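A minimal sketch of starting a PhantomJS session instead of Chrome, assuming an older Selenium release that still ships the PhantomJS driver and that the phantomjs binary is on the system path (newer Selenium versions dropped this driver in favour of headless Chrome/Firefox):

```python
from selenium import webdriver

# headless WebKit browser; no window is opened
browser = webdriver.PhantomJS()
browser.get("http://music.163.com")
print(browser.title)
browser.quit()
```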
Visiting a page
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("http://music.163.com")
print(browser.page_source)
#browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Finding elements Finding a single element
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("http://music.163.com")

input_first = browser.find_element_by_id("g_search")
input_second = browser.find_element_by_css_selector("#g_search")
input_third = browser.find_element_by_xpath('//*[@id="g_search"]')

print(input_first)
print(input_second)
print(input_third)
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Here we retrieve the same element in three different ways: the first by id, the second with a CSS selector, and the third with an XPath selector; the results are identical. Commonly used element-finding methods:
- find_element_by_name
- find_element_by_id
- find_element_by_xpath
- find_element_by_link_text
- find_element_by_partial_link_text
- find_element_by_tag_name
- find_element_by_class_name
- find_element_by_css_selector
# The following is a more general approach; remember to import the By module
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get("http://music.163.com")
input_first = browser.find_element(By.ID, "g_search")
print(input_first)
browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Finding multiple elements The only difference between finding multiple elements and a single element is the method name, for example find_elements versus find_element for a single element; the usage is otherwise the same. One example as a demonstration:
browser = webdriver.Chrome()
browser.get("http://music.163.com")
lis = browser.find_elements_by_css_selector('body')
print(lis)
browser.close()
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
Of course, the approach above can also be implemented by importing from selenium.webdriver.common.by import By and calling lis = browser.find_elements(By.CSS_SELECTOR,'.service-bd li'). The same methods available for finding a single element also exist for finding multiple elements:
- find_elements_by_name
- find_elements_by_id
- find_elements_by_xpath
- find_elements_by_link_text
- find_elements_by_partial_link_text
- find_elements_by_tag_name
- find_elements_by_class_name
- find_elements_by_css_selector

Interacting with elements We call interaction methods on the elements we have obtained:
from selenium import webdriver
import time

browser = webdriver.Chrome()
browser.get("https://music.163.com/")

input_str = browser.find_element_by_id('srch')
input_str.send_keys("周杰伦")
time.sleep(3)  # sleep to mimic a human searching
input_str.clear()
input_str.send_keys("林俊杰")
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
From the run you can see that the program automatically opens the Chrome browser, opens the site, types the first search term, clears it, and types the second one. The full Selenium API documentation: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains Executing JavaScript This is a very useful method: it lets you call JavaScript directly to perform certain actions. The example below opens Zhihu's explore page and then uses JS to scroll to the bottom of the page and show an alert.
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.zhihu.com/explore/")
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
browser.execute_script('alert("To Bottom")')
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
An example
```python
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.privco.com/home/login")  # opening this URL may require a proxy/VPN

username = 'fake_username'
password = 'fake_password'

browser.find_element_by_id("username").clear()
browser.find_element_by_id("username").send_keys(username)
browser.find_element_by_id("password").clear()
browser.find_element_by_id("password").send_keys(password)
browser.find_element_by_css_selector("#login-form > div:nth-child(5) > div > button").click()
```
# url = "https://www.privco.com/private-company/329463" def download_excel(url): browser.get(url) name = url.split('/')[-1] title = browser.title source = browser.page_source with open(name+'.html', 'w') as f: f.write(source) try: soup = BeautifulSoup(source, 'html.parser') url_new = soup.find('span', {'class', 'profile-name'}).a['href'] url_excel = url_new + '/export' browser.get(url_excel) except Exception as e: print(url, 'no excel') pass urls = [ 'https://www.privco.com/private-company/1135789', 'https://www.privco.com/private-company/542756', 'https://www.privco.com/private-company/137908', 'https://www.privco.com/private-company/137138'] for k, url in enumerate(urls): print(k) try: download_excel(url) except Exception as e: print(url, e)
code/04.PythonCrawler_selenium.ipynb
computational-class/cjc2016
mit
In the code, we assign a lambda function to variable f. The function specifies that on each possible input it receives, the resulting function that is applied is a multiplication by 2. Hence f(1)=2, f(2)=4, etc. Note that, invoking f only works if we provide an argument that can be combined with the * 2 operation. For example, for strings, the * 2 operation concatenates the input argument with itself:
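For reference, a minimal sketch of the assignment described here (it comes from an earlier cell of the notebook that is not included in this excerpt):

```python
# double every input the function receives: f(1) == 2, f(2) == 4, ...
f = lambda x: x * 2
```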
f('Pete')
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Filter and Map Lambda functions allow us to write short, type-independent functions. Given a list of objects, Python provides two core functions that can apply a given lambda function on each element of the given list (in fact, any iterable):
- filter(f,l) apply the given lambda function f as a filter on the iterable l.
- map(f,l) apply the given lambda function f as a transformation on the iterable l.

For more information, study the concept of ‘higher order functions’ in Python, e.g., as introduced here. Let's consider a few simple examples.
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
filter(lambda n: n >= 5, l)
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The previous example needs little to no explanation, i.e., the filter retains all numbers in the list greater than or equal to five. However, what is interesting is the fact that the resulting object is not a list (or an iterable), but rather a filter object. Such an object can easily be transformed into a list by wrapping it in a list() cast:
list(filter(lambda n: n >= 5, l))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The same holds for the map() function:
map(lambda n: n * 3, l)

list(map(lambda n: n * 3, l))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Observe that the previous map function simply multiplies each element of list l by three. Lambda-Based Filtering in pm4py In pm4py, event log objects mimic lists of traces, which in turn mimic lists of events. Clearly, lambda functions can therefore be applied to event logs and traces. However, as we have shown in the previous example, after applying such a lambda-based filter, the resulting object is no longer an event log. Furthermore, casting a filter object or map object to an event log in pm4py is a bit more involved, i.e., it is not as trivial as list(filter(...)) in the previous example. This is due to the fact that various meta-data is stored in the event log object as well. To this end, pm4py offers wrapper functions that make sure that after applying your higher-order function with a lambda function, the resulting object is again an Event Log object. In the upcoming scripts, we'll take a look at some lambda-based filtering. First, let's inspect the length of each trace in our running example log by applying a generic map function
import pm4py

log = pm4py.read_xes('data/running_example.xes')

# inspect the length of each trace using a generic map function
list(map(lambda t: len(t), log))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
As we can see, there are four traces of length 5, one trace of length 9 and one trace of length 13. Let's retain all traces that have a length greater than 5.
lf = pm4py.filter_log(lambda t: len(t) > 5, log)
list(map(lambda t: len(t), lf))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
The traces of length 9 and 13 have repeated behavior in them, i.e., the reinitiate request activity has been performed at least once:
list(map(lambda t: (len(t), len(list(filter(lambda e: e['concept:name'] == 'reinitiate request', t)))), log))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Observe that the map function maps each trace onto a tuple. The first element describes the length of the trace. The second element describes the number of occurrences of the activity reinitiate request. Observe that we obtain said counter by filtering the trace, i.e., by retaining only those events that describe the reinitiate request activity, and counting the length of the resulting list. Note that the traces describe a list of events, and events implement a dictionary. In this case, the activity name is captured by the concept:name attribute. In general, PM4PY supports the following generic filtering functions:
- pm4py.filter_log(f, log) filter the log according to a function f.
- pm4py.filter_trace(f,trace) filter the trace according to function f.
- pm4py.sort_log(log, key, reverse) sort the event log according to a given key, reversed order if reverse==True.
- pm4py.sort_trace(trace, key, reverse) sort the trace according to a given key, reversed order if reverse==True.

Let's see these functions in action:
print(len(log))
lf = pm4py.filter_log(lambda t: len(t) > 5, log)
print(len(lf))

print(len(log[0]))  # log[0] fetches the 1st trace
tf = pm4py.filter_trace(lambda e: e['concept:name'] in {'register request', 'pay compensation'}, log[0])
print(len(tf))

print(len(log[0]))
ls = pm4py.sort_log(log, lambda t: len(t))
print(len(ls[0]))
ls = pm4py.sort_log(log, lambda t: len(t), reverse=True)
print(len(ls[0]))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
Specific Filters There are various pre-built filters in PM4Py, which make commonly needed process mining filtering functionality a lot easier. In the upcoming overview, we briefly present these functions. We describe how to call them, their main input parameters and their return objects. Note that all of the filters work on both DataFrames and pm4py event log objects. Start Activities filter_start_activities(log, activities, retain=True) retains (or drops) the traces that contain the given activity as the first event.
pm4py.filter_start_activities(log, {'register request'})

pm4py.filter_start_activities(log, {'register request TYPO!'})

import pandas
ldf = pm4py.format_dataframe(pandas.read_csv('data/running_example.csv', sep=';'),
                             case_id='case_id', activity_key='activity', timestamp_key='timestamp')

pm4py.filter_start_activities(ldf, {'register request'})

pm4py.filter_start_activities(ldf, {'register request TYPO!'})
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
End Activities filter_end_activities(log, activities, retain=True) retains (or drops) the traces that contain the given activity as the final event. For example, we can count the number of cases that end with a "payment of the compensation":
len(pm4py.filter_end_activities(log, 'pay compensation'))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
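Conversely, a sketch (again relying on the retain parameter from the signature above) that counts the cases that do not end with a payment:

# sketch: count the cases that do *not* end with 'pay compensation'
len(pm4py.filter_end_activities(log, {'pay compensation'}, retain=False))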
Event Attribute Values

filter_event_attribute_values(log, attribute_key, values, level="case", retain=True) retains (or drops) traces (or events) based on a given collection of values that need to be matched for the given attribute_key. If level=='case', complete traces that contain at least one event with a specified value for the given attribute are retained (or dropped if retain==False). If level=='event', only the events that match are retained (or dropped).
# retain any case that has either Pete or Mike working on it
lf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Pete', 'Mike'})
list(map(lambda t: list(map(lambda e: e['org:resource'], t)), lf))

# retain only those events that have Pete or Mike working on them
lf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Pete', 'Mike'}, level='event')
list(map(lambda t: list(map(lambda e: e['org:resource'], t)), lf))
notebooks/2_event_data_filtering.ipynb
pm4py/pm4py-core
gpl-3.0
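As a further sketch (assuming 'Sara' occurs as a resource in the running example), the same filter can be inverted at case level, dropping every case in which Sara was involved:

# sketch: drop every case that contains at least one event performed by Sara
lf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Sara'}, level='case', retain=False)
list(map(lambda t: set(map(lambda e: e['org:resource'], t)), lf))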
Plot items Lines, Bars, Points and Right yAxis
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]

pp = Plot(title='Bars, Lines, Points and 2nd yAxis', xLabel="xLabel", yLabel="yLabel", legendLayout=LegendLayout.HORIZONTAL, legendPosition=LegendPosition(position=LegendPosition.Position.RIGHT), omitCheckboxes=True)
pp.add(YAxis(label="Right yAxis"))
pp.add(Bars(displayName="Bar", x=[1,3,5,7,10], y=[100, 120,90,100,80], width=1))
pp.add(Line(displayName="Line", x=x, y=y, width=6, yAxis="Right yAxis"))
pp.add(Points(x=x, y=y, size=10, shape=ShapeType.DIAMOND, yAxis="Right yAxis"))

plot = Plot(title= "Setting line properties")
ys = [0, 1, 6, 5, 2, 8]
ys2 = [0, 2, 7, 6, 3, 8]
plot.add(Line(y= ys, width= 10, color= Color.red))
plot.add(Line(y= ys, width= 3, color= Color.yellow))
plot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))
plot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))
plot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))
plot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))

plot = Plot(title= "Changing Point Size, Color, Shape")
y1 = [6, 7, 12, 11, 8, 14]
y2 = [4, 5, 10, 9, 6, 12]
y3 = [2, 3, 8, 7, 4, 10]
y4 = [0, 1, 6, 5, 2, 8]
plot.add(Points(y= y1))
plot.add(Points(y= y2, shape= ShapeType.CIRCLE))
plot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))
plot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))

plot = Plot(title= "Changing point properties with list")
cs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]
ss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]
fs = [False, False, False, True, False, False]
plot.add(Points(y= [5] * 6, size= 12.0, color= cs))
plot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))
plot.add(Points(y= [3] * 6, size= ss, color= Color.red))
plot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))

plot = Plot()
y1 = [1.5, 1, 6, 5, 2, 8]
cs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Stems(y= y1, color= cs, style= ss, width= 5))

plot = Plot(title= "Setting the base of Stems")
ys = [3, 5, 2, 3, 7]
y2s = [2.5, -1.0, 3.5, 2.0, 3.0]
plot.add(Stems(y= ys, width= 2, base= y2s))
plot.add(Points(y= ys))

plot = Plot(title= "Bars")
cs = [Color(255, 0, 0, 128)] * 5  # transparent bars
cs[3] = Color.red                 # set color of a single bar, solid colored bar
plot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Lines, Points with Pandas
plot = Plot(title= "Pandas line") plot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54))) plot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray)) plot plot = Plot(title= "Pandas Series") plot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), width=2)) plot = Plot(title= "Bars") cs = [Color(255, 0, 0, 128)] * 7 # transparent bars cs[3] = Color.red # set color of a single bar, solid colored bar plot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Areas, Stems and Crosshair
ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)

plot = Plot(crosshair=ch)
y1 = [4, 8, 16, 20, 32]
base = [2, 4, 8, 10, 16]
cs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))
plot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))

plot = Plot()
y = [3, 5, 2, 3]
x0 = [0, 1, 2, 3]
x1 = [3, 4, 5, 8]
plot.add(Area(x= x0, y= y))
plot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))

p = Plot()
p.add(Line(y= [3, 6, 12, 24], displayName= "Median"))
p.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16], color= Color(255, 0, 0, 50), displayName= "Q1 to Q3"))

ch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)
pp = Plot(crosshair= ch, omitCheckboxes= True, legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition(position=LegendPosition.Position.TOP))
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp.add(Line(displayName= "Line", x= x, y= y, width= 3))
pp.add(Bars(displayName= "Bar", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))
pp.add(Points(x= x, y= y, size= 10))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Constant Lines, Constant Bands
p = Plot ()
p.add(Line(y=[-1, 1]))
p.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))
p.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))
p.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))

Plot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))

p = Plot()
p.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))
p.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))
p.add(ConstantBand(x= [1, 2]))
p.add(ConstantBand(x= [4, 'Infinity']))

from decimal import Decimal
pos_inf = Decimal('Infinity')
neg_inf = Decimal('-Infinity')
print (pos_inf)
print (neg_inf)

from beakerx.plot import Text as BeakerxText
plot = Plot()
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]

def label(i):
    if ys[i] > ys[i+1] and ys[i] > ys[i-1]:
        return "max"
    if ys[i] < ys[i+1] and ys[i] < ys[i-1]:
        return "min"
    if ys[i] > ys[i-1]:
        return "rising"
    if ys[i] < ys[i-1]:
        return "falling"
    return ""

for i in xs:
    i = i - 1
    if i > 0 and i < len(xs)-1:
        plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))

plot.add(Line(x= xs, y= ys))
plot.add(Points(x= xs, y= ys))

plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
#plot.setYBound(1, 5)
plot.getYAxes()[0].setBound(1,5)
plot.getYAxes()[1].setBound(3,6)
plot

plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
plot.setYBound(1, 5)
plot
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
TimePlot
import time

millis = current_milli_time()
hour = round(1000 * 60 * 60)

xs = []
ys = []
for i in range(11):
    xs.append(millis + hour * i)
    ys.append(i)

plot = TimePlot(timeZone="America/New_York")
# list of milliseconds
plot.add(Points(x=xs, y=ys, size=10, displayName="milliseconds"))

plot = TimePlot()
plot.add(Line(x=tableRows['time'], y=tableRows['m3']))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
numpy datetime64
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [np.datetime64('2015-02-01'),
         np.datetime64('2015-02-02'),
         np.datetime64('2015-02-03'),
         np.datetime64('2015-02-04'),
         np.datetime64('2015-02-05'),
         np.datetime64('2015-02-06')]

plot = TimePlot()
plot.add(Line(x=dates, y=y))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Timestamp
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = pd.Series(['2015-02-01', '2015-02-02', '2015-02-03', '2015-02-04', '2015-02-05', '2015-02-06'],
                  dtype='datetime64[ns]')

plot = TimePlot()
plot.add(Line(x=dates, y=y))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Datetime and date
import datetime

y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.date(2015, 2, 1),
         datetime.date(2015, 2, 2),
         datetime.date(2015, 2, 3),
         datetime.date(2015, 2, 4),
         datetime.date(2015, 2, 5),
         datetime.date(2015, 2, 6)]

plot = TimePlot()
plot.add(Line(x=dates, y=y))

import datetime

y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.datetime(2015, 2, 1),
         datetime.datetime(2015, 2, 2),
         datetime.datetime(2015, 2, 3),
         datetime.datetime(2015, 2, 4),
         datetime.datetime(2015, 2, 5),
         datetime.datetime(2015, 2, 6)]

plot = TimePlot()
plot.add(Line(x=dates, y=y))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
NanoPlot
millis = current_milli_time()
nanos = millis * 1000 * 1000

xs = []
ys = []
for i in range(11):
    xs.append(nanos + 7 * i)
    ys.append(i)

nanoplot = NanoPlot()
nanoplot.add(Points(x=xs, y=ys))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Stacking
y1 = [1,5,3,2,3]
y2 = [7,2,4,1,3]

p = Plot(title='Plot with XYStacker', initHeight=200)
a1 = Area(y=y1, displayName='y1')
a2 = Area(y=y2, displayName='y2')
stacker = XYStacker()
p.add(stacker.stack([a1, a2]))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
SimpleTime Plot
SimpleTimePlot(tableRows, ["y1", "y10"],  # column names
               timeColumn="time",         # "time" is the default value for timeColumn
               yLabel="Price",
               displayNames=["1 Year", "10 Year"],
               colors = [[216, 154, 54], Color.lightGray],
               displayLines=True,         # show lines (True by default)
               displayPoints=False)       # hide points (False by default)

# time column based on the DataFrame index
tableRows.index = tableRows['time']
SimpleTimePlot(tableRows, ['m3'])

rng = pd.date_range('1/1/2011', periods=72, freq='H')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
df = pd.DataFrame(ts, columns=['y'])
SimpleTimePlot(df, ['y'])
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Second Y Axis

The plot can have two y-axes. Just add a YAxis to the plot object and specify its label. Then, for data that should be scaled according to this second axis, set the yAxis property to a value that coincides with that label. You can use upperMargin and lowerMargin to restrict the range of the data, leaving more white space, perhaps for the data on the other axis.
p = TimePlot(xLabel= "Time", yLabel= "Interest Rates")
p.add(YAxis(label= "Spread", upperMargin= 4))
p.add(Area(x= tableRows.time, y= tableRows.spread, displayName= "Spread", yAxis= "Spread", color= Color(180, 50, 50, 128)))
p.add(Line(x= tableRows.time, y= tableRows.m3, displayName= "3 Month"))
p.add(Line(x= tableRows.time, y= tableRows.y10, displayName= "10 Year"))
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
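The snippet uses upperMargin only; the following minimal variation (a sketch reusing the same tableRows data) also pads the bottom of the second axis with lowerMargin:

# sketch: pad both ends of the "Spread" axis
p2 = TimePlot(xLabel= "Time", yLabel= "Interest Rates")
p2.add(YAxis(label= "Spread", upperMargin= 4, lowerMargin= 1))
p2.add(Area(x= tableRows.time, y= tableRows.spread, displayName= "Spread", yAxis= "Spread", color= Color(180, 50, 50, 128)))
p2.add(Line(x= tableRows.time, y= tableRows.m3, displayName= "3 Month"))
p2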
Combined Plot
import math

points = 100
logBase = 10
expys = []
xs = []
for i in range(0, points):
    xs.append(i / 15.0)
    expys.append(math.exp(xs[i]))

cplot = CombinedPlot(xLabel= "Linear")

logYPlot = Plot(title= "Linear x, Log y", yLabel= "Log", logY= True, yLogBase= logBase)
logYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
logYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(logYPlot, 4)

linearYPlot = Plot(title= "Linear x, Linear y", yLabel= "Linear")
linearYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
linearYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(linearYPlot,4)

cplot

plot = Plot(title= "Log x, Log y", xLabel= "Log", yLabel= "Log",
            logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)
plot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
plot.add(Line(x= xs, y= xs, displayName= "f(x) = x"))
plot
doc/python/ChartingAPI.ipynb
jpallas/beakerx
apache-2.0
Hat potential

The following potential is often used in physics and other fields to describe symmetry breaking; it is known as the "hat potential":

$$ V(x) = -a x^2 + b x^4 $$

Write a function hat(x,a,b) that returns the value of this function:
def hat(x,a,b):
    v = -a*x**2 + b*x**4
    return v

assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
a = 5.0
b = 1.0

x1 = np.arange(-3,3,0.1)
plt.plot(x1, hat(x1, 5, 1))

assert True # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.

- Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
- Print the x values of the minima.
- Plot the function as a blue line.
- On the same axes, show the minima as red circles.
- Customize your visualization to make it beautiful and effective.
def hat(x):
    b = 1
    a = 5
    v = -a*x**2 + b*x**4
    return v

xmin1 = opt.minimize(hat, -1.5)['x'][0]
xmin2 = opt.minimize(hat, 1.5)['x'][0]
xmins = np.array([xmin1, xmin2])
print(xmin1)
print(xmin2)

x1 = np.arange(-3,3,0.1)
plt.plot(x1, hat(x1))
plt.scatter(xmins, hat(xmins), c = 'r', marker = 'o')
plt.grid(True)
plt.title('Hat Potential')
plt.xlabel('Range')
plt.ylabel('Potential')

assert True # leave this for grading the plot
assignments/assignment11/OptimizationEx01.ipynb
LimeeZ/phys292-2015-work
mit
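As a sanity check on the numerical result, the minima can also be derived analytically: $V'(x) = -2ax + 4bx^3 = 0$ gives $x = 0$ (the local maximum) and $x = \pm\sqrt{a/(2b)} = \pm\sqrt{2.5} \approx \pm 1.581$ for $a=5$, $b=1$. A short verification sketch (assuming sympy is available in the environment):

# sketch: analytic critical points of V(x) = -a*x**2 + b*x**4 for a=5, b=1
import sympy as sp

xsym = sp.symbols('x', real=True)
a, b = 5, 1
V = -a*xsym**2 + b*xsym**4
crit = sp.solve(sp.diff(V, xsym), xsym)   # -> [0, -sqrt(10)/2, sqrt(10)/2]
print(crit, [float(c) for c in crit])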
Main program

Replace the comments with the necessary commands:
# move forward
# turn
# move forward
# turn
# move forward
# turn
# move forward
# turn
# stop
task/quadrat.ipynb
ecervera/mindstorms-nb
mit
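One possible solution sketch for the square trajectory. The movement helpers forward(), turn() and stop() are hypothetical placeholders; substitute the motor-control functions actually defined earlier in the mindstorms-nb notebook:

# hypothetical helper names -- replace with the notebook's own movement functions
for _ in range(4):      # a square has four equal sides
    forward()           # move forward along one side
    turn()              # turn 90 degrees at the corner
stop()                  # stop the motors at the end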