The labels are one-hot-encoded when reading in the data: we are using one-hot-encoding to represent the labels (the actual digit drawn, e.g. "3") of the images. One-hot-encoding uses a vector of binary values to represent numeric or categorical values. As our labels are for the digits 0-9, the vector contains ten values, one for each possible digit. One of these values is set to 1, to represent the digit at that index of the vector, and the rest are set to 0. For example, the digit 3 is represented using the vector [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]. As the value at index 3 is stored as 1, the vector therefore represents the digit 3.

To represent the actual images themselves, the 28x28 pixels are flattened into a 1D vector which is 784 pixels in size. Each of the 784 pixels making up the image is stored as a value between 0 and 255. This determines the grayscale of the pixel, as our images are presented in black and white only. So a black pixel is represented by 255, and a white pixel by 0, with the various shades of gray somewhere in between.

We can use the mnist variable to find out the size of the dataset we have just imported. Looking at the num_examples for each of the three subsets, we can determine that the dataset has been split into 55,000 images for training, 5,000 for validation, and 10,000 for testing. Add the following lines to your file:

main.py

n_train = mnist.train.num_examples  # 55,000
n_validation = mnist.validation.num_examples  # 5,000
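n_test = mnist.test.num_examples  # 10,000 (presumably the line that completes this block, matching the test subset size described above)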
Now that we have our data imported, it's time to think about the neural network.

Step 3 - Defining the Neural Network Architecture

The architecture of the neural network refers to elements such as the number of layers in the network, the number of units in each layer, and how the units are connected between layers. As neural networks are loosely inspired by the workings of the human brain, here the term unit is used to represent what we would biologically think of as a neuron. Like neurons passing signals around the brain, units take some values from previous units as input, perform a computation, and then pass on the new value as output to other units. These units are layered to form the network, starting at a minimum with one layer for inputting values, and one layer to output values. The term hidden layer is used for all of the layers in between the input and output layers, i.e. those "hidden" from the real world.

Different architectures can yield dramatically different results, as the performance can be thought of as a function of the architecture among other things, such as the parameters, the data, and the duration of training.

Add the following lines of code to your file to store the number of units per layer in global variables. This allows us to alter the network architecture in one place, and at the end of the tutorial you can test for yourself how different numbers of layers and units will impact the results of our model:
main.py

n_input = 784    # input layer (28x28 pixels)
n_hidden1 = 512  # 1st hidden layer
n_hidden2 = 256  # 2nd hidden layer
n_hidden3 = 128  # 3rd hidden layer
n_output = 10    # output layer (0-9 digits)

The following diagram shows a visualization of the architecture we've designed, with each layer fully connected to the surrounding layers:

Diagram of neural network

The term "deep neural network" relates to the number of hidden layers, with "shallow" usually meaning just one hidden layer, and "deep" referring to multiple hidden layers. Given enough training data, a shallow neural network with a sufficient number of units should
theoretically be able to represent any function that a deep neural network can. But it is often more computationally efficient to use a smaller deep neural network to achieve the same task that would require a shallow network with exponentially more hidden units. Shallow neural networks also often encounter overfitting, where the network essentially memorizes the training data that it has seen, and is not able to generalize the knowledge to new data. This is why deep neural networks are more commonly used: the multiple layers between the raw input data and the output label allow the network to learn features at various levels of abstraction, making the network itself better able to generalize.

Other elements of the neural network that need to be defined here are the hyperparameters. Unlike the parameters that will get updated during training, these values are set initially and remain constant throughout the process. In your file, set the following variables and values:

main.py

learning_rate = 1e-4
n_iterations = 1000
batch_size = 128
dropout = 0.5

The learning rate represents how much the parameters will adjust at each step of the learning process. These adjustments are a key component of training: after each pass through the network we tune the weights slightly to try and reduce the loss. Larger learning rates can converge faster, but also have the potential to overshoot the optimal values as they are updated. The number of iterations refers to how many times we go
through the training step, and the batch size refers to how many training examples we are using at each step. The dropout variable represents a threshold at which we eliminate some units at random. We will be using dropout in our final hidden layer to give each unit a 50% chance of being eliminated at every training step. This helps prevent overfitting.

We have now defined the architecture of our neural network, and the hyperparameters that impact the learning process. The next step is to build the network as a TensorFlow graph.

Step 4 - Building the TensorFlow Graph

To build our network, we will set up the network as a computational graph for TensorFlow to execute. The core concept of TensorFlow is the tensor, a data structure similar to an array or list. Tensors are initialized, manipulated as they are passed through the graph, and updated through the learning process.

We'll start by defining three tensors as placeholders, which are tensors that we'll feed values into later. Add the following to your file:

main.py

X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_output])
keep_prob = tf.placeholder(tf.float32)

The only parameter that needs to be specified at its declaration is the size of the data we will be feeding in. For X we use a shape of [None, 784], where None represents any amount, as we will be feeding in an undefined number of 784-pixel images. The shape of Y is [None, 10] as
we will be using it for an undefined number of label outputs, with 10 possible classes. The keep_prob tensor is used to control the dropout rate, and we initialize it as a placeholder rather than an immutable variable because we want to use the same tensor both for training (when dropout is set to 0.5) and testing (when dropout is set to 1.0).

The parameters that the network will update in the training process are the weight and bias values, so for these we need to set an initial value rather than an empty placeholder. These values are essentially where the network does its learning, as they are used in the activation functions of the neurons, representing the strength of the connections between units.

Since the values are optimized during training, we could set them to zero for now. But the initial value actually has a significant impact on the final accuracy of the model. We'll use random values from a truncated normal distribution for the weights. We want them to be close to zero, so they can adjust in either a positive or negative direction, and slightly different, so they generate different errors. This will ensure that the model learns something useful. Add these lines:

main.py

weights = {
    'w1': tf.Variable(tf.truncated_normal([n_input, n_hidden1], stddev=0.1)),
    'w2': tf.Variable(tf.truncated_normal([n_hidden1, n_hidden2], stddev=0.1)),
    'w3': tf.Variable(tf.truncated_normal([n_hidden2, n_hidden3], stddev=0.1)),
    'out': tf.Variable(tf.truncated_normal([n_hidden3, n_output], stddev=0.1)),
}
For the bias, we use a small constant value to ensure that the tensors activate in the initial stages and therefore contribute to the propagation. The weights and bias tensors are stored in dictionary objects for ease of access. Add this code to your file to define the biases:

main.py

biases = {
    'b1': tf.Variable(tf.constant(0.1, shape=[n_hidden1])),
    'b2': tf.Variable(tf.constant(0.1, shape=[n_hidden2])),
    'b3': tf.Variable(tf.constant(0.1, shape=[n_hidden3])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_output])),
}

Next, set up the layers of the network by defining the operations that will manipulate the tensors. Add these lines to your file:

main.py

layer_1 = tf.add(tf.matmul(X, weights['w1']), biases['b1'])
layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
layer_3 = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
layer_drop = tf.nn.dropout(layer_3, keep_prob)
output_layer = tf.matmul(layer_drop, weights['out']) + biases['out']
Each hidden layer will execute matrix multiplication on the previous layer's outputs and the current layer's weights, and add the bias to these values. At the last hidden layer, we will apply a dropout operation using our keep_prob value of 0.5.

The final step in building the graph is to define the loss function that we want to optimize. A popular choice of loss function in TensorFlow programs is cross-entropy, also known as log-loss, which quantifies the difference between two probability distributions (the predictions and the labels). A perfect classification would result in a cross-entropy of 0, with the loss completely minimized.

We also need to choose the optimization algorithm which will be used to minimize the loss function. A process named gradient descent optimization is a common method for finding the (local) minimum of a function by taking iterative steps along the gradient in a negative (descending) direction. There are several choices of gradient descent optimization algorithms already implemented in TensorFlow, and in this tutorial we will be using the Adam optimizer. This extends upon gradient descent optimization by using momentum to speed up the process through computing an exponentially weighted average of the gradients and using that in the adjustments. Add the following code to your file:

main.py

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(
        labels=Y, logits=output_layer
    ))
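A training step that applies the Adam optimizer to minimize cross_entropy presumably comes next; the following is a minimal sketch consistent with the surrounding code, using TensorFlow 1.x's built-in Adam implementation and the learning_rate variable defined earlier:

main.py

train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)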
We've now defined the network and built it out with TensorFlow. The next step is to feed data through the graph to train it, and then test that it has actually learnt something.

Step 5 - Training and Testing

The training process involves feeding the training dataset through the graph and optimizing the loss function. Every time the network iterates through a batch of more training images, it updates the parameters to reduce the loss in order to more accurately predict the digits shown. The testing process involves running our testing dataset through the trained graph, and keeping track of the number of images that are correctly predicted, so that we can calculate the accuracy.

Before starting the training process, we will define our method of evaluating the accuracy so we can print it out on mini-batches of data while we train. These printed statements will allow us to check that from the first iteration to the last, loss decreases and accuracy increases; they will also allow us to track whether or not we have run enough iterations to reach a consistent and optimal result:

main.py

correct_pred = tf.equal(tf.argmax(output_layer, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

In correct_pred, we use the arg_max function to compare which images are being predicted correctly by looking at the output_layer
(predictions) and Y (labels), and we use the equal function to return this as a list of Booleans. We can then cast this list to floats and calculate the mean to get a total accuracy score.

We are now ready to initialize a session for running the graph. In this session we will feed the network with our training examples, and once trained, we feed the same graph with new test examples to determine the accuracy of the model. Add the following lines of code to your file:

main.py

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

The essence of the training process in deep learning is to optimize the loss function. Here we are aiming to minimize the difference between the predicted labels of the images, and the true labels of the images. The process involves four steps which are repeated for a set number of iterations:

- Propagate values forward through the network
- Compute the loss
- Propagate values backward through the network
- Update the parameters

At each training step, the parameters are adjusted slightly to try and reduce the loss for the next step. As the learning progresses, we should
see a reduction in loss, and eventually we can stop training and use the network as a model for testing our new data.

Add this code to the file:

main.py

# train on mini batches
for i in range(n_iterations):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    sess.run(train_step, feed_dict={
        X: batch_x, Y: batch_y, keep_prob: dropout
    })

    # print loss and accuracy (per minibatch)
    if i % 100 == 0:
        minibatch_loss, minibatch_accuracy = sess.run(
            [cross_entropy, accuracy],
            feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0}
        )
        print(
            "Iteration",
            str(i),
            "\t| Loss =",
            str(minibatch_loss),
            "\t| Accuracy =",
            str(minibatch_accuracy)
        )
After 100 iterations of each training step in which we feed a mini-batch of images through the network, we print out the loss and accuracy of that batch. Note that we should not be expecting a decreasing loss and increasing accuracy here, as the values are per batch, not for the entire model. We use mini-batches of images rather than feeding them through individually to speed up the training process and allow the network to see a number of different examples before updating the parameters.

Once the training is complete, we can run the session on the test images. This time we are using a keep_prob dropout rate of 1.0 to ensure all units are active in the testing process. Add this code to the file:

main.py

test_accuracy = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels, keep_prob: 1.0})
print("\nAccuracy on test set:", test_accuracy)

It's now time to run our program and see how accurately our neural network can recognize these handwritten digits. Save the main.py file and execute the following command in the terminal to run the script:

(tensorflow-demo) $ python main.py

You'll see an output similar to the following, although individual loss and accuracy results may vary slightly:

Output
Iteration 0     | Loss = ...   | Accuracy = ...
Iteration 100   | Loss = ...   | Accuracy = ...
. . .
Iteration 900   | Loss = ...   | Accuracy = ...

Accuracy on test set: ...

To try and improve the accuracy of our model, or to learn more about the impact of tuning hyperparameters, we can test the effect of changing the learning rate, the dropout threshold, the batch size, and the number of iterations. We can also change the number of units in our hidden layers, and change the amount of hidden layers themselves, to see how different architectures increase or decrease the model accuracy.

To demonstrate that the network is actually recognizing the hand-drawn images, let's test it on a single image of our own.

If you are on a local machine and you would like to use your own hand-drawn number, you can use a graphics editor to create your own 28x28 pixel image of a digit. Otherwise, you can use curl to download the following sample test image to your server or computer:

(tensorflow-demo) $ curl -O https://…/images/test_img.png
Open the main.py file in your editor and add the following lines of code to the top of the file to import the two libraries necessary for image manipulation:

main.py

import numpy as np
from PIL import Image

Then at the end of the file, add the following line of code to load the test image of the handwritten digit:

main.py

img = np.invert(Image.open("test_img.png").convert('L')).ravel()

The open function of the Image library loads the test image as an array containing the three RGB color channels and the Alpha transparency. This is not the same representation we used previously when reading in the dataset with TensorFlow, so we'll need to do some extra work to match the format.

First, we use the convert function with the L parameter to reduce the RGBA representation to one grayscale color channel. We store this as a numpy array and invert it using np.invert, because the current matrix represents black as 0 and white as 255, whereas we need the opposite. Finally, we call ravel to flatten the array.

Now that the image data is structured correctly, we can run a session in the same way as previously, but this time only feeding in the single
image for testing.

Add the following code to your file to test the image and print the outputted label:

main.py

prediction = sess.run(tf.argmax(output_layer, 1), feed_dict={X: [img]})
print("Prediction for test image:", np.squeeze(prediction))

The np.squeeze function is called on the prediction to return the single integer from the array (i.e. to go from [2] to 2). The resulting output demonstrates that the network has recognized this image as the digit 2:

Output

Prediction for test image: 2

You can try testing the network with more complex images (digits that look like other digits, for example, or digits that have been drawn poorly or incorrectly) to see how well it fares.

Conclusion

In this tutorial you successfully trained a neural network to classify the MNIST dataset with around 92% accuracy and tested it on an image of your own. Current state-of-the-art research achieves around 99% on this same problem, using more complex network architectures involving convolutional layers. These use the structure of the image to better represent the contents, unlike our method which flattened all the pixels
into a 784-pixel vector. You can read more about this topic on the TensorFlow website, and see the research papers detailing the most accurate results on the MNIST website.

Now that you know how to build and train a neural network, you can try and use this implementation on your own data, or test it on other popular datasets such as the Google StreetView House Numbers, or the CIFAR-10 dataset for more general image recognition.
How To Build a Bot for Atari with OpenAI Gym

Written by Alvin Wan
Edited by Mark Drake

Reinforcement learning is a subfield within control theory, which concerns controlling systems that change over time and broadly includes applications such as self-driving cars, robotics, and bots for games. Throughout this guide, you will use reinforcement learning to build a bot for Atari video games. This bot is not given access to internal information about the game. Instead, it's only given access to the game's rendered display and the reward for that display, meaning that it can only see what a human player would see.

In machine learning, a bot is formally known as an agent. In the case of this tutorial, an agent is a "player" in the system that acts according to a decision-making function, called a policy. The primary goal is to develop strong agents by arming them with strong policies. In other words, our aim is to develop intelligent bots by arming them with strong decision-making capabilities.

You will begin this tutorial by training a basic reinforcement learning agent that takes random actions when playing Space Invaders, the classic Atari arcade game, which will serve as your baseline for comparison. Following this, you will explore several other techniques, including Q-learning, deep Q-learning, and least squares, while building agents that play Space Invaders and Frozen Lake, a simple game environment included in Gym, a toolkit released by OpenAI.
By following this tutorial, you will gain an understanding of the fundamental concepts that govern one's choice of model complexity in machine learning.

Prerequisites

To complete this tutorial, you will need:

- A server running Ubuntu with at least 1GB of RAM. This server should have a non-root user with sudo privileges configured, as well as a firewall set up with UFW. You can set this up by following the Initial Server Setup Guide for Ubuntu.
- A Python 3 virtual environment, which you can achieve by reading our guide "How To Install Python 3 and Set Up a Programming Environment on an Ubuntu Server."

Alternatively, if you are using a local machine, you can install Python 3 and set up a local programming environment by reading the appropriate tutorial for your operating system via our Python Installation and Setup Series.

Step 1 - Creating the Project and Installing Dependencies

In order to set up the development environment for your bots, you must download the game itself and the libraries needed for computation.

Begin by creating a workspace for this project named AtariBot:

mkdir ~/AtariBot

Navigate to the new AtariBot directory:
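cd ~/AtariBot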
Then create a new virtual environment for the project. You can name this virtual environment anything you'd like; here, we will name it ataribot:

python3 -m venv ataribot

Activate your environment:

source ataribot/bin/activate

On Ubuntu, OpenCV requires a few more packages to be installed in order to function. These include CMake (an application that manages software build processes) as well as a session manager, miscellaneous extensions, and digital image composition. Run the following command to install these packages:

sudo apt-get install -y cmake libsm6 libxext6 libxrender-dev libz-dev

Note: If you're following this guide on a local machine running macOS, the only additional software you need to install is CMake. Install it using Homebrew (which you will have installed if you followed the prerequisite macOS tutorial) by typing:

brew install cmake
Next, use pip to install the wheel package, the reference implementation of the wheel packaging standard. A Python library, this package serves as an extension for building wheels and includes a command line tool for working with .whl files:

python3 -m pip install wheel

In addition to wheel, you'll need to install the following packages:

- Gym, a Python library that makes various games available for research, as well as all dependencies for the Atari games. Developed by OpenAI, Gym offers public benchmarks for each of the games so that the performance for various agents and algorithms can be uniformly evaluated.
- Tensorflow, a deep learning library. This library gives us the ability to run computations more efficiently. Specifically, it does this by building mathematical functions using Tensorflow's abstractions that run exclusively on your GPU.
- OpenCV, the computer vision library mentioned previously.
- SciPy, a scientific computing library that offers efficient optimization algorithms.
- NumPy, a linear algebra library.

Install each of these packages with the following command. Note that you should pin each package to a specific version known to work with TensorFlow 1.x so that your environment matches the tutorial's:

python3 -m pip install gym tensorflow tensorpack numpy scipy opencv-python
After this, use pip once more to install Gym's Atari environments, which include a variety of Atari video games, including Space Invaders:

python3 -m pip install gym[atari]

If your installation of the gym[atari] package was successful, your output will end with the following:

Output
Installing collected packages: atari-py, Pillow, PyOpenGL
Successfully installed Pillow PyOpenGL atari-py

With these dependencies installed, you're ready to move on and build an agent that plays randomly to serve as your baseline for comparison.

Step 2 - Creating a Baseline Random Agent with Gym

Now that the required software is on your server, you will set up an agent that will play a simplified version of the classic Atari game, Space Invaders. For any experiment, it is necessary to obtain a baseline to help you understand how well your model performs. Because this agent takes random actions at each frame, we'll refer to it as our random, baseline agent. In this case, you will compare against this baseline agent to understand how well your agents perform in later steps.

With Gym, you maintain your own game loop. This means that you handle every step of the game's execution: at every time step, you give the gym a new action and ask gym for the game state. In this tutorial, the
game state is the game's appearance at a given time step, and is precisely what you would see if you were playing the game.

Using your preferred text editor, create a Python file named bot_2_random.py. Here, we'll use nano:

nano bot_2_random.py

Note: Throughout this guide, the bots' names are aligned with the step number in which they appear, rather than the order in which they appear. Hence, this bot is named bot_2_random.py rather than bot_1_random.py.

Start this script by adding the following highlighted lines. These lines include a comment block that explains what this script will do and two import statements that will import the packages this script will ultimately need in order to function:

/AtariBot/bot_2_random.py

"""
Bot 2 -- Make a random, baseline agent for the SpaceInvaders game.
"""

import gym
import random

Add a main function. In this function, create the game environment, SpaceInvaders-v0, and then initialize the game using env.reset:

/AtariBot/bot_2_random.py
import gym
import random


def main():
    env = gym.make('SpaceInvaders-v0')
    env.reset()

Next, add an env.step function. This function can return the following kinds of values:

- state: The new state of the game, after applying the provided action.
- reward: The increase in score that the state incurs. By way of example, this could be when a bullet has destroyed an alien, and the score increases by 50 points. Then, reward = 50. In playing any score-based game, the player's goal is to maximize the score. This is synonymous with maximizing the total reward.
- done: Whether or not the episode has ended, which usually occurs when a player has lost all lives.
- info: Extraneous information that you'll put aside for now.

You will use reward to count your total reward. You'll also use done to determine when the player dies, which will be when done returns True.

Add the following game loop, which instructs the game to loop until the player dies:
/AtariBot/bot_2_random.py

def main():
    env = gym.make('SpaceInvaders-v0')
    env.reset()
    episode_reward = 0
    while True:
        action = env.action_space.sample()
        _, reward, done, _ = env.step(action)
        episode_reward += reward
        if done:
            print('Reward: %s' % episode_reward)
            break

Finally, run the main function. Include a __name__ check to ensure that main only runs when you invoke it directly with python bot_2_random.py. If you do not add the if check, main will always be triggered when the Python file is executed, even when you import the file. Consequently, it's good practice to place the code in a main function, executed only when __name__ == '__main__':

/AtariBot/bot_2_random.py

def main():
    . . .
    if done:
        print('Reward: %s' % episode_reward)
        break
if __name__ == '__main__':
    main()

Save the file and exit the editor. If you're using nano, do so by pressing CTRL+X, Y, then ENTER. Then, run your script by typing:

python bot_2_random.py

Your program will output a number, akin to the following. Note that each time you run the file you will get a different result:

Output
Making new env: SpaceInvaders-v0
Reward: ...

These random results present an issue. In order to produce work that other researchers and practitioners can benefit from, your results and trials must be reproducible. To correct this, reopen the script file:

nano bot_2_random.py

After import random, add random.seed(0). After env = gym.make('SpaceInvaders-v0'), add env.seed(0). Together, these lines "seed" the environment with a consistent starting point, ensuring that the results will always be reproducible. Your final file will match the following, exactly:
17,425 | ""bot -make randombaseline agent for the spaceinvaders game ""import gym import random random seed( def main()env gym make('spaceinvaders- 'env seed( env reset(episode_reward while trueaction env action_space sample(_rewarddone_ env step(actionepisode_reward +reward if doneprint('reward%sepisode_rewardbreak if **name*='**main**'main( |
Save the file and exit the editor, then run the script by typing the following in your terminal:

python bot_2_random.py

This will output the following reward, exactly:

Output
Making new env: SpaceInvaders-v0
Reward: ...

This is your very first bot, although it's rather unintelligent since it doesn't account for the surrounding environment when it makes decisions. For a more reliable estimate of your bot's performance, you could have the agent run for multiple episodes at a time, reporting rewards averaged across multiple episodes. To configure this, first reopen the file:

nano bot_2_random.py

After random.seed(0), add the following highlighted line, which tells the agent to play the game for 10 episodes:

/AtariBot/bot_2_random.py

random.seed(0)

num_episodes = 10
Right after env.seed(0), start a new list of rewards:

/AtariBot/bot_2_random.py

    env.seed(0)
    rewards = []

Nest all code from env.reset() to the end of main() in a for loop, iterating num_episodes times. Make sure to indent each line from env.reset() to break by four spaces:

/AtariBot/bot_2_random.py

def main():
    env = gym.make('SpaceInvaders-v0')
    env.seed(0)
    rewards = []

    for _ in range(num_episodes):
        env.reset()
        episode_reward = 0
        while True:
            . . .
Right before break, add the current episode's reward to the list of all rewards:

/AtariBot/bot_2_random.py

            if done:
                print('Reward: %s' % episode_reward)
                rewards.append(episode_reward)
                break

At the end of the main function, report the average reward:

/AtariBot/bot_2_random.py

def main():
    . . .
                print('Reward: %s' % episode_reward)
                rewards.append(episode_reward)
                break
    print('Average reward: %.2f' % (sum(rewards) / len(rewards)))

Your file will now align with the following. Please note that the following code block includes a few comments to clarify key parts of the script:

/AtariBot/bot_2_random.py

"""
Bot 2 -- Make a random, baseline agent for the SpaceInvaders game.
"""

import gym
import random

random.seed(0)  # make results reproducible

num_episodes = 10


def main():
    env = gym.make('SpaceInvaders-v0')  # create the game
    env.seed(0)  # make results reproducible
    rewards = []

    for _ in range(num_episodes):
        env.reset()
        episode_reward = 0
        while True:
            action = env.action_space.sample()  # random action
            _, reward, done, _ = env.step(action)
            episode_reward += reward
            if done:
                print('Reward: %s' % episode_reward)
                rewards.append(episode_reward)
                break
    print('Average reward: %.2f' % (sum(rewards) / len(rewards)))


if __name__ == '__main__':
    main()

Save the file, exit the editor, and run the script:

python bot_2_random.py

This will print the following average reward, exactly:

Output
Making new env: SpaceInvaders-v0
. . .
Average reward: ...

We now have a more reliable estimate of the baseline score to beat. To create a superior agent, though, you will need to understand the framework for reinforcement learning. How can one make the abstract notion of "decision-making" more concrete?

Understanding Reinforcement Learning

In any game, the player's goal is to maximize their score. In this guide, the player's score is referred to as its reward. To maximize their reward, the player must be able to refine its decision-making abilities. Formally, a decision is the process of looking at the game, or observing the game's state, and picking an action. Our decision-making function is called a policy; a policy accepts a state as input and "decides" on an action.
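To make this idea concrete before formalizing it, here is a minimal illustrative sketch (not from the original tutorial) of a policy as an ordinary Python function. The action names are invented; the random baseline agent above is exactly this kind of state-ignoring policy:

import random

ACTIONS = ["left", "right", "shoot"]  # hypothetical action set, for illustration only

def random_policy(state):
    # A policy maps a game state to an action; this one ignores the state
    # entirely and picks at random, just like the baseline agent.
    return random.choice(ACTIONS)

print(random_policy(state=None))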
To build such a function, we will start with a specific set of algorithms in reinforcement learning called Q-learning algorithms. To illustrate these, consider the initial state of a game, which we'll call state0: your spaceship and the aliens are all in their starting positions. Then, assume we have access to a magical "Q-table" which tells us how much reward each action will earn:

STATE   | ACTION | REWARD
state0  | shoot  | 10
state0  | right  | 3
state0  | left   | 3

The shoot action will maximize your reward, as it results in the reward with the highest value. As you can see, a Q-table provides a straightforward way to make decisions, based on the observed state:

policy: state -> look at Q-table, pick action with greatest reward

However, most games have too many states to list in a table. In such cases, the Q-learning agent learns a Q-function instead of a Q-table. We use this Q-function similarly to how we used the Q-table previously. Rewriting the table entries as functions gives us the following:

Q(state0, shoot) = 10
Q(state0, right) = 3
Q(state0, left) = 3
Given a particular state, it's easy for us to make a decision: we simply look at each possible action and its reward, then take the action that corresponds with the highest expected reward. Reformulating the earlier policy more formally, we have:

policy: state -> argmax_{action} Q(state, action)

This satisfies the requirements of a decision-making function: given a state in the game, it decides on an action. However, this solution depends on knowing Q(state, action) for every state and action. To estimate Q(state, action), consider the following:

1. Given many observations of an agent's states, actions, and rewards, one can obtain an estimate of the reward for every state and action by taking a running average.
2. Space Invaders is a game with delayed rewards: the player is rewarded when the alien is blown up and not when the player shoots. However, the player taking an action by shooting is the true impetus for the reward. Somehow, the Q-function must assign Q(state0, shoot) a positive reward.

These two insights are codified in the following equations:

Q(state, action) = (1 - learning_rate) * Q(state, action) + learning_rate * Q_target
Q_target = reward + discount_factor * max_{action'} Q(state', action')

These equations use the following definitions:

- state: the state at current time step
- action: the action taken at current time step
- reward: the reward for current time step
- state': the new state for next time step, given that we took the action
- action': all possible actions
- learning_rate: the learning rate
- discount_factor: the discount factor, how much reward "degrades" as we propagate it

For a complete explanation of these two equations, see this article on Understanding Q-Learning.

With this understanding of reinforcement learning in mind, all that remains is to actually run the game and obtain these Q-value estimates for a new policy.

Step 3 - Creating a Simple Q-learning Agent for Frozen Lake

Now that you have a baseline agent, you can begin creating new agents and compare them against the original. In this step, you will create an agent that uses Q-learning, a reinforcement learning technique used to teach an agent which action to take given a certain state. This agent will play a new game, FrozenLake. The setup for this game is described as follows on the Gym website:

Winter is here. You and your friends were tossing around a frisbee at the park when you made a wild throw that left the frisbee out in the middle of the lake. The water is mostly frozen, but there are a few holes where the ice has melted. If you step into one of those holes, you'll fall into the freezing
water. At this time, there's an international frisbee shortage, so it's absolutely imperative that you navigate across the lake and retrieve the disc. However, the ice is slippery, so you won't always move in the direction you intend.

The surface is described using a grid like the following:

SFFF    (S: starting point, safe)
FHFH    (F: frozen surface, safe)
FFFH    (H: hole, fall to your doom)
HFFG    (G: goal, where the frisbee is located)

The player starts at the top left, denoted by S, and works its way to the goal at the bottom right, denoted by G. The available actions are right, left, up, and down, and reaching the goal results in a score of 1. There are a number of holes, denoted H, and falling into one immediately results in a score of 0.

In this section, you will implement a simple Q-learning agent. Using what you've learned previously, you will create an agent that trades off between exploration and exploitation. In this context, exploration means the agent acts randomly, and exploitation means it uses its Q-values to choose what it believes to be the optimal action. You will also create a table to hold the Q-values, updating it incrementally as the agent acts and learns.

Make a copy of your script from Step 2:

cp bot_2_random.py bot_3_q_table.py

Then open up this new file for editing:
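nano bot_3_q_table.py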
Begin by updating the comment at the top of the file that describes the script's purpose. Because this is only a comment, this change isn't necessary for the script to function properly, but it can be helpful for keeping track of what the script does:

/AtariBot/bot_3_q_table.py

"""
Bot 3 -- Build simple q-learning agent for FrozenLake
"""

Before you make functional modifications to the script, you will need to import numpy for its linear algebra utilities. Right underneath import gym, add the highlighted line:

/AtariBot/bot_3_q_table.py

"""
Bot 3 -- Build simple q-learning agent for FrozenLake
"""

import gym
import numpy as np
import random

random.seed(0)  # make results reproducible
Directly beneath random.seed(0), add a seed for numpy so that its results are reproducible as well:

/AtariBot/bot_3_q_table.py

import random

random.seed(0)  # make results reproducible
np.random.seed(0)

Next, make the game states accessible. Update the env.reset() line to say the following, which stores the initial state of the game in the variable state:

/AtariBot/bot_3_q_table.py

    for _ in range(num_episodes):
        state = env.reset()
        . . .

Update the env.step(...) line to say the following, which stores the next state, state2. You will need both the current state and the next one, state2, to update the Q-function:

/AtariBot/bot_3_q_table.py

        while True:
            action = env.action_space.sample()
            state2, reward, done, _ = env.step(action)
            episode_reward += reward

Add a line updating the variable state. This keeps the variable state updated for the next iteration, as you will expect state to reflect the current state:

/AtariBot/bot_3_q_table.py

        while True:
            . . .
            episode_reward += reward
            state = state2
            if done:
                . . .

In the if done block, delete the print statement which prints the reward for each episode. Instead, you'll output the average reward over many episodes. The if done block will then look like this:

/AtariBot/bot_3_q_table.py

            if done:
                rewards.append(episode_reward)
                break

After these modifications your game loop will match the following:
/AtariBot/bot_3_q_table.py

    for _ in range(num_episodes):
        state = env.reset()
        episode_reward = 0
        while True:
            action = env.action_space.sample()
            state2, reward, done, _ = env.step(action)
            episode_reward += reward
            state = state2
            if done:
                rewards.append(episode_reward)
                break

Next, add the ability for the agent to trade off between exploration and exploitation. Right before your main game loop (which starts with for...), create the Q-value table:

/AtariBot/bot_3_q_table.py

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(num_episodes):
        . . .

Then, rewrite the for loop to expose the episode number:

/AtariBot/bot_3_q_table.py
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        . . .

Inside the while True: inner game loop, create noise. Noise, or meaningless, random data, is sometimes introduced when training deep neural networks because it can improve both the performance and the accuracy of the model. Note that the higher the noise, the less the values in Q[state, :] matter. As a result, the higher the noise, the more likely that the agent acts independently of its knowledge of the game. In other words, higher noise encourages the agent to explore random actions:

/AtariBot/bot_3_q_table.py

        while True:
            noise = np.random.random((1, env.action_space.n)) / (episode**2.)
            action = env.action_space.sample()
            . . .

Note that as episodes increases, the amount of noise decreases quadratically: as time goes on, the agent explores less and less because it can trust its own assessment of the game's reward and begin to exploit its knowledge.

Update the action line to have your agent pick actions according to the Q-value table, with some exploration built in:
/AtariBot/bot_3_q_table.py

            noise = np.random.random((1, env.action_space.n)) / (episode**2.)
            action = np.argmax(Q[state, :] + noise)
            state2, reward, done, _ = env.step(action)

Your main game loop will then match the following:

/AtariBot/bot_3_q_table.py

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        state = env.reset()
        episode_reward = 0
        while True:
            noise = np.random.random((1, env.action_space.n)) / (episode**2.)
            action = np.argmax(Q[state, :] + noise)
            state2, reward, done, _ = env.step(action)
            episode_reward += reward
            state = state2
            if done:
                rewards.append(episode_reward)
                break
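As a quick illustrative aside (not part of the script), you can evaluate the same noise expression at a few episode numbers to see how quickly exploration decays; here n_actions is just a stand-in for env.action_space.n:

import numpy as np

np.random.seed(0)
n_actions = 4  # stand-in for env.action_space.n; the exact count does not matter here

for episode in [1, 10, 100]:
    noise = np.random.random((1, n_actions)) / (episode**2.)
    # the perturbation added to Q[state, :] shrinks like 1/episode^2
    print(episode, noise.max())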
Next, you will update your Q-values using the Bellman equation, an equation widely used in machine learning to find the optimal policy within a given environment.

The Bellman equation incorporates two ideas that are highly relevant to this project. First, taking a particular action from a particular state many times will result in a good estimate for the Q-value associated with that state and action. To this end, you will increase the number of episodes this bot must play through in order to return a stronger Q-value estimate. Second, rewards must propagate through time, so that the original action is assigned a non-zero reward. This idea is clearest in games with delayed rewards; for example, in Space Invaders, the player is rewarded when the alien is blown up and not when the player shoots. However, the player shooting is the true impetus for a reward. Likewise, the Q-function must assign Q(state0, shoot) a positive reward.

First, update num_episodes to equal 4000:

/AtariBot/bot_3_q_table.py

np.random.seed(0)
num_episodes = 4000

Then, add the necessary hyperparameters to the top of the file in the form of two more variables:

/AtariBot/bot_3_q_table.py

num_episodes = 4000
discount_factor = 0.8
learning_rate = 0.9

Compute the new target Q-value, right after the line containing env.step(...):

/AtariBot/bot_3_q_table.py

            state2, reward, done, _ = env.step(action)
            Qtarget = reward + discount_factor * np.max(Q[state2, :])
            episode_reward += reward

On the line directly after Qtarget, update the Q-value table using a weighted average of the old and new Q-values:

/AtariBot/bot_3_q_table.py

            Qtarget = reward + discount_factor * np.max(Q[state2, :])
            Q[state, action] = (1 - learning_rate) * Q[state, action] + learning_rate * Qtarget
            episode_reward += reward

Check that your main game loop now matches the following:
/AtariBot/bot_3_q_table.py

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        state = env.reset()
        episode_reward = 0
        while True:
            noise = np.random.random((1, env.action_space.n)) / (episode**2.)
            action = np.argmax(Q[state, :] + noise)
            state2, reward, done, _ = env.step(action)
            Qtarget = reward + discount_factor * np.max(Q[state2, :])
            Q[state, action] = (1 - learning_rate) * Q[state, action] + learning_rate * Qtarget
            episode_reward += reward
            state = state2
            if done:
                rewards.append(episode_reward)
                break

Our logic for training the agent is now complete. All that's left is to add reporting mechanisms.

Even though Python does not enforce strict type checking, add types to your function declarations for cleanliness. At the top of the file, before the first line reading import gym, import the List type:
/AtariBot/bot_3_q_table.py

from typing import List
import gym

Right after learning_rate, outside of the main function, declare the interval and format for reports:

/AtariBot/bot_3_q_table.py

learning_rate = 0.9
report_interval = 500
report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'


def main():
    . . .

Before the main function, add a new function that will populate this report string, using the list of all rewards:

/AtariBot/bot_3_q_table.py

report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'


def print_report(rewards: List, episode: int):
    """Print rewards report for current episode
    - Average for last 100 episodes
    - Best 100-episode average across all time
    - Average for all episodes across time
    """
    print(report % (
        np.mean(rewards[-100:]),
        max([np.mean(rewards[i:i+100]) for i in range(len(rewards) - 100)]),
        np.mean(rewards),
        episode))


def main():
    . . .

Change the game to FrozenLake instead of SpaceInvaders:

/AtariBot/bot_3_q_table.py

def main():
    env = gym.make('FrozenLake-v0')  # create the game

After rewards.append(...), print the average reward over the last 100 episodes and print the average reward across all episodes:

/AtariBot/bot_3_q_table.py

            if done:
                rewards.append(episode_reward)
                if episode % report_interval == 0:
                    print_report(rewards, episode)
                . . .

At the end of the main() function, report both averages once more. Do this by replacing the line that reads print('Average reward: %.2f' % (sum(rewards) / len(rewards))) with the following highlighted line:

/AtariBot/bot_3_q_table.py

def main():
    . . .
                break
    print_report(rewards, -1)

Finally, you have completed your Q-learning agent. Check that your script aligns with the following:

/AtariBot/bot_3_q_table.py

"""
Bot 3 -- Build simple q-learning agent for FrozenLake
"""
from typing import List
import gym
import numpy as np
import random

random.seed(0)  # make results reproducible
np.random.seed(0)  # make results reproducible

num_episodes = 4000
discount_factor = 0.8
learning_rate = 0.9
report_interval = 500
report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'


def print_report(rewards: List, episode: int):
    """Print rewards report for current episode
    - Average for last 100 episodes
    - Best 100-episode average across all time
    - Average for all episodes across time
    """
    print(report % (
        np.mean(rewards[-100:]),
        max([np.mean(rewards[i:i+100]) for i in range(len(rewards) - 100)]),
        np.mean(rewards),
        episode))


def main():
    env = gym.make('FrozenLake-v0')  # create the game
    env.seed(0)  # make results reproducible
    rewards = []

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        state = env.reset()
        episode_reward = 0
        while True:
            noise = np.random.random((1, env.action_space.n)) / (episode**2.)
            action = np.argmax(Q[state, :] + noise)
            state2, reward, done, _ = env.step(action)
            Qtarget = reward + discount_factor * np.max(Q[state2, :])
            Q[state, action] = (1 - learning_rate) * Q[state, action] + \
                learning_rate * Qtarget
            episode_reward += reward
            state = state2
            if done:
                rewards.append(episode_reward)
                if episode % report_interval == 0:
                    print_report(rewards, episode)
                break
    print_report(rewards, -1)


if __name__ == '__main__':
    main()

Save the file, exit your editor, and run the script:

python bot_3_q_table.py

Your output will match the following:

Output
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 500)
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 1000)
. . .
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 4000)
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode -1)

You now have your first non-trivial bot for games, but let's put this average reward into perspective. According to the Gym FrozenLake page, "solving" the game means attaining a 100-episode average of 0.78. Informally, "solving" means "plays the game very well". While not in record time, the Q-table agent is able to solve FrozenLake in 4000 episodes.

However, the game may be more complex. Here, you used a table to store all of the possible states, but consider tic tac toe, in which there are 19,683 possible states. Likewise, consider Space Invaders, where there are too many possible states to count. A Q-table is not sustainable as games grow increasingly complex. For this reason, you need some way to approximate the Q-table. As you continue experimenting in the next step, you will design a function that can accept states and actions as inputs and output a Q-value.

Step 4 - Building a Deep Q-learning Agent for Frozen Lake

In reinforcement learning, a neural network effectively predicts the value of Q based on the state and action inputs. Using a table to store all the possible values works for a small game, but this becomes unstable in complex games; deep reinforcement learning instead uses a neural network to approximate the Q-function. For more details, see Understanding Deep Q-Learning.

To get accustomed to TensorFlow, a deep learning library you installed in Step 1, you will reimplement all of the logic used so far with TensorFlow's
abstractions, and you'll use a neural network to approximate your Q-function. However, your neural network will be extremely simple: your output Q(s) is a matrix W multiplied by your input s. This is known as a neural network with one fully-connected layer:

Q(s) = Ws

To reiterate, the goal is to reimplement all of the logic from the bots we've already built using TensorFlow's abstractions. This will make your operations more efficient, as TensorFlow can then perform all computation on the GPU.

Begin by duplicating your Q-table script from Step 3:

cp bot_3_q_table.py bot_4_q_network.py

Then open the new file with nano or your preferred text editor:

nano bot_4_q_network.py

First, update the comment at the top of the file:

/AtariBot/bot_4_q_network.py

"""
Bot 4 -- Use Q-learning network to train bot
"""
Next, add import tensorflow as tf right below import random. Additionally, add tf.set_random_seed(0) right below np.random.seed(0). This will ensure that the results of this script will be repeatable across all sessions:

/AtariBot/bot_4_q_network.py

import random
import tensorflow as tf

random.seed(0)
np.random.seed(0)
tf.set_random_seed(0)

Redefine your hyperparameters at the top of the file to match the following, and add a function called exploration_probability, which will return the probability of exploration at each step. Remember that, in this context, "exploration" means taking a random action, as opposed to taking the action recommended by the Q-value estimates:

/AtariBot/bot_4_q_network.py

num_episodes = 4000
discount_factor = 0.99
learning_rate = 0.15
report_interval = 500
exploration_probability = lambda episode: 50. / (episode + 10)
report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'

Next, you will add a one-hot encoding function. In short, one-hot encoding is a process through which variables are converted into a form that helps machine learning algorithms make better predictions. If you'd like to learn more about one-hot encoding, you can check out Adversarial Examples in Computer Vision: How to Build then Fool an Emotion-Based Dog Filter.

Directly beneath report, add a one_hot function:

/AtariBot/bot_4_q_network.py

report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'


def one_hot(i: int, n: int) -> np.array:
    """Implements one-hot encoding by selecting the ith standard basis vector"""
    return np.identity(n)[i].reshape((1, -1))


def print_report(rewards: List, episode: int):
    . . .
Next, you will move your Q-value computations into TensorFlow's abstractions. Before doing this, though, you'll need to first create placeholders for your data.

In your main function, directly beneath rewards=[], insert the following highlighted content. Here, you define placeholders for your observation at time t (as obs_t_ph) and time t+1 (as obs_tp1_ph), as well as placeholders for your action, reward, and Q target:

/AtariBot/bot_4_q_network.py

def main():
    env = gym.make('FrozenLake-v0')  # create the game
    env.seed(0)  # make results reproducible
    rewards = []

    # Setup placeholders
    n_obs, n_actions = env.observation_space.n, env.action_space.n
    obs_t_ph = tf.placeholder(shape=[1, n_obs], dtype=tf.float32)
    obs_tp1_ph = tf.placeholder(shape=[1, n_obs], dtype=tf.float32)
    act_ph = tf.placeholder(tf.int32, shape=())
    rew_ph = tf.placeholder(shape=(), dtype=tf.float32)
    q_target_ph = tf.placeholder(shape=[1, n_actions], dtype=tf.float32)

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        . . .
Directly beneath these placeholders, insert the following highlighted lines. This code starts your computation by computing Q(s, a) for all a to make q_current, and Q(s', a') for all a' to make q_target:

/AtariBot/bot_4_q_network.py

    rew_ph = tf.placeholder(shape=(), dtype=tf.float32)
    q_target_ph = tf.placeholder(shape=[1, n_actions], dtype=tf.float32)

    # Setup computation graph
    W = tf.Variable(tf.random_uniform([n_obs, n_actions], 0, 0.01))
    q_current = tf.matmul(obs_t_ph, W)
    q_target = tf.matmul(obs_tp1_ph, W)

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        . . .

Again, directly beneath the last line you added, insert the following highlighted code. The first two lines are equivalent to the line added in Step 3 that computes Qtarget, where Qtarget = reward + discount_factor * np.max(Q[state2, :]). The next two lines set up your loss, while the last line computes the action that maximizes your Q-value:

/AtariBot/bot_4_q_network.py
    q_target = tf.matmul(obs_tp1_ph, W)

    q_target_max = tf.reduce_max(q_target_ph, axis=1)
    q_target_sa = rew_ph + discount_factor * q_target_max
    q_current_sa = q_current[0, act_ph]
    error = tf.reduce_sum(tf.square(q_target_sa - q_current_sa))
    pred_act_ph = tf.argmax(q_current, 1)

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        . . .

After setting up your algorithm and the loss function, define your optimizer:

/AtariBot/bot_4_q_network.py

    error = tf.reduce_sum(tf.square(q_target_sa - q_current_sa))
    pred_act_ph = tf.argmax(q_current, 1)

    # Setup optimization
    trainer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    update_model = trainer.minimize(error)

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for episode in range(1, num_episodes + 1):
        . . .
Next, set up the body of the game loop. To do this, pass data to the TensorFlow placeholders, and TensorFlow's abstractions will handle the computation on the GPU, returning the result of the algorithm.

Start by deleting the old Q-table and logic. Specifically, delete the lines that define Q (right before the for loop), noise (in the while loop), action, Qtarget, and Q[state, action]. Rename state to obs_t and state2 to obs_tp1 to align with the TensorFlow placeholders you set previously. When finished, your for loop will match the following:

/AtariBot/bot_4_q_network.py

    # Setup optimization
    trainer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    update_model = trainer.minimize(error)

    for episode in range(1, num_episodes + 1):
        obs_t = env.reset()
        episode_reward = 0
        while True:
            obs_tp1, reward, done, _ = env.step(action)
            episode_reward += reward
            obs_t = obs_tp1
            if done:
                . . .
Directly above the for loop, add the following two highlighted lines. These lines initialize a TensorFlow session which, in turn, manages the resources needed to run operations on the GPU. The second line initializes all the variables in your computation graph; for example, initializing weights to 0 before updating them. Additionally, you will nest the for loop within the with statement, so indent the entire for loop by four spaces:

/AtariBot/bot_4_q_network.py

    trainer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    update_model = trainer.minimize(error)

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())

        for episode in range(1, num_episodes + 1):
            obs_t = env.reset()
            . . .

Before the line reading obs_tp1, reward, done, _ = env.step(action), insert the following lines to compute the action. This code evaluates the corresponding placeholder and replaces the action with a random action with some probability:
/AtariBot/bot_4_q_network.py

            while True:
                # Take step using best action or random action
                obs_t_oh = one_hot(obs_t, n_obs)
                action = session.run(pred_act_ph, feed_dict={obs_t_ph: obs_t_oh})[0]
                if np.random.rand(1) < exploration_probability(episode):
                    action = env.action_space.sample()
                . . .

After the line containing env.step(action), insert the following to train the neural network in estimating your Q-value function:

/AtariBot/bot_4_q_network.py

                obs_tp1, reward, done, _ = env.step(action)

                # Train model
                obs_tp1_oh = one_hot(obs_tp1, n_obs)
                q_target_val = session.run(q_target, feed_dict={
                    obs_tp1_ph: obs_tp1_oh
                })
                session.run(update_model, feed_dict={
                    obs_t_ph: obs_t_oh,
                    rew_ph: reward,
                    q_target_ph: q_target_val,
                    act_ph: action
                })
                episode_reward += reward

Your final file will match this source code:

/AtariBot/bot_4_q_network.py

"""
Bot 4 -- Use Q-learning network to train bot
"""

from typing import List
import gym
import numpy as np
import random
import tensorflow as tf

random.seed(0)
np.random.seed(0)
tf.set_random_seed(0)

num_episodes = 4000
discount_factor = 0.99
learning_rate = 0.15
report_interval = 500
exploration_probability = lambda episode: 50. / (episode + 10)
report = '100-ep Average: %.2f . Best 100-ep Average: %.2f . Average: %.2f ' \
         '(Episode %d)'


def one_hot(i: int, n: int) -> np.array:
    """Implements one-hot encoding by selecting the ith standard basis vector"""
    return np.identity(n)[i].reshape((1, -1))


def print_report(rewards: List, episode: int):
    """Print rewards report for current episode
    - Average for last 100 episodes
    - Best 100-episode average across all time
    - Average for all episodes across time
    """
    print(report % (
        np.mean(rewards[-100:]),
        max([np.mean(rewards[i:i+100]) for i in range(len(rewards) - 100)]),
        np.mean(rewards),
        episode))


def main():
    env = gym.make('FrozenLake-v0')  # create the game
    env.seed(0)  # make results reproducible
    rewards = []

    # Setup placeholders
    n_obs, n_actions = env.observation_space.n, env.action_space.n
    obs_t_ph = tf.placeholder(shape=[1, n_obs], dtype=tf.float32)
    obs_tp1_ph = tf.placeholder(shape=[1, n_obs], dtype=tf.float32)
    act_ph = tf.placeholder(tf.int32, shape=())
    rew_ph = tf.placeholder(shape=(), dtype=tf.float32)
    q_target_ph = tf.placeholder(shape=[1, n_actions], dtype=tf.float32)

    # Setup computation graph
    W = tf.Variable(tf.random_uniform([n_obs, n_actions], 0, 0.01))
    q_current = tf.matmul(obs_t_ph, W)
    q_target = tf.matmul(obs_tp1_ph, W)

    q_target_max = tf.reduce_max(q_target_ph, axis=1)
    q_target_sa = rew_ph + discount_factor * q_target_max
    q_current_sa = q_current[0, act_ph]
    error = tf.reduce_sum(tf.square(q_target_sa - q_current_sa))
    pred_act_ph = tf.argmax(q_current, 1)

    # Setup optimization
    trainer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    update_model = trainer.minimize(error)

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())

        for episode in range(1, num_episodes + 1):
            obs_t = env.reset()
            episode_reward = 0
            while True:
                # Take step using best action or random action
                obs_t_oh = one_hot(obs_t, n_obs)
                action = session.run(pred_act_ph, feed_dict={obs_t_ph: obs_t_oh})[0]
                if np.random.rand(1) < exploration_probability(episode):
                    action = env.action_space.sample()

                obs_tp1, reward, done, _ = env.step(action)

                # Train model
                obs_tp1_oh = one_hot(obs_tp1, n_obs)
                q_target_val = session.run(q_target, feed_dict={
                    obs_tp1_ph: obs_tp1_oh
                })
                session.run(update_model, feed_dict={
                    obs_t_ph: obs_t_oh,
                    rew_ph: reward,
                    q_target_ph: q_target_val,
                    act_ph: action
                })
                episode_reward += reward
                obs_t = obs_tp1
                if done:
                    rewards.append(episode_reward)
                    if episode % report_interval == 0:
                        print_report(rewards, episode)
                    break
    print_report(rewards, -1)


if __name__ == '__main__':
    main()

Save the file, exit your editor, and run the script:

python bot_4_q_network.py

Your output will end with the following, exactly:

Output
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 500)
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 1000)
. . .
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 3500)
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode 4000)
100-ep Average: ... . Best 100-ep Average: ... . Average: ... (Episode -1)

You've now trained your very first deep Q-learning agent. For a game as simple as FrozenLake, your deep Q-learning agent required 4000 episodes to train. Imagine if the game were far more complex. How many training samples would that require to train? As it turns out, the agent could require millions of samples. The number of samples required is referred to as sample complexity, a concept explored further in the next section.

Understanding Bias-Variance Tradeoffs

Generally speaking, sample complexity is at odds with model complexity in machine learning:

1. Model complexity: one wants a sufficiently complex model to solve their problem. For example, a model as simple as a line is not sufficiently complex to predict a car's trajectory.
2. Sample complexity: one would like a model that does not require many samples. This could be because they have limited access to labeled data, an insufficient amount of computing power, limited memory, etc.

Say we have two models, one simple and one extremely complex. For both models to attain the same performance, bias-variance tells us that
the extremely complex model will need exponentially more samples to train. Case in point: your neural network-based Q-learning agent required 4000 episodes to solve FrozenLake. Adding a second layer to the neural network agent quadruples the number of necessary training episodes. With increasingly complex neural networks, this divide only grows. To maintain the same error rate, increasing model complexity increases the sample complexity exponentially. Likewise, decreasing sample complexity decreases model complexity. Thus, we cannot maximize model complexity and minimize sample complexity to our heart's desire.

We can, however, leverage our knowledge of this tradeoff. For a visual interpretation of the mathematics behind the bias-variance decomposition, see Understanding the Bias-Variance Tradeoff. At a high level, the bias-variance decomposition is a breakdown of "true error" into two components: bias and variance. We refer to "true error" as mean squared error (MSE), which is the expected difference between our predicted labels and the true labels. The following is a plot showing the change of "true error" as model complexity increases:

Plot of true error as model complexity increases
Step 5 - Building a Least Squares Agent for Frozen Lake

The least squares method, also known as linear regression, is a means of regression analysis used widely in the fields of mathematics and data science. In machine learning, it's often used to find the optimal linear model of two parameters or datasets.

In Step 4, you built a neural network to compute Q-values. Instead of a neural network, in this step you will use ridge regression, a variant of least squares, to compute this vector of Q-values. The hope is that with a model as uncomplicated as least squares, solving the game will require fewer training episodes.

Start by duplicating the script from Step 3:

cp bot_3_q_table.py bot_5_ls.py
17,468 | nano bot_ _ls.py

Again, update the comment at the top of the file describing what this script will do:

/ataribot/bot_ _ls.py
"""Bot - Build least squares q-learning agent for FrozenLake"""

Before the block of imports near the top of your file, add two more imports for type checking:

/ataribot/bot_ _ls.py
from typing import Tuple
from typing import Callable
from typing import List
import gym

In your list of hyperparameters, add another hyperparameter, w_lr, to control the second Q-function's learning rate. Additionally, update the number of episodes and the discount factor; by changing these to
17,469 | larger values, the agent will be able to issue a stronger performance:

/ataribot/bot_ _ls.py
num_episodes
discount_factor
learning_rate
w_lr
report_interval

Before your print_report function, add the following higher-order function. It returns a lambda, an anonymous function, that abstracts away the model:

/ataribot/bot_ _ls.py
report_interval
report = ' -ep Average   Best  -ep Average   Average   (Episode %d)'


def makeQ(model: np.array) -> Callable[[np.array], np.array]:
    """Returns a Q-function, which takes state -> distribution over actions"""
    return lambda x: x.dot(model)
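To make the role of this higher-order function concrete, here is a short, hedged usage sketch. It assumes the makeQ defined above, and the dimensions and variable names are illustrative (FrozenLake has 16 states and 4 actions), not taken from the script:

import numpy as np

n_obs, n_actions = 16, 4                       # illustrative FrozenLake-sized dimensions
W = np.random.normal(0.0, 0.1, (n_obs, n_actions))
Q = makeQ(W)                                   # Q is now a function of the state

state = np.identity(n_obs)[3]                  # one-hot encoding of state 3
action_values = Q(state)                       # one estimated Q-value per action
best_action = int(np.argmax(action_values))    # greedy action under the current model

Because makeQ closes over the model, retraining simply means producing a new W and calling makeQ again.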
17,470 | after makeqadd another functioninitializewhich initializes the model using normally-distributed values/ataribot/bot_ _ls py def makeq(modelnp array-callable[[np array]np array]"""returns -functionwhich takes state -distribution over actions""return lambda xx dot(modeldef initialize(shapetuple)"""initialize model"" np random normal( shapeq makeq(wreturn wq def print_report(rewardslistepisodeint)after the initialize blockadd train method that computes the ridge regression closed-form solutionthen weights the old model with the new one it returns both the model and the abstracted -function/ataribot/bot_ _ls py |
17,471 | return wq def train(xnp arrayynp arraywnp array-tuple[np arraycallable]"""train the modelusing solution to ridge regression"" np eye( shape[ ]neww np linalg inv( dot( - idot( dot( ) w_lr neww ( w_lr\ makeq(wreturn wq def print_report(rewardslistepisodeint)after trainadd one last functionone_hotto perform one-hot encoding for your states and actions/ataribot/bot_ _ls py def train(xnp arrayynp arraywnp array-tuple[np arraycallable]return wq def one_hot(iintnint-np array"""implements one-hot encoding by selecting the ith standard basis |
17,472 | return np identity( )[idef print_report(rewardslistepisodeint)following thisyou will need to modify the training logic in the previous script you wrotethe -table was updated every iteration this scripthoweverwill collect samples and labels every time step and train new model every steps additionallyinstead of holding -table or neural networkit will use least squares model to predict -values go to the main function and replace the definition of the -table ( np zeros)with the following/ataribot/bot_ _ls py def main()rewards [n_obsn_actions env observation_space nenv action_space wq initialize((n_obsn_actions)stateslabels [][for episode in range( num_episodes )scroll down before the for loop directly below thisadd the following lines which reset the states and labels lists if there is too much |
17,473 | /ataribot/bot_ _ls py def main()for episode in range( num_episodes )if len(states> stateslabels [][modify the line directly after this onewhich defines state env reset()so that it becomes the following this will one-hot encode the state immediatelyas all of its usages will require one-hot vector/ataribot/bot_ _ls py for episode in range( num_episodes )if len(states> stateslabels [][state one_hot(env reset()n_obsbefore the first line in your while main game loopamend the list of states/ataribot/bot_ _ls py |
17,474 | for episode in range( num_episodes )episode_reward while truestates append(statenoise np random random(( env action_space )(episode\*\* update the computation for actiondecrease the probability of noiseand modify the -function evaluation/ataribot/bot_ _ls py while truestates append(statenoise np random random(( *actions)episode action np argmax( (statenoisestate rewarddoneenv step(actionadd one-hot version of state and amend the -function call in your definition for qtarget as follows/ataribot/bot_ _ls py while true |
17,475 | state one_hot(state n_obsqtarget reward discount_factor np max( (state )delete the line that updates [state,actionand replace it with the following lines this code takes the output of the current model and updates only the value in this output that corresponds to the current action taken as resultq-values for the other actions don' incur loss/ataribot/bot_ _ls py state one_hot(state n_obsqtarget reward discount_factor np max( (state )label (statelabel[action( learning_rate_ label[actionlearning_rate \qtarget labels append(labelepisode_reward +reward right after state state add periodic update to the model this trains your model every time steps/ataribot/bot_ _ls py |
17,476 | if len(states = wq train(np array(states)np array(labels)wif doneensure that your code matches the following/ataribot_ _ls py ""bot -build least squares -learning agent for frozenlake ""from typing import tuple from typing import callable from typing import list import gym import numpy as np import random random seed( make results reproducible np random seed( make results reproducible num_episodes discount_factor learning_rate w_lr report_interval |
17,477 | '(episode % )def makeq(modelnp array-callable[[np array]np array]"""returns -functionwhich takes state -distribution over actions""return lambda xx dot(modeldef initialize(shapetuple)"""initialize model"" np random normal( shapeq makeq(wreturn wq def train(xnp arrayynp arraywnp array-tuple[np arraycallable]"""train the modelusing solution to ridge regression"" np eye( shape[ ]neww np linalg inv( dot( - idot( dot( ) w_lr neww ( w_lrw makeq(wreturn wq |
17,478 | """implements one-hot encoding by selecting the ith standard basis vector""return np identity( )[idef print_report(rewardslistepisodeint)"""print rewards report for current episode average for last episodes best -episode average across all time average for all episodes across time ""print(report np mean(rewards[- :])max([np mean(rewards[ : + ]for in range(len(rewards )])np mean(rewards)episode)def main()env gym make('frozenlake- 'env seed( create the game make results reproducible rewards [n_obsn_actions env observation_space nenv action_space wq initialize((n_obsn_actions)stateslabels [][ |
17,479 | if len(states> stateslabels [][state one_hot(env reset()n_obsepisode_reward while truestates append(statenoise np random random(( n_actions)episode action np argmax( (statenoisestate rewarddone_ env step(actionstate one_hot(state n_obsqtarget reward discount_factor np max( (state )label (statelabel[action( learning_ratelabel[actionlearning_rate qtarget labels append(labelepisode_reward +reward state state if len(states = wq train(np array(states)np array(labels)wif donerewards append(episode_rewardif episode report_interval = print_report(rewardsepisodebreak print_report(rewards- |
17,480 | main(thensave the fileexit the editorand run the scriptpython bot_ _ls py this will output the followingoutput -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average (episode -ep average best -ep average average |
17,481 |  -ep Average   Best  -ep Average   Average   (Episode  )
 -ep Average   Best  -ep Average   Average   (Episode -1)

Recall that, according to the Gym FrozenLake page, "solving" the game means attaining a sufficiently high average over a recent window of episodes. Here the agent achieves an average above that threshold, meaning it was able to solve the game within the episodes it was given. Although this does not solve the game in fewer episodes, this basic least squares method is still able to solve a simple game with roughly the same number of training episodes. Although your neural networks may grow in complexity, you've shown that simple models are sufficient for FrozenLake.

With that, you have explored three Q-learning agents: one using a Q-table, another using a neural network, and a third using least squares. Next, you will build a deep reinforcement learning agent for a more complex game: Space Invaders.

Step - Creating a Deep Q-learning Agent for Space Invaders

Say you tuned the previous Q-learning algorithm's model complexity and sample complexity perfectly, regardless of whether you picked a neural network or the least squares method. As it turns out, this unintelligent Q-learning agent still performs poorly on more complex games, even with an especially high number of training episodes. This section will cover two techniques that can improve performance; then you will test an agent that was trained using these techniques.
17,482 | A deep Q-learning agent that could play Atari games without any human intervention was developed by the researchers at DeepMind, who also trained their agent to play a variety of Atari games. DeepMind's original deep Q-learning (DQN) paper recognized two important issues:

Correlated states: Take the state of our game at time t, which we will call s_t. Say we update Q(s_t, a_t) according to the rules we derived previously. Now, take the state at time t+1, which we call s_{t+1}, and update Q(s_{t+1}, a) according to the same rules. Note that the game's state at time t+1 is very similar to its state at time t. In Space Invaders, for example, the aliens may have moved by one pixel each. Said more succinctly, s_t and s_{t+1} are very similar. Likewise, we also expect Q(s_t, a) and Q(s_{t+1}, a) to be very similar, so updating one affects the other. This leads to fluctuating Q-values, as an update to Q(s_t, a) may in fact counter the update to Q(s_{t+1}, a). More formally, s_t and s_{t+1} are correlated, and since the Q-function is deterministic, Q(s_{t+1}, a) is correlated with Q(s_t, a).

Q-function instability: Recall that the Q function is both the model we train and the source of our labels. Say that our labels are randomly selected values that truly represent a distribution, L. Every time we update Q, we change L, meaning that our model is trying to learn a moving target. This is an issue, as the models we use assume a fixed distribution.

To combat correlated states and an unstable Q-function, one could keep a list of states called a replay buffer. Each time step, you add the game state that you observe to this replay buffer. You
17,483 | then sample a random subset of this buffer and train on those states. The team at DeepMind also duplicated Q(s, a). One is called Q_current(s, a), which is the Q-function you update. You need another Q-function for successor states, Q_target(s', a'), which you won't update. Recall that Q_target(s', a') is used to generate your labels. By separating Q_current from Q_target and fixing the latter, you fix the distribution your labels are sampled from. Then, your deep learning model can spend a short period learning this distribution. After a period of time, you then re-duplicate Q_current for a new Q_target.

You won't implement these yourself, but you will load pretrained models that trained with these solutions. To do this, create a new directory where you will store these models' parameters:

mkdir models

Then use wget to download a pretrained Space Invaders model's parameters:

wget  models

Next, download a Python script that specifies the model associated with the parameters you just downloaded. Note that this pretrained model has two constraints on the input that are necessary to keep in mind:
17,484 | the input consists of four statesstacked we will address these constraints in more detail later on for nowdownload the script by typingwget you will now run this pretrained space invaders agent to see how it performs unlike the past few bots we've usedyou will write this script from scratch create new script filenano bot_ _dqn py begin this script by adding header commentimporting the necessary utilitiesand beginning the main game loop/ataribot/bot_ _dqn py ""bot fully featured deep -learning network ""import cv import gym import numpy as np import random |
17,485 | from bot_ _a import c_model def main()if **name*='**main**'main(directly after your importsset random seeds to make your results reproducible alsodefine hyperparameter num_episodes which will tell the script how many episodes to run the agent for/ataribot/bot_ _dqn py import tensorflow as tf from bot_ _a import c_model random seed( make results reproducible tf set_random_seed( num_episodes def main()two lines after declaring num_episodesdefine downsample function that downsamples all images to size of you will downsample all images before passing them into the pretrained neural networkas the pretrained model was trained on images |
17,486 | num_episodes def downsample(state)return cv resize(state( )interpolation=cv inter_linear)[nonedef main()create the game environment at the start of your main function and seed the environment so that the results are reproducible/ataribot/bot_ _dqn py def main()env gym make('spaceinvaders- 'create the game env seed( make results reproducible directly after the environment seedinitialize an empty list to hold the rewards/ataribot/bot_ _dqn py def main()env gym make('spaceinvaders- 'create the game env seed( make results reproducible |
17,487 | initialize the pretrained model with the pretrained model parameters that you downloaded at the beginning of this step/ataribot/bot_ _dqn py def main()env gym make('spaceinvaders- 'create the game env seed( make results reproducible rewards [model c_model(load='models/spaceinvaders- tfmodel'nextadd some lines telling the script to iterate for num_episodes times to compute average performance and initialize each episode' reward to additionallyadd line to reset the environment (env reset())collecting the new initial state in the processdownsample this initial state with downsample()and start the game loop using while loop/ataribot/bot_ _dqn py def main()env gym make('spaceinvaders- 'create the game env seed( make results reproducible rewards [ |
17,488 | for in range(num_episodes)episode_reward states [downsample(env reset())while trueinstead of accepting one state at timethe new neural network accepts four states at time as resultyou must wait until the list of states contains at least four states before applying the pretrained model add the following lines below the line reading while truethese tell the agent to take random action if there are fewer than four states or to concatenate the states and pass it to the pretrained model if there are at least four/ataribot/bot_ _dqn py while trueif len(states action env action_space sample(elseframes np concatenate(states[- :]axis= action np argmax(model([frames])then take an action and update the relevant data add downsampled version of the observed stateand update the reward for this episode |
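The code that this last instruction refers to is not included in this extract. As a hedged sketch of what such a step-and-update block typically looks like in this kind of Gym loop, using the variable names from the snippets above (this is an assumed continuation, not the tutorial's exact code):

            # (assumed continuation) take the chosen action and observe the result
            obs, reward, done, _ = env.step(action)
            states.append(downsample(obs))   # store a downsampled copy of the new observation
            episode_reward += reward         # update this episode's running reward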
17,489 | author no part of this publication may be reproducedstored in retrieval systemor transmitted in any form or by any meanselectronicmechanicalphotocopyingrecording and/or otherwise without the prior written permission of the author and the publisher first edition published by mrs meena pandey for himalaya publishing house pvt ltd "ramdoot"dr bhalerao marggirgaonmumbai phone - fax - -mailhimpub@vsnl comwebsitewww himpub com branch offices new delhi "pooja apartments" -bmurari lal streetansari roaddarya ganjnew delhi phone - fax - nagpur kundanlal chandak industrial estateghat roadnagpur phone - telefax - bengaluru plot no - nd main roadseshadripurambehind nataraja theatrebengaluru phone - mobile hyderabad no lingampallybesides raghavendra swamy mathamkachigudahyderabad phone - chennai new no / old no / ground floorsarangapani streett nagarchennai mobile pune "lakshaapartmentfirst floorno mehunpurashaniwarpeth (near prabhat theatre)pune phone - mobile lucknow house no shekhupura colonynear convent schoolaliganjlucknow phone - mobile ahmedabad "shail" st flooropp madhu sudan housec roadnavrang puraahmedabad phone - mobile ernakulam / (new no / ) st floorkarikkamuri roadernakulamkochi phone - mobile bhubaneswar plot no / budheswari colonybehind durga mandapbhubaneswar phone - mobile kolkata / beliaghata main roadnear id hospitalopp sbi bankkolkata phone - mobile dtp by rakhi printed at geetanjali press pvt ltd nagpur on behalf of hph |
17,490 | to my parents |
17,491 | having spent more than years in the field of information technology and having published more than research papersi feel obliged to share my knowledgeexperienceanalysis and results of real-life situations the book has evolved from my teaching experience in several technical institutions and rich experience of working in it industry the objective is enabling readers to gain sufficient knowledge and experience to perform useful programming tasks using the latest techniques this book help students to become competent learners by using reading and writing programs to acquire knowledge in the various areas readers who already know python will also find the book useful because the presentation is quite different from that of other books and includes material not found elsewhere python language is always worth looking at from different perspective because there is never an end to learning something new about this rich and flexible programming environment this book comprises of divided into three sections related to core programmingcore libraries and core statistics in python each begins with number of important and interesting examples taken from variety of fields the aim is to explain the concepts and simultaneously to develop in readers an understanding of its application in real-life example every effort has been made to present the topic in an easyclearlucid and systematic manner hope this easy-to-understand approach would enable readers to develop the required skills and apply techniques to all kinds of problems additional questions at the end of each are provided to test the reader' understanding of the subject matter programming is the art of expressing solutions to problems enabling computer to execute those solutions programming is learned by writing programs and hence is similar to other endeavors with practical component we cannot learn to program without writing lots of code along with reading the concept like we cannot learn to swimplay guitaror drive car by just merely reading bookthere is no substitute for writing code along with understanding concepts and principles of programming for acquiring practical skills of programmingwe need to do practical exercises and get used to the tools for writingcompilingand running programs this book is intentionally written with this methodologyand helps you understand the concepts and principles through the practical skills of programming the reader is suggested to execute the programs for understanding utility and effectiveness of the concept in better manner constructive suggestions and comments will be sincerely appreciated from the readers of this book at bhartimotwani@hotmail com dr bharti motwani |
17,492 | expression of feelings by words loses its significance when it comes to say few words of gratitudeyet to express it in some formhowever imperfectis duty towards those who helped offer my special gratitude to the almighty god for his blessings that has made completion of this book possible find myself with paucity of words to express my deep sense of gratitude to my father mr shrichand jagwani and mother mrs anita jagwani for their affectioncontinuous supportconstant encouragement and appreciative understanding my real strength has been the selfless cooperationsolicitous concern and emotional support of my husband mr bharat motwani no formal words can convey thanks to my children pearl and jahan who had to suffer lot because of my preoccupation with the book it is for their patienceforbearancelove and support throughoutthat this mind-absorbing and time-consuming task has been possible am indebted to the entire team of himalaya publishing house pvt ltd and mr srivastav for his sincere effortsunfailing courtesy and co-operation in bringing out the book in this elegant form it has been real pleasure working with such professional staff am grateful to all those people whose constructive suggestions and work have helped me in enhancing the standard of the work directly and/or indirectly and brought the task to fruition dr bharti motwani |
17,493 | section icore programming in python introduction to python features of python installation of python first interaction command line versus scripts comments identifiers reserved words whitespace input and output in python operators arithmetic operators assignment operators relational operators logical operators boolean operators operators precedence libraries in python control flow statements conditional statements if statement if-else statement nested if statement if-elif-else ladder iterators for loop nesting of for loops while loop nesting of while loops nesting of conditional statements and iterators abnormal loop termination break statement continue statement pass statement exception handling user defined functions concept and importance importance of functions defining function calling function |
17,494 | function without arguments function with arguments nesting of functions recursive function scope of variables within functions data structures lists creating list accessing list elements functions for list tuples creating tuple accessing tuple elements functions for tuple dictionary creating dictionary accessing dictionary elements functions for dictionary section iicore libraries in python modules user defined module creating module importing the user defined module inbuilt modules in python the math module the random module the tkinter module the sys module the time module the datetime module arrays using module and numpy library arrays using array module creating an array accessing elements of array functions on array arrays using numpy library creating an array functions for numpy array mathematical and relational operators for multiple arrays multidimensional arrays creating multidimensional array accessing elements in multidimensional array |
17,495 | mathematical operators for single and multiple multidimensional arrays data processing with pandas basics of data frame creating data frame adding rows and columns to the data frame deleting rows and columns from the data frame import of data basic functions of data frame relational and logical operators for filtering data relational operators for filtering data logical operators for filtering data group by functionality creating charts for data frame using pandas handling missing values data visualization with matplotlib and seaborn data visualization with matplotlib charts using plot(function pie chart scatter plot histogram bar chart quiver plot meshgrid contour plot data visualization with seaborn scatter plot regression plot pair plot heatmap box plot violin plot point plot line plot count plot bar plot working with text creating string accessing string elements functions for strings case conversion functions find and replace functions alignment and indentation functions |
17,496 | stripping whitespace functions data type check functions partition functions other functions regular expressions from "remodule text pre-processing using string module the nltk toolkit shallow parsing displaying similar words creating summary of the document by data cleaning section iiicore statistics in python basic statistical functions statistical functions in statistics module measures of central tendency measures of spread statistical functions in scipy library descriptive statistics determining rank determining normality determining homogeneity of variances determining correlation chi-square test for correlation statistical functions in numpy library compare means parametric techniques one sample -test independent sample -test dependent -test one way anova non-parametric techniques kolmogorov-smirnov test for one sample kolmogorov-smirnov test for two samples mann-whitney test for independent samples wilcoxon test for dependent samples kruskal-wallis test regression simple linear regression multiple linear regression ordinary least squares regression logistic regression |
17,497 | SECTION I: CORE PROGRAMMING IN PYTHON

Introduction to Python

Python is an interpreted, high-level, general-purpose programming language created by Guido van Rossum and first released in 1991. Python is van Rossum's vision of a small core language with a large standard library and an easily extensible interpreter, and it stemmed from his frustrations with ABC. Python has topped the charts in recent years over other programming languages after undergoing drastic change since its release years ago. The Python language has diversified applications in software development and a wide reach compared with other programming languages used in the industry. Python is both free and open source and runs on all major operating systems like Microsoft Windows, Linux, and Mac OS. With its strong process integration features, unit testing framework and enhanced control capabilities, it contributes towards increased speed and productivity for most applications. It is a great option for building scalable, multi-protocol network applications.

Features of Python

Python supports multiple programming paradigms, including object-oriented, imperative, functional and procedural. The following are the important features of Python:
- Python allows branching and looping as well as modular programming using functions.
- Python has an effective data handling and storage facility for numeric and textual data.
- Python features a dynamic type system and automatic memory management.
- Python provides a collection of operators for calculations on list, tuple and dictionary.
- Python provides a large and integrated collection of tools for data analysis and statistical functions.
- Python code is interpreted by the interpreter one line at a time. This means that there is no need to compile it like other programming languages.
- It is a platform independent programming language; its code easily runs on any platform such as Windows, Linux, Unix, Macintosh, etc.
17,498 | python language is more expressive it means that it is more understandable and readable python is dynamically-typed this means that the type for value is decided at runtimenot in advance this is why we don' need to specify the type of data while declaring it python is very easy to code compared to other popular languages like java and +python is an integrated suite of software facilities for data manipulationcalculation and graphical facilities for data analysis and display python' syntax is easy to learnnon-programmers and programmers can do programming easily rather than having all of its functionality built into its corepython was designed to be highly extensible this compact modularity has made it particularly popular as means of adding programmable interfaces to existing applications there are over standard library modules which contain modules and classes for wide variety of programming in addition to the standard libraries there are extensive collections of freely available add-on moduleslibrariesframeworksand tool-kits installation of python python is programming language python can be installed for windows ( / bitand save it in local directory python is available for both the versions of windows ( -bit -bitafter installationlocate the icon to run the program in directory structure under the windows program files (fig clicking this icon brings python-gui which is the start for python programming (fig programing in an easy way can be done using anaconda or pycharm software anaconda is free and open-source distribution that consists of python couple of python libraries it also has own virtual environment and repository which can be used along with python command anaconda can be installed from optionsanaconda navigatoranaconda promptjupyter notebookreset spyder settingsspyder (fig jupyter notebook is sheet which gives you opportunity to arrange your code into cells and run it in desired order the first screen of jupyter notebook is displayed in fig we can observe that the jupyter screen does the operation cell by cell and hence there is single place where the programming is done spyder is an integrated development environment (idemeant for python the opening screen of spyder is displayed in fig we can observe that the programming is done in left window and the results are displayed in right bottom window the right top window displays the variables and the data all the programs in this book are made using the spyder software pycharm is an integrated development environment used in computer programmingspecifically for the python language it is developed by the czech company jetbrains and the software can be downloaded from windows as displayed in fig the top left window dives details about the project on which we are working the center top window is the place where programming is done and the bottom window displays the results of the programming |
17,499 | First Interaction

In the Python environment setup, launch the Python interpreter to get the prompt. We will start learning Python programming by writing a "Hello" program. Depending on the needs, we can program either at the Python command prompt or in Jupyter (command line), or we can write a Python script file in Spyder, Jupyter or PyCharm. In all the options, Python issues a prompt where it expects input commands. Type print("Hello") in the different software and observe the results after executing the command. In the Python interpreter, write the statement at the command prompt and press Enter to view the result. In the Spyder and Jupyter software, click on "Run" to view the results. In PyCharm, click on the Run menu and choose the "Run" option; select the appropriate file name to execute the program and view the results.

Command Line versus Scripts

The command line is generally used for a single line and a script is used for multiple commands. If we want to execute only one function, highlight the function and click on "Run current selection". But if we want the whole file containing many functions to get executed together, we need to write a script and click on "Run".

Comments

Comments are like text notes for the user's help in a Python program and they are ignored by the interpreter. A single-line comment starts with #. Example: #this is my first program. Multi-line comments can be created using """ (three double quotes) at the start and end of the comment block.

Identifiers

A Python identifier is a name used to identify a variable, function, or any other user-defined item. Python is a case-sensitive programming language; thus, Amount and amount are two different identifiers in Python. The name of an identifier must follow the naming rules. The name of an identifier can be composed of letters, digits, and the underscore character. An identifier starts with a letter A to Z, a to z, or an underscore '_', followed by zero or more letters, underscores, and digits (0 to 9). Python does not allow punctuation characters such as @ and $ within identifiers. Examples: bill, amount, answer, etc. It should be noted here that, unlike many other programming languages, there is no need to declare the variable in Python. An identifier is generally associated with an expression or value. The = operator means assignment, not mathematical equality; an assignment statement copies the value on the right into the variable on the left. It should be noted that a numeric value is written without double quotes and a string value is enclosed in double quotes " ". A string contains characters that are similar to character literals: plain characters, escape sequences, and universal characters. Example: for a statement that assigns a number to revenue, revenue is an identifier holding that value; in Python, we say that revenue is assigned a value using the assignment operator (=). Similarly, the statement city = "Pune" stores the string value Pune in the city identifier.
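A short, hedged example pulling these pieces together; the specific values below are illustrative, not taken from the text:

# A single-line comment: identifiers need no prior declaration in Python
revenue = 1000          # numeric value, written without quotes
city = "Pune"           # string value, enclosed in double quotes
Amount = 5
amount = 7              # Python is case-sensitive: Amount and amount are different identifiers
print(revenue, city, Amount, amount)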