Transactions in Databases

Another important concept within a database is that of a transaction. A transaction represents a unit of work performed within a database management system (or a similar system) against a database instance, and is independent of any other transaction. Transactions in a database environment have two main purposes:

To provide a unit of work that allows recovery from failures and keeps the database consistent even in cases of system failure, when execution stops (completely or partially). This is because either all the operations within a transaction are performed or none of them are; thus, if one operation causes an error then all the changes made by the transaction so far are rolled back and none of them will have been made.

To provide isolation between programs accessing a database concurrently. This means that the work being done by one program will not interfere with another program's work.

A database transaction, by definition, must be atomic, consistent, isolated and durable:

Atomic: the transaction represents an atomic unit of work; either all the operations in the transaction are performed or none of them are performed.

Consistent: once completed, the transaction must leave the data in a consistent state, with any data constraints met (such as a row in one table must not reference a non-existent row in another table in a one-to-many relationship).

Isolated: this relates to the changes being made by concurrent transactions; these changes must be isolated from each other. That is, one transaction cannot see the changes being made by another transaction until the second transaction completes and all its changes are permanently saved to the database.

Durable: once a transaction completes, the changes it has made are permanently stored in the database (until some future transaction modifies that data).

Database practitioners often refer to these properties of database transactions using the acronym ACID (Atomic, Consistent, Isolated, Durable). Not all databases support transactions, although all commercial, production-quality databases such as Oracle, Microsoft SQL Server and MySQL do support transactions.
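The commit/rollback behaviour behind these properties can be sketched with the sqlite3 module from the Python standard library (chosen here only because it needs no separate database server; the accounts table and the amounts are hypothetical):

import sqlite3

connection = sqlite3.connect('bank.db')
cursor = connection.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance REAL)')
cursor.execute('INSERT OR IGNORE INTO accounts (id, balance) VALUES (1, 100.0), (2, 50.0)')
connection.commit()

try:
    # Both updates form a single unit of work: move 25.0 from account 1 to account 2.
    cursor.execute('UPDATE accounts SET balance = balance - 25.0 WHERE id = 1')
    cursor.execute('UPDATE accounts SET balance = balance + 25.0 WHERE id = 2')
    connection.commit()      # make both changes permanent together
except Exception:
    connection.rollback()    # undo both changes if either statement failed
finally:
    connection.close()

If the second UPDATE fails, the rollback() call ensures that the first one is also discarded, which is exactly the atomicity property described above.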
Further Reading

If you want to know more about databases and database management systems, there are a number of online resources available, including short introductions to databases and to relational databases in general. If you want to explore the subject of database design (that is, the design of the tables and the links between tables in a database), there are tutorials covering the core ideas of database design. If you wish to explore SQL further, there are also dedicated online SQL resources.
Python DB-API: Accessing a Database from Python

The standard for accessing a database in Python is the Python DB-API. This specifies a set of standard interfaces for modules that wish to allow Python to access a specific database. The standard is described in a PEP (a Python Enhancement Proposal; see dev/peps/pep-…). Almost all Python database access modules adhere to this standard. This means that if you are moving from one database to another, or attempting to port a Python program from one database to another, then the APIs you encounter should be very similar (although the SQL processed by different databases can also differ). There are modules available for most common databases such as MySQL, Oracle, Microsoft SQL Server, etc.

The DB-API

There are several key elements to the DB-API. These are:

The connect function: the connect() function is used to connect to a database and returns a Connection object.

Connection objects: within the DB-API, access to a database is achieved through Connection objects. These Connection objects provide access to Cursor objects.

Cursor objects: cursors are used to execute SQL statements on the database.

The result of an execution: these are the results that can be fetched as a sequence of sequences (such as a tuple of tuples).

The standard can thus be used to select, insert or update information in a database.
These elements are described in more detail below. The standard specifies a set of functions and objects to be used to connect to a database; these include the connect function, the Connection object and the Cursor object.

The connect Function

The connect function is defined as connect(parameters...). It is used to make the initial connection to the database and returns a Connection object. The parameters required by the connect function are database dependent.

The Connection Object

The Connection object is returned by the connect() function. The Connection object provides several methods, including:

close(): used to close the connection once you no longer need it. The connection will be unusable from this point onwards.

commit(): used to commit a pending transaction.
rollback(): used to roll back all the changes made to the database since the last transaction commit (optional, as not all databases provide transaction support).

cursor(): returns a new Cursor object to use with the connection.

The Cursor Object

The Cursor object is returned from the connection's cursor() method. A Cursor object represents a database cursor, which is used to manage the context of a fetch operation or the execution of a database command. Cursors support a variety of attributes and methods:

cursor.execute(operation, parameters): prepare and execute a database operation (such as a query statement or an update command). Parameters may be provided as a sequence or mapping and will be bound to variables in the operation; variables are specified in a database-specific notation.

cursor.rowcount: a read-only attribute providing the number of rows that the last cursor.execute() call returned (for SELECT-style statements) or affected (for UPDATE or INSERT-style statements).

cursor.description: a read-only attribute providing information on the columns present in any results returned from a SELECT operation.

cursor.close(): closes the cursor; from this point on the cursor will not be usable.

In addition, the Cursor object provides several fetch-style methods. These methods are used to return the results of a database query. The data returned is made up of a sequence of sequences (such as a tuple of tuples), where each inner sequence represents a single row returned by the SELECT statement. The fetch methods defined by the standard are:

cursor.fetchone(): fetch the next row of a query result set, returning a single sequence, or None when no more data is available.

cursor.fetchall(): fetch all (remaining) rows of a query result, returning them as a sequence of sequences.

cursor.fetchmany(size): fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a tuple of tuples). An empty sequence is returned when no more rows are available. The number of rows to fetch per call is specified by the size parameter.
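To make this pipeline concrete, here is a minimal sketch using the sqlite3 module, which also follows the DB-API (the in-memory database and the student rows are illustrative only):

import sqlite3

connection = sqlite3.connect(':memory:')    # connect() returns a Connection object
cursor = connection.cursor()                # cursor() returns a Cursor object
cursor.execute('CREATE TABLE students (id INTEGER, name TEXT)')
cursor.executemany('INSERT INTO students VALUES (?, ?)',
                   [(1, 'Phoebe'), (2, 'Gryff'), (3, 'Adam')])
connection.commit()

cursor.execute('SELECT id, name FROM students')
print(cursor.fetchone())      # a single row, e.g. (1, 'Phoebe')
print(cursor.fetchmany(2))    # the next two rows as a sequence of sequences
print(cursor.fetchall())      # any remaining rows (an empty list here)

cursor.close()
connection.close()

The same connect/cursor/execute/fetch pattern applies to any DB-API compliant module, such as pymysql used later in this book; only the connect() parameters and the parameter placeholder style differ.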
Mappings from Database Types to Python Types

The DB-API standard also specifies a set of mappings from the types used in a database to the types used in Python. For a full listing see the DB-API standard itself, but the key mappings include:

Date(year, month, day): represents a database date.

Time(hour, minute, second): represents a database time value.

Timestamp(year, month, day, hour, minute, second): holds a database timestamp value.

String: used to represent string-like database data (such as VARCHARs).

Generating Errors

The standard also specifies a set of exceptions that can be raised in different situations; these are described below. In terms of the inheritance hierarchy, the DB-API Warning and Error classes both extend the Exception class from standard Python. However, depending on the specific implementation, there may be one or more additional classes in the hierarchy between these classes.
For example, in the pymysql module there is a MySQLError class that extends Exception and is then extended by both Warning and Error. Also note that Warning and Error have no relationship with each other; this is because warnings are not considered errors and thus have separate class hierarchies. However, Error is the root class for all database error classes. A description of each warning or error class is provided below:

Warning: used to warn of issues such as data truncation during inserting, etc.

Error: the base class of all other error exceptions.

InterfaceError: raised for errors that are related to the database interface rather than the database itself.

DatabaseError: raised for errors that are related to the database.

DataError: raised for errors that are due to problems with the data, such as division by zero, numeric value out of range, etc.

OperationalError: raised for errors that are related to the database's operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, etc.

IntegrityError: raised when the relational integrity of the database is affected.

InternalError: raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc.

ProgrammingError: raised for programming errors, e.g. table not found, syntax error in the SQL statement, wrong number of parameters specified, etc.

NotSupportedError: raised when a method or database API is used which is not supported by the database, e.g. requesting rollback() on a connection that does not support transactions or has transactions turned off.

Row Descriptions

The Cursor object has an attribute description that provides a sequence of sequences; each sub-sequence provides a description of one of the attributes of the data returned by a SELECT statement. The sequence describing an attribute is made up of up to seven items; these include:

name: the name of the attribute.

type_code: indicates what Python type this attribute has been mapped to.

display_size: the size used to display the attribute.

internal_size: the size used internally to represent the value.
precision: if a real numeric value, the precision supported by the attribute.

scale: indicates the scale of the attribute.

null_ok: indicates whether null values are acceptable for this attribute.

The first two items (name and type_code) are mandatory; the other five are optional and are set to None if no meaningful values can be provided.

Transactions in PyMySQL

Transactions are managed in pymysql via the database Connection object. This object provides the following methods:

connection.commit(): causes the current transaction to commit all the changes made permanently to the database. A new transaction is then started.

connection.rollback(): causes all changes that have been made so far (but not yet committed, i.e. not permanently stored in the database) to be removed. A new transaction is then started.

The standard does not specify how a database interface should manage turning transactions on and off (not least because not all databases support transactions). However, MySQL does support transactions and can work in two modes: one supports the use of transactions as already described; the other uses an auto-commit mode. In auto-commit mode each command sent to the database (whether a SELECT statement or an INSERT/UPDATE statement) is treated as an independent transaction and any changes are automatically committed at the end of the statement. This auto-commit mode can be turned on in pymysql using:

connection.autocommit(True)   # turn on auto commit (False to turn off auto commit, which is the default)

Other associated methods include connection.get_autocommit(), which returns a boolean indicating whether auto commit is turned on or not, and connection.begin(), which explicitly begins a new transaction.
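These calls can be sketched together as follows (this assumes a running MySQL server and the uni-database and students table used in the next chapter; the host, user and password values are placeholders):

import pymysql

connection = pymysql.connect(host='localhost', user='username',
                             password='password', database='uni-database')

print(connection.get_autocommit())   # False by default: changes must be committed explicitly
connection.autocommit(True)          # each statement now commits automatically
connection.autocommit(False)         # back to explicit transaction management

connection.begin()                   # explicitly start a new transaction
cursor = connection.cursor()
cursor.execute("UPDATE students SET subject = 'Games' WHERE id = 3")
connection.rollback()                # discard the change; a new transaction is started

connection.close()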
Online Resources

See the Python Database API specification (the PEP referred to earlier in this chapter) for more information on the Python DB-API.
The PyMySQL Module

The pymysql module provides access to a MySQL database from Python. It implements the Python DB-API. This module is a pure Python database interface implementation, meaning that it is portable across different operating systems. This is notable because some database interface modules are merely wrappers around other (native) implementations that may or may not be available on different operating systems. For example, a native Linux-based database interface module may not be available for the Windows operating system. If you are never going to switch between different operating systems, then this is not a problem of course.

To use the pymysql module you will need to install it on your computer. This will involve using a tool such as Anaconda or adding it to your PyCharm project. You can also use pip to install it:

pip install pymysql

Working with the PyMySQL Module

To use the pymysql module to access a database you will need to follow these steps:

1. Import the module.
2. Make a connection to the host machine running the database and to the database you are using.
3. Obtain a cursor object from the connection object.
4. Execute some SQL using the cursor.execute() method.
5. Fetch the result(s) of the SQL using the cursor object (e.g. fetchall(), fetchmany() or fetchone()).
6. Close the database connection.

These steps are essentially boilerplate code; that is, you will use them whenever you access a database via pymysql (or indeed any DB-API compliant module). We will take each of these steps in turn.

Importing the Module

As the pymysql module is not one of the built-in modules provided by default with Python, you will need to import the module into your code, for example using:

import pymysql

Be careful with the case used here, as the module name is pymysql in all lowercase (if you try to import PyMySQL, Python will not find it!).

Connect to the Database

Each database module will have its own specifics for connecting to the database server; these usually involve specifying the machine that the database is running on (as databases can be quite resource intensive, they are often run on a separate physical computer), the user to use for the connection, any security information required such as a password, and the database instance to connect to. In most cases a database is looked after by a database management system (a DBMS) that can manage multiple database instances, and it is therefore necessary to specify which database instance you are interested in. For MySQL, the MySQL database server is a DBMS that can look after multiple database instances. The information the pymysql connect function requires when connecting to the database is:

The name of the machine hosting the MySQL database server, e.g. dbserver.mydomain.com. If you want to connect to the same machine as your Python program is running on, then you can use localhost. This is a special name reserved for the local machine and avoids you needing to worry about the name of your local computer.

The user name to use for the connection. Most databases limit access to named users. These are not necessarily users such as humans that log into a system, but rather entities that are allowed to connect to the database and perform certain operations. For example, one user may only be able to read data in the database whereas another user is allowed to insert new data into the database. These users are authenticated by requiring them to provide a password.
The password for the user.

The database instance to connect to. As mentioned above, a database management system (DBMS) can manage multiple database instances and thus it is necessary to say which database instance you are interested in.

For example:

# Open database connection
connection = pymysql.connect(host='localhost', user='username',
                             password='password', database='uni-database')

In this case the machine we are connecting to is 'localhost' (that is, the same machine as the Python program itself is running on), the user is represented by 'username' and 'password', and the database instance of interest is called 'uni-database'. This returns a Connection object as per the DB-API standard. (Note that recent versions of pymysql expect these values to be passed as keyword arguments, as shown here.)

Obtaining the Cursor Object

You can obtain the cursor object from the connection using the cursor() method:

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()

Using the Cursor Object

Once you have obtained the cursor object you can use it to execute an SQL query or a DML insert, update or delete statement. The following example uses a simple SELECT statement to select all the attributes in the students table for all rows currently stored in the students table:

# Execute an SQL query using the execute() method
cursor.execute('select * from students')

Note that this method executes the SELECT statement but does not return the set of results directly. Instead the execute method returns an integer indicating the number of rows either affected by the modification or returned as part of the query; in the case of a SELECT statement, the number returned can be used to determine which type of fetch method to use.
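Pulling these steps together, and passing query values as parameters rather than embedding them in the SQL string, gives a sketch like the following (it assumes the same uni-database; pymysql uses %s placeholders, and the try/finally block ensures the connection is always closed):

import pymysql

connection = pymysql.connect(host='localhost', user='username',
                             password='password', database='uni-database')
try:
    cursor = connection.cursor()
    # The value is supplied separately from the SQL; pymysql substitutes it for the
    # %s placeholder, which also avoids quoting problems in the SQL string.
    rows = cursor.execute('select * from students where subject = %s', ('Games',))
    print('rows returned:', rows)      # execute() returns the number of rows
    for row in cursor.fetchall():
        print(row)
    cursor.close()
finally:
    connection.close()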
Obtaining Information About the Results

The cursor object can also be used to obtain information about the results to be fetched, such as how many rows there are in the results and what the type is of each attribute in the results:

cursor.rowcount: this is a read-only property that indicates the number of rows returned for a SELECT statement, or the number of rows affected for an UPDATE or INSERT statement.

cursor.description: this is a read-only property that provides a description of each attribute in the results set. Each description provides the name of the attribute and an indication of the type (via a type_code), as well as further information on whether the value can be null or not and, for numbers, scale, precision and size information.

An example of using these two properties is given below:

print('cursor.rowcount:', cursor.rowcount)
print('cursor.description:', cursor.description)

A sample of the output generated by these lines is given below:

cursor.rowcount: 6
cursor.description: (('id', …, False), ('name', …, False), ('surname', …, False), ('subject', …, False), ('email', …, False))

Fetching Results

Now that a successful SELECT statement has been run against the database, we can fetch the results. The results are returned as a tuple of tuples. As mentioned earlier, there are several different fetch options available, including fetchone(), fetchmany(size) and fetchall(). In the following example we use the fetchall() option, as we know that there are only up to six rows that can be returned:

# Fetch all the rows and then iterate over the data
data = cursor.fetchall()
for row in data:
    print('row:', row)

In this case we loop through each tuple within the data collection and print that row out. However, we could just as easily have extracted the information in the tuple into individual elements. These elements could then be used to construct an object that could then be processed within an application, for example:
for row in data:
    id, name, surname, subject, email = row
    student = Student(id, name, surname, subject, email)
    print(student)

Close the Connection

Once you have finished with the database connection it should be closed:

# Disconnect from the server
connection.close()

A Complete PyMySQL Query Example

A complete listing illustrating connecting up to the database, running a SELECT statement and printing out the results using a Student class is given below:

import pymysql

class Student:
    def __init__(self, id, name, surname, subject, email):
        self.id = id
        self.name = name
        self.surname = surname
        self.subject = subject
        self.email = email

    def __str__(self):
        return 'Student[' + str(self.id) + '] ' + self.name + ' ' + \
               self.surname + ' ' + self.subject + ' ' + self.email

# Open database connection
connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()
# Execute an SQL query using the execute() method
cursor.execute('select * from students')

print('cursor.rowcount:', cursor.rowcount)
print('cursor.description:', cursor.description)

# Fetch all the rows and then iterate over the data
data = cursor.fetchall()
for row in data:
    student_id, name, surname, subject, email = row
    student = Student(student_id, name, surname, subject, email)
    print(student)

# Disconnect from the server
connection.close()

The output from this program, for the database created in the previous chapter, is shown here:

cursor.rowcount: 6
cursor.description: (('id', …, False), ('name', …, False), ('surname', …, False), ('subject', …, False), ('email', …, False))
Student[1] Phoebe Cooke Animation pc@my.com
Student[2] Gryff Jones Games grj@my.com
Student[3] Adam Fosh Music af@my.com
Student[4] Jasmine Smith Games js@my.com
Student[5] Tom Jones Music tj@my.com
Student[6] James Andrews Games ja@my.com

Inserting Data into the Database

As well as reading data from a database, many applications also need to add new data to the database. This is done via the DML (Data Manipulation Language) INSERT statement. The process for this is very similar to running a query against the database using a SELECT statement; that is, you need to make a connection, obtain a cursor object and execute the statement. The one difference here is that you do not need to fetch the results.
import pymysql

# Open database connection
connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()

try:
    # Execute the INSERT command
    cursor.execute("insert into students (id, name, surname, subject, email) "
                   "values (7, 'denise', 'byrne', 'history', 'db@my.com')")
    # Commit the changes to the database
    connection.commit()
except:
    # Something went wrong; roll back the changes
    connection.rollback()

# Close the database connection
connection.close()

The result of running this code is that the database is updated with a seventh row for 'denise byrne'. This can be seen in the MySQL Workbench if we look at the contents of the students table.

There are a couple of points to note about this code example. The first is that we have used double quotes around the string defining the INSERT command; this is because a double-quoted string allows us to include single quotes within that string. This is necessary as we need to quote any string values passed to the database (such as 'denise'). The second thing to note is that by default the pymysql database interface requires the programmer to decide when to commit or roll back a transaction. A transaction was introduced earlier as an atomic unit of work that must either be completed as a whole or rolled back so that no changes are made. The way in which we indicate that a transaction is completed is by calling the commit() method on the database connection.
In turn we can indicate that we want to roll back the current transaction by calling rollback(). In either case, once the method has been invoked a new transaction is started for any further database activity. In the above code we have used a try block to ensure that if everything succeeds we will commit the changes made, but if an exception is thrown (of any kind) we will roll back the transaction; this is a common pattern.

Updating Data in the Database

If we are able to insert new data into the database, we may also want to update the data in a database, for example to correct some information. This is done using the UPDATE statement, which must indicate which existing row is being updated as well as what the new data should be.

import pymysql

# Open database connection
connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()

try:
    # Execute the UPDATE command
    cursor.execute("update students set email = 'denise@my.com' where id = 7")
    # Commit the changes to the database
    connection.commit()
except:
    # Roll back the changes if an exception/error occurs
    connection.rollback()

# Close the database connection
connection.close()

In this example we are updating the student with id 7 such that their email address will be changed to 'denise@my.com'. This can be verified by examining the contents of the students table in the MySQL Workbench.
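As an aside, the quoting issue mentioned above disappears if the values are passed as parameters rather than embedded in the SQL string; the following sketch repeats the insert/update pattern in that style (the row added here, id 8, is purely hypothetical):

import pymysql

connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')
cursor = connection.cursor()
try:
    # pymysql substitutes the supplied values for the %s placeholders
    # and handles any quoting itself.
    cursor.execute('insert into students (id, name, surname, subject, email) '
                   'values (%s, %s, %s, %s, %s)',
                   (8, 'sam', 'brown', 'history', 'sb@my.com'))
    cursor.execute('update students set email = %s where id = %s',
                   ('sam.brown@my.com', 8))
    connection.commit()      # both statements succeed or neither is kept
except:
    connection.rollback()
finally:
    connection.close()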
Deleting Data in the Database

Finally, it is also possible to delete data from a database, for example if a student leaves their course. This follows the same format as the previous two examples, with the difference that the DELETE statement is used instead:

import pymysql

# Open database connection
connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()

try:
    # Execute the DELETE command
    cursor.execute("delete from students where id = 7")
    # Commit the changes to the database
    connection.commit()
except:
    # Roll back the changes if an exception/error occurs
    connection.rollback()

# Close the database connection
connection.close()

In this case we have deleted the student with id 7. We can see this again in the MySQL Workbench by examining the contents of the students table after this code has run.
Creating Tables

It is not just data that you can add to a database; if you wish, you can programmatically create new tables to be used with an application. This process follows exactly the same pattern as those used for INSERT, UPDATE and DELETE. The only difference is that the command sent to the database contains a CREATE statement with a description of the table to be created. This is illustrated below:

import pymysql

# Open database connection
connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')

# Prepare a cursor object using the cursor() method
cursor = connection.cursor()

try:
    # Execute the CREATE command
    cursor.execute("create table log (message varchar(255) not null)")
    # Commit the changes to the database
    connection.commit()
except:
    # Roll back the changes if an exception/error occurs
    connection.rollback()

# Close the database connection
connection.close()

This creates a new table, log, within the uni-database. This can be seen by looking at the tables listed for the uni-database within the MySQL Workbench.
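Instead of switching to the MySQL Workbench, the new table can also be checked (and used) programmatically; a minimal sketch, using the MySQL-specific SHOW TABLES statement:

import pymysql

connection = pymysql.connect(host='localhost', user='user',
                             password='password', database='uni-database')
cursor = connection.cursor()

cursor.execute('show tables')      # MySQL-specific: lists the tables in this database
print(cursor.fetchall())           # should now include the new 'log' table

cursor.execute("insert into log (message) values ('log table created')")
connection.commit()
connection.close()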
Online Resources

See the Python Database API specification and the pymysql library documentation for more information.

Exercises

In this exercise you will create a database and tables based on a set of transactions stored in a current account. You can use the Account class you created in the CSV and Excel chapter. For this you will need two tables: one for the account information and one for the transaction history. The primary key of the account information table can be used as the foreign key for the transaction history table. Then write a function that takes an account object and populates the tables with the appropriate data.

To create the account information table you might use the following DDL:

create table acc_info (
    idacc_info int not null,
    name varchar(64) not null,
    primary key (idacc_info)
)

While for the transactions table you might use:

create table transactions (
    idtransactions int not null,
    type varchar(64) not null,
    amount varchar(64) not null,
    account int not null,
    primary key (idtransactions)
)

Remember to be careful with integers and decimals if you are creating an SQL string such as:

statement = "insert into transactions (idtransactions, type, amount, account) " \
            "values (" + str(id) + ", '" + action + "', " + str(amount) + \
            ", " + str(account_number) + ")"
Introduction to Logging

Introduction

Many programming languages have common logging libraries, including Java and C#, and of course Python also has a logging module. Indeed, the Python logging module has been part of the built-in modules since Python 2.3. This chapter discusses why you should add logging to your programs, what you should (and should not) log, and why just using the print() function is not sufficient.

Why Log?

Logging is typically a key aspect of any production application; this is because it is important to provide appropriate information to allow future investigation following some event or issue. Such investigations include:

Diagnosing failures; that is, why did an application fail/crash?

Identifying unusual or unexpected behaviour, which might not cause the application to fail but may leave it in an unexpected state, or where data may be corrupted, etc.

Identifying performance or capacity issues; in such situations the application is performing as expected but it is not meeting some non-functional requirements associated with the speed at which it is operating or its ability to scale as the amount of data or the number of users grows.

Dealing with attempted malicious behaviour, in which some outside agent is attempting to affect the behaviour of the system or to acquire information which they should not have access to. This could happen, for example, if you are creating a Python web application and a user tries to hack into your web server.
Regulatory or legal compliance; in some cases records of program execution may be required for regulatory or legal reasons. This is particularly true of the financial sector, where records must be kept for many years in case there is a need to investigate an organisation's, or individual's, behaviour.

What Is the Purpose of Logging?

In general there are therefore two reasons to log what an application is doing during its operation:

For diagnostic purposes, so that recorded events/steps can be used to analyse the behaviour of the system when something goes wrong.

For auditing purposes, to allow later analysis of the behaviour of the system for business, legal or regulatory reasons, for example to determine who did what, with what, and when.

Without such logged information it is impossible after the event to know what happened. For example, if all you know is that an application crashed (unexpectedly stopped executing), how can you determine what state the application was in, what functions, methods etc. were being executed and which statements run? Remember that although a developer may have been using an IDE to run their applications during development, and may possibly have been using the debugging facilities available that allow you to see which functions or methods, statements and even variable values are in play, this is not how most production systems are run. In general a production Python system will be run either from the command line or possibly through a shortcut (on a Windows box) to simplify running the program. All the user will know is that something failed or that the behaviour they expected didn't occur (if in fact they are aware of any issue at all). Logs are therefore key to after-the-event analysis of failures and unexpected behaviour, or for analysis of the operation of the system for business reasons.

What Should You Log?

One question that you might be considering at this point is 'what information should I log?' An application should log enough information so that post-event investigators can understand what was happening, when and where. In general this means that you will want to log the time of the log message, the module/filename, the function name or method name executing, potentially the log level being used (see later) and, in some cases, the parameter values/state of the environment, program or class involved.
In many cases developers log the entry to (and to a lesser extent the exit from) a function or method. However, it may also be useful to log what happens at branch points within a function or method, so that the logic of the application can be followed.

All applications should log all errors/exceptions, although care is needed to ensure that this is done appropriately. For example, if an exception is caught and then re-thrown several times it is not necessary to log it every time it is caught; indeed doing this can make the log files much larger, cause confusion when the problem is being investigated and result in unnecessary overheads. One common approach is to log an exception where it is first raised and caught, and not to log it after that.

What Not to Log

The follow-on question to consider is 'what information should I not log?' One general area not to log is any personal or sensitive information, including any information that can be used to identify an individual. This sort of information is known as PII, or Personally Identifiable Information. Such information includes:

user ids and passwords,

email addresses,

date of birth, birth place,

personally identifiable financial information such as bank account details, credit card details, etc.,

biometric information,

medical/health information,

government-issued personal information such as passport details, driver's licence number, social security number, national insurance number, etc.,

official organisational information such as professional registrations and membership numbers,

physical addresses, phone (land-line) numbers, mobile phone numbers,

verification-related information such as mother's maiden name, pets' names, high school, first school, favourite film, etc.

It also increasingly includes online information relating to social media such as Facebook or LinkedIn accounts. All of the above is sensitive information and much of it can be used to identify an individual; none of this information should be logged directly. That does not mean that you cannot and shouldn't log that a user logged in; you may well need to do that. However, the information should at least be obfuscated and should not include any information not required. For example, you may record that a user represented by some id attempted to log in at a specific time and whether they were successful or not. However, you should not log their password, and may not even log the actual user id.
Instead you may log an id that can be used to map to their actual user id.

You should also be careful about logging data input directly into an application straight into a log file. One way in which a malicious agent can attack an application (particularly a web application) is by attempting to send very large amounts of data to it (as part of a field or as a parameter to an operation). If the application blindly logs all data submitted to it, then the log files can fill up very quickly. This can result in the file store being used by the application filling up and causing potential problems for all software using the same file store. This form of attack is known as a log (or log file) injection attack and is well documented (see www.owasp.org/index.php/log_injection, which is part of the well-respected Open Web Application Security Project).

Another point to note is that it is not merely enough to log an error; this is not error handling. Logging an error does not mean you have handled it, only that you have noted it. An application should still decide how it should manage the error or exception.

In general you should also aim for empty logs in a production system; that is, only information that needs to be logged in a production system should be logged (often information about errors, exceptions or other unexpected behaviour). However, during testing much more detail is required so that the execution of the system can be followed. It should therefore be possible to select how much information is logged depending on the environment the code is running in (that is, within a test environment or within a production environment).

A final point to note is that it is important to log information to the correct place. Many applications (and organisations) log general information to one log file, errors and exceptions to another, and security information to a third. It is therefore important to know where your log information is being sent and not to send information to the wrong log.

Why Not Just Use print()?

Assuming that you want to log information in your application, the next question is how should you do this? Throughout this book we have been using the Python print() function to print out information that indicates results generated by our code, but also at times what is happening within a function or method. Thus we need to consider whether using the print() function is the best way to log information. In actual fact, using print() to log information in a production system is almost never the right answer, for several reasons.

The print() function by default writes strings out to the standard output (stdout) or standard error output (stderr), which by default direct output to the console/terminal. For example, when you run an application within an IDE, the output is displayed in the console window.
If you run an application from the command line, then the output is directed back to that command/terminal window. Both of these are fine during development, but what if the program is not run from a command window? Perhaps instead it is started up by the operating system automatically (as is typical of numerous services such as a print service or a web server). In this case there is no terminal/console window to send the data to; instead the data is just lost. As it happens, the stdout and stderr output streams can be redirected to a file (or files); however, this is typically done when the program is launched and may be easily omitted. In addition, there is only the option of sending all stdout to a specific file or all error output to the stderr file.

Another issue with using the print() function is that all calls to print will be output. When using most loggers it is possible to specify the log level required. These different log levels allow different amounts of information to be generated depending upon the scenario. For example, in a well-tested, reliable production system we may only want error-related or critical information to be logged. This will reduce the amount of information we are collecting and reduce any performance impact introduced by logging into the application. However, during testing phases we may want a far more detailed level of logging.

In other situations we may wish to change the log level being used for a running production system without needing to modify the actual code (as this has the potential to introduce errors into the code). Instead we would like to have the facility to externally change the way in which the logging system behaves, for example through a configuration file. This allows system administrators to modify the amount and the detail of the information being logged. It typically also allows the destination of the log information to be changed.

Finally, when using the print() function a developer can use whatever format they like: they can include a timestamp on the message or not, they can include the module or function/method name or not, they can include parameters or not. Using a logging system usually standardises the information generated along with the log message. Thus all log messages will have (or not have) a timestamp, or all messages will include (or not include) information on the function or method in which they were generated, etc.

Online Resources

For further information on logging see the following: www.owasp.org/index.php, the Open Web Application Security Project (OWASP).
Logging in Python

The Logging Module

Python has included a built-in logging module since Python 2.3. This module, the logging module, defines functions and classes which implement a flexible logging framework that can be used in any Python application/script or in Python libraries/modules. Although different logging frameworks differ in the specific details of what they offer, almost all offer the same core elements (although different names are sometimes used). The Python logging module is no different, and the core elements that make up the logging framework and its processing pipeline are described below (a very similar set of elements exists in logging frameworks for Java, Scala, C++, etc.). A typical Python program uses the built-in logging framework to send log messages through this pipeline, for example to a file. The core elements of the logging framework (some of which are optional) are described below.
Log message: this is the message to be logged from the application.

Logger: provides the programmer's entry point/interface to the logging system. The Logger class provides a variety of methods that can be used to log messages at different levels.

Handler: handlers determine where to send a log message. Default handlers include file handlers that send messages to a file and HTTP handlers that send messages to a web server.

Filter: this is an optional element in the logging pipeline. Filters can be used to further filter the information to be logged, providing fine-grained control of which log messages are actually output (for example, to a log file).

Formatter: these are used to format the log message as required. This may involve adding timestamps, module and function/method information, etc. to the original log message.

Configuration information: the logger (and associated handlers, filters and formatters) can be configured either programmatically in Python or through configuration files. These configuration files can be written using key-value pairs or in a YAML file (which is a simple markup language; YAML stands for Yet Another Markup Language). A small configuration sketch is given at the end of this section.

It is worth noting that much of the logging framework is hidden from the developer, who really only sees the logger; the remainder of the logging pipeline is either configured by default or via logging configuration information, typically in the form of a log configuration file.

The Logger

The logger provides the programmer's interface to the logging pipeline. A logger object is obtained from the getLogger() function defined in the logging module. The following code snippet illustrates acquiring the default logger and using it to log an error message. Note that the logging module must be imported:

import logging

logger = logging.getLogger()
logger.error('This should be used with something unexpected')

The output from this short application is logged to the console, as this is the default configuration:

This should be used with something unexpected
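As a taste of the configuration-based approach mentioned above, the logging.config module can load such configuration information from a dictionary (which could itself have been read from a YAML or JSON file). The following is a minimal sketch; the formatter, handler and level choices are purely illustrative:

import logging
import logging.config

# One console handler with a formatter, applied to the root logger.
LOGGING_CONFIG = {
    'version': 1,
    'formatters': {
        'standard': {'format': '%(asctime)s %(levelname)s %(name)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        },
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger(__name__).info('logging configured via dictConfig')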
Controlling the Amount of Information Logged

Log messages are associated with a log level. These log levels are intended to indicate the severity of the message being logged. There are six different log levels associated with the Python logging framework. These are:

NOTSET: at this level no logging takes place and logging is effectively turned off.

DEBUG: this level is intended to provide detailed information, typically of interest when a developer is diagnosing a bug or issue within an application.

INFO: this level is expected to provide less detail than the DEBUG log level, as it is expected to provide information that can be used to confirm that the application is working as expected.

WARNING: this is used to provide information on an unexpected event, or an indication of some likely problem, that a developer or system administrator might wish to investigate further.

ERROR: this is used to provide information on some serious issue or problem that the application has not been able to deal with and that is likely to mean that the application cannot function correctly.

CRITICAL: this is the highest level of issue and is reserved for critical situations, such as ones in which the program can no longer continue executing.

The log levels are relative to one another and defined in a hierarchy. Each log level has a numeric value associated with it (NOTSET is 0, DEBUG 10, INFO 20, WARNING 30, ERROR 40 and CRITICAL 50), although you should never need to use the numbers. Thus INFO is a higher log level than DEBUG; in turn ERROR is a higher log level than WARNING, INFO, DEBUG, etc.

As well as the log level that a message is logged with, a logger also has a log level associated with it. The logger will process all messages that are at the logger's log level or above that level. Thus if a logger has a log level of WARNING, then it will log all messages logged using the WARNING, ERROR and CRITICAL log levels.

Generally speaking, an application will not use the DEBUG level in a production system; this is usually considered inappropriate as it is only intended for debug scenarios. The INFO level may be considered appropriate for a production system, although it is likely to produce large amounts of information as it typically traces the execution of functions and methods. If an application has been well tested and verified, then it is only really warnings and errors which should occur/be of concern. It is therefore not uncommon to default to the WARNING level for production systems; indeed, this is why the default log level is set to WARNING within the Python logging system.
If we now look at the following code, which obtains the default logger object and then uses several different logger methods, we can see the effect of the log levels on the output:

import logging

logger = logging.getLogger()
logger.debug('This is to help with debugging')
logger.info('This is just for information')
logger.warning('This is a warning!')
logger.error('This should be used with something unexpected')
logger.critical('Something serious')

The default log level is set to WARNING, and thus only messages logged at the WARNING level or above will be printed out:

This is a warning!
This should be used with something unexpected
Something serious

As can be seen from this, the messages logged at the DEBUG and INFO levels have been ignored. However, the logger object allows us to change the log level programmatically using the setLevel() method, for example:

logger.setLevel(logging.DEBUG)

or via the logging.basicConfig(level=logging.DEBUG) function. Both of these will set the logging level to DEBUG. Note that the log level must be set before the logger is obtained. If we add one of the above approaches to the previous program we will change the amount of log information generated:

import logging

logging.basicConfig(level=logging.DEBUG)

logger = logging.getLogger()
logger.warning('This is a warning!')
logger.info('This is just for information')
logger.debug('This is to help with debugging')
logger.error('This should be used with something unexpected')
logger.critical('Something serious')
This will now output all the log messages, as DEBUG is the lowest logging level. We can of course turn off logging by setting the log level to NOTSET:

logger.setLevel(logging.NOTSET)

Alternatively, you can set the logger's disabled attribute to True:

logger.disabled = True

Logger Methods

The Logger class provides a number of methods that can be used to control what is logged, including:

setLevel(level): sets this logger's log level.

getEffectiveLevel(): returns this logger's log level.

isEnabledFor(level): checks to see if this logger is enabled for the log level specified.

debug(message): logs messages at the DEBUG level.

info(message): logs messages at the INFO level.

warning(message): logs messages at the WARNING level.

error(message): logs messages at the ERROR level.

critical(message): logs messages at the CRITICAL level.

exception(message): this method logs a message at the ERROR level; however, it can only be used within an exception handler and includes a stack trace of any associated exception, for example:

import logging

logger = logging.getLogger()

try:
    print('starting')
    print(x)        # x is not defined, so this raises an exception
except:
    logger.exception('an exception message')
print('done')

log(level, message): logs messages at the log level specified as the first parameter.
In addition, there are several methods that are used to manage handlers and filters:

addFilter(filter): adds the specified filter to this logger.

removeFilter(filter): the specified filter is removed from this logger object.

addHandler(handler): the specified handler is added to this logger.

removeHandler(handler): removes the specified handler from this logger.

The Default Logger

A default (or root) logger is always available from the logging framework. This logger can be accessed via the functions defined in the logging module. These functions allow messages to be logged at different levels, using functions such as info(), error() and warning(), but without the need to obtain a reference to a logger object first. For example:

import logging

# Set the root logger level
logging.basicConfig(level=logging.DEBUG)

# Use the root (default) logger
logging.debug('This is to help with debugging')
logging.info('This is just for information')
logging.warning('This is a warning!')
logging.error('This should be used with something unexpected')
logging.critical('Something serious')

This example sets the logging level for the root or default logger to DEBUG (the default is WARNING). It then uses the default logger to generate a range of log messages at different levels (from DEBUG up to CRITICAL). The output from this program is given below:

DEBUG:root:This is to help with debugging
INFO:root:This is just for information
WARNING:root:This is a warning!
ERROR:root:This should be used with something unexpected
CRITICAL:root:Something serious

Note that the format used by default with the root logger prints the log level, the name of the logger generating the output and the message. From this you can see that it is the root logger that is generating the output.
Module Level Loggers

Most modules will not use the root logger to log information; instead they will use a named or module level logger. Such a logger can be configured independently of the root logger. This allows developers to turn on logging just for a module rather than for a whole application, which can be useful if a developer wishes to investigate an issue that is located within a single module.

Previous code examples in this chapter have used the getLogger() function with no parameters to obtain a logger object, for example:

logger = logging.getLogger()

This is really just another way of obtaining a reference to the root logger, which is used by the stand-alone logging functions such as the logging.info() and logging.debug() functions. Thus:

logging.warning('my warning')

and

logger.warning('my warning')

have exactly the same effect; the only difference is that the first version involves less code.

However, it is also possible to create a named logger. This is a separate logger object that has its own name and can potentially have its own log level, handlers, formatters, etc. To obtain a named logger, pass a name string into the getLogger() function:

logger = logging.getLogger('my logger')

This returns a logger object with the name 'my logger'. Note that this may be a brand new logger object; however, if any other code within the current system has previously requested a logger called 'my logger', then that logger object will be returned to the current code. Thus multiple calls to getLogger() with the same name will always return a reference to the same logger object.

It is common practice to use the name of the module as the name of the logger, as only one module with a specific name should exist within any specific system. The name of the module does not need to be hard coded, as it can be obtained using the __name__ module attribute; it is thus common to see:

logger = logging.getLogger(__name__)
We can see the effect of each of these statements by printing out each logger:

logger = logging.getLogger()
print('Root logger:', logger)

logger = logging.getLogger('my logger')
print('Named logger:', logger)

logger = logging.getLogger(__name__)
print('Module logger:', logger)

When the above code is run the output is:

Root logger: <RootLogger root (WARNING)>
Named logger: <Logger my logger (WARNING)>
Module logger: <Logger __main__ (WARNING)>

This shows that each logger has its own name (the code was run in the main module and thus the module name was __main__) and all three loggers have an effective log level of WARNING (which is the default).
Logger Hierarchy

There is in fact a hierarchy of loggers, with the root logger at the top of this hierarchy. All named loggers are below the root logger. The name of a logger can actually be a period-separated hierarchical value such as util, util.lib and util.lib.printer. Loggers that are further down the hierarchy are children of loggers further up the logger hierarchy. For example, given a logger called util, it will be below the root logger but above the logger with the name util.lib. That logger will in turn be above the logger called util.lib.printer.

The logger name hierarchy is analogous to the Python package hierarchy, and identical to it if you organise your loggers on a per-module basis using the recommended construction logging.getLogger(__name__).

This hierarchy is important when considering the log level. If a log level has not been set for the current logger, then it will look to its parent to see if that logger has a log level set; if it does, that will be the log level used. This search back up the logger hierarchy will continue until either an explicit log level is found or the root logger is encountered, which has a default log level of WARNING. This is useful as it is not necessary to explicitly set the log level for every logger object used in an application; instead it is only necessary to set the root log level (or, for a module hierarchy, an appropriate point in the module hierarchy). This can then be overridden where specifically required.

Formatters

There are two levels at which you can format the messages logged: within the log message passed to a logging method (such as info() or warning()), and via the top-level configuration that indicates what additional information may be added to the individual log message.

Formatting Log Messages

The log message can have control characters that indicate what values should be placed within the message, for example:

logger.warning('%s is set to %d', 'count', 42)

This indicates that the format string expects to be given a string and a number. The parameters to be substituted into the format string follow the format string as a comma-separated list of values.

Formatting Log Output

The logging pipeline can be configured to incorporate standard information with each log message. This can be done globally for all handlers. It is also possible to programmatically set a specific formatter on an individual handler; this is discussed in the next section. To globally set the output format for log messages, use the logging.basicConfig() function with the named parameter format.
The format parameter takes a string that can contain LogRecord attributes organised as you see fit. There is a comprehensive list of LogRecord attributes in the Python logging documentation; the key ones are:

args: a tuple listing the arguments used to call the associated function or method.

asctime: indicates the time that the log message was created.

filename: the name of the file containing the log statement.

module: the module name (the name portion of the filename).

funcName: the name of the function or method containing the log statement.

levelname: the log level of the log statement.

message: the log message itself, as provided to the log method.

The effect of some of these is illustrated below:

import logging

logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)

logger = logging.getLogger(__name__)

def do_something():
    logger.debug('This is to help with debugging')
    logger.info('This is just for information')
    logger.warning('This is a warning!')
    logger.error('This should be used with something unexpected')
    logger.critical('Something serious')

do_something()

The above program generates log statements of the following form, each prefixed with the timestamp at which it was logged:

yyyy-mm-dd hh:mm:ss,mmm This is to help with debugging
yyyy-mm-dd hh:mm:ss,mmm This is just for information
yyyy-mm-dd hh:mm:ss,mmm This is a warning!
yyyy-mm-dd hh:mm:ss,mmm This should be used with something unexpected
yyyy-mm-dd hh:mm:ss,mmm Something serious

However, it might be useful to know the log level associated with the log statements, as well as the function that the log statements were called from. It is possible to obtain this information by changing the format string passed to the logging.basicConfig() function:
logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(funcName)s %(message)s',
    level=logging.DEBUG)

which will now generate output with the log level information and the function involved:

yyyy-mm-dd hh:mm:ss,mmm [DEBUG] do_something This is to help with debugging
yyyy-mm-dd hh:mm:ss,mmm [INFO] do_something This is just for information
yyyy-mm-dd hh:mm:ss,mmm [WARNING] do_something This is a warning!
yyyy-mm-dd hh:mm:ss,mmm [ERROR] do_something This should be used with something unexpected
yyyy-mm-dd hh:mm:ss,mmm [CRITICAL] do_something Something serious

We can even control the format of the date/time information associated with the log statement using the datefmt parameter of the logging.basicConfig() function:

logging.basicConfig(
    format='%(asctime)s %(message)s',
    datefmt='%m/%d/%Y %I:%M:%S %p',
    level=logging.DEBUG)

This format string uses the formatting options of the datetime.strptime() function:

%m: month as a zero-padded decimal number
%d: day of the month as a zero-padded decimal number
%Y: year with century as a decimal number
%I: hour (12-hour clock) as a zero-padded decimal number
%M: minute as a zero-padded decimal number
%S: second as a zero-padded decimal number
%p: either AM or PM

Thus the output generated using the above datefmt string is of the form:

mm/dd/yyyy hh:mm:ss PM This is to help with debugging
mm/dd/yyyy hh:mm:ss PM This is just for information
mm/dd/yyyy hh:mm:ss PM This is a warning!
mm/dd/yyyy hh:mm:ss PM This should be used with something unexpected
mm/dd/yyyy hh:mm:ss PM Something serious

To set a formatter on an individual handler, see the next section.
Online Resources

For further information on the Python logging framework, see the Python Standard Library documentation page for the logging module.

Exercises

This exercise involves adding logging to the Account class you have been working on in this book. You should add log messages to each of the methods in the class, using either the debug or info methods. You should also obtain a module logger for the Account classes.
19,337 | advanced logging introduction in this chapter we go further into the configuration and modification of the python logging module. in particular we will look at handlers (used to determine the destination of log messages), filters (which can be used by handlers to provide finer grained control of log output) and logger configuration files. we conclude the chapter by considering performance issues associated with logging. handlers within the logging pipeline it is the handlers that send the log message to their final destination. by default the handler is set up to direct output to the console/terminal associated with the running program; however, this can be changed to send the log messages to a file, to an email service, to a web server etc., or indeed to any combination of these, as there can be multiple handlers configured for a logger. this is shown in the diagram below.
19,338 | advanced logging in the above diagram the logger has been configured to send all log messages to four different handlers which allow log message to be written to the consoleto web server to file and to an email service such behaviour may be required becausethe web server will allow developers access to web interface that allows them to see the log files even if they do not have permission to access production server the log file ensures that all the log data is permanently stored in file within the file store an email message may be sent to notification system so that someone will be notified that there is an issue to be investigated the console may still be available to the system administrators who may wish to look at the log messages generated the python logging framework comes with several different handlers as suggested above and listed belowlogging stream handler sends messages to outputs such as stdoutstderr etc logging filehandler sends log messages to files there are several varieties of file handler in addition to the basic filehandlerthese include the logging handlers rotatingfilehandler (which will rotate log files based on maximum file sizeand logging handlers timerotatingfilehandler (which rotates the log file at specified time intervals dailylogging handlers sockethandler which sends messages to tcp/ip socket where it can be received by tcp server logging handlers smtphandler that sends messages by the smtp (simple mail transfer protocolto email server logging handlers sysloghandler that sends log messages to unix syslog program |
19,339 | handlers logging handlers nteventloghandler that sends message to windows event log logging handlers httphandler which sends messages to http server logging nullhandler that does nothing with error messages this is often used by library developers who want to include logging in their applications but expect developers to set up an appropriate handler when they use the library all of these handlers can be configured programmatically or via configuration file setting the root output handler the following exampleuses the logging basicconfig(function to set up the root logger to use filehandler that will write the log messages to file called 'example log'import logging sets file handler on the root logger to save log messages to the example log file logging basicconfig(filename='example log,level=logging debugif no handler is explicitly set on the name logger it will delegate the messages to the parent logger to handle logger logging getlogger(__name__logger debug('this is to help with debugginglogger info('this is just for informationlogger warning('this is warning!logger error('this should be used with something unexpectedlogger critical('something seriousnote that if no handler is specified for named logger then it delegates output to the parent (in this case the rootlogger the file generated for the above program is shown below |
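the generated file is not shown here; with the default basicConfig() format (level name, logger name and message separated by colons, and with the level set to debug) it would contain entries along these lines:

DEBUG:__main__:this is to help with debugging
INFO:__main__:this is just for information
WARNING:__main__:this is warning!
ERROR:__main__:this should be used with something unexpected
CRITICAL:__main__:something serious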
19,340 | advanced logging as can be seen from this the default formatter is now configured for filehandler this filehandler adds the log message level before the log message itself programmatically setting the handler it is also possible to programmatically create handler and set it for the logger this is done by instantiating one of the existing handler classes (or by subclassing an existing handler such as the root handler class or the filehander etc the instantiated handler can then be added as handler to the logger (remember the logger can have multiple handlers this is why the method is called addhandler (rather than something such as sethandleran example of explicitly setting the filehandler for logger is given belowimport logging empty basic config turns off default console handler logging basicconfig(logger logging getlogger(__name__logger setlevel(logging debugcreate file handler which logs to the specified file file_handler logging filehandler('detailed log'add the handler to the logger logger addhandler(file_handler'applicationcode def do_something()logger debug('debug message'logger info('info message'logger warning('warn message'logger error('error message'logger critical('critical message'logger info('starting'do_something(logger info('done'the result of running this code is that log file is created with the logged messages |
19,341 | handlers given that this is lot more code than using the basicconfig(functionthe question here might be 'why bother?the answer is two foldyou can have different handlers for different loggers rather than setting the handler to be used centrally each handler can have its own format set so that logging to file has different format to logging to the console we can set the format for the handler by instantiating the logging formatter class with an appropriate format string the formatter object can then be applied to handler using the setformatter(method on the handler object for examplewe can modify the above code to include formatter that is then set on the file handler as shown below create file handler which logs to the specified file file_handler logging filehandler('detailed logcreate formatter for the file_handler formatter logging formatter('%(asctime) %(funcname) %(message)sfile_handler setformatter(formatterlogger addhandler(file_handlerthe log file now generated is modified such that each message includes time stampthe function name (or module if at the module levelas well as the log message itself |
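as a further illustration of combining a handler class from the earlier list with its own formatter, a rotating file handler might be set up as follows; the file name and size limits used here are arbitrary:

import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# rotate the log file once it reaches roughly 10kb, keeping up to 3 old copies
rotating_handler = RotatingFileHandler('rotating.log', maxBytes=10240, backupCount=3)
rotating_handler.setFormatter(logging.Formatter('%(asctime)s %(funcName)s %(message)s'))
logger.addHandler(rotating_handler)

logger.debug('this message is written to the rotating log file')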
19,342 | multiple handlers as suggested in the previous section, we can create multiple handlers to send log messages to different locations, for example to the console, to files and even to email servers. the following program illustrates setting up both a file handler and a console handler for a module level logger. to do this we create two handlers, the file_handler and the console_handler. as a side effect we can also give them different log levels and different formatters. in this case the file_handler inherits the log level of the logger itself (which is debug) while the console_handler has its log level set explicitly at warning. this means different amounts of information will be logged to the log file than to the console output. we have also set different formatters on each handler; in this case the log file handler's formatter provides more information than the console handler's formatter. both handlers are then added to the logger before it is used.

multiple handlers and formatters
import logging
# set up the default root logger to do nothing
logging.basicConfig(handlers=[logging.NullHandler()])
# obtain the module level logger and set level to debug
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler
file_handler = logging.FileHandler('detailed.log')
# create console handler with a higher log level
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
# create formatter for the file_handler
fh_formatter = logging.Formatter('%(asctime)s %(name)s %(funcName)s %(message)s',
                                 datefmt='%d-%m-%Y %H:%M:%S')
file_handler.setFormatter(fh_formatter)
19,343 | handlers create formatter for the console handler console_formatter logging formatter('%(asctime) %(funcname) %(message) 'console_handler setformatter(console_formatteradd the handlers to logger logger addhandler(console_handlerlogger addhandler(file_handler'applicationcode def do_something()logger debug('debug message'logger info('info message'logger warning('warn message'logger error('error message'logger critical('critical message'logger info('starting'do_something(logger info('done'the output from this program is now split between the log file and the console outas shown below filters filters can be used by handlers to provide finer grained control of the log output filter can be added to logger using the logger addfilter(method filter can be created by extending the logging filter class and |
19,344 | advanced logging implementing the filter(method this method takes log record this log record can be validated to determine if the record should be output or not if it should be output then true is returnedif the record should be ignored false should be returned in the following examplea filter called myfilter is defined that will filter out all log messages containing the string 'johnit is added as filter to the logger and then two log messages are generated import logging class myfilter(logging filter)def filter(selfrecord)if 'johnin record msgreturn false elsereturn true logging basicconfig(format='%(asctime) %(message) 'level=logging debuglogger logging getlogger(logger addfilter(myfilter()logger debug('this is to help with debugging'logger info('this is information on john'the output shows that only the log message that does not contain the string 'johnis output : : , this is to help with debugging logger configuration all the examples so far in this have used programmatic configuration of the logging framework this is certainly feasible as the examples showbut it does require code change if you wish to alter the logging level for any particular loggeror to change where particular handler is routing the log messages for most production systems better solution is to use an external configuration file which is loaded when the application is run and is used to dynamically configure the logging framework this allows system administrators and others to change the log levelthe log destinationthe log format etc without needing to change the code |
19,345 | the logging configuration file can be written using several standard formats, from json (the javascript object notation) to the yaml (yet another markup language) format, or as a set of key-value pairs in a .conf file. for further information on the different options available see the python logging module documentation. in this book we will briefly explore the yaml file format used to configure loggers:

version: 1
formatters:
  myformatter:
    format: '%(asctime)s [%(levelname)s] %(name)s %(funcName)s %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: myformatter
    stream: ext://sys.stdout
loggers:
  mylogger:
    level: DEBUG
    handlers: [console]
    propagate: no
root:
  level: ERROR
  handlers: [console]

the above yaml code is stored in a file called logging.conf.yaml; however you can call this file anything that is meaningful. the yaml file always starts with a version number. this is an integer value representing the yaml schema version (currently this can only be the value 1). all other keys in the file are optional; they include:

formatters -- this lists one or more formatters; each formatter has a name which acts as a key and then a format value which is a string defining the format of a log message
filters -- this is a list of filter names and a set of filter definitions
handlers -- this is a list of named handlers. each handler definition is made up of a set of key value pairs where the keys define the class used for the handler (mandatory), the log level of the handler (optional), the formatter to use with the handler (optional) and a list of filters to apply (optional)
loggers -- provides one or more named loggers. each logger can indicate the log level (optional) and a list of handlers (optional). the propagate option can be used to stop messages propagating to a parent logger (by setting it to false)
root -- this is the configuration for the root logger
19,346 | advanced logging this file can be loaded into python application using the pyyaml module this provides yaml parser that can load yaml file as dictionary structure that can be passed to the logging config dictconfig(function as this is file it must be opened and closed to ensure that the resource is handled appropriatelyit is therefore best managed using the with-as statement as shown belowwith open('logging config yaml' 'as fconfig yaml safe_load( read()logging config dictconfig(configthis will open the yaml file in read-only mode and close it when the two statements have been executed this snippet is used in the following application that loads the logger configuration from the yaml fileimport logging import logging config import yaml with open('logging config yaml'' 'as fconfig yaml safe_load( read()logging config dictconfig(configlogger logging getlogger('mylogger''applicationcode def do_something()logger debug('debug message'logger info('info message'logger warning('warn message'logger error('error message'logger critical('critical message'logger info('starting'do_something(logger info('done' |
19,347 | the output from this using the earlier yaml file is:

 : : , [INFO] mylogger starting
 : : , [DEBUG] mylogger do_something debug message
 : : , [INFO] mylogger do_something info message
 : : , [WARNING] mylogger do_something warn message
 : : , [ERROR] mylogger do_something error message
 : : , [CRITICAL] mylogger do_something critical message
 : : , [INFO] mylogger done

performance considerations performance when logging should always be a consideration. in general you should aim to avoid performing any unnecessary work when logging is disabled (or disabled for the level being used). this may seem obvious but it can occur in several unexpected ways. one example is string concatenation. if a message to be logged involves string concatenation, then that string concatenation will always be performed when a log method is invoked, for example:

logger.debug('count ' + str(count) + ', total ' + str(total))

this will always result in the string being generated for count and total before the call is made to the debug function, even if the debug level is not turned on. however, using a format string will avoid this; the formatting involved will only be performed if the string is to be used in a log message. you should therefore always use string formatting to populate log messages, for example:

logger.debug('count %d, total %d', count, total)

another potential optimisation is to use the logger.isEnabledFor(level) method as a guard against running the log statement. this can be useful in situations where an associated operation must be performed to support the logging operation and this operation is expensive, for example:

if logger.isEnabledFor(logging.DEBUG):
    logger.debug('message with %s, %s', expensive_func1(), expensive_func2())
19,348 | now the two expensive functions will only be executed if the debug log level is set. exercises using the logging you added to the account class in the last chapter, you should load the log configuration information from a yaml file similar to that used in this chapter. this should be loaded into the application program used to drive the account classes.
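to make the performance point from the section above concrete, the following small sketch shows the isEnabledFor() guard preventing an unnecessary, costly call when debug logging is disabled; the expensive_call() function is invented purely for illustration:

import logging
import time

# note that debug logging is NOT enabled here
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def expensive_call():
    # stands in for any costly operation whose result is only needed for logging
    time.sleep(1)
    return 'expensive result'

start = time.time()
if logger.isEnabledFor(logging.DEBUG):
    # the expensive call is only made when debug output would actually be used
    logger.debug('value: %s', expensive_call())
print('took about %.3f seconds' % (time.time() - start))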
19,349 | concurrency and parallelism |
19,350 | introduction to concurrency and parallelism introduction in this we will introduce the concepts of concurrency and parallelism we will also briefly consider the related topic of distribution after this we will consider process synchronisationwhy object oriented approaches are well suited to concurrency and parallelism before finishing with short discussion of threads versus processes concurrency concurrency is defined by the dictionary as two or more events or circumstances happening or existing at the same time in computer science concurrency refers to the ability of different parts or units of programalgorithm or problem to be executed at the same timepotentially on multiple processors or multiple cores here processor refers to the central processing unit (or cpuor computer while core refers to the idea that cpu chip can have multiple cores or processors on it originally cpu chip had single core that is the cpu chip had single processing unit on it howeverover timeto increase computer performancehardware manufacturers added additional cores or processing units to chips thus dual-core cpu chip has two processing units while quad-core cpu chip has four processing units this means that as far as the operating system of the computer is concernedit has multiple cpus on which it can run programs running processing at the same timeon multiple cpuscan substantially improve the overall performance of an application (cspringer nature switzerland ag huntadvanced guide to python programmingundergraduate topics in computer science |
19,351 | introduction to concurrency and parallelism for examplelet us assume that we have program that will call three independent functionsthese functions aremake backup of the current data held by the programprint the data currently held by the programrun an animation using the current data let us assume that these functions run sequentiallywith the following timingsthe backup function takes sthe print function takes sthe animation function takes this would result in total of to perform all three operations this is illustrated graphically belowhoweverthe three functions are all completely independent of each other that is they do not rely on each other for any results or behaviourthey do not need one of the other functions to complete before they can complete etc thus we can run each function concurrently if the underlying operating system and program language being used support multiple processesthen we can potentially run each function in separate process at the same time and obtain significant speed up in overall execution time if the application starts all three functions at the same timethen the maximum time before the main process can continue will be sas that is the time taken by the longest function to execute howeverthe main program may be able to continue as soon as all three functions are started as it also does not depend on the |
19,352 | results from any of the functions; thus the delay may be negligible (although there will typically be some small delay as each process is set up). this is shown graphically below. parallelism a distinction is often made in computer science between concurrency and parallelism. in concurrency, separate independent tasks are performed, potentially at the same time. in parallelism, a large complex task is broken down into a set of subtasks. the subtasks represent part of the overall problem. each subtask can be executed at the same time. typically it is necessary to combine the results of the subtasks together to generate an overall result. these subtasks are also very similar if not functionally exactly the same (although in general each subtask invocation will have been supplied with different data). thus parallelism is when multiple copies of the same functionality are run at the same time, but on different data. some examples of where parallelism can be applied include: a web search engine: such a system may look at many, many web pages. each time it does so it must send a request to the appropriate web site, receive the result and process the data obtained. these steps are the same whether it is the bbc web site, microsoft's web site or the web site of cambridge university. thus the requests can be run sequentially or in parallel. image processing: a large image may be broken down into slices so that each slice can be analysed in parallel.
19,353 | introduction to concurrency and parallelism the following diagram illustrates the basic idea behind parallelisma main program fires off three subtasks each of which runs in parallel the main program then waits for all the subtasks to complete before combining together the results from the subtasks before it can continue distribution when implementing concurrent or parallel solutionwhere the resulting processes run is typically an implementation detail conceptually these processes could run on the same processorphysical machine or on remote or distributed machine as such distributionin which problems are solved or processes executed by sharing the work across multiple physical machinesis often related to concurrency and parallelism howeverthere is no requirement to distribute work across physical machinesindeed in doing so extra work is usually involved to distribute work to remote machinedata and in many cases codemust be transferred and made available to the remote machine this can result in significant delays in running the code remotely and may offset any potential performance advantages of using physically separate computer as result many concurrentparallel technologies default to executing code in separate process on the same machine grid computing grid computing is based on the use of network of loosely coupled computersin which each computer can have job submitted to itwhich it will run to completion before returning result |
19,354 | grid computing in many cases the grid is made up of heterogeneous set of computers (rather than all computers being the sameand may be geographically dispersed these computers may be comprised of both physical computers and virtual machines virtual machine is piece of software that emulates whole computer and runs on some underlying hardware that is shared with other virtual machines each virtual machine thinks it is the only computer on the hardwarehowever the virtual machines all share the resources of the physical computer multiple virtual machines can thus run simultaneously on the same physical computer each virtual machine provides its own virtual hardwareincluding cpusmemoryhard drivesnetwork interfaces and other devices the virtual hardware is then mapped to the real hardware on the physical machine which saves costs by reducing the need for physical hardware systems along with the associated maintenance costsas well as reducing the power and cooling demands of multiple computers within gridsoftware is used to manage the grid nodes and to submit jobs to those nodes such software will receive the jobs to perform (programs to run and information about the environment such as libraries to usefrom clients of the grid these jobs are typically added to job queue before job scheduler submits them to node within the grid when any results are generated by the job they are collected from the node and returned to the client this is illustrated below |
19,355 | introduction to concurrency and parallelism the use of grids can make distributing concurrent/parallel processes amongst set of physical and virtual machines much easier concurrency and synchronisation concurrency relates to executing multiple tasks at the same time in many cases these tasks are not related to each other such as printing document and refreshing the user interface in these casesthe separate tasks are completely independent and can execute at the same time without any interaction in other situations multiple concurrent tasks need to interactfor examplewhere one or more tasks produce data and one or more other tasks consume that data this is often referred to as producer-consumer relationship in other situationsall parallel processes must have reached the same point before some other behaviour is executed another situation that can occur is where we want to ensure that only one concurrent task executes piece of sensitive code at timethis code must therefore be protected from concurrent access concurrent and parallel libraires need to provide facilities that allow for such synchronisation to occur object orientation and concurrency the concepts behind object-oriented programming lend themselves particularly well to the concepts associated with concurrency for examplea system can be described as set of discrete objects communicating with one another when necessary in pythononly one object may execute at any one moment in time within single interpreter howeverconceptually at leastthere is no reason why this restriction should be enforced the basic concepts behind object orientation still holdeven if each object executes within separate independent process |
19,356 | object orientation and concurrency traditionally message send is treated like procedural callin which the calling object' execution is blocked until response is returned howeverwe can extend this model quite simply to view each object as concurrently executable programwith activity starting when the object is created and continuing even when message is sent to another object (unless the response is required for further processingin this modelthere may be very many (concurrentobjects executing at the same time of coursethis introduces issues associated with resource allocationetc but no more so than in any concurrent system one implication of the concurrent object model is that objects are larger than in the traditional single execution thread approachbecause of the overhead of having each object as separate thread of execution overheads such as the need for scheduler to handling these execution threads and resource allocation mechanisms means that it is not feasible to have integerscharactersetc as separate processes threads processes as part of this discussion it is useful to understand what is meant by process process is an instance of computer program that is being executed by the operating system any process has three key elementsthe program being executedthe data used by that program (such as the variables used by the programand the state of the process (also known as the execution context of the programa (pythonthread is preemptive lightweight process thread is considered to be pre-emptive because every thread has chance to run as the main thread at some point when thread gets to execute then it will execute until completionuntil it is waiting for some form of / (input/output)sleeps for period of timeit has run for ms (the current threshold in python if the thread has not completed when one of the above situations occursthen it will give up being the executing thread and another thread will be run instead this means that one thread can be interrupted in the middle of performing series of related steps thread is considered lightweight process because it does not possess its own address space and it is not treated as separate entity by the host operating system insteadit exists within single machine process using the same address space it is useful to get clear idea of the difference between thread (running within single machine processand multi-process system that uses separate processes on the underlying hardware |
19,357 | introduction to concurrency and parallelism some terminology the world of concurrent programming is full of terminology that you may not be familiar with some of those terms and concepts are outlined belowasynchronous versus synchronous invocations most of the methodfunction or procedure invocations you will have seen in programming represent synchronous invocations synchronous method or function call is one which blocks the calling code from executing until it returns such calls are typically within single thread of execution asynchronous calls are ones where the flow of control immediately returns to the callee and the caller is able to execute in its own thread of execution allowing both the caller and the call to continue processing non-blocking versus blocking code blocking code is term used to describe the code running in one thread of executionwaiting for some activity to complete which causes one of more separate threads of execution to also be delayed for exampleif one thread is the producer of some data and other threads are the consumers of that datathen the consumer treads cannot continue until the producer generates the data for them to consume in contrastnon-blocking means that no thread is able to indefinitely delay others concurrent versus parallel code concurrent code and parallel code are similarbut different in one significant aspect concurrency indicates that two or more activities are both making progress even though they might not be executing at the same point in time this is typically achieved by continuously swapping competing processes between execution and non-execution this process is repeated until at least one of the threads of execution (threadshas completed their task this may occur because two threads are sharing the same physical processor with each is being given short time period in which to progress before the other gets short time period to progress the two threads are said to be sharing the processing time using technique known as time slicing parallelism on the other hand implies that there are multiple processors available allowing each thread to execute on their own processor simultaneously online resources see the following online resources for information on the topics in this on concurrency machines |
19,358 | online resources concurrency versus parallelism tutorial an introduction to grid computing |
19,359 | threading introduction threading is one of the ways in which python allows you to write programs that multitaskthat is appearing to do more than one thing at time this presents the threading module and uses short example to illustrate how these features can be used threads in python the thread class from the threading module represents an activity that is run in separate thread of execution within single process these threads of execution are lightweightpre-emptive execution threads thread is lightweight because it does not possess its own address space and it is not treated as separate entity by the host operating systemit is not process insteadit exists within single machine process using the same address space as other threads thread states when thread object is first created it existsbut it is not yet runnableit must be started once it has been started it is then runnablethat isit is eligible to be scheduled for execution it may switch back and forth between running and being runnable under the control of the scheduler the scheduler is responsible for managing multiple threads that all wish to grab some execution time thread object remains runnable or running until its run(method terminatesat which point it has finished its execution and it is now dead all states between (cspringer nature switzerland ag huntadvanced guide to python programmingundergraduate topics in computer science |
19,360 | threading un-started and dead are considered to indicate that the thread is alive (and therefore may run at some pointthis is shown belowa thread may also be in the waiting statefor examplewhen it is waiting for another thread to finish its work before continuing (possibly because it needs the results produced by that thread to continuethis can be achieved using the join(method and is also illustrated above once the second thread completes the waiting thread will again become runnable the thread which is currently executing is termed the active thread there are few points to note about thread statesa thread is considered to be alive unless its run(method terminates after which it can be considered dead live thread can be runningrunnablewaitingetc the runnable state indicates that the thread can be executed by the processorbut it is not currently executing this is because an equal or higher priority process is already executingand the thread must wait until the processor becomes free thus the diagram shows that the scheduler can move thread between the running and runnable state in factthis could happen many times as the thread executes for whileis then removed from the processor by the scheduler and added to the waiting queuebefore being returned to the processor again at later date creating thread there are two ways in which to initiate new thread of executionpass reference to callable object (such as function or methodinto the thread class constructor this reference acts as the target for the thread to execute |
19,361 | creating thread create subclass of the thread class and redefine the run(method to perform the set of actions that the thread is intended to do we will look at both approaches as thread is an objectit can be treated just like any other objectit can be sent messagesit can have instance variables and it can provide methods thusthe multi-threaded aspects of python all conform to the object-oriented model this greatly simplifies the creation of multi-threaded systems as well as the maintainability and clarity of the resulting software once new instance of thread is createdit must be started before it is startedit cannot runalthough it exists instantiating the thread class the thread class can be found in the threading module and therefore must be imported prior to use the class thread defines single constructor that takes up to six optional argumentsclass threading thread(group=nonetarget=nonename=noneargs=()kwargs={}daemon=nonethe thread constructor should always be called using keyword argumentsthe meaning of these arguments isgroup should be nonereserved for future extension when threadgroup class is implemented target is the callable object to be invoked by the run(method defaults to nonemeaning nothing is called name is the thread name by defaulta unique name is constructed of the form "thread-nwhere is an integer args is the argument tuple for the target invocation defaults to (if single argument is provided the tuple is not required if multiple arguments are provided then each argument is an element within the tuple kwargs is dictionary of keyword arguments for the target invocation defaults to {daemon indicates whether this thread runs as daemon thread or not if not nonedaemon explicitly sets whether the thread is daemonic if none (the default)the daemonic property is inherited from the current thread |
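for example, a thread might be constructed (though not yet started) using a combination of these keyword arguments; the worker function and the values used here are purely illustrative:

from threading import Thread

def worker(msg, count):
    for _ in range(count):
        print(msg)

# target, name, args and daemon are all supplied as keyword arguments
t = Thread(target=worker, name='worker-thread', args=('hello', 3), daemon=True)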
19,362 | threading once thread is created it must be started to become eligible for execution using the thread start(method the following illustrates very simple program that creates thread that will run the simple_worker(functionfrom threading import thread def simple_worker()print('hello'create new thread and start it the thread will run the function simple_worker thread(target=simple_workert start(in this examplethe thread will execute the function simple_worker the main code will be executed by the main thread that is present when the program startsthere are thus two threads used in the above programmain and the thread class the thread class defines all the facilities required to create an object that can execute within its own lightweight process the key methods arestart(start the thread' activity it must be called at most once per thread object it arranges for the object' run(method to be invoked in separate thread of control this method will raise runtimeerror if called more than once on the same thread object run(method representing the thread' activity you may override this method in subclass the standard run(method invokes the callable object passed to the object' constructor as the target argumentif anywith positional and keyword arguments taken from the args and kwargs argumentsrespectively you should not call this method directly join(timeout nonewait until the thread sent this message terminates this blocks the calling thread until the thread whose join()method is called terminates when the timeout argument is present and not noneit should be floating-point number specifying timeout for the operation in seconds (or fractions thereofa thread can be join()ed many times name string used for identification purposes only it has no semantics multiple threads may be given the same name the initial name is set by the constructor giving thread name can be useful for debugging purposes ident the 'thread identifierof this thread or none if the thread has not been started this is nonzero integer |
19,363 | is_alive() return whether the thread is alive. this method returns true from just before the run() method starts until just after the run() method terminates. the module function threading.enumerate() returns a list of all alive threads. daemon a boolean value indicating whether this thread is a daemon thread (true) or not (false). this must be set before start() is called, otherwise runtimeerror is raised. its default value is inherited from the creating thread. the entire python program exits when no alive non-daemon threads are left. an example illustrating the use of some of these methods is given below:

from threading import Thread

def simple_worker():
    print('hello')

t = Thread(target=simple_worker)
t.start()
print(t.getName())
print(t.ident)
print(t.is_alive())

this produces:

hello
Thread-
True

the join() method can cause one thread to wait for another to complete. for example, if we want the main thread to wait until a thread completes before it prints the done message, then we can make it join that thread: from threading import thread from time import sleep def worker()for in range( )print('end=''flush=truesleep( print('starting'create thread object with reference to worker function thread(target=workerstart the thread object
19,364 | threading start(wait for the thread to complete join(print('\ndone'now the 'donemessage should not be printed out until after the worker thread has finished as shown belowstarting done the threading module functions there are set of threading module functions which support working with threadsthese functions includethreading active_count(return the number of thread objects currently alive the returned count is equal to the length of the list returned by enumerate(threading current_thread(return the current thread objectcorresponding to the caller' thread of control if the caller' thread of control was not created through the threading modulea dummy thread object with limited functionality is returned threading get_ident(return the 'thread identifierof the current thread this is nonzero integer thread identifiers may be recycled when thread exits and another thread is created threading enumerate()return list of all thread objects currently alive the list includes daemon threadsdummy thread objects created by current_thread(and the main thread it excludes terminated threads and threads that have not yet been started threading main_thread()return the main thread object passing arguments to thread many functions expect to be given set of parameter values when they are runthese arguments still need to be passed to the function when they are run via separate thread these parameters can be passed to the function to be executed via the args parameterfor example |
19,365 | passing arguments to thread from threading import thread from time import sleep def worker(msg)for in range( )print(msgend=''flush=truesleep( print('starting' thread(target=workerargs=' ' thread(target=workerargs=' ' thread(target=workerargs=' ' start( start( start(print('done'in this examplethe worker function takes message to be printed times within loop inside the loop the thread will print the message and then sleep for second this allows other threads to be executed as the thread must wait for the sleep timeout to finish before again becoming runnable three threads and are then created each with different message note that the worker(function can be reused with each thread as each invocation of the function will have its own parameter values passed to it the three threads are then started this means that at this point there is the main threadand three worker threads that are runnable (although only one thread will run at timethe three worker threads each run the worker(function printing out either the letter ab or ten times this means that once started each thread will print out stringsleep for and then wait until it is selected to run againthis is illustrated in the following diagram |
19,366 | threading the output generated by this program is illustrated belowstarting abcdone abcacbabcabccbaabcabcabcbac notice that the main thread is finished after the worker threads have only printed out single letter eachhowever as long as there is at least one non-daemon thread running the program will not terminateas none of these threads are marked as daemon thread the program continues until the last thread has finished printing out the tenth of its letters also notice how each of the threads gets chance to run on the processor before it sleeps againthus we can see the letters ab and all mixed in together extending the thread class the second approach to creating thread mentioned earlier was to subclass the thread class to do this you must define new subclass of thread override the run(method define new __init__(method that calls the parent class __init__(method to pass the required parameters up to the thread class constructor this is illustrated below where the workerthread class passes the nametarget and daemon parameters up to the thread super class constructor once you have done this you can create an instance of the new workerthread class and then start that instance print('starting' workerthread( start(print('\ndone' |
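a minimal sketch of a workerthread subclass matching this description might be as follows; the behaviour in run() is illustrative only:

from threading import Thread

class WorkerThread(Thread):

    def __init__(self, *args, **kwargs):
        # pass any name, target and daemon parameters up to the Thread constructor
        super().__init__(*args, **kwargs)

    def run(self):
        # the behaviour to run in the separate thread of execution
        for i in range(10):
            print(i, end=' ', flush=True)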
19,367 | extending the thread class the output from this isstarting done note that it is common to call any subclasses of the thread classsomethingthreadto make it clear that it is subclass of the thread class and should be treated as if it was thread (which of course it is daemon threads thread can be marked as daemon thread by setting the daemon property to true either in the constructor or later via the accessor property for examplefrom threading import thread from time import sleep def worker(msg)for in range( )print(msgend=''flush=truesleep( print('starting'create daemon thread thread(daemon=truetarget=workerargs=' ' start(sleep( print('done'this creates background daemon thread that will run the function worker(such threads are often used for house keeping tasks (such as background data backups etc as mentioned above daemon thread is not enough on its own to keep the current program from terminating this means that the daemon thread will keep looping until the main thread finishes as the main thread sleeps for that allows the daemon thread to print out about strings before the main thread terminates this is illustrated by the output belowstarting cccccdone |
19,368 | naming threads threads can be named, which can be very useful when debugging an application with multiple threads. in the following example three threads have been created; two have been explicitly given a name related to what they are doing, while the middle one has been left with the default name. we then start all three threads and use the threading.enumerate() function to loop through all the currently live threads, printing out their names. the output from this program is given below: abc mainthread worker thread- daemon abcbacacbcbacbaabccbacbacba as you can see, in addition to the worker thread and the daemon thread there is the mainthread (that initiates the whole program) and thread- , which is the thread referenced by the variable and uses the default thread name.
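a sketch of the kind of program described above might look like the following; the worker function is assumed to be similar to the earlier examples:

import threading
from threading import Thread
from time import sleep

def worker(msg):
    for _ in range(10):
        print(msg, end='', flush=True)
        sleep(1)

# two threads are given explicit names; the middle one keeps its default name
t1 = Thread(name='worker', target=worker, args=('a',))
t2 = Thread(target=worker, args=('b',))
t3 = Thread(name='daemon', target=worker, args=('c',), daemon=True)

t1.start()
t2.start()
t3.start()

# print the name of every thread that is currently alive
for t in threading.enumerate():
    print(t.name)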
19,369 | thread local data thread local data in some situations each thread requires its own copy of the data it is working withthis means that the shared (heapmemory is difficult to use as it is inherently shared between all threads to overcome this python provides concept known as thread-local data thread-local data is data whose values are associated with thread rather than with the shared memory this idea is illustrated belowto create thread-local data it is only necessary to create an instance of threading local (or subclass of thisand store attributes into it the instances will be thread specificmeaning that one thread will not see the values stored by another thread for examplefrom threading import threadlocalcurrentthread from random import randint def show_value(data)tryval data value except attributeerrorprint(currentthread(nameno value yet'elseprint(currentthread(namevalue ='valdef worker(data)show_value(datadata value randint( show_value(dataprint(currentthread(namestarting'local_data local(show_value(local_data |
19,370 | threading for in range( ) thread(name='wstr( )target=workerargs=[local_data] start(show_value(local_dataprint(currentthread(namedone'the output from this is mainthread starting mainthread no value yet no value yet value no value yet value mainthread no value yet mainthread done the example presented above defines two functions the first function attempts to access value in the thread local data object if the value is not present an exception is raised (attributeerrorthe show_value(function catches the exception or successfully processes the data the worker function calls show_value(twiceonce before it sets value in the local data object and once after as this function will be run by separate threads the currentthread name is printed by the show_value(function the main function crates local data object using the local(function from the threading library it then calls show_value(itself next it creates two threads to execute the worker function in passing the local_data object into themeach thread is then started finallyit calls show_value(again as can be seen from the output one thread cannot see the data set by another thread in the local_data object (even when the attribute name is the same timers the timer class represents an action (or taskto run after certain amount of time has elapsed the timer class is subclass of thread and as such also functions as an example of creating custom threads |
19,371 | timers timers are startedas with threadsby calling their start(method the timer can be stopped (before its action has begunby calling the cancel(method the interval the timer will wait before executing its action may not be exactly the same as the interval specified by the user as another thread may be running when the timer wishes to start the signature of the timer class constructor istimer(intervalfunctionargs nonekwargs nonean example of using the timer class is given belowfrom threading import timer def hello()print('hello'print('starting' timer( hellot start(print('done'in this case the timer will run the hello function after an initial delay of the global interpreter lock the global interpreter lock (or the gilis global lock within the underlying cpython interpreter that was designed to avoid potential deadlocks between multiple tasks it is designed to protect access to python objects by preventing multiple threads from executing at the same time for the most part you do not need to worry about the gil as it is at lower level than the programs you will be writing howeverit is worth noting that the gil is controversial because it prevents multithreaded python programs from taking full advantage of multiprocessor systems in certain situations this is because in order to execute thread must obtain the gil and only one thread at time can hold the gil (that is the lock it representsthis means that python acts like single cpu machineonly one thing can run at time thread will only give up the gil if it sleepshas to wait for something (such as some / |
19,372 | threading or it has held the gil for certain amount of time if the maximum time that thread can hold the gil has been met the scheduler will release the gil from that thread (resulting it stopping execution and now having to wait until it has the gil returned to itand will select another thread to gain the gil and start to execute it is thus impossible for standard python threads to take advantage of the multiple cpus typically available on modern computer hardware one solution to this is to use the python multiprocessing library described in the next online resources see the following online resources for information on the topics in this documentation on threading threading threading module exercise create function called printer(that takes message and maximum value to use for period to sleep within the function create loop that iterates times within the loop generate random number from to the max period specified and then sleep for that period of time you can use the random randint(function for this once the sleep period has finished print out the message passed into the function then loop again until this has been repeated times now create five threads to run five invocations of the function you produced above and start all five threads each thread should have different max_sleep time an example program to run the printer function five times via set of threads is given belowt thread(target=printerargs=(' ' ) thread(target=printerargs=(' ' ) thread(target=printerargs=(' ' ) thread(target=printerargs=(' ' ) thread(target=printerargs=(' ' ) start( |
19,373 | exercise start( start( start( start(an example of the sort of output this could generate is given belowbaeaeabedaeaebedcecbeeeadcdbbdabcadbbdabadcdcdcccc |
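a possible sketch of the printer() function itself is given below; the iteration count and the lower bound of the random range are assumptions based on the sample output:

from random import randint
from time import sleep

def printer(msg, max_sleep):
    # repeat ten times, sleeping for a random period before each message
    for _ in range(10):
        sleep(randint(1, max_sleep))
        print(msg, end='', flush=True)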
19,374 | multiprocessing introduction the multiprocessing library supports the generation of separate (operating system levelprocesses to execute behaviour (such as functions or methodsusing an api that is similar to the threading api presented in the last it can be used to avoid the limitation introduced by the global interpreter lock (the gilby using separate operating system processes rather than lightweight threads (which run within single processthis means that the multiprocessing library allows developers to fully exploit the multiple processor environment of modern computer hardware which typically has multiple processor cores allowing multiple operations/behaviours to run in parallelthis can be very significant for data analyticsimage processinganimation and games applications the multiprocessing library also introduces some new featuresmost notably the pool object for parallelising execution of callable object ( functions and methodsthat has no equivalent within the threading api the process class the process class is the multiprocessing library' equivalent to the thread class in the threading library it can be used to run callable object such as function in separate process to do this it is necessary to create new instance of the process class and then call the start(method on it methods such as join(are also available so that one process can wait for another process to complete before continuing etc the main difference is that when new process is created it runs within separate process on the underlying operating systems (such as windowlinux or (cspringer nature switzerland ag huntadvanced guide to python programmingundergraduate topics in computer science |
19,375 | multiprocessing mac osin contrast thread runs within the same process as the original program this means that the process is managed and executed directly by the operating system on one of the processors that are part of the underlying computer hardware the up side of this is that you are able to exploit the underlying parallelism inherent in the physical computer hardware the downside is that process takes more work to set up than the lighter weight threads the constructor for the process class provides the same set of arguments as the thread classnamelyclass multiprocessing process(group=nonetarget=nonename=noneargs=()kwargs={}daemon=nonegroup should always be noneit exists solely for compatibility with the threading api target is the callable object to be invoked by the run(method it defaults to nonemeaning nothing is called name is the process name args is the argument tuple for the target invocation kwargs is dictionary of keyword arguments for the target invocation daemon argument sets the process daemon flag to true or false if none (the default)this flag will be inherited from the creating process as with the thread classthe process constructor should always be called using keyword arguments the process class also provides similar set of methods to the thread class start(start the process' activity this must be called at most once per process object it arranges for the object' run(method to be invoked in separate process join([timeout]if the optional argument timeout is none (the default)the method blocks until the joined process terminates if timeout is positive numberit blocks at most timeout seconds note that the method returns none if its process terminates or if the method times out |
19,376 | the process class is_alive(return whether the process is alive roughlya process object is alive from the moment the start(method returns until the child process terminates the process class also has several attributesname the process' name the name is string used for identification purposes only it has no semantics multiple processes may be given the same name it can be useful for debugging purposes daemon the process' daemon flaga boolean value this must be set before start(is called the default value is inherited from the creating process when process exitsit attempts to terminate all of its daemonic child processes note that daemonic process is not allowed to create child processes pid return the process id before the process is spawnedthis will be none exitcode the process exit code this will be none if the process has not yet terminated negative value - indicates that the child was terminated by signal in addition to these methods and attributesthe process class also defines additional process related methods includingterminate(terminate the process kill(same as terminate(except that on unix the sigkill signal is used instead of the sigterm signal close(close the process objectreleasing all resources associated with it valueerror is raised if the underlying process is still running once close(returns successfullymost of the other methods and attributes of the process object will raise valueerror working with the process class the following simple program creates three process objectseach runs the function worker()with the string arguments ab and respectively these three process objects are then started using the start(method |
19,377 | multiprocessing from multiprocessing import process from time import sleep def worker(msg)for in range( )print(msgend=''flush=truesleep( print('starting' process(target=workerargs=' ' process(target=workerargs=' ' process(target=workerargs=' ' start( start( start(print('done'it is essentially the same as the equivalent program for threads but with the process class being used instead of the thread class the output from this application is given belowstarting done abcabcabcabcabcabcabcacbacbacb the main difference between the thread and process versions is that the process version runs the worker function in separate processes whereas in the thread version all the threads share the same process alternative ways to start process when the start(method is called on processthree different approaches to starting the underlying process are available these approaches can be set using the multiprocessing set_start_method(which takes string indicating the approach to use the actual process initiation mechanisms available depend on the underlying operating system'spawnthe parent process starts fresh python interpreter process the child process will only inherit those resources necessary to run the process objects run(method in particularunnecessary file descriptors and handles from the |
19,378 | alternative ways to start process parent process will not be inherited starting process using this method is rather slow compared to using fork or forkserver available on unix and windows this is the default on windows 'forkthe parent process uses os fork(to fork the python interpreter the child processwhen it beginsis effectively identical to the parent process all resources of the parent are inherited by the child process available only on unix type operating systems this is the default on unixlinux and mac os 'forkserverin this case server process is started from then onwhenever new process is neededthe parent process connects to the server and requests that it fork new process the fork server process is single threaded so it is safe for it to use os fork(no unnecessary resources are inherited available on unix style platforms which support passing file descriptors over unix pipes the set_start_method(should be used to set the start method (and this should only be set once within programthis is illustrated belowwhere the spawn start method is specifiedfrom multiprocessing import process from multiprocessing import set_start_method from time import sleep import os def worker(msg)print('module name:'__name__print('parent process:'os getppid()print('process id:'os getpid()for in range( )print(msgend=''flush=truesleep( def main()print('starting'print('root application process id:'os getpid()set_start_method('spawn' process(target=workerargs=' ' start(print('done'if __name__ ='__main__'main(the output from this is shown belowstarting root application process id done |
19,379 | module name: __main__
parent process:
process id:
AAAAAAAAAA
Note that the parent process and current process ids are printed out for the worker() function, while the main() method prints out only its own id. This shows that the main application process id is the same as the worker process's parent id.
Alternatively, it is possible to use the get_context() method to obtain a context object. Context objects have the same API as the multiprocessing module and allow you to use multiple start methods in the same program, for example:

ctx = multiprocessing.get_context('spawn')
q = ctx.Queue()
p = ctx.Process(target=foo, args=(q,))

Using a Pool
Creating processes is expensive in terms of computer resources. It would therefore be useful to be able to reuse processes within an application. The Pool class provides such reusable processes.
The Pool class represents a pool of worker processes that can be used to perform a set of concurrent, parallel operations. The Pool provides methods which allow tasks to be offloaded to these worker processes.
The Pool class provides a constructor which takes a number of arguments:

class multiprocessing.pool.Pool(processes, initializer, initargs, maxtasksperchild, context)

These represent:
processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used.
initializer If initializer is not None then each worker process will call initializer(*initargs) when it starts.
maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.
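As an illustration of these constructor parameters (this sketch is not taken from the original text; the function names and the specific values chosen for processes and maxtasksperchild are assumptions), a pool might be created as follows:

from multiprocessing import Pool
import os

def init_worker():
    # Runs once in each worker process when it starts
    print('Initialising worker', os.getpid())

def square(x):
    return x * x

if __name__ == '__main__':
    # A pool of 2 workers, each recycled after completing 4 tasks
    with Pool(processes=2, initializer=init_worker, maxtasksperchild=4) as pool:
        print(pool.map(square, range(8)))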
19,380 | context can be used to specify the context used for starting the worker processes.
Usually a pool is created using the function multiprocessing.Pool(). Alternatively the pool can be created using the Pool() method of a context object.
The Pool class provides a range of methods that can be used to submit work to the worker processes managed by the pool. Note that the methods of the Pool object should only be called by the process which created the pool.
The following diagram illustrates the effect of submitting some work or task to the pool. From the list of available processes, one process is selected and the task is passed to the process. The process will then execute the task. On completion any results are returned and the process is returned to the available list. If, when a task is submitted to the pool, there are no available processes, then the task will be added to a wait queue until such time as a process is available to handle the task.
The simplest of the methods provided by the Pool for work submission is the map method:

pool.map(func, iterable, chunksize=None)

This method returns a list of the results obtained by executing the function in parallel against each of the items in the iterable parameter. The func parameter is the callable object to be executed (such as a function or a method). The iterable is used to pass in any parameters to the function. This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. The method blocks until the result is ready.
19,381 | The following sample program illustrates the basic use of the Pool and the map() method:

from multiprocessing import Pool

def worker(x):
    print('In worker with:', x)
    return x * x

def main():
    with Pool(processes=4) as pool:
        print(pool.map(worker, [0, 1, 2, 3, 4, 5]))

if __name__ == '__main__':
    main()

Note that the Pool object must be closed once you have finished with it; we are therefore using the 'with as' statement described earlier in this book to handle the pool resource cleanly (it will ensure the pool is closed when the block of code within the with as statement is completed).
The output from this program is:
In worker with: 0
In worker with: 1
In worker with: 2
In worker with: 3
In worker with: 4
In worker with: 5
[0, 1, 4, 9, 16, 25]
As can be seen from this output, the map() function is used to run six different instances of the worker() function with the values provided by the list of integers. Each instance is executed by a worker process managed by the pool. However, note that the pool only has four worker processes; this means that the last two instances of the worker function must wait until two of the worker processes have finished the work they are doing and can be reused. This can act as a way of throttling, or controlling, how much work is done in parallel.
A variant on the map() method is the imap_unordered() method. This method also applies a given function to an iterable but does not attempt to maintain the order of the results. The results are accessible via the iterable returned by the function. This may improve the performance of the resulting program.
The following program modified the worker() function to return its result rather than print it. These results are then accessible by iterating over them as they are produced via a for loop.
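The listing for this modified program does not appear in the extracted text; the following is a minimal sketch of what it might look like, assuming the same worker() function and a pool of four processes:

from multiprocessing import Pool

def worker(x):
    print('In worker with:', x)
    return x * x

def main():
    with Pool(processes=4) as pool:
        # imap_unordered() yields results as soon as they are available,
        # not necessarily in the order the inputs were submitted
        for result in pool.imap_unordered(worker, [0, 1, 2, 3, 4, 5]):
            print(result)

if __name__ == '__main__':
    main()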
19,382 | As the new method obtains results as soon as they are available, the order in which the results are returned may be different, as shown below:
In worker with:
In worker with:
In worker with:
In worker with:
In worker with:
In worker with:
A further method available on the Pool class is the Pool.apply_async() method. This method allows operations/functions to be executed asynchronously, allowing the method calls to return immediately. That is, as soon as the method call is made, control is returned to the calling code which can continue immediately. Any results to be collected from the asynchronous operations can be obtained either by providing a callback function or by using the blocking get() method to obtain a result.
Two examples are shown below. The first uses the blocking get() method; this method will wait until a result is available before continuing. The second approach uses a callback function; the callback function is called when a result is available and the result is passed into the function.
19,383 | from multiprocessing import Pool

def collect_results(result):
    print('In collect_results:', result)

def worker(x):
    print('In worker with:', x)
    return x * x

def main():
    with Pool(processes=4) as pool:
        # get based example
        res = pool.apply_async(worker, [4])
        print('Result from async:', res.get(timeout=1))

    with Pool(processes=4) as pool:
        # callback based example
        pool.apply_async(worker, args=[8], callback=collect_results)

if __name__ == '__main__':
    main()

The output from this is:
In worker with: 4
Result from async: 16
In worker with: 8
In collect_results: 64
Exchanging data between processes
In some situations it is necessary for two processes to exchange data. However, the two process objects do not share memory as they are running in separate operating system level processes. To get around this the multiprocessing library provides the Pipe() function.
The Pipe() function returns a pair of connection.Connection objects connected by a pipe which by default is duplex (two-way). The two connection objects returned by Pipe() represent the two ends of the pipe; each connection object has send() and recv() methods (among others). This allows one process to send data via the send() method of one end of the connection object. In turn a second process can receive that data via the recv() method of the other connection object. This is illustrated below.
19,384 | Once a program has finished with a connection it should be closed using close(). The following program illustrates how pipe connections are used.
The output from this pipe example is:
Main - starting, creating the Pipe
Main - setting up the process
Main - starting the process
Main - wait for a response from the child process
Worker - started, now sleeping for 1 second
Worker - sending data via pipe
Worker - closing worker end of connection
hello
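The pipe program itself is missing from the extracted text; the sketch below is a reconstruction that is consistent with the output shown above (the exact messages, variable names and the one second sleep are assumptions):

from multiprocessing import Process, Pipe
from time import sleep

def worker(connection):
    print('Worker - started, now sleeping for 1 second')
    sleep(1)
    print('Worker - sending data via pipe')
    connection.send('hello')
    print('Worker - closing worker end of connection')
    connection.close()

def main():
    print('Main - starting, creating the Pipe')
    parent_connection, child_connection = Pipe()
    print('Main - setting up the process')
    p = Process(target=worker, args=[child_connection])
    print('Main - starting the process')
    p.start()
    print('Main - wait for a response from the child process')
    print(parent_connection.recv())
    print('Main - closing parent process end of connection')
    parent_connection.close()
    print('Main - Done')

if __name__ == '__main__':
    main()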
19,385 | Main - closing parent process end of connection
Main - Done
Note that data in a pipe may become corrupted if two processes try to read from or write to the same end of the pipe at the same time. However, there is no risk of corruption from processes using different ends of the pipe at the same time.
Sharing state between processes
In general, if it can be avoided, then you should not share state between separate processes. However, if it is unavoidable then the multiprocessing library provides two ways in which state (data) can be shared; these are shared memory (as supported by multiprocessing.Value and multiprocessing.Array) and server process.
Process shared memory
Data can be stored in a shared memory map using a multiprocessing.Value or multiprocessing.Array. This data can be accessed by multiple processes.
The constructor for the multiprocessing.Value type is:

multiprocessing.Value(typecode_or_type, *args, lock=True)

where:
typecode_or_type determines the type of the returned object; it is either a ctypes type or a one character typecode. For example, 'd' indicates a double precision float and 'i' indicates a signed integer.
*args is passed on to the constructor for the type.
lock If lock is True (the default) then a new recursive lock object is created to synchronise access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be process-safe.
The constructor for multiprocessing.Array is:

multiprocessing.Array(typecode_or_type, size_or_initializer, lock=True)
19,386 | where:
typecode_or_type determines the type of the elements of the returned array.
size_or_initializer If size_or_initializer is an integer, then it determines the length of the array, and the array will be initially zeroed. Otherwise, size_or_initializer is a sequence which is used to initialise the array and whose length determines the length of the array.
lock If lock is True (the default) then a new lock object is created to synchronise access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
An example using both the Value and Array type is given below:

from multiprocessing import Process, Value, Array

def worker(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

def main():
    print('Starting')
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=worker, args=(num, arr))
    p.start()
    p.join()

    print(num.value)
    print(*arr)
    print('Done')

if __name__ == '__main__':
    main()

Online resources
See the following online resources for information on multiprocessing:
The Python Standard Library documentation on multiprocessing (https://docs.python.org/3/library/multiprocessing.html).
19,387 | Exercises
Write a program that can find the factorial of any given number. For example, the factorial of 5 (often written as 5!) is 5 x 4 x 3 x 2 x 1, which equals 120. The factorial is not defined for negative numbers and the factorial of zero is 1, that is 0! = 1.
Next modify the program to run multiple factorial calculations in parallel. Collect all the results together in a list and print that list out. You can use whichever approach you like to running multiple processes, although a Pool could be a good approach to use. Your program should compute the factorials of several different numbers in parallel.
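One possible sketch of a solution (this is not the book's own answer; the set of numbers used is purely illustrative):

from multiprocessing import Pool

def factorial(n):
    # Factorial is not defined for negative numbers; 0! is 1
    if n < 0:
        raise ValueError('factorial is not defined for negative numbers')
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        results = pool.map(factorial, [5, 13, 21, 34])
        print(results)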
19,388 | Inter Thread/Process Synchronisation
Introduction
In this chapter we will look at several facilities supported by both the threading and multiprocessing libraries that allow for synchronisation and cooperation between Threads or Processes.
In the remainder of this chapter we will look at some of the ways in which Python supports synchronisation between multiple Threads and Processes. Note that most of the libraries are mirrored between threading and multiprocessing so that the same basic ideas hold for both approaches with, in the main, very similar APIs. However, you should not mix and match threads and processes. If you are using threads then you should only use facilities from the threading library. In turn if you are using processes then you should only use facilities in the multiprocessing library. The examples given in this chapter will use one or other of the technologies but are relevant for both approaches.
Using a Barrier
Using a threading.Barrier (or multiprocessing.Barrier) is one of the simplest ways in which the execution of a set of Threads (or Processes) can be synchronised. The threads or processes involved in the barrier are known as the parties that are taking part in the barrier. Each of the parties in the barrier can work independently until it reaches the barrier point in the code.
The barrier represents an end point that all parties must reach before any further behaviour can be triggered. At the point that all the parties reach the barrier it is possible to optionally trigger a post-phase action (also known as the barrier callback). This post-phase action represents some behaviour that should be run when
19,389 | all parties reach the barrier but before allowing those parties to continue. The post-phase action (the callback) executes in a single thread (or process). Once it is completed then all the parties are unblocked and may continue.
This is illustrated in the following diagram. Threads t1, t2 and t3 are all involved in the barrier. When thread t1 reaches the barrier it must wait until it is released by the barrier. Similarly when t2 reaches the barrier it must wait. When t3 finally reaches the barrier the callback is invoked. Once the callback has completed the barrier releases all three threads, which are then able to continue.
An example of using a Barrier object is given below. Note that the function being invoked in each Thread must also cooperate in using the barrier as the code will run up to the barrier.wait() method and then wait until all other threads have also reached this point before being allowed to continue.
The Barrier is a class that can be used to create a barrier object. When the Barrier class is instantiated, it can be provided with three parameters:
parties the number of individual parties that will participate in the barrier.
action is a callable object (such as a function) which, when supplied, will be called after all the parties have entered the barrier and just prior to releasing them all.
timeout If a 'timeout' is provided, it is used as the default for all subsequent wait() calls on the barrier.
Thus, in the following code:

b = Barrier(3, action=callback)

indicates that there will be three parties involved in the barrier and that the callback function will be invoked when all three reach the barrier (however the timeout is left as the default value None).
The barrier object is created outside of the threads (or processes) but must be made available to the function being executed by the thread (or process). The easiest way to handle this is to pass the barrier into the function as one of the
19,390 | parameters; this means that the function can be used with different barrier objects depending upon the context. An example using the Barrier class with a set of Threads is given below:

from threading import Barrier, Thread
from time import sleep
from random import randint

def print_it(msg, barrier):
    print('print_it for:', msg)
    for i in range(10):
        print(msg, end='', flush=True)
        sleep(0.1)
    sleep(randint(1, 5))
    print('Wait for barrier with:', msg)
    barrier.wait()
    print('Returning from print_it:', msg)

def callback():
    print('Callback Executing')

print('Main - Starting')
b = Barrier(3, callback)
t1 = Thread(target=print_it, args=('A', b))
t2 = Thread(target=print_it, args=('B', b))
t3 = Thread(target=print_it, args=('C', b))
t1.start()
t2.start()
t3.start()
print('Main - Done')

The output from this is:
Main - Starting
print_it for: A
print_it for: B
print_it for: C
ABC
Main - Done
ABCACBACBABCACBCABACBACBBAC
Wait for barrier with: B
Wait for barrier with: A
Wait for barrier with: C
Callback Executing
Returning from print_it: A
Returning from print_it: B
Returning from print_it: C
From this you can see that the print_it() function is run three times concurrently. All three invocations reach the barrier.wait() statement but in a different order to that in which they were started. Once the three have reached this point the callback function is executed before the print_it() function invocations can proceed.
19,391 | The Barrier class itself provides several methods used to manage or find out information about the barrier:

wait(timeout=None): Wait until all threads have notified the barrier (unless the timeout is reached); returns the number of threads that passed the barrier.
reset(): Return the barrier to the default state.
abort(): Put the barrier into a broken state.
parties: Return the number of threads required to pass the barrier.
n_waiting: The number of threads currently waiting.

A barrier object can be reused any number of times for the same number of threads. The above example could easily be changed to run using Processes by altering the import statement and creating a set of Processes instead of Threads:

from multiprocessing import Barrier, Process

print('Main - Starting')
b = Barrier(3, callback)
t1 = Process(target=print_it, args=('A', b))

Note that you should only use Threads with a threading.Barrier. In turn you should only use Processes with a multiprocessing.Barrier.
Event Signalling
Although the point of using multiple Threads or Processes is to execute separate operations concurrently, there are times when it is important to be able to allow two or more Threads or Processes to cooperate on the timing of their behaviour. The Barrier object presented above is a relatively high-level way to do this; however, in some cases finer grained control is required. The threading.Event or multiprocessing.Event classes can be used for this purpose.
An Event manages an internal flag that callers can either set() or clear(). Other threads can wait() for the flag to be set(), effectively blocking their own progress until allowed to continue by the Event. The internal flag is initially set to False which ensures that if a task gets to the Event before it is set then it must wait. You can in fact invoke wait with an optional timeout. If you do not include the optional timeout then wait() will wait forever, while wait(timeout) will wait up to the timeout given in seconds. If the timeout is reached, then the wait method returns False; otherwise wait returns True.
As an example, the following diagram illustrates two processes sharing an Event object. The first process runs a function that waits for the event to be set. In turn the second process runs a function that will set the event and thus release the waiting process.
19,392 | The following program implements the above scenario:

from multiprocessing import Process, Event
from time import sleep

def wait_for_event(event):
    print('wait_for_event entered and waiting')
    event_is_set = event.wait()
    print('wait_for_event event is set:', event_is_set)

def set_event(event):
    print('set_event entered but about to sleep')
    sleep(5)
    print('set_event waking up and setting event')
    event.set()
    print('set_event event set')

print('Starting')
# create the event object
event = Event()
# start a process to wait for the event notification
p1 = Process(target=wait_for_event, args=[event])
p1.start()
# set up a process to set the event
p2 = Process(target=set_event, args=[event])
p2.start()
# wait for the first process to complete
p1.join()
print('Done')
19,393 | The output from this program is:
Starting
wait_for_event entered and waiting
set_event entered but about to sleep
set_event waking up and setting event
set_event event set
wait_for_event event is set: True
Done
To change this to use Threads we would merely need to change the import and to create two Threads:

from threading import Thread, Event

print('Starting')
event = Event()
t1 = Thread(target=wait_for_event, args=[event])
t1.start()
t2 = Thread(target=set_event, args=[event])
t2.start()
t1.join()
print('Done')

Synchronising Concurrent Code
It is not uncommon to need to ensure that critical regions of code are protected from concurrent execution by multiple Threads or Processes. These blocks of code typically involve the modification of, or access to, shared data. It is therefore necessary to ensure that only one Thread or Process is updating a shared object at a time and that consumer threads or processes are blocked while this update is occurring.
This situation is most common where one or more Threads or Processes are the producers of data and one or more other Threads or Processes are the consumers of that data. This is illustrated in the following diagram.
19,394 | In this diagram the Producer is running in its own Thread (although it could also run in a separate Process) and places data onto some common shared data container. Subsequently a number of independent Consumers can consume that data when it is available and when they are free to process the data. However, there is no point in the Consumers repeatedly checking the container for data as that would be a waste of resources (for example in terms of executing code on a processor and of context switching between multiple Threads or Processes).
We therefore need some form of notification or synchronisation between the Producer and the Consumer to manage this situation. Python provides several classes in the threading (and also in the multiprocessing) library that can be used to manage critical code blocks. These classes include Lock, Condition and Semaphore.
Python Locks
The Lock class defined (both in the threading and the multiprocessing libraries) provides a mechanism for synchronising access to a block of code.
The Lock object can be in one of two states: locked and unlocked (with the initial state being unlocked). The Lock grants access to a single thread at a time; other threads must wait for the Lock to become free before progressing.
The Lock class provides two basic methods for acquiring the lock (acquire()) and releasing (release()) the lock.
When the state of the Lock object is unlocked, then acquire() changes the state to locked and returns immediately. When the state is locked, acquire() blocks until a call to release() in another thread changes it to unlocked; then the acquire() call resets it to locked and returns.
The release() method should only be called in the locked state; it changes the state to unlocked and returns immediately. If an attempt is made to release an unlocked lock, a RuntimeError will be raised.
An example of using a Lock object is shown below:
19,395 | from threading import Thread, Lock

class SharedData(object):
    def __init__(self):
        self.value = 0
        self.lock = Lock()

    def read_value(self):
        try:
            print('read_value acquiring Lock')
            self.lock.acquire()
            return self.value
        finally:
            print('read_value releasing Lock')
            self.lock.release()

    def change_value(self):
        print('change_value acquiring Lock')
        with self.lock:
            self.value = self.value + 1
            print('change_value Lock released')

The SharedData class presented above uses locks to control access to critical blocks of code, specifically to the read_value() and the change_value() methods. The Lock object is held internally to the SharedData object and both methods attempt to acquire the lock before performing their behaviour but must then release the lock after use. The read_value() method does this explicitly using try: finally: blocks while the change_value() method uses a with statement (as the Lock type supports the Context Manager Protocol). Both approaches achieve the same result but the with statement style is more concise.
The SharedData class is used below with two simple functions. In this case the SharedData object has been defined as a global variable but it could also have been passed into the reader() and updater() functions as an argument. Both the reader and updater functions loop, attempting to call the read_value() and change_value() methods on the shared_data object. As both methods use a lock to control access to the methods, only one thread can gain access to the locked area at a time. This means that the reader() function may start to read data before the updater() function has changed the data (or vice versa). This is indicated by the output where the reader thread accesses the value 0 twice before the updater records the value 1. However, the updater() function runs a second time before the reader gains access to the locked block of code, which is why the value 1 is missed. Depending upon the application this may or may not be an issue.
19,396 | shared_data = SharedData()

def reader():
    while True:
        print(shared_data.read_value())

def updater():
    while True:
        shared_data.change_value()

print('Starting')
t1 = Thread(target=reader)
t2 = Thread(target=updater)
t1.start()
t2.start()
print('Done')

The output from this is:
Starting
read_value acquiring Lock
read_value releasing Lock
0
read_value acquiring Lock
read_value releasing Lock
0
Done
change_value acquiring Lock
change_value Lock released
change_value acquiring Lock
change_value Lock released
change_value acquiring Lock
change_value Lock released
change_value acquiring Lock
change_value Lock released
Lock objects can only be acquired once; if a thread attempts to acquire a lock on the same Lock object more than once then a RuntimeError is thrown.
If it is necessary to re-acquire a lock on a Lock object then the threading.RLock class should be used. This is a re-entrant lock and allows the same thread (or process) to acquire a lock multiple times. The code must, however, release the lock as many times as it has acquired it.
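A minimal sketch of re-entrant locking (not from the original text) is shown below; the same thread acquires the RLock twice and must release it twice, which would deadlock with a plain Lock:

from threading import RLock

lock = RLock()

def outer():
    with lock:          # first acquisition by this thread
        inner()

def inner():
    with lock:          # second acquisition by the same thread; a plain Lock would block here
        print('inside nested critical section')

outer()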
19,397 | Python Conditions
Conditions can be used to synchronise the interaction between two or more Threads or Processes. Condition objects support the concept of a notification model, ideal for a shared data resource being accessed by multiple consumers and producers.
A Condition can be used to notify one or all of the waiting Threads or Processes that they can proceed (for example to read data from a shared resource). The methods available that support this are:
notify() notifies one waiting thread which can then continue.
notify_all() notifies all waiting threads that they can continue.
wait() causes a thread to wait until it has been notified that it can continue.
A Condition is always associated with an internal lock which must be acquired and released before the wait() and notify() methods can be called. The Condition supports the Context Manager Protocol and can therefore be used via a with statement (which is the most typical way to use a Condition) to obtain this lock. For example, to obtain the condition lock and call the wait method we might write:

with condition:
    condition.wait()
    print('Now we can proceed')

The Condition object is used in the following example to illustrate how a producer Thread and two consumer Threads can cooperate. A DataResource class has been defined which will hold an item of data that will be shared between a consumer and a set of producers. It also (internally) defines a Condition attribute. Note that this means that the Condition is completely internalised to the DataResource class; external code does not need to know, or be concerned with, the Condition and its use. Instead external code can merely call the consumer() and producer() functions in separate Threads as required.
The consumer() method uses a with statement to obtain the (internal) lock on the Condition object before waiting to be notified that the data is available. In turn the producer() method also uses a with statement to obtain a lock on the Condition object before generating the data attribute value and then notifying anything waiting on the Condition that they can proceed.
Note that although the consumer method obtains a lock on the Condition object, if it has to wait it will release the lock and re-obtain the lock once it is notified that it can continue. This is a subtlety that is often missed.
19,398 | from threading import Thread, Condition, currentThread
from time import sleep
from random import randint

class DataResource:
    def __init__(self):
        print('DataResource - initialising the empty data')
        self.data = None
        print('DataResource - setting up the Condition object')
        self.condition = Condition()

    def consumer(self):
        """wait for the condition and use the resource"""
        print('DataResource - starting consumer method in', currentThread().name)
        with self.condition:
            self.condition.wait()
            print('DataResource - resource is available to', currentThread().name)
            print('DataResource - data read in', currentThread().name, ':', self.data)

    def producer(self):
        """set up the resource to be used by the consumer"""
        print('DataResource - starting producer method')
        with self.condition:
            print('DataResource - producer setting data')
            self.data = randint(1, 100)
            print('DataResource - producer notifying all waiting threads')
            self.condition.notifyAll()

print('Main - Starting')
print('Main - Creating the DataResource object')
resource = DataResource()
print('Main - Create the consumer threads')
c1 = Thread(target=resource.consumer)
c1.name = 'consumer1'
c2 = Thread(target=resource.consumer)
c2.name = 'consumer2'
print('Main - Create the producer thread')
p = Thread(target=resource.producer)
print('Main - Starting consumer threads')
c1.start()
c2.start()
sleep(1)
print('Main - Starting producer thread')
p.start()
print('Main - Done')
19,399 | The output from an example run of this program is:
Main - Starting
Main - Creating the DataResource object
DataResource - initialising the empty data
DataResource - setting up the Condition object
Main - Create the consumer threads
Main - Create the producer thread
Main - Starting consumer threads
DataResource - starting consumer method in consumer1
DataResource - starting consumer method in consumer2
Main - Starting producer thread
DataResource - starting producer method
DataResource - producer setting data
Main - Done
DataResource - producer notifying all waiting threads
DataResource - resource is available to consumer1
DataResource - data read in consumer1 :
DataResource - resource is available to consumer2
DataResource - data read in consumer2 :
Python Semaphores
The Python Semaphore class implements Dijkstra's counting semaphore model. In general, a semaphore is like an integer variable; its value is intended to represent a number of available resources of some kind.
There are typically two operations available on a semaphore; these operations are acquire() and release() (although in some libraries Dijkstra's original names of P() and V() are used; these operation names are based on the original Dutch phrases).
The acquire() operation subtracts one from the value of the semaphore, unless the value is 0, in which case it blocks the calling thread until the semaphore's value increases above 0 again. The release() (signal) operation adds one to the value, indicating a new instance of the resource has been added to the pool.
Both the threading.Semaphore and the multiprocessing.Semaphore classes also support the Context Management Protocol.
An optional parameter used with the Semaphore constructor gives the initial value for the internal counter; it defaults to 1. If the value given is less than 0, a ValueError is raised.
The following example illustrates several different Threads all running the same worker() function. The worker() function attempts to acquire a semaphore; if it does then it continues into the with statement block; if it doesn't, it waits until it can acquire it. As the semaphore is initialised to 2 there can only be two threads that can acquire the semaphore at a time.
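The listing for this example lies outside the extracted text; the following is a minimal sketch of the behaviour described, assuming four worker threads and a semaphore initialised to 2 (the thread count and the sleep duration are assumptions):

from threading import Thread, Semaphore, current_thread
from time import sleep

# Only two threads may hold the semaphore at any one time
semaphore = Semaphore(2)

def worker():
    with semaphore:                      # acquire(); released automatically on exiting the block
        print(current_thread().name, 'acquired the semaphore')
        sleep(1)
    print(current_thread().name, 'released the semaphore')

threads = [Thread(target=worker, name='worker-' + str(i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()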