Also be aware that the buffer size parameters only specify the limit at which writes occur and do not necessarily set a limit on internal resource use. For example, when you do write(data) on a buffered file, all of the bytes in data are first copied into the internal buffers. If data represents a very large byte array, this copying will substantially increase the memory use of your program. Thus, it is better to write large amounts of data in reasonably sized chunks, not all at once with a single write() operation. It should be noted that because the io module is relatively new, this behavior might be different in future versions.

Text I/O

The text I/O layer is used to process line-oriented character data. The classes defined in this section build upon buffered I/O streams and add line-oriented processing as well as Unicode character encoding and decoding. All of the classes here inherit from TextIOBase.

TextIOWrapper(buffered [, encoding [, errors [, newline [, line_buffering]]]])
A class for a buffered text stream. buffered is a buffered I/O stream as described in the previous section. encoding is a string such as 'ascii' or 'utf-8' that specifies the text encoding. errors specifies the Unicode error-handling policy and is 'strict' by default (see "Input and Output" for a description). newline is the character sequence representing a newline and may be None, '', '\n', '\r', or '\r\n'. If None is given, then universal newline mode is enabled, in which any of the other line endings are translated into '\n' when reading, and os.linesep is used as the newline on output. If newline is one of the other values, then all '\n' characters are translated into the specified newline on output. line_buffering is a flag that controls whether or not a flush() operation is performed when any write operation contains the newline character. By default, this is False.

An instance t of TextIOWrapper supports all of the operations defined on IOBase as well as the following:

t.encoding        The name of the text encoding being used.
t.errors          Encoding and decoding error-handling policy.
t.line_buffering  Flag that determines line-buffering behavior.
t.newlines        None, a string, or a tuple giving all of the different forms of newlines translated.
t.read([n])       Reads at most n characters from the underlying stream and returns them as a string. If n is omitted, this reads all available data to the end of file. Returns the empty string at EOF. The returned strings are decoded according to the encoding setting in t.encoding.
t.readline([limit])  Reads a single line of text and returns it as a string. Returns an empty string at EOF. limit is the maximum number of bytes to read.
t.write(s)        Writes the string s to the underlying stream using the text encoding in t.encoding.

StringIO([initial [, encoding [, errors [, newline]]]])
An in-memory file object with the same behavior as TextIOWrapper. initial is a string that specifies the initial contents of the file. The other parameters have the same meaning as with TextIOWrapper. An instance s of StringIO supports all of the usual file operations, in addition to a method s.getvalue() that returns the current contents of the memory buffer.

The open() Function

The io module defines the following open() function, which is the same as the built-in open() function in Python 3:

open(file [, mode [, buffering [, encoding [, errors [, newline [, closefd]]]]]])
Opens file and returns an appropriate I/O object. file is either a string specifying the name of a file or an integer file descriptor for an I/O stream that has already been opened. The result of this function is one of the I/O classes defined in the io module, depending on the settings of mode and buffering. If mode is any of the text modes such as 'r', 'w', 'a', or 'U', then an instance of TextIOWrapper is returned. If mode is a binary mode such as 'rb' or 'wb', then the result depends on the setting of buffering. If buffering is 0, then an instance of FileIO is returned for performing raw unbuffered I/O. If buffering is any other value, then an instance of BufferedReader, BufferedWriter, or BufferedRandom is returned, depending on the file mode. The encoding, errors, and newline parameters are only applicable to files opened in text mode and are passed to the TextIOWrapper constructor. closefd is only applicable if file is an integer descriptor and is passed to the FileIO constructor.

Abstract Base Classes

The io module defines the following abstract base classes that can be used for type checking and defining new I/O classes:

IOBase          Base class for all I/O classes.
RawIOBase       Base class for objects that support raw binary I/O. Inherits from IOBase.
BufferedIOBase  Base class for objects that support buffered binary I/O. Inherits from IOBase.
TextIOBase      Base class for objects that support text streams. Inherits from IOBase.

It is rare for most programmers to work with these classes directly. You should refer to the online documentation for details concerning their use and definition.

Note: The io module is a new addition to Python, first appearing in Python 3.0 and backported to Python 2.6. As of this writing, the module is immature and has extremely poor runtime performance, especially for any application that involves heavy amounts of text I/O. If you are using Python 2.6, you will be better served by the built-in open() function than by the I/O classes defined in the io module. If you are using Python 3, there seems to be no other alternative. Although performance improvements are likely in future releases, this layered approach to I/O coupled with Unicode decoding is unlikely to match the raw I/O performance found in the standard C library, which is the basis for I/O in Python 2.
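As a quick illustration of the text layer described above, the following sketch uses io.StringIO as an in-memory text stream and also shows the chunked-write pattern the text recommends. The buffer contents and chunk size here are illustrative choices, not values from the text.

```python
import io

# Create an in-memory text stream with some initial contents
buf = io.StringIO("header\n")

# The stream position starts at 0, so seek to the end before appending
buf.seek(0, io.SEEK_END)
buf.write("line 1\n")
buf.write("line 2\n")

# getvalue() returns everything accumulated in the buffer
contents = buf.getvalue()
print(contents)

# Writing a large payload in reasonably sized chunks, as recommended,
# instead of one giant write() call
data = b"x" * 1000000
out = io.BytesIO()
CHUNK = 65536
for i in range(0, len(data), CHUNK):
    out.write(data[i:i + CHUNK])
assert out.getbuffer().nbytes == len(data)
```

The same chunking idea applies to real buffered files opened with io.open() in binary mode.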
logging

The logging module provides a flexible facility for applications to log events, errors, warnings, and debugging information. This information can be collected, filtered, written to files, sent to the system log, and even sent over the network to remote machines. This section covers the essential details of using this module for most common cases.

Logging Levels

The main focus of the logging module concerns the issuing and handling of log messages. Each message consists of some text along with an associated level that indicates its severity. Levels have both a symbolic name and a numerical value, as follows:

CRITICAL   50   Critical errors/messages
ERROR      40   Errors
WARNING    30   Warning messages
INFO       20   Informative messages
DEBUG      10   Debugging
NOTSET      0   No level set

These different levels are the basis for various functions and methods throughout the logging module. For example, there are methods to issue log messages at each level as well as filters that work by blocking messages that don't meet a certain threshold value.

Basic Configuration

Before using any other functions in the logging module, you should first perform some basic configuration of a special object known as the root logger. The root logger is responsible for managing the default behavior of log messages, including the logging level, output destination, message format, and other basic details. The following function is used for configuration:

basicConfig([**kwargs])
Performs basic configuration of the root logger. This function should be called before any other logging calls are made. The function accepts a number of keyword arguments:

filename   Appends log messages to the file with the given filename.
filemode   Specifies the mode used to open the file. By default, mode 'a' (append) is used.
format     Format string used to produce log messages.
datefmt    Format string used to output dates and times.
level      Sets the level of the root logger. All log messages with a level equal to or above this level will be processed. Lower-level messages will be silently ignored.
stream     Provides an open file to which log messages are sent. The default stream is sys.stderr. This parameter may not be used simultaneously with the filename parameter.
Most of these parameters are self-explanatory. The format argument is used to specify the format of log messages along with optional contextual information such as filenames, levels, line numbers, and so forth. datefmt is a date format string compatible with the time.strftime() function. If omitted, the date format is set to the ISO8601 format. The following expansions are recognized in format:

%(name)s        Name of the logger
%(levelno)s     Numeric logging level
%(levelname)s   Text name of the logging level
%(pathname)s    Pathname of the source file where the logging call was executed
%(filename)s    Filename of the source file where the logging call was executed
%(funcName)s    Function name in which the logging call was made
%(module)s      Module name where the logging call executed
%(lineno)d      Line number where the logging call executed
%(created)f     Time when the logging call executed. The value is a number as returned by time.time().
%(asctime)s     ASCII-formatted date and time when the logging call was executed
%(msecs)s       Millisecond portion of the time when the logging call executed
%(thread)d      Thread ID
%(threadName)s  Thread name
%(process)d     Process ID
%(message)s     The logged message (supplied by the user)

Here is an example that illustrates a simple configuration where log messages with a level of INFO or higher are appended to a file:

    import logging
    logging.basicConfig(
        filename = "app.log",
        format   = "%(levelname)-10s %(asctime)s %(message)s",
        level    = logging.INFO
    )

With this configuration, a CRITICAL log message of 'Hello World' will appear as follows in the log file 'app.log':

    CRITICAL   <date> <time>,<msecs> Hello World

Logger Objects

In order to issue log messages, you have to obtain a Logger object. This section describes the process of creating, configuring, and using these objects.

Creating a Logger

To create a new Logger object, you use the following function:

getLogger([logname])
Returns a Logger instance associated with the name logname. If no such object exists, a new Logger instance is created and returned. logname is a string that specifies a name or series of names separated by periods (for example, 'app' or 'app.net'). If you omit logname, you will get the Logger object associated with the root logger.

The creation of Logger instances is different than what you find in most other library modules. When you create a logger, you always give it a name, which is passed to getLogger() as the logname parameter. Internally, getLogger() keeps a cache of the Logger instances along with their associated names. If another part of the program requests a logger with the same name, the previously created instance is returned. This arrangement greatly simplifies the handling of log messages in large applications because you don't have to figure out how to pass Logger instances around between different program modules. Instead, in each module where you want logging, you just use getLogger() to get a reference to the appropriate Logger object.

Picking Names

For reasons that will become clear later, you should always pick meaningful names when using getLogger(). For example, if your application is called 'app', then you should minimally use getLogger('app') at the top of every program module that makes up the application. For example:

    import logging
    log = logging.getLogger('app')

You might also consider adding the module name to the logger, such as getLogger('app.net') or getLogger('app.user'), in order to more clearly indicate the source of log messages. This can be done using a statement such as this:

    import logging
    log = logging.getLogger('app.' + __name__)

Adding the module name makes it easier to selectively turn off or reconfigure the logging for specific program modules, as will be described later.

Issuing Log Messages

If log is an instance of a Logger object (created using the getLogger() function in the previous section), the following methods are used to issue log messages at the different logging levels:

CRITICAL   log.critical(fmt [, *args [, exc_info [, extra]]])
ERROR      log.error(fmt [, *args [, exc_info [, extra]]])
WARNING    log.warning(fmt [, *args [, exc_info [, extra]]])
INFO       log.info(fmt [, *args [, exc_info [, extra]]])
DEBUG      log.debug(fmt [, *args [, exc_info [, extra]]])

The fmt argument is a format string that specifies the format of the log message. Any remaining arguments in args serve as arguments for format specifiers in the format string. The string formatting operator % is used to form the resulting message from these arguments. If multiple arguments are provided, they are placed into a tuple for formatting. If a single argument is provided, it is placed directly after the % when formatting. Thus, if you pass a single dictionary as an argument, the format string can include dictionary key names. Here are a few examples that illustrate how this works:

    log = logging.getLogger("app")

    # A log message using positional formatting
    log.critical("Can't connect to %s at port %d", host, port)

    # A log message using dictionary formatting
    parms = { 'host' : 'www.python.org', 'port' : 80 }
    log.critical("Can't connect to %(host)s at port %(port)d", parms)

The keyword argument exc_info, if set to True, adds exception information from sys.exc_info() to the log message. If exc_info is set to an exception tuple, such as that returned by sys.exc_info(), then that information is used. The extra keyword argument is a dictionary that supplies additional values for use in log message format strings (described later). Both exc_info and extra must be specified as keyword arguments.

When issuing log messages, you should avoid code that carries out string formatting at the time the message is issued (that is, formatting a message and then passing the result into the logging module). For example:

    log.critical("Can't connect to %s at port %d" % (host, port))

In this example, the string formatting operation always occurs before the call to log.critical() because the arguments to a function or method have to be fully evaluated. However, in the earlier examples, the parameters used for the string formatting operation are merely passed to the logging module and used only if the log message is actually going to be handled. This is a very subtle distinction, but because many applications choose to filter log messages or only emit logs during debugging, the first approach performs less work and runs faster when logging is disabled.

In addition to the methods shown, there are a few additional methods for issuing log messages on a Logger instance log:

log.exception(fmt [, *args ])
Issues a message at the ERROR level but adds exception information from the current exception being handled. This can only be used inside except blocks.

log.log(level, fmt [, *args [, exc_info [, extra]]])
Issues a logging message at the level specified by level. This can be used if the logging level is determined by a variable or if you want to have additional logging levels not covered by the five basic levels.

log.findCaller()
Returns a tuple (filename, lineno, funcname) corresponding to the caller's source filename, line number, and function name. This information is sometimes useful when issuing log messages, for example, if you want to add information about the location of the logging call to a message.
Filtering Log Messages

Each Logger object log has an internal level and filtering mechanism that determines which log messages get handled. The following two methods are used to perform simple filtering based on the numeric level of log messages:

log.setLevel(level)
Sets the level of log. Only logging messages with a level greater than or equal to level will be handled. All other messages are simply ignored. By default, the level is logging.NOTSET, which processes all log messages.

log.isEnabledFor(level)
Returns True if a logging message at level level would be processed.

Logging messages can also be filtered based on information associated with the message itself, for example, the filename, the line number, and other details. The following methods are used for this:

log.addFilter(filt)
Adds a filter object, filt, to the logger.

log.removeFilter(filt)
Removes a filter object, filt, from the logger.

In both methods, filt is an instance of a Filter object:

Filter(logname)
Creates a filter that only allows log messages from logname or its children to pass through. For example, if logname is 'app', then messages from loggers such as 'app', 'app.net', or 'app.user' will pass, but messages from a logger such as 'spam' will not.

Custom filters can be created by subclassing Filter and implementing the method filter(record), which receives as input a record containing information about a logging message. As output, True or False is returned, depending on whether or not the message should be handled. The record object passed to this method typically has the following attributes:

record.name        Logger name
record.levelname   Level name
record.levelno     Level number
record.pathname    Pathname of the module
record.filename    Base filename
record.module      Module name
record.exc_info    Exception information
record.lineno      Line number where the log message was issued
record.funcName    Function name where the log message was issued
record.created     Time at which issued
record.thread      Thread identifier
record.threadName  Thread name
record.process     PID of the currently executing process

The following example illustrates how you create a custom filter:

    class FilterFunc(logging.Filter):
        def __init__(self, name):
            self.funcname = name
        def filter(self, record):
            if record.funcName == self.funcname:
                return False
            else:
                return True

    log.addFilter(FilterFunc('foo'))    # Ignore all messages originating from foo()
    log.addFilter(FilterFunc('bar'))    # Ignore all messages originating from bar()

Message Propagation and Hierarchical Loggers

In advanced logging applications, Logger objects can be organized into a hierarchy. This is done by giving a logger object a name such as 'app.net.client'. Here, there are actually three different Logger objects, called 'app', 'app.net', and 'app.net.client'. When a message is issued on any of the loggers and it successfully passes that logger's filter, it propagates to and is handled by all of the parents. For example, a message successfully issued on 'app.net.client' also propagates to 'app.net', 'app', and the root logger. The following attributes and methods of a Logger object log control this propagation:

log.propagate
A Boolean flag that indicates whether or not messages propagate to the parent logger. By default, this is set to True.

log.getEffectiveLevel()
Returns the effective level of the logger. If a level has been set using setLevel(), that level is returned. If no level has been explicitly set (the level is logging.NOTSET in this case), this function returns the effective level of the parent logger instead. If none of the parent loggers have a level set, the effective level of the root logger is returned.

The primary purpose of hierarchical logging is to be able to more easily filter log messages originating from different parts of a large application. For example, if you wanted to shut down log messages from the 'app.net.client' part of an application, you might add configuration code such as the following:

    import logging
    logging.getLogger('app.net.client').propagate = False

Or, in this code, we're ignoring all but the most severe messages from a program module:

    import logging
    logging.getLogger('app.net.client').setLevel(logging.CRITICAL)

A subtle aspect of hierarchical loggers is that the decision to handle a log message is made entirely by the level and filters on the Logger object on which the message was issued, not by the filters on any of the parents. Thus, if a message passes the first set of filters, it is propagated to and handled by all of the parent loggers, regardless of their own filter and level settings, even if these filters would have rejected the message. At first glance, this behavior is counterintuitive and might even seem like a bug. However, setting the level of a child logger to a value that is lower than its parent is one way to override the settings on the parent, achieving a kind of level promotion. Here is an example:

    import logging

    # The top-level logger 'app'
    log = logging.getLogger('app')
    log.setLevel(logging.CRITICAL)       # Only accept CRITICAL-level messages

    # A child logger 'app.net'
    net_log = logging.getLogger('app.net')
    net_log.setLevel(logging.ERROR)      # Accept ERROR messages on 'app.net'
    # These messages will now be handled by the 'app' logger
    # even though its level is CRITICAL

When using hierarchical loggers, you only have to configure the logging objects where you want to change the filtering or propagation behavior. Because messages naturally propagate to the root logger, it will ultimately be responsible for producing the output, and any configuration that you made using basicConfig() will apply.

Message Handling

Normally, messages are handled by the root logger. However, any Logger object can have special handlers added to it that receive and process log messages. This is done using these methods of a Logger instance log:

log.addHandler(handler)
Adds a handler object to the logger.

log.removeHandler(handler)
Removes the handler object handler from the logger.

The logging module has a variety of pre-built handlers for writing log messages to files, streams, system logs, and so forth. These are described in further detail in the next section. However, the following example shows how loggers and handlers are hooked together using these methods:

    import logging
    import sys

    # Create a top-level logger called 'app'
    app_log = logging.getLogger("app")
    app_log.setLevel(logging.INFO)
    app_log.propagate = False

    # Add some message handlers to the 'app' log
    app_log.addHandler(logging.FileHandler('app.log'))
    app_log.addHandler(logging.StreamHandler(sys.stderr))

    # Issue some messages. These go to app.log and sys.stderr
    app_log.critical("Creeping death detected!")
    app_log.info("FYI")

When you add your own handlers to process messages, it is often your intent to override the behavior of the root logger. This is why message propagation is disabled in the previous example (i.e., the 'app' logger is simply going to handle all of the messages).
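The level-inheritance and propagation rules described above can be checked directly. The logger names here are made up for illustration:

```python
import logging

# Give the root logger an explicit level
root = logging.getLogger()
root.setLevel(logging.WARNING)

app = logging.getLogger("app")        # no level of its own (NOTSET)
net = logging.getLogger("app.net")
net.setLevel(logging.ERROR)

# A logger with no level set inherits the effective level of the
# nearest ancestor that has one
assert app.getEffectiveLevel() == logging.WARNING
assert net.getEffectiveLevel() == logging.ERROR

# Turning off propagation keeps messages from reaching ancestor
# handlers, but the effective-level lookup still climbs the hierarchy
client = logging.getLogger("app.net.client")
client.propagate = False
assert client.getEffectiveLevel() == logging.ERROR
```

This also shows why configuring only the loggers you care about is enough: everything else falls back to its ancestors.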
Handler Objects

The logging module provides a collection of pre-built handlers that can process log messages in various ways. These handlers are added to Logger objects using their addHandler() method. In addition, each handler can be configured with its own filtering and levels.

Built-in Handlers

The following handler objects are built in. Some of these handlers are defined in a submodule logging.handlers, which must be imported specifically if necessary.

handlers.DatagramHandler(host, port)
Sends log messages to a UDP server located on the given host and port. Log messages are encoded by taking the dictionary of the corresponding LogRecord object and encoding it using the pickle module. The transmitted network message consists of a 4-byte network-order (big-endian) length followed by the pickled record data. To reconstruct the message, the receiver must strip the length header, read the entire message, unpickle the contents, and call logging.makeLogRecord(). Because UDP is unreliable, network errors may result in lost log messages.

FileHandler(filename [, mode [, encoding [, delay]]])
Writes log messages to the file filename. mode is the file mode to use when opening the file and defaults to 'a'. encoding is the file encoding. delay is a Boolean flag that, if set to True, defers the opening of the log file until the first log message is issued. By default, it is False.

handlers.HTTPHandler(host, url [, method])
Uploads log messages to an HTTP server using HTTP GET or POST methods. host specifies the host machine, url is the URL to use, and method is either 'GET' (the default) or 'POST'. The log message is encoded by taking the dictionary of the corresponding LogRecord object and encoding it as a set of URL query-string variables using the urllib.urlencode() function.

handlers.MemoryHandler(capacity [, flushLevel [, target]])
This handler is used to collect log messages in memory and to flush them to another handler, target, periodically. capacity is the number of log messages buffered in memory. flushLevel is a numeric logging level that forces a memory flush should a logging message of that level or higher appear. The default value is ERROR. target is another handler object that receives the messages. If target is omitted, you will need to set a target using the setTarget() method of the resulting handler object in order for this handler to do anything.

handlers.NTEventLogHandler(appname [, dllname [, logtype]])
Sends messages to the event log on Windows NT, Windows 2000, or Windows XP. appname is the name of the application to use in the event log. dllname is a full pathname to a DLL or EXE file that provides message definitions to hold in the log. If omitted, dllname is set to 'win32service.pyd'. logtype is either 'Application', 'System', or 'Security'. The default value is 'Application'. This handler is only available if the win32 extensions for Python have been installed.

handlers.RotatingFileHandler(filename [, mode [, maxBytes [, backupCount [, encoding [, delay]]]]])
Writes log messages to the file filename. However, if the file exceeds the size specified by maxBytes, the file is rotated to filename.1 and a new log file, filename, is opened. backupCount specifies the maximum number of backup files to create. By default, the value of backupCount is 0. However, when specified, backup files are rotated through the sequence filename.1, filename.2, ..., filename.N, where filename.1 is always the most recent backup and filename.N is always the oldest backup. mode specifies the file mode to use when opening the log file. The default mode is 'a'. If maxBytes is 0 (the default), the log file is never rolled over and is allowed to grow indefinitely. encoding and delay have the same meaning as with FileHandler.

handlers.SMTPHandler(mailhost, fromaddr, toaddrs, subject [, credentials])
Sends log messages to a remote host using email. mailhost is the address of an SMTP server that can receive the message. The address can be a simple host name specified as a string or a tuple (host, port). fromaddr is the from address, toaddrs is the destination address, and subject is the subject to use in the message. credentials is a tuple (username, password) with the username and password.

handlers.SocketHandler(host, port)
Sends log messages to a remote host using a TCP socket connection. host and port specify the destination. Messages are sent in the same format as described for DatagramHandler. Unlike DatagramHandler, this handler reliably delivers log messages.

StreamHandler([fileobj])
Writes log messages to an already open file-like object, fileobj. If no argument is provided, messages are written to sys.stderr.

handlers.SysLogHandler([address [, facility]])
Sends log messages to a UNIX system logging daemon. address specifies the destination as a (host, port) tuple. If omitted, a destination of ('localhost', 514) is used. facility is an integer facility code and is set to SysLogHandler.LOG_USER by default. A full list of facility codes can be found in the definition of SysLogHandler.

handlers.TimedRotatingFileHandler(filename [, when [, interval [, backupCount [, encoding [, delay [, utc]]]]]])
The same as RotatingFileHandler, but the rotation of files is controlled by time instead of file size. interval is a number, and when is a string that specifies units. Possible values for when are 'S' (seconds), 'M' (minutes), 'H' (hours), 'D' (days), 'W' (weeks), and 'midnight' (roll over at midnight). For example, setting interval to 3 and when to 'D' rolls the log every three days. backupCount specifies the maximum number of backup files to keep. utc is a Boolean flag that determines whether to use local time (the default) or UTC time.

handlers.WatchedFileHandler(filename [, mode [, encoding [, delay]]])
The same as FileHandler, but the inode and device of the opened log file are monitored. If either has changed since the last log message was issued, the file is closed and reopened again using the same filename. These changes might occur if a log file has been deleted or moved as a result of a log-rotation operation carried out externally to the running program. This handler only works on UNIX systems.

Handler Configuration

Each handler object can be configured with its own level and filtering. The following methods are used to do this:

h.setLevel(level)
Sets the threshold of messages to be handled. level is a numeric code such as ERROR or CRITICAL.

h.addFilter(filt)
Adds a filter object, filt, to the handler. See the addFilter() method of Logger objects for more information.

h.removeFilter(filt)
Removes a filter object, filt, from the handler.

It is important to stress that levels and filters can be set on handlers independently from any settings used on the Logger objects to which the handlers are attached. Here is an example that illustrates this:

    import logging
    import sys

    # Create a handler that prints CRITICAL-level messages to stderr
    crit_hand = logging.StreamHandler(sys.stderr)
    crit_hand.setLevel(logging.CRITICAL)

    # Create a top-level logger called 'app'
    app_log = logging.getLogger("app")
    app_log.setLevel(logging.INFO)
    app_log.addHandler(logging.FileHandler('app.log'))
    app_log.addHandler(crit_hand)

In this example, there is a single logger called 'app' with a level of INFO. Two handlers are attached to it, but one of the handlers (crit_hand) has its own level setting of CRITICAL. Although this handler will receive log messages with a level of INFO or higher, it selectively discards those that are not CRITICAL.

Handler Cleanup

The following methods are used on handlers to perform cleanup:

h.flush()
Flushes all logging output.

h.close()
Closes the handler.

Message Formatting

By default, handler objects emit log messages exactly as they are formatted in logging calls. However, sometimes you want to add additional contextual information to the messages, such as timestamps, filenames, line numbers, and so forth. This section describes how this extra information can be automatically added to log messages.

Formatter Objects

To change the log message format, you must first create a Formatter object:

Formatter([fmt [, datefmt]])
Creates a new Formatter object. fmt provides a format string for messages. Within fmt, you can place various expansions as previously described for the basicConfig() function. datefmt is a date format string compatible with the time.strftime() function. If omitted, the date format is set to the ISO8601 format.

To take effect, Formatter objects must be attached to handler objects. This is done using the setFormatter() method of a handler instance h:

h.setFormatter(format)
Sets the message Formatter object used to create log messages on the handler instance h. format must be an instance of Formatter.

Here is an example that illustrates how to customize the log message format on a handler:

    import logging
    import sys

    # Set the message format
    format = logging.Formatter("%(levelname)-10s %(asctime)s %(message)s")

    # Create a handler that prints CRITICAL-level messages to stderr
    crit_hand = logging.StreamHandler(sys.stderr)
    crit_hand.setLevel(logging.CRITICAL)
    crit_hand.setFormatter(format)

In this example, a custom formatter is set on the crit_hand handler. If a logging message such as "Creeping death detected" is processed by this handler, the following log message is produced:

    CRITICAL   <date> <time>,<msecs> Creeping death detected

Adding Extra Context to Messages

In certain applications, it is useful to add additional context information to log messages. This extra information can be provided in one of two ways. First, all of the basic logging operations (e.g., log.critical(), log.warning(), etc.) have a keyword parameter extra that is used to supply a dictionary of additional fields for use in message format strings. These fields are merged in with the context data previously described for Formatter objects. Here is an example:

    import logging, socket
    logging.basicConfig(
        format = "%(hostname)s %(levelname)-10s %(asctime)s %(message)s"
    )

    # Some extra context
    netinfo = {
        'hostname' : socket.gethostname(),
        'ip'       : socket.gethostbyname(socket.gethostname())
    }
    log = logging.getLogger('app')

    # Issue a log message with the extra context data
operating system services log critical("could not connect to server"extra=netinfothe downside of this approach is that you have to make sure every logging operation includes the extra information or else the program will crash an alternative approach is to use the logadapter class as wrapper for an existing logger logadapter(log [extra]creates wrapper around logger object log extra is dictionary of extra context information to be supplied to message formatters an instance of logadapter has the same interface as logger object howeveroperations that issue log messages will automatically add the extra information supplied in extra here is an example of using logadapter objectimport loggingsocket logging basicconfigformat "%(hostname) %(levelname)- %(asctime) %(message)ssome extra context netinfo 'hostnamesocket gethostname()'ipsocket gethostbyname(socket gethostname()create logger log logging logadapter(logging getlogger("app")netinfoissue log message extra context data is supplied by the logadapter log critical("could not connect to server"miscellaneous utility functions the following functions in logging control few other aspects of loggingdisable(levelglobally disables all logging messages below the level specified in level this can be used to turn off logging on applicationwide basis--for instanceif you want to temporarily disable or reduce the amount of logging output addlevelname(levellevelnamecreates an entirely new logging level and name level is number and levelname is string this can be used to change the names of the built-in levels or to add more levels than are supported by default getlevelname(levelreturns the name of the level corresponding to the numeric value level shutdown(shuts down all logging objectsflushing output if necessary logging configuration setting an application to use the logging module typically involves the following basic steps use getlogger(to create various logger objects set parameters such as the lib fl ff
18,214
levelas appropriate create handler objects by instantiating one of the various types of handlers (filehandlerstreamhandlersockethandlerand so onand set an appropriate level create message formatter objects and attach them to the handler objects using the setformatter(method attach the handler objects to the logger objects using the addhandler(method because the configuration of each step can be somewhat involvedyour best bet is to put all the logging configuration into single well-documented location for exampleyou might create file applogconfig py that is imported by the main program of your applicationapplogconfig py import logging import sys set the message format format logging formatter("%(levelname)- %(asctime) %(message) "create handler that prints critical level messages to stderr crit_hand logging streamhandler(sys stderrcrit_hand setlevel(logging criticalcrit_hand setformatter(formatcreate handler that prints messages to file applog_hand logging filehandler('app log'applog_hand setformatter(formatcreate top-level logger called 'appapp_log logging getlogger("app"app_log setlevel(logging infoapp_log addhandler(applog_handapp_log addhandler(crit_handchange the level on the 'app netlogger logging getlogger("app net"setlevel(logging errorif changes need to be made to any part of the logging configurationhaving everything in one location makes things easier to maintain keep in mind that this special file only needs to be imported once and in only one location in the program in other parts of the code where you want to issue log messagesyou simply include code like thisimport logging app_log logging getlogger("app"app_log critical("an error occurred"the logging config submodule as an alternative to hard-coding the logging configuration in python codeit is also possible to configure the logging module through the use of an ini-format configuration file to do thisuse the following functions found in logging config fileconfig(filename [defaults 
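The four configuration steps above can be condensed into a short runnable sketch. This is not the applogconfig.py file shown above; it logs to an in-memory stream so that the example is self-contained, and the logger name 'app.demo' and the format string are illustrative only:

```python
import io
import logging

buf = io.StringIO()

# Step 1: create a Logger object and set its level
log = logging.getLogger("app.demo")
log.setLevel(logging.INFO)
log.propagate = False          # keep the example self-contained

# Step 2: create a Handler object and set an appropriate level
hand = logging.StreamHandler(buf)
hand.setLevel(logging.CRITICAL)

# Step 3: create a Formatter and attach it with setFormatter()
hand.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

# Step 4: attach the Handler to the Logger with addHandler()
log.addHandler(hand)

log.info("Routine message")                  # below CRITICAL, not written
log.critical("Could not connect to server")  # written to buf
print(buf.getvalue().strip())
```

Because the handler's level is CRITICAL, the INFO message is filtered out even though the logger itself would accept it.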
The logging.config Submodule

As an alternative to hard-coding the logging configuration in Python code, it is also possible to configure the logging module through the use of an INI-format configuration file. To do this, use the following function found in logging.config:

fileConfig(filename [, defaults [, disable_existing_loggers]])
Reads the logging configuration from the configuration file filename. defaults is a dictionary of default configuration parameters for use in the config file. The specified filename is read using the ConfigParser module. disable_existing_loggers is a Boolean flag that specifies whether or not any existing loggers are disabled when new configuration data is read. By default, this is True.

The online documentation for the logging module goes into some detail on the expected format of configuration files. However, experienced programmers can probably extrapolate from the following example, which is a configuration-file version of applogconfig.py shown in the previous section:

    # applogconfig.ini
    #
    # Configuration file for setting up logging
    #
    # The following sections provide names for Logger, Handler, and
    # Formatter objects that will be configured later in the file

    [loggers]
    keys=root,app,app_net

    [handlers]
    keys=crit,applog

    [formatters]
    keys=format

    [logger_root]
    level=NOTSET
    handlers=

    [logger_app]
    level=INFO
    propagate=0
    qualname=app
    handlers=crit,applog

    [logger_app_net]
    level=ERROR
    propagate=0
    qualname=app.net
    handlers=

    [handler_crit]
    class=StreamHandler
    level=CRITICAL
    formatter=format
    args=(sys.stderr,)

    [handler_applog]
    class=FileHandler
    level=NOTSET
    formatter=format
    args=('app.log',)

    [formatter_format]
    format=%(levelname)-10s %(asctime)s %(message)s
    datefmt=

To read this configuration file and set up logging, you would use code like this:

    import logging.config
    logging.config.fileConfig('applogconfig.ini')

As before, modules that want to issue log messages do not need to worry about the details of loading the logging configuration. They merely import the logging module and get a reference to the appropriate logger object. For example:

    import logging
    app_log = logging.getLogger("app")
    ...
    app_log.critical("An error occurred")

Performance Considerations

Adding logging to an application can severely degrade its performance if you aren't careful. However, there are some techniques that can be used to reduce the overhead.

First, Python's optimized mode (-O) removes all code that is conditionally executed using statements such as if __debug__: statements. If the sole purpose of logging is debugging, you could conditionally execute all of the logging calls and have the calls removed in optimized mode.

A second technique would be to use a null object in place of a logger object when logging is to be completely disabled. This is different than using None. Instead, you want to use an instance of an object that silently swallows all operations that get performed on it. For example:

    class Null(object):
        def __init__(self, *args, **kwargs): pass
        def __call__(self, *args, **kwargs): return self
        def __getattribute__(self, name): return self
        def __setattr__(self, name, value): pass
        def __delattr__(self, name): pass

    log = Null()
    log.critical("An error occurred.")    # Does nothing

Depending on your cleverness, logging can also be managed through the use of decorators and metaclasses. Because these features of Python operate at the time that functions, methods, and classes are defined, they can be used to selectively add or remove logging features from parts of a program in a way that does not impact performance when logging is disabled. Please refer to "Functions and Functional Programming" and "Classes and Object-Oriented Programming" for further details.

Notes

The logging module provides a large number of customization options not discussed here. Readers should consult the online documentation for further details. It is safe to use the logging module in programs that use threads.
In particular, it is not necessary to add locking operations around code that is issuing log messages.

mmap

The mmap module provides support for a memory-mapped file object. This object behaves both like a file and a byte string and can be used in most places where an ordinary file or byte string is expected. Furthermore, the contents of a memory-mapped file are mutable. This means that modifications can be made using index-assignment and slice-assignment operators. Unless a private mapping of the file has been made, such changes directly alter the contents of the underlying file.

A memory-mapped file is created by the mmap() function, which is slightly different on UNIX and Windows.

mmap(fileno, length [, flags [, prot [, access [, offset]]]])
(UNIX) Returns an mmap object that maps length bytes from the file with an integer file descriptor, fileno. If fileno is -1, anonymous memory is mapped. flags specifies the nature of the mapping and is one of the following:

    Flag           Meaning
    MAP_PRIVATE    Creates a private copy-on-write mapping. Changes to the
                   object will be private to this process.
    MAP_SHARED     Shares the mapping with all other processes mapping the
                   same areas of the file. Changes to the object will
                   affect all mappings.

The default flags setting is MAP_SHARED. prot specifies the memory protections of the object and is the bitwise OR of the following:

    Setting        Meaning
    PROT_READ      Data can be read from the object.
    PROT_WRITE     Modifications can be made to the object.
    PROT_EXEC      The object can contain executable instructions.

The default value of prot is PROT_READ | PROT_WRITE. The modes specified in prot must match the access permissions used to open the underlying file descriptor fileno. In most cases, this means that the file should be opened in read/write mode (for example, os.open(name, os.O_RDWR)).

The optional access parameter may be used as an alternative to flags and prot. If given, it has one of the following values:

    Access         Meaning
    ACCESS_READ    Read-only access.
    ACCESS_WRITE   Read/write access with write-through. Modifications
                   affect the underlying file.
    ACCESS_COPY    Read/write access with copy-on-write. Modifications
                   affect memory but do not change the underlying file.

When access is supplied, it is typically given as a keyword argument--for example, mmap(fileno, length, access=ACCESS_READ). It is an error to supply values for both access and flags. The offset parameter specifies the number of bytes from the start of the file and defaults to 0. It must be a multiple of mmap.ALLOCATIONGRANULARITY.

mmap(fileno, length [, tagname [, access [, offset]]])
(Windows) Returns an mmap object that maps length bytes from the file specified by the integer file descriptor fileno. Use a fileno of -1 for anonymous memory. If length is larger than the current size of the file, the file is extended to length bytes. If length is 0, the current length of the file is used as the length as long as the file is nonempty (otherwise, an exception will be raised). tagname is an optional string that can be used to name the mapping. If tagname refers to an existing mapping, that mapping is opened. Otherwise, a new mapping is created. If tagname is None, an unnamed mapping is created. access is an optional parameter that specifies the access mode. It takes the same values for access as described for the UNIX version of mmap() shown earlier. By default, access is ACCESS_WRITE. offset is the number of bytes from the beginning of the file and defaults to 0. It must be a multiple of mmap.ALLOCATIONGRANULARITY.

A memory-mapped file object, m, supports the following methods:

m.close()
Closes the file. Subsequent operations will result in an exception.

m.find(string [, start])
Returns the index of the first occurrence of string. start specifies an optional starting position. Returns -1 if no match is found.

m.flush([offset, size])
Flushes modifications of the in-memory copy back to the file system. offset and size specify an optional range of bytes to flush. Otherwise, the entire mapping is flushed.

m.move(dst, src, count)
Copies count bytes starting at index src to the destination index dst. This copy is performed using the memmove() function, which is guaranteed to work correctly when the source and destination regions happen to overlap.

m.read(n)
Reads up to n bytes from the current file position and returns the data as a string.

m.read_byte()
Reads a single byte from the current file position and returns it as a string of length 1.

m.readline()
Returns a line of input starting at the current file position.

m.resize(newsize)
Resizes the memory-mapped object to contain newsize bytes.

m.seek(pos [, whence])
Sets the file position to a new value. pos and whence have the same meaning as for the seek() method on file objects.

m.size()
Returns the length of the file. This value may be larger than the size of the memory-mapped region.
m.tell()
Returns the current value of the file pointer.
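Before the remaining write methods, here is a small sketch exercising several of the calls described above (POSIX-flavored; the temporary file and its contents are purely illustrative):

```python
import mmap
import os
import tempfile

# Create a temporary file with some sample bytes to map.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world\n")

# Map the entire file (length 0) using the portable access= form.
m = mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE)
pos = m.find(b"world")
print(pos)                     # 6
m[0:5] = b"HELLO"              # slice assignment mutates the underlying file
m.seek(0)
line = m.readline()
print(line)                    # b'HELLO world\n'
m.close()
os.close(fd)
os.remove(path)
```

Note that the slice assignment writes straight through to the file because the mapping was opened with ACCESS_WRITE rather than ACCESS_COPY.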
m.write(string)
Writes a string of bytes to the file at the current file pointer.

m.write_byte(byte)
Writes a single byte into memory at the current file pointer.

Notes

Although UNIX and Windows supply slightly different mmap() functions, this module can be used in a portable manner by relying on the optional access parameter that is common to both functions. For example, mmap(fileno, length, access=ACCESS_WRITE) will work on both UNIX and Windows. Certain memory mappings may only work with a length that is a multiple of the system page size, which is contained in the constant mmap.PAGESIZE. On UNIX SVR4 systems, anonymous mapped memory can be obtained by calling mmap() on the file /dev/zero, opened with appropriate permissions. On UNIX BSD systems, anonymous mapped memory can be obtained by calling mmap() with a negative file descriptor and the flag mmap.MAP_ANON.

msvcrt

The msvcrt module provides access to a number of useful functions in the Microsoft Visual C runtime library. This module is available only on Windows.

getch()
Reads a keypress and returns the resulting character. This call blocks if a keypress is not available. If the pressed key was a special function key, the call returns '\000' or '\xe0' and the next call returns the keycode. This function doesn't echo characters to the console, nor can the function be used to read Ctrl+C.

getwch()
The same as getch(), except that a Unicode character is returned.

getche()
Like getch(), except that characters are echoed (if printable).

getwche()
The same as getche(), except that a Unicode character is returned.

get_osfhandle(fd)
Returns the file handle for file descriptor fd. Raises IOError if fd is not recognized.

heapmin()
Forces the internal Python memory manager to return unused blocks to the operating system. This works only on Windows NT and raises IOError on failure.
kbhit()
Returns True if a keypress is waiting to be read.

locking(fd, mode, nbytes)
Locks part of a file, given a file descriptor from the runtime. nbytes is the number of bytes to lock relative to the current file pointer. mode is one of the following:

    Setting      Description
    LK_UNLCK     Unlocks the file region.
    LK_LOCK      Locks the file region.
    LK_NBLCK     Locks the file region, nonblocking.
    LK_RLCK      Locks for writing.
    LK_NBRLCK    Locks for writing, nonblocking.

Attempts to acquire a lock that take more than approximately 10 seconds result in an IOError exception.

open_osfhandle(handle, flags)
Creates a C runtime file descriptor from the file handle handle. flags is the bitwise OR of os.O_APPEND, os.O_RDONLY, and os.O_TEXT. Returns an integer file descriptor that can be used as a parameter to os.fdopen() to create a file object.

putch(char)
Prints the character char to the console without buffering.

putwch(char)
The same as putch(), except that char is a Unicode character.

setmode(fd, flags)
Sets the line-end translation mode for file descriptor fd. flags is os.O_TEXT for text mode and os.O_BINARY for binary mode.

ungetch(char)
Causes the character char to be "pushed back" into the console buffer. It will be the next character read by getch() or getche().

ungetwch(char)
The same as ungetch(), except that char is a Unicode character.

Note

A wide variety of Win32 extensions are available that provide access to the Microsoft Foundation Classes, COM components, graphical user interfaces, and so forth. These topics are far beyond the scope of this book, but detailed information about many of these topics is available in Python Programming on Win32 by Mark Hammond and Andy Robinson (O'Reilly & Associates). Also, an extensive list of contributed modules for use under Windows is available.
See also: winreg.

optparse

The optparse module provides high-level support for processing UNIX-style command-line options supplied in sys.argv. A simple example of using the module is found elsewhere in this book. Use of optparse primarily focuses on the OptionParser class.

OptionParser([**args])
Creates a new command option parser and returns an OptionParser instance. A variety of optional keyword arguments can be supplied to control configuration. These keyword arguments are described in the following list:

    Keyword Argument    Description
    add_help_option     Specifies whether or not a special help option
                        (--help and -h) is supported. By default, this is
                        set to True.
    conflict_handler    Specifies the handling of conflicting command-line
                        options. May be set to either 'error' (the default
                        value) or 'resolve'. In 'error' mode, an
                        optparse.OptionConflictError exception will be
                        raised if conflicting option strings are added to
                        the parser. In 'resolve' mode, conflicts are
                        resolved so that options added later take priority.
                        However, earlier options may still be available if
                        they were added under multiple names and no
                        conflicts exist for at least one of the names.
    description         A string that provides a description of the program
                        for display during help. This string will
                        automatically be reformatted to fit the screen when
                        displayed.
    formatter           An instance of an optparse.HelpFormatter class used
                        to format text when printing help. May be either
                        optparse.IndentedHelpFormatter (the default) or
                        optparse.TitledHelpFormatter.
    option_class        The Python class that's used to hold information
                        about each command-line option. The default class
                        is optparse.Option.
    option_list         A list of options used to populate the parser. By
                        default, this list is empty, and options are added
                        using the add_option() method instead. If supplied,
                        this list contains objects of type Option.
    prog                The program name used to replace '%prog' in help
                        text.
    usage               The usage string that's printed when the --help
                        option is used or incorrect options are passed. The
                        default value is the string '%prog [options]',
                        where the '%prog' keyword gets replaced with either
                        the value of os.path.basename(sys.argv[0]) or the
                        value of the prog keyword argument (if supplied).
                        The value optparse.SUPPRESS_USAGE can be given to
                        suppress the usage message entirely.
    version             A version string that's printed when the --version
                        option is supplied. By default, version is None and
                        no --version option is added. When this string is
                        supplied, --version is automatically added. The
                        special keyword '%prog' is replaced by the program
                        name.
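A short sketch showing a few of these keyword arguments in use (the usage and version strings here are invented for illustration):

```python
from optparse import OptionParser

# Customize the usage string and enable an automatic --version option.
p = OptionParser(usage="%prog [options] filename",
                 version="%prog 1.0")
opts, args = p.parse_args(["input.txt"])
print(args)             # ['input.txt']
print(p.get_usage())
```

Because a version string was supplied, running the program with --version would print "1.0" after expanding '%prog' to the program name.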
Unless you really need to customize option processing in some way, an OptionParser will usually be created with no arguments. For example:

    p = optparse.OptionParser()

An instance, p, of OptionParser supports the following methods:

p.add_option(name1 [, name2, ... [, **parms]])
Adds a new option to p. name1, name2, and so on are all of the various names for the option. For example, you might include short and long option names such as '-f' and '--file'. Following the option names, an optional set of keyword arguments is supplied that specifies how the option will be processed when parsed. These keyword arguments are described in the following list:

    Keyword Argument    Description
    action              The action to perform when the option is parsed.
                        Acceptable values are as follows:
                        'store' -- Option has an argument that is read and
                        stored. This is the default if no action is
                        specified explicitly.
                        'store_const' -- The option takes no arguments, but
                        when the option is encountered, a constant value
                        specified with the const keyword argument is
                        stored.
                        'store_true' -- Like 'store_const', but stores a
                        Boolean True when the option is parsed.
                        'store_false' -- Like 'store_true', but stores
                        False instead.
                        'append' -- Option has an argument that is appended
                        to a list when parsed. This is used if the same
                        command-line option is used to specify multiple
                        values.
                        'count' -- Option takes no arguments, but a counter
                        value is stored. The counter value is increased by
                        one each time the argument is encountered.
                        'callback' -- Invokes a callback function specified
                        with the callback keyword argument when the option
                        is encountered.
                        'help' -- Prints a help message when the option is
                        parsed. This is only needed if you want help to be
                        displayed via a different option than the standard
                        -h or --help option.
                        'version' -- Prints the version number supplied to
                        OptionParser(), if any. Only used if you want to
                        display version information using an option other
                        than the standard --version option.
    callback            Specifies a callback function to be invoked when
                        the option is encountered. This callback function
                        is a Python callable object that is invoked as
                        callback(option, opt_str, value, parser, *args,
                        **kwargs). The option argument is an instance of
                        optparse.Option, opt_str is the option string
                        supplied on the command line that triggered the
                        callback, value is the value of the option (if
                        any), parser is the instance of OptionParser that's
                        running, args are positional arguments supplied
                        using the callback_args keyword argument, and
                        kwargs are keyword arguments supplied using the
                        callback_kwargs keyword argument.
    Keyword Argument    Description
    callback_args       Optional positional arguments supplied to a
                        callback function specified with the callback
                        argument.
    callback_kwargs     Optional keyword arguments supplied to a callback
                        function specified with the callback argument.
    choices             A list of strings that specifies all possible
                        option values. Used when an option only has a
                        limited set of values (for example, ['small',
                        'medium', 'large']).
    const               The constant value that's stored with the
                        'store_const' action.
    default             Sets the default value of the option if not
                        supplied. By default, the default value is None.
    dest                Sets the name of the attribute used to store option
                        values during parsing. Normally the name is derived
                        from the option name itself.
    help                Help text for this particular option. If this is
                        not supplied, the option will be listed in help
                        without a description. The value
                        optparse.SUPPRESS_HELP can be used to hide an
                        option. The special keyword '%default' is replaced
                        by the option default value in the help string.
    metavar             Specifies the name of an option argument that's
                        used when printing help text.
    nargs               Specifies the number of option arguments for
                        actions that expect arguments. The default value is
                        1. If a number greater than 1 is used, option
                        arguments will be collected into a tuple that is
                        then used whenever arguments are handled.
    type                Specifies the type of an option. Valid types are
                        'string' (the default), 'int', 'long', 'choice',
                        'float', and 'complex'.

p.disable_interspersed_args()
Disallows the mixing of simple options with positional arguments. For example, if '-x' and '-y' are options that take no parameters, the options must appear before any arguments (for example, 'prog -x -y arg1 arg2 arg3').

p.enable_interspersed_args()
Allows the mixing of options with positional arguments. For example, if '-x' and '-y' are simple options that take no parameters, they may be mixed with the arguments, such as in 'prog -x arg1 arg2 -y arg3'. This is the default behavior.

p.parse_args([arglist])
Parses command-line options and returns a tuple (options, args), where options is an object containing the values of all the options and args is a list of all the remaining positional arguments left over. The options object stores all the option data in attributes with names that match the option name. For example, the option '--output' would have its value stored in options.output. If the option does not appear, the value will be None. The name of the attribute can be set using the dest keyword argument to add_option(), described previously. By default, arguments are taken from sys.argv[1:]. However, a different source of arguments can be supplied as an optional argument, arglist.
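The callback machinery and parse_args() can be combined as in this sketch (the option names and the record() function are invented for illustration):

```python
from optparse import OptionParser

seen = []

def record(option, opt_str, value, parser):
    # opt_str is the option string that triggered the callback;
    # parser.values holds the options object being filled in.
    seen.append(opt_str)

p = OptionParser()
p.add_option("-x", action="callback", callback=record)
p.add_option("-v", action="count", dest="verbose", default=0)
opts, args = p.parse_args(["-x", "-v", "-v", "extra"])
print(seen)           # ['-x']
print(opts.verbose)   # 2
print(args)           # ['extra']
```

Passing an explicit argument list to parse_args(), as done here, is also convenient for testing parsers without touching sys.argv.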
p.set_defaults(dest=value, dest=value, ...)
Sets the default values of particular option destinations. You simply supply keyword arguments that specify the destinations you wish to set. The names of the keyword arguments should match the names specified using the dest parameter in add_option(), described earlier.

p.set_usage(usage)
Changes the usage string displayed in text produced by the --help option.

Example:

    # foo.py
    import optparse
    p = optparse.OptionParser()

    # A simple option, with no arguments
    p.add_option("-t", action="store_true", dest="tracing")

    # An option that accepts a string argument
    p.add_option("-o", "--outfile", action="store", type="string",
                 dest="outfile")

    # An option that requires an integer argument
    p.add_option("-d", "--debuglevel", action="store", type="int",
                 dest="debug")

    # An option with a few choices
    p.add_option("--speed", action="store", type="choice", dest="speed",
                 choices=["slow", "fast", "ludicrous"])

    # An option taking multiple arguments
    p.add_option("--coord", action="store", type="int", dest="coord",
                 nargs=2)

    # A set of options that control a common destination
    p.add_option("--novice", action="store_const", const="novice",
                 dest="mode")
    p.add_option("--guru", action="store_const", const="guru",
                 dest="mode")

    # Set default values for the various option destinations
    p.set_defaults(tracing=False, debug=0, speed="fast",
                   coord=(0, 0), mode="novice")

    # Parse the arguments
    opt, args = p.parse_args()

    # Print option values
    print "tracing :", opt.tracing
    print "outfile :", opt.outfile
    print "debug   :", opt.debug
    print "speed   :", opt.speed
    print "coord   :", opt.coord
    print "mode    :", opt.mode

    # Print remaining arguments
    print "args    :", args
Here is a short interactive UNIX session that shows how the previous code works:

    % python foo.py -h
    Usage: foo.py [options]

    Options:
      -h, --help            show this help message and exit
      -t
      -o OUTFILE, --outfile=OUTFILE
      -d DEBUG, --debuglevel=DEBUG
      --speed=SPEED
      --coord=COORD
      --novice
      --guru
    % python foo.py -t -o outfile.dat -d 1 --coord 3 4 --speed=ludicrous blah
    tracing : True
    outfile : outfile.dat
    debug   : 1
    speed   : ludicrous
    coord   : (3, 4)
    mode    : novice
    args    : ['blah']
    % python foo.py --speed=insane
    Usage: foo.py [options]

    foo.py: error: option --speed: invalid choice: 'insane'
    (choose from 'slow', 'fast', 'ludicrous')

Notes

When specifying option names, use a single dash to specify a short name such as '-x' and a double dash to specify a long name such as '--exclude'. An OptionError exception will be raised if you attempt to define an option that is a mix of the two styles, such as '-exclude'.

Python also includes a module, getopt, that provides support for command-line parsing in a style similar to the C library of the same name. For all practical purposes, there is no benefit to using that module over optparse (which is much higher level and requires far less coding).

The optparse module contains a considerable number of advanced features related to customization and specialized handling of certain kinds of command-line options. However, none of these features are required for the most common types of command-line option parsing. Readers should consult the online library documentation for more details and additional examples.

os

The os module provides a portable interface to common operating-system services. It does this by searching for an OS-dependent built-in module such as nt or posix and exporting the functions and data as found there. Unless otherwise noted, functions are available on Windows and UNIX. UNIX systems include both Linux and Mac OS X.
The following general-purpose variables are defined:

environ
A mapping object representing the current environment variables. Changes to the mapping are reflected in the current environment. If the putenv() function is also available, then changes are also reflected in subprocesses.

linesep
The string used to separate lines on the current platform. May be a single character such as '\n' for POSIX or multiple characters such as '\r\n' for Windows.

name
The name of the OS-dependent module imported: 'posix', 'nt', 'dos', 'mac', 'ce', 'java', 'os2', or 'riscos'.

path
The OS-dependent standard module for pathname operations. This module can also be loaded using import os.path.

Process Environment

The following functions are used to access and modify various parameters related to the environment in which a process runs. Process, group, process group, and session IDs are integers unless otherwise noted.

chdir(path)
Changes the current working directory to path.

chroot(path)
Changes the root directory of the current process (UNIX).

ctermid()
Returns a string with the filename of the control terminal for the process (UNIX).

fchdir(fd)
Changes the current working directory. fd is a file descriptor to an opened directory (UNIX).

getcwd()
Returns a string with the current working directory.

getcwdu()
Returns a Unicode string with the current working directory.

getegid()
Returns the effective group ID (UNIX).

geteuid()
Returns the effective user ID (UNIX).
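A small sketch of the variables and directory functions above (the directory chosen for the chdir() call is arbitrary):

```python
import os

print(os.name)              # e.g. 'posix' on UNIX
print(repr(os.linesep))     # e.g. '\n'

# Save, change, and restore the working directory.
orig = os.getcwd()
os.chdir(os.path.dirname(orig) or orig)   # move to the parent directory
os.chdir(orig)                            # restore
ok = (os.getcwd() == orig)
print(ok)                                 # True
```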
getgid()
Returns the real group ID of the process (UNIX).

getgroups()
Returns a list of integer group IDs to which the process owner belongs (UNIX).

getlogin()
Returns the user name associated with the effective user ID (UNIX).

getpgid(pid)
Returns the process group ID of the process with process ID pid. If pid is 0, the process group of the calling process is returned (UNIX).

getpgrp()
Returns the ID of the current process group. Process groups are typically used in conjunction with job control. The process group is not necessarily the same as the group ID of the process (UNIX).

getpid()
Returns the real process ID of the current process (UNIX and Windows).

getppid()
Returns the process ID of the parent process (UNIX).

getsid(pid)
Returns the process session identifier of process pid. If pid is 0, the identifier of the current process is returned (UNIX).

getuid()
Returns the real user ID of the current process (UNIX).

putenv(varname, value)
Sets the environment variable varname to value. Changes affect subprocesses started with os.system(), popen(), fork(), and execv(). Assignments to items in os.environ automatically call putenv(). However, calls to putenv() don't update os.environ (UNIX and Windows).

setegid(egid)
Sets the effective group ID (UNIX).

seteuid(euid)
Sets the effective user ID (UNIX).

setgid(gid)
Sets the group ID of the current process (UNIX).

setgroups(groups)
Sets the group access list of the current process. groups is a sequence of integers specifying group identifiers. Can only be called by root (UNIX).
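As noted above, assigning to an item of os.environ calls putenv() automatically, so the change is visible to subprocesses started afterward. A sketch (the variable name APP_MODE is invented for illustration):

```python
import os

# Assignment to os.environ also calls putenv() behind the scenes.
os.environ["APP_MODE"] = "debug"
print(os.environ["APP_MODE"])     # debug

# Identity queries (UNIX-only, as noted above).
if hasattr(os, "getuid"):
    print(os.getuid() >= 0)       # True
```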
setpgrp()
Creates a new process group by calling the system call setpgrp() or setpgrp(0, 0), depending on which version is implemented (if any). Returns the ID of the new process group (UNIX).

setpgid(pid, pgrp)
Assigns process pid to process group pgrp. If pid is equal to pgrp, the process becomes a new process group leader. If pid is not equal to pgrp, the process joins an existing group. If pid is 0, the process ID of the calling process is used. If pgrp is 0, the process specified by pid becomes a process group leader (UNIX).

setreuid(ruid, euid)
Sets the real and effective user ID of the calling process (UNIX).

setregid(rgid, egid)
Sets the real and effective group ID of the calling process (UNIX).

setsid()
Creates a new session and returns the newly created session ID. Sessions are typically associated with terminal devices and the job control of processes that are started within them (UNIX).

setuid(uid)
Sets the real user ID of the current process. This function is privileged and often can be performed only by processes running as root (UNIX).

strerror(code)
Returns the error message corresponding to the integer error code (UNIX and Windows). The errno module defines symbolic names for these error codes.

umask(mask)
Sets the current numeric umask and returns the previous umask. The umask is used to clear permissions bits on files that are created by the process (UNIX and Windows).

uname()
Returns a tuple of strings (sysname, nodename, release, version, machine) identifying the system type (UNIX).

unsetenv(name)
Unsets the environment variable name.

File Creation and File Descriptors

The following functions provide a low-level interface for manipulating files and pipes. In these functions, files are manipulated in terms of an integer file descriptor, fd. The file descriptor can be extracted from a file object by invoking its fileno() method.

close(fd)
Closes the file descriptor fd previously returned by open() or pipe().
closerange(low, high)
Closes all file descriptors fd in the range low <= fd < high. Errors are ignored.

dup(fd)
Duplicates file descriptor fd. Returns a new file descriptor that's the lowest-numbered unused file descriptor for the process. The new and old file descriptors can be used interchangeably. Furthermore, they share state, such as the current file pointer and locks (UNIX and Windows).

dup2(oldfd, newfd)
Duplicates file descriptor oldfd to newfd. If newfd already corresponds to a valid file descriptor, it's closed first (UNIX and Windows).

fchmod(fd, mode)
Changes the mode of the file associated with fd to mode. See the description of os.open() for a description of mode (UNIX).

fchown(fd, uid, gid)
Changes the owner and group ID of the file associated with fd to uid and gid. Use a value of -1 for uid or gid to keep the value unchanged (UNIX).

fdatasync(fd)
Forces all cached data written to fd to be flushed to disk (UNIX).

fdopen(fd [, mode [, bufsize]])
Creates an open file object connected to file descriptor fd. The mode and bufsize arguments have the same meaning as in the built-in open() function. mode should be a string such as 'r', 'w', or 'a'. On Python 3, this function accepts any additional parameters that work with the built-in open() function, such as specifications for the encoding and line ending. However, if portability with Python 2 is a concern, you should only use the mode and bufsize arguments described here.

fpathconf(fd, name)
Returns configurable pathname variables associated with the open file with descriptor fd. name is a string that specifies the name of the value to retrieve. The values are usually taken from parameters contained in system header files such as limits.h and unistd.h. POSIX defines the following constants for name:

    Constant                 Description
    "PC_ASYNC_IO"            Indicates whether asynchronous I/O can be
                             performed on fd.
    "PC_CHOWN_RESTRICTED"    Indicates whether the chown() function can be
                             used. If fd refers to a directory, this
                             applies to all files in the directory.
    "PC_FILESIZEBITS"        Maximum size of a file.
    "PC_LINK_MAX"            Maximum value of the file's link count.
    "PC_MAX_CANON"           Maximum length of a formatted input line. fd
                             refers to a terminal.
    "PC_MAX_INPUT"           Maximum length of an input line. fd refers to
                             a terminal.
constant description "pc_name_max"pc_no_truncmaximum length of filename in directory indicates whether an attempt to create file with name longer than pc_name_max for directory will fail with an enametoolong error maximum length of relative path name when the directory fd is the current working directory size of the pipe buffer when fd refers to pipe or fifo indicates whether priority / can be performed on fd indicates whether synchronous / can be performed on fd indicates whether fd allows special-character processing to be disabled fd must refer to terminal "pc_path_max"pc_pipe_buf"pc_prio_io"pc_sync_io"pc_vdisablenot all names are available on all platformsand some systems may define additional configuration parameters howevera list of the names known to the operating system can be found in the dictionary os pathconf_names if known configuration name is not included in os pathconf_namesits integer value can also be passed as name even if name is recognized by pythonthis function may still raise an oserror if the host operating system doesn' recognize the parameter or associate it with the file fd this function is only available on some versions of unix fstat(fdreturns the status for file descriptor fd returns the same values as the os stat(function (unix and windowsfstatvfs(fdreturns information about the file system containing the file associated with file descriptor fd returns the same values as the os statvfs(function (unixfsync(fdforces any unwritten data on fd to be written to disk note that if you are using an object with buffered / (for examplea python file object)you should first flush the data before calling fsync(available on unix and windows ftruncate(fdlengthtruncates the file corresponding to file descriptor fd so that it' at most length bytes in size (unixisatty(fdreturns true if fd is associated with tty-like device such as terminal (unixlseek(fdposhowsets the current position of file descriptor fd to position pos values of how are as 
follows: SEEK_SET sets the position relative to the beginning of the file, SEEK_CUR sets it relative to the current position, and SEEK_END sets it relative to the end of the file. In older Python code, it is common to see these constants replaced with their numeric values of 0, 1, or 2, respectively.
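A minimal sketch of positioning with the three whence constants; the file name is chosen by tempfile and the contents are arbitrary:

```python
import os
import tempfile

# A scratch file to demonstrate positioning (path chosen by tempfile).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

os.lseek(fd, 0, os.SEEK_SET)          # rewind to the beginning
first = os.read(fd, 5)                # b'hello'
os.lseek(fd, 1, os.SEEK_CUR)          # skip one byte (the space)
second = os.read(fd, 5)               # b'world'
size = os.lseek(fd, 0, os.SEEK_END)   # seeking to the end returns the size

os.close(fd)
os.remove(path)
print(first, second, size)
```

Note that lseek() returns the new absolute position, which is why seeking to SEEK_END is a common way to obtain the file size.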
Operating System Services

open(file [, flags [, mode]])
Opens the file file. flags is the bitwise OR of the following constant values:

Value            Description
O_RDONLY         Open the file for reading
O_WRONLY         Open the file for writing
O_RDWR           Open for reading and writing (updates)
O_APPEND         Append bytes to the end of the file
O_CREAT          Create the file if it doesn't exist
O_NONBLOCK       Don't block on open, read, or write (Unix)
O_NDELAY         Same as O_NONBLOCK (Unix)
O_DSYNC          Synchronous writes (Unix)
O_NOCTTY         When opening a device, don't set the controlling terminal (Unix)
O_TRUNC          If the file exists, truncate to zero length
O_RSYNC          Synchronous reads (Unix)
O_SYNC           Synchronous writes (Unix)
O_EXCL           Error if O_CREAT and the file already exists
O_EXLOCK         Set an exclusive lock on the file
O_SHLOCK         Set a shared lock on the file
O_ASYNC          Enables asynchronous input mode in which a SIGIO signal is generated when input is available
O_DIRECT         Use direct I/O mode in which reads and writes go directly to the disk instead of the operating system read/write caches
O_DIRECTORY      Raises an error if the file is not a directory
O_NOFOLLOW       Don't follow symbolic links
O_NOATIME        Don't update the last access time of the file
O_TEXT           Text mode (Windows)
O_BINARY         Binary mode (Windows)
O_NOINHERIT      File not inherited by child processes (Windows)
O_SHORT_LIVED    Hint to the system that the file is used for short-term storage (Windows)
O_TEMPORARY      Delete the file when closed (Windows)
O_RANDOM         Hint to the system that the file will be used for random access (Windows)
O_SEQUENTIAL     Hint to the system that the file will be accessed sequentially (Windows)

The synchronous I/O modes (O_SYNC, O_DSYNC, O_RSYNC) force I/O operations to block until they've been completed at the hardware level (for example, a write will block until the bytes have been physically written to disk). The mode parameter contains the file permissions, represented as the bitwise OR of the following octal values (which are defined as constants in the stat module as indicated):

Mode    Meaning
0100    User has execute permission (stat.S_IXUSR)
0200    User has write permission (stat.S_IWUSR)
0400    User has read permission (stat.S_IRUSR)
0700    User has read/write/exec permission (stat.S_IRWXU)
Mode    Meaning
0010    Group has execute permission (stat.S_IXGRP)
0020    Group has write permission (stat.S_IWGRP)
0040    Group has read permission (stat.S_IRGRP)
0070    Group has read/write/exec permission (stat.S_IRWXG)
0001    Others have execute permission (stat.S_IXOTH)
0002    Others have write permission (stat.S_IWOTH)
0004    Others have read permission (stat.S_IROTH)
0007    Others have read/write/exec permission (stat.S_IRWXO)
4000    Set UID mode (stat.S_ISUID)
2000    Set GID mode (stat.S_ISGID)
1000    Set the sticky bit (stat.S_ISVTX)

The default mode of a file is (0777 & ~umask), where the umask setting is used to remove selected permissions. For example, a umask of 0022 removes the write permission for groups and others. The umask can be changed using the os.umask() function. The umask setting has no effect on Windows.

openpty()
Opens a pseudo-terminal and returns a pair of file descriptors (master, slave) for the pty and tty. Available on some versions of Unix.

pipe()
Creates a pipe that can be used to establish unidirectional communication with another process. Returns a pair of file descriptors (r, w) usable for reading and writing, respectively. This function is usually called prior to executing a fork() function. After the fork(), the sending process closes the read end of the pipe and the receiving process closes the write end of the pipe. At this point, the pipe is activated and data can be sent from one process to another using read() and write() functions (Unix).

read(fd, n)
Reads at most n bytes from file descriptor fd. Returns a byte string containing the bytes read.

tcgetpgrp(fd)
Returns the process group associated with the control terminal given by fd (Unix).

tcsetpgrp(fd, pg)
Sets the process group associated with the control terminal given by fd (Unix).

ttyname(fd)
Returns a string that specifies the terminal device associated with file descriptor fd. If fd is not associated with a terminal device, an OSError exception is raised (Unix).

write(fd, str)
Writes the byte string str to file descriptor fd. Returns the number of bytes actually written.
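A short sketch of os.open() combining flags and a permission mode; the directory and file name here are arbitrary, and the final permissions are still reduced by the process umask:

```python
import os
import tempfile

# File name is arbitrary; created inside a fresh temporary directory.
path = os.path.join(tempfile.mkdtemp(), "example.dat")

# Create a new file with mode 0o640 (S_IRUSR|S_IWUSR|S_IRGRP);
# the effective permissions are (0o640 & ~umask).
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640)
os.write(fd, b"data")
os.close(fd)

# O_CREAT|O_EXCL raises an error if the file already exists.
try:
    os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
except OSError:
    print("file already exists")
```

Using O_EXCL with O_CREAT is the usual way to guarantee that the call creates the file rather than opening an existing one.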
Files and Directories
The following functions and variables are used to manipulate files and directories on the file system. To handle variances in file-naming schemes, the following variables contain information about the construction of path names:

Variable    Description
altsep      An alternative character used by the OS to separate pathname components, or None if only one separator character exists. This is set to '/' on DOS and Windows systems, where sep is a backslash.
curdir      The string used to refer to the current working directory: '.' for Unix and Windows and ':' for the Macintosh.
devnull     The path of the null device (for example, /dev/null).
extsep      The character that separates the base filename from its type (for example, the '.' in 'foo.txt').
pardir      The string used to refer to the parent directory: '..' for Unix and Windows and '::' for the Macintosh.
pathsep     The character used to separate search-path components (as contained in the $PATH environment variable): ':' for Unix and ';' for DOS and Windows.
sep         The character used to separate pathname components: '/' for Unix and Windows and ':' for the Macintosh.

The following functions are used to manipulate files:

access(path, accessmode)
Checks read/write/execute permissions for this process to access the file path. accessmode is R_OK, W_OK, X_OK, or F_OK for read, write, execute, or existence, respectively. Returns True if access is granted, False if not.

chflags(path, flags)
Changes the file flags on path. flags is the bitwise OR of the constants listed next. Flags starting with UF_ can be set by any user, whereas SF_ flags can only be changed by the superuser (Unix).

Flag                 Meaning
stat.UF_NODUMP       Do not dump the file
stat.UF_IMMUTABLE    The file is read-only
stat.UF_APPEND       The file only supports append operations
stat.UF_OPAQUE       The directory is opaque
stat.UF_NOUNLINK     The file may not be deleted or renamed
stat.SF_ARCHIVED     The file can be archived
stat.SF_IMMUTABLE    The file is read-only
stat.SF_APPEND       The file only supports append operations
stat.SF_NOUNLINK     The file may not be deleted or renamed
stat.SF_SNAPSHOT     The file is a snapshot file
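A quick sketch of os.access() using the mode constants above; the scratch file is created by tempfile, so read/write access holds for the creating user:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()     # scratch file; name chosen by tempfile
os.close(fd)

print(os.access(path, os.F_OK))               # True: the file exists
print(os.access(path, os.R_OK | os.W_OK))     # True: owner can read/write
print(os.access(path + ".missing", os.F_OK))  # False: no such file
os.remove(path)
```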
chmod(path, mode)
Changes the mode of path. mode has the same values as described for the open() function (Unix and Windows).

chown(path, uid, gid)
Changes the owner and group ID of path to the numeric uid and gid. Setting uid or gid to -1 causes that parameter to remain unmodified (Unix).

lchflags(path, flags)
The same as chflags(), but doesn't follow symbolic links (Unix).

lchmod(path, mode)
The same as chmod(), except that if path is a symbolic link, it modifies the link itself, not the file the link refers to.

lchown(path, uid, gid)
The same as chown(), but doesn't follow symbolic links (Unix).

link(src, dst)
Creates a hard link named dst that points to src (Unix).

listdir(path)
Returns a list containing the names of the entries in the directory path. The list is returned in arbitrary order and doesn't include the special entries '.' and '..'. If path is Unicode, the resulting list will only contain Unicode strings. Be aware that if any filenames in the directory can't be properly encoded into Unicode, they are silently skipped. If path is given as a byte string, then all filenames are returned as a list of byte strings.

lstat(path)
Like stat(), but doesn't follow symbolic links (Unix).

makedev(major, minor)
Creates a raw device number given major and minor device numbers (Unix).

major(devicenum)
Returns the major device number from a raw device number devicenum created by makedev().

minor(devicenum)
Returns the minor device number from a raw device number devicenum created by makedev().

makedirs(path [, mode])
Recursive directory-creation function. Like mkdir(), but makes all the intermediate-level directories needed to contain the leaf directory. Raises an OSError exception if the leaf directory already exists or cannot be created.
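A minimal sketch of makedirs() and listdir() together; the directory layout and file names are invented for illustration:

```python
import os
import tempfile

top = tempfile.mkdtemp()          # scratch directory

# makedirs() builds the whole intermediate chain in one call.
os.makedirs(os.path.join(top, "a", "b", "c"))
open(os.path.join(top, "a", "data.txt"), "w").close()

# listdir() returns names only, in arbitrary order, without '.' or '..'.
print(sorted(os.listdir(os.path.join(top, "a"))))   # ['b', 'data.txt']
```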
mkdir(path [, mode])
Creates a directory named path with numeric mode mode. The default mode is 0777. On non-Unix systems, the mode setting may have no effect or be ignored.

mkfifo(path [, mode])
Creates a FIFO (a named pipe) named path with numeric mode mode. The default mode is 0666 (Unix).

mknod(path [, mode, device])
Creates a device-special file. path is the name of the file, mode specifies the permissions and type of file, and device is the raw device number created using os.makedev(). The mode parameter accepts the same parameters as open() when setting the file's access permissions. In addition, the flags stat.S_IFREG, stat.S_IFCHR, stat.S_IFBLK, and stat.S_IFIFO are added to mode to indicate a file type (Unix).

pathconf(path, name)
Returns configurable system parameters related to the path name path. name is a string that specifies the name of the parameter and is the same as described for the fpathconf() function (Unix).

readlink(path)
Returns a string representing the path to which the symbolic link path points (Unix).

remove(path)
Removes the file path. This is identical to the unlink() function.

removedirs(path)
Recursive directory-removal function. Works like rmdir() except that, if the leaf directory is successfully removed, directories corresponding to the rightmost path segments will be pruned away until either the whole path is consumed or an error is raised (which is ignored because it generally means that a parent directory isn't empty). Raises an OSError exception if the leaf directory could not be removed successfully.

rename(src, dst)
Renames the file or directory src to dst.

renames(old, new)
Recursive directory-renaming or file-renaming function. Works like rename() except it first attempts to create any intermediate directories needed to make the new path name. After the rename, directories corresponding to the rightmost path segments of the old name will be pruned away using removedirs().

rmdir(path)
Removes the directory path.

stat(path)
Performs a stat() system call on the given path to extract information about a file. The return value is an object whose attributes contain file information. Common attributes include:
Attribute    Description
st_mode      Inode protection mode
st_ino       Inode number
st_dev       Device the inode resides on
st_nlink     Number of links to the inode
st_uid       User ID of the owner
st_gid       Group ID of the owner
st_size      File size in bytes
st_atime     Time of last access
st_mtime     Time of last modification
st_ctime     Time of last status change

However, additional attributes may be available depending on the system. The object returned by stat() also looks like a 10-tuple containing the parameters (st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime). This latter form is provided for backward compatibility. The stat module defines constants that are used to extract fields from this tuple.

stat_float_times([newvalue])
Returns True if the times returned by stat() are floating-point numbers instead of integers. The behavior can be changed by supplying a Boolean value for newvalue.

statvfs(path)
Performs a statvfs() system call on the given path to get information about the file system. The return value is an object whose attributes describe the file system. Common attributes include:

Attribute    Description
f_bsize      Preferred system block size
f_frsize     Fundamental file system block size
f_blocks     Total number of blocks in the file system
f_bfree      Total number of free blocks
f_bavail     Free blocks available to non-superuser
f_files      Total number of file inodes
f_ffree      Total number of free file inodes
f_favail     Free inodes available to non-superuser
f_flag       Flags (system-dependent)
f_namemax    Maximum filename length

The returned object also behaves like a tuple containing these attributes in the order listed. The standard module statvfs defines constants that can be used to extract information from the returned statvfs data (Unix).

symlink(src, dst)
Creates a symbolic link named dst that points to src.
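A short sketch showing both the attribute and tuple views of a stat result; the scratch file and its contents are arbitrary:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")
os.close(fd)

info = os.stat(path)
print(info.st_size)                 # 10: the number of bytes written
print(stat.S_ISREG(info.st_mode))   # True: a regular file
print(info[6] == info.st_size)      # st_size is also field 6 of the tuple
os.remove(path)
```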
unlink(path)
Removes the file path. Same as remove().

utime(path, (atime, mtime))
Sets the access and modified times of the file to the given values. (The second argument is a tuple of two items.) The time arguments are specified in terms of the numbers returned by the time.time() function.

walk(top [, topdown [, onerror [, followlinks]]])
Creates a generator object that walks through a directory tree. top specifies the top of the directory, and topdown is a Boolean that indicates whether to traverse directories in a top-down (the default) or bottom-up order. The returned generator produces tuples (dirpath, dirnames, filenames), where dirpath is a string containing the path to the directory, dirnames is a list of all subdirectories in dirpath, and filenames is a list of the files in dirpath, not including directories. The onerror parameter is a function accepting a single argument. If any errors occur during processing, this function will be called with an instance of os.error. The default behavior is to ignore errors. If a directory is walked in a top-down manner, modifications to dirnames will affect the walking process. For example, if directories are removed from dirnames, those directories will be skipped. By default, symbolic links are not followed unless the followlinks argument is set to True.

Process Management
The following functions and variables are used to create, destroy, and manage processes:

abort()
Generates a SIGABRT signal that's sent to the calling process. Unless the signal is caught with a signal handler, the default is for the process to terminate with an error.

defpath
This variable contains the default search path used by the exec*() functions if the environment doesn't have a 'PATH' variable.

execl(path, arg0, arg1, ...)
Equivalent to execv(path, (arg0, arg1, ...)).

execle(path, arg0, arg1, ..., env)
Equivalent to execve(path, (arg0, arg1, ...), env).

execlp(path, arg0, arg1, ...)
Equivalent to execvp(path, (arg0, arg1, ...)).

execv(path, args)
Executes the program path with the argument list args, replacing the current process (that is, the Python interpreter). The argument list may be a tuple or list of strings.
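A minimal sketch of os.walk() over an invented two-level tree, printing one tuple per directory:

```python
import os
import tempfile

top = tempfile.mkdtemp()          # invented layout: top/a.txt, top/sub/b.txt
os.mkdir(os.path.join(top, "sub"))
open(os.path.join(top, "a.txt"), "w").close()
open(os.path.join(top, "sub", "b.txt"), "w").close()

# Top-down traversal: the top directory is produced before its children.
for dirpath, dirnames, filenames in os.walk(top):
    print(os.path.relpath(dirpath, top), sorted(dirnames), sorted(filenames))
```

Because the walk is top-down, deleting an entry from dirnames inside the loop prunes that entire subtree from the traversal.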
execve(path, args, env)
Executes a new program like execv(), but additionally accepts a dictionary, env, that defines the environment in which the program runs. env must be a dictionary mapping strings to strings.

execvp(path, args)
Like execv(path, args), but duplicates the shell's actions in searching for an executable file in a list of directories. The directory list is obtained from environ['PATH'].

execvpe(path, args, env)
Like execvp(), but with an additional environment variable as in the execve() function.

_exit(n)
Exits immediately to the system with status n, without performing any cleanup actions. This is typically only done in child processes created by fork(). This is also different than calling sys.exit(), which performs a graceful shutdown of the interpreter. The exit code is application-dependent, but a value of 0 usually indicates success, whereas a nonzero value indicates an error of some kind. Depending on the system, a number of standard exit code values may be defined:

Value            Description
EX_OK            No errors
EX_USAGE         Incorrect command usage
EX_DATAERR       Incorrect input data
EX_NOINPUT       Missing input
EX_NOUSER        User doesn't exist
EX_NOHOST        Host doesn't exist
EX_NOTFOUND      Not found
EX_UNAVAILABLE   Service unavailable
EX_SOFTWARE      Internal software error
EX_OSERR         Operating system error
EX_OSFILE        File system error
EX_CANTCREAT     Can't create output
EX_IOERR         I/O error
EX_TEMPFAIL      Temporary failure
EX_PROTOCOL      Protocol error
EX_NOPERM        Insufficient permissions
EX_CONFIG        Configuration error

fork()
Creates a child process. Returns 0 in the newly created child process and the child's process ID in the original process. The child process is a clone of the original process and shares many resources such as open files (Unix).
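A minimal fork/exec/wait round trip (Unix only; /bin/echo is assumed to exist, as it does on most Unix systems):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: replace this process image with /bin/echo.
    os.execv("/bin/echo", ["echo", "hello from the child"])
    os._exit(127)        # reached only if execv() fails
else:
    # Parent: reap the child and report its raw status word.
    _, status = os.waitpid(pid, 0)
    print("child status:", status)
```

The first element of the argument list is, by convention, the program name itself; the exec*() functions do not add it for you.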
forkpty()
Creates a child process using a new pseudo-terminal as the child's controlling terminal. Returns a pair (pid, fd), in which pid is 0 in the child and fd is a file descriptor of the master end of the pseudo-terminal. This function is available only in certain versions of Unix.

kill(pid, sig)
Sends the process pid the signal sig. A list of signal names can be found in the signal module (Unix).

killpg(pgid, sig)
Sends the process group pgid the signal sig. A list of signal names can be found in the signal module (Unix).

nice(increment)
Adds an increment to the scheduling priority (the "niceness") of the process. Returns the new niceness. Typically, users can only decrease the priority of a process because increasing the priority requires root access. The effect of changing the priority is system-dependent, but decreasing the priority is commonly done to make a process run in the background in a way such that it doesn't noticeably impact the performance of other processes (Unix).

plock(op)
Locks program segments into memory, preventing them from being swapped. The value of op is an integer that determines which segments are locked. The value of op is platform-specific but is typically one of UNLOCK, PROCLOCK, TXTLOCK, or DATLOCK. These constants are not defined by Python but might be found in the <sys/lock.h> header file. This function is not available on all platforms and often can be performed only by a process with an effective user ID of 0 (root) (Unix).

popen(command [, mode [, bufsize]])
Opens a pipe to or from a command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (the default) or 'w'. bufsize has the same meaning as in the built-in open() function. The exit status of the command is returned by the close() method of the returned file object, except that when the exit status is zero, None is returned.

spawnv(mode, path, args)
Executes the program path in a new process, passing the arguments specified in args as command-line parameters. args can be a list or a tuple. The first element of args should be the name of the program. mode is one of the following constants:

Constant    Description
P_WAIT      Executes the program and waits for it to terminate. Returns the program's exit code.
P_NOWAIT    Executes the program and returns the process handle.
P_NOWAITO   Same as P_NOWAIT.
Constant    Description
P_OVERLAY   Executes the program and destroys the calling process (same as the exec functions).
P_DETACH    Executes the program and detaches from it. The calling program continues to run but cannot wait for the spawned process.

spawnv() is available on Windows and some versions of Unix.

spawnve(mode, path, args, env)
Executes the program path in a new process, passing the arguments specified in args as command-line parameters and the contents of the mapping env as the environment. args can be a list or a tuple. mode has the same meaning as described for spawnv().

spawnl(mode, path, arg1, ..., argn)
The same as spawnv() except that all the arguments are supplied as extra parameters.

spawnle(mode, path, arg1, ..., argn, env)
The same as spawnve() except that the arguments are supplied as parameters. The last parameter is a mapping containing the environment variables.

spawnlp(mode, file, arg1, ..., argn)
The same as spawnl(), but looks for file using the settings of the PATH environment variable (Unix).

spawnlpe(mode, file, arg1, ..., argn, env)
The same as spawnle(), but looks for file using the settings of the PATH environment variable (Unix).

spawnvp(mode, file, args)
The same as spawnv(), but looks for file using the settings of the PATH environment variable (Unix).

spawnvpe(mode, file, args, env)
The same as spawnve(), but looks for file using the settings of the PATH environment variable (Unix).

startfile(path [, operation])
Launches the application associated with the file path. This performs the same action as would occur if you double-clicked the file in Windows Explorer. The function returns as soon as the application is launched. Furthermore, there is no way to wait for completion or to obtain exit codes from the application. path is relative to the current directory. operation is an optional string that specifies the action to perform when opening path. By default, it is set to 'open', but it may be set to 'print', 'edit', 'explore', or 'find' (the exact list depends on the type of path) (Windows).

system(command)
Executes command (a string) in a subshell. On Unix, the return value is the exit status of the process as returned by wait(). On Windows, the exit code is always 0. The subprocess module provides considerably more power and is the preferred way to launch subprocesses.
times()
Returns a 5-tuple of floating-point numbers indicating accumulated times in seconds. On Unix, the tuple contains the user time, system time, children's user time, children's system time, and elapsed real time, in that order. On Windows, the tuple contains the user time, system time, and zeros for the other three values.

wait([pid])
Waits for completion of a child process and returns a tuple containing its process ID and exit status. The exit status is a 16-bit number whose low byte is the signal number that killed the process and whose high byte is the exit status (if the signal number is zero). The high bit of the low byte is set if a core file was produced. pid, if given, specifies the process to wait for. If it's omitted, wait() returns when any child process exits (Unix).

waitpid(pid, options)
Waits for a change in the state of a child process given by process ID pid and returns a tuple containing its process ID and exit status indication, encoded as for wait(). options should be 0 for normal operation or WNOHANG to avoid hanging if no child process status is available immediately. This function can also be used to gather information about child processes that have only stopped executing for some reason. Setting options to WCONTINUED gathers information from a child when it resumes operation after being stopped via job control. Setting options to WUNTRACED gathers information from a child that has been stopped, but from which no status information has been reported yet.

wait3([options])
The same as waitpid() except that the function waits for a change in any child process. Returns a 3-tuple (pid, status, rusage), where pid is the child process ID, status is the exit status code, and rusage contains resource usage information as returned by resource.getrusage(). The options parameter has the same meaning as for waitpid().

wait4(pid, options)
The same as waitpid() except that the return result is the same tuple as returned by wait3().

The following functions take a process status code as returned by waitpid(), wait3(), or wait4() and are used to examine the state of the process (Unix):

WCOREDUMP(status)
Returns True if the process dumped core.

WIFEXITED(status)
Returns True if the process exited using the exit() system call.

WEXITSTATUS(status)
If WIFEXITED(status) is true, the integer parameter to the exit() system call is returned. Otherwise, the return value is meaningless.
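A short sketch (Unix only) of decoding a wait status with these macros, using a child that is deliberately terminated by a signal:

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    time.sleep(60)                # child just waits to be killed
    os._exit(0)
else:
    os.kill(pid, signal.SIGTERM)  # terminate the child with a signal
    _, status = os.waitpid(pid, 0)
    print(os.WIFSIGNALED(status))                  # True: killed by a signal
    print(os.WTERMSIG(status) == signal.SIGTERM)   # True
    print(os.WIFEXITED(status))                    # False: no exit() call
```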
WIFCONTINUED(status)
Returns True if the process has resumed from a job-control stop.

WIFSIGNALED(status)
Returns True if the process exited due to a signal.

WIFSTOPPED(status)
Returns True if the process has been stopped.

WSTOPSIG(status)
Returns the signal that caused the process to stop.

WTERMSIG(status)
Returns the signal that caused the process to exit.

System Configuration
The following functions are used to obtain system configuration information:

confstr(name)
Returns a string-valued system configuration variable. name is a string specifying the name of the variable. The acceptable names are platform-specific, but a dictionary of known names for the host system is found in os.confstr_names. If a configuration value for a specified name is not defined, the empty string is returned. If name is unknown, a ValueError is raised. An OSError may also be raised if the host system doesn't support the configuration name. The parameters returned by this function mostly pertain to the build environment on the host machine and include paths of system utilities, compiler options for various program configurations (for example, 32-bit, 64-bit, and large-file support), and linker options (Unix).

getloadavg()
Returns a 3-tuple containing the average number of items in the system run-queue over the last 1, 5, and 15 minutes (Unix).

sysconf(name)
Returns an integer-valued system-configuration variable. name is a string specifying the name of the variable. The names defined on the host system can be found in the dictionary os.sysconf_names. Returns -1 if the configuration name is known but the value is not defined. Otherwise, a ValueError or OSError may be raised. Some systems may define more than 100 different system parameters. However, the following list details the parameters defined by POSIX that should be available on most Unix systems:

Parameter           Description
"SC_ARG_MAX"        Maximum length of the arguments that can be used with exec()
"SC_CHILD_MAX"      Maximum number of processes per user ID
"SC_CLK_TCK"        Number of clock ticks per second
"SC_NGROUPS_MAX"    Maximum number of simultaneous supplementary group IDs
"SC_STREAM_MAX"     Maximum number of streams a process can open at one time
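A quick sketch of sysconf() on a Unix host; the actual values are host-dependent, so only their general shape is checked:

```python
import os

# These names are defined by POSIX and should exist on most Unix systems.
print(os.sysconf("SC_CLK_TCK") > 0)        # clock ticks per second
print(os.sysconf("SC_OPEN_MAX") > 0)       # per-process open-file limit
print("SC_ARG_MAX" in os.sysconf_names)    # True on POSIX systems
```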
Parameter           Description
"SC_TZNAME_MAX"     Maximum number of bytes in a time zone name
"SC_OPEN_MAX"       Maximum number of files a process can open at one time
"SC_JOB_CONTROL"    System supports job control
"SC_SAVED_IDS"      Indicates whether each process has a saved set-user-ID and saved set-group-ID

urandom(n)
Returns a string containing n random bytes generated by the system (for example, /dev/urandom on Unix). The returned bytes are suitable for cryptography.

Exceptions
The os module defines a single exception to indicate errors:

error
Exception raised when a function returns a system-related error. This is the same as the built-in exception OSError. The exception carries two values: errno and strerror. The first contains the integer error value as described for the errno module. The latter contains a string error message. For exceptions involving the file system, the exception also contains a third attribute, filename, which is the filename passed to the function.

os.path
The os.path module is used to manipulate pathnames in a portable manner. It's imported by the os module.

abspath(path)
Returns an absolute version of the path name path, taking the current working directory into account. For example, abspath('../python/foo') might return '/home/beazley/python/foo'.

basename(path)
Returns the base name of path name path. For example, basename('/usr/local/python') returns 'python'.

commonprefix(list)
Returns the longest string that's a prefix of all strings in list. If list is empty, the empty string is returned.

dirname(path)
Returns the directory name of path name path. For example, dirname('/usr/local/python') returns '/usr/local'.

exists(path)
Returns True if path refers to an existing path. Returns False if path refers to a broken symbolic link.
expanduser(path)
Replaces path names of the form '~user' with a user's home directory. If the expansion fails or path does not begin with '~', the path is returned unmodified.

expandvars(path)
Expands environment variables of the form '$name' or '${name}' in path. Malformed or nonexistent variable names are left unchanged.

getatime(path)
Returns the time of last access as the number of seconds since the epoch (see the time module). The return value may be a floating-point number if os.stat_float_times() returns True.

getctime(path)
Returns the time of last modification on Unix and the time of creation on Windows. The time is returned as the number of seconds since the epoch (see the time module). The return value may be a floating-point number in certain cases (see getatime()).

getmtime(path)
Returns the time of last modification as the number of seconds since the epoch (see the time module). The return value may be a floating-point number in certain cases (see getatime()).

getsize(path)
Returns the file size in bytes.

isabs(path)
Returns True if path is an absolute path name (begins with a slash).

isfile(path)
Returns True if path is a regular file. This function follows symbolic links, so both islink() and isfile() can be true for the same path.

isdir(path)
Returns True if path is a directory. Follows symbolic links.

islink(path)
Returns True if path refers to a symbolic link. Returns False if symbolic links are unsupported.

ismount(path)
Returns True if path is a mount point.

join(path1 [, path2 [, ...]])
Intelligently joins one or more path components into a pathname. For example, join('home', 'beazley', 'python') returns 'home/beazley/python'.

lexists(path)
Returns True if path exists. Returns True for all symbolic links, even if the link is broken.
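A minimal sketch of join() and isabs() together; the component names are invented, and '/' separators assume a Unix host:

```python
import os.path

p = os.path.join("home", "beazley", "python")
print(p)                              # 'home/beazley/python' on Unix
print(os.path.isabs(p))               # False: no leading separator
print(os.path.isabs("/usr/local"))    # True
```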
normcase(path)
Normalizes the case of a path name. On non-case-sensitive file systems, this converts path to lowercase. On Windows, forward slashes are also converted to backslashes.

normpath(path)
Normalizes a path name. This collapses redundant separators and up-level references so that 'A//B', 'A/./B', and 'A/foo/../B' all become 'A/B'. On Windows, forward slashes are converted to backslashes.

realpath(path)
Returns the real path of path, eliminating symbolic links if any (Unix).

relpath(path [, start])
Returns a relative path to path from the current working directory. start can be supplied to specify a different starting directory.

samefile(path1, path2)
Returns True if path1 and path2 refer to the same file or directory (Unix).

sameopenfile(fp1, fp2)
Returns True if the open file objects fp1 and fp2 refer to the same file (Unix).

samestat(stat1, stat2)
Returns True if the stat tuples stat1 and stat2, as returned by fstat(), lstat(), or stat(), refer to the same file (Unix).

split(path)
Splits path into a pair (head, tail), where tail is the last pathname component and head is everything leading up to that. For example, '/home/user/foo' gets split into ('/home/user', 'foo'). This tuple is the same as would be returned by (dirname(), basename()).

splitdrive(path)
Splits path into a pair (drive, filename), where drive is either a drive specification or the empty string. drive is always the empty string on machines without drive specifications.

splitext(path)
Splits a path name into a base filename and suffix. For example, splitext('foo.txt') returns ('foo', '.txt').

splitunc(path)
Splits a path name into a pair (unc, rest), where unc is a UNC (Universal Naming Convention) mount point and rest is the remainder of the path (Windows).

supports_unicode_filenames
A variable set to True if the file system allows Unicode filenames.
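A quick sketch of the splitting and normalization functions above, using invented paths on a Unix host:

```python
import os.path

print(os.path.split("/home/user/foo.txt"))     # ('/home/user', 'foo.txt')
print(os.path.splitext("/home/user/foo.txt"))  # ('/home/user/foo', '.txt')
print(os.path.normpath("A//B/./C/../D"))       # 'A/B/D' on Unix
```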
Note
On Windows, some care is required when working with filenames that include a drive letter (for example, 'C:spam.txt'). In most cases, such filenames are interpreted as being relative to the current working directory on that drive. For example, if the current directory is 'C:\foo\', then the file 'C:spam.txt' is interpreted as the file 'C:\foo\spam.txt', not the file 'C:\spam.txt'.

See Also: fnmatch, glob, os

signal
The signal module is used to write signal handlers in Python. Signals usually correspond to asynchronous events that are sent to a program due to the expiration of a timer, arrival of incoming data, or some action performed by a user. The signal interface emulates that of Unix, although parts of the module are supported on other platforms.

alarm(time)
If time is nonzero, a SIGALRM signal is scheduled to be sent to the program in time seconds. Any previously scheduled alarm is canceled. If time is zero, no alarm is scheduled and any previously set alarm is canceled. Returns the number of seconds remaining before any previously scheduled alarm, or zero if no alarm was scheduled (Unix).

getsignal(signalnum)
Returns the signal handler for signal signalnum. The returned object is a callable Python object. The function may also return SIG_IGN for an ignored signal, SIG_DFL for the default signal handler, or None if the signal handler was not installed from the Python interpreter.

getitimer(which)
Returns the current value of an internal timer identified by which.

pause()
Goes to sleep until the next signal is received (Unix).

set_wakeup_fd(fd)
Sets a file descriptor fd on which a '\0' byte will be written when a signal is received. This, in turn, can be used to handle signals in programs that are polling file descriptors using functions such as those found in the select module. The file described by fd must be opened in non-blocking mode for this to work.

setitimer(which, seconds [, interval])
Sets an internal timer to generate a signal after seconds seconds and repeatedly thereafter every interval seconds. Both of these parameters are specified as floating-point numbers. The which parameter is one of ITIMER_REAL, ITIMER_VIRTUAL, or ITIMER_PROF. The choice of which determines what signal is generated after the timer has expired. SIGALRM is generated for ITIMER_REAL, SIGVTALRM is generated for
ITIMER_VIRTUAL, and SIGPROF is generated for ITIMER_PROF. Set seconds to 0 to clear the timer. Returns a tuple (seconds, interval) with the previous settings of the timer.

siginterrupt(signalnum, flag)
Sets the system call restart behavior for a given signal number. If flag is False, system calls interrupted by signal signalnum will be automatically restarted. If set True, the system call will be interrupted. An interrupted system call will typically result in an OSError or IOError exception where the associated error number is set to errno.EINTR or errno.EAGAIN.

signal(signalnum, handler)
Sets a signal handler for signal signalnum to the function handler. handler must be a callable Python object taking two arguments: the signal number and a frame object. SIG_IGN or SIG_DFL can also be given to ignore a signal or use the default signal handler, respectively. The return value is the previous signal handler, SIG_IGN, or SIG_DFL. When threads are enabled, this function can only be called from the main thread. Otherwise, a ValueError exception is raised.

Individual signals are identified using symbolic constants of the form SIG*. These names correspond to integer values that are machine-specific. Typical values are as follows:

Signal Name    Description
SIGABRT        Abnormal termination
SIGALRM        Alarm
SIGBUS         Bus error
SIGCHLD        Change in child status
SIGCLD         Change in child status
SIGCONT        Continue
SIGFPE         Floating-point error
SIGHUP         Hang up
SIGILL         Illegal instruction
SIGINT         Terminal interrupt character
SIGIO          Asynchronous I/O
SIGIOT         Hardware fault
SIGKILL        Terminate
SIGPIPE        Write to pipe, no readers
SIGPOLL        Pollable event
SIGPROF        Profiling alarm
SIGPWR         Power failure
SIGQUIT        Terminal quit character
SIGSEGV        Segmentation fault
SIGSTOP        Stop
SIGTERM        Termination
SIGTRAP        Hardware fault
SIGTSTP        Terminal stop character
SIGTTIN        Control tty
Signal Name   Description
SIGTTOU       Control TTY
SIGURG        Urgent condition
SIGUSR1       User defined
SIGUSR2       User defined
SIGVTALRM     Virtual time alarm
SIGWINCH      Window size change
SIGXCPU       CPU limit exceeded
SIGXFSZ       File size limit exceeded

In addition, the module defines the following variables:

Variable   Description
SIG_DFL    Signal handler that invokes the default signal handler
SIG_IGN    Signal handler that ignores a signal
NSIG       One more than the highest signal number

Example
The following example illustrates a timeout on establishing a network connection. (The socket module already provides a timeout option, so this example is merely meant to illustrate the basic concept of using the signal module.)

import signal, socket

def handler(signum, frame):
    print('Timeout!')
    raise IOError('Host not responding')

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
signal.signal(signal.SIGALRM, handler)
signal.alarm(10)                        # 10-second alarm
sock.connect(('www.python.org', 80))    # Connect
signal.alarm(0)                         # Clear alarm

Notes
- Signal handlers remain installed until explicitly reset, with the exception of SIGCHLD (whose behavior is implementation-specific).
- It's not possible to temporarily disable signals.
- Signals are only handled between the atomic instructions of the Python interpreter. The delivery of a signal can be delayed by long-running calculations written in C (as might be performed in an extension module).
- If a signal occurs during an I/O operation, the I/O operation may fail with an exception. In this case, the errno value is set to errno.EINTR to indicate an interrupted system call.
- Certain signals such as SIGSEGV cannot be handled from Python.
- Python installs a small number of signal handlers by default: SIGPIPE is ignored, SIGINT is translated into a KeyboardInterrupt exception, and SIGTERM is caught in order to perform cleanup and invoke sys.exitfunc.
- Extreme care is needed if signals and threads are used in the same program. Currently, only the main thread of execution can set new signal handlers or receive signals.
- Signal handling on Windows is of only limited functionality. The number of supported signals is extremely limited on this platform.

subprocess
The subprocess module contains functions and objects that generalize the task of creating new processes, controlling input and output streams, and handling return codes. The module centralizes functionality contained in a variety of other modules such as os, popen2, and commands.

Popen(args, **parms)
Executes a new command as a subprocess and returns a Popen object representing the new process. The command is specified in args as either a string, such as 'ls -l', or as a list of strings, such as ['ls', '-l']. parms represents a collection of keyword arguments that can be set to control various properties of the subprocess. The following keyword parameters are understood:

Keyword          Description
bufsize          Specifies the buffering behavior, where 0 is unbuffered, 1 is line-buffered, a negative value uses the system default, and other positive values specify the approximate buffer size. The default value is 0.
close_fds        If True, all file descriptors except 0, 1, and 2 are closed prior to execution of the child process. The default value is False.
creation_flags   Specifies process-creation flags on Windows. The only flag currently available is CREATE_NEW_CONSOLE. The default value is 0.
cwd              The directory in which the command will execute. The current directory of the child process is changed to cwd prior to execution. The default value is None, which uses the current directory of the parent process.
env              A dictionary of environment variables for the new process. The default value is None, which uses the environment variables of the parent process.
executable       Specifies the name of the executable program to use. This is rarely needed because the program name is already included in args. If shell has been given, this parameter specifies the name of the shell to use. The default value is None.
preexec_fn       Specifies a function that will be called in the child process just before the command is executed. The function should take no arguments.
shell            If True, the command is executed using the UNIX shell like the os.system() function. The default shell is /bin/sh, but this can be changed by also setting executable. The default value of shell is None.
Keyword              Description
startupinfo          Provides startup flags used when creating processes on Windows. The default value is None. Possible values include STARTF_USESHOWWINDOW and STARTF_USESTDHANDLERS.
stderr               A file object representing the file to use for stderr in the child process. May be a file object created via open(), an integer file descriptor, or the special value PIPE, which indicates that a new pipe should be created. The default value is None.
stdin                A file object representing the file to use for stdin in the child process. May be set to the same values as stderr. The default value is None.
stdout               A file object representing the file to use for stdout in the child process. May be set to the same values as stderr. The default value is None.
universal_newlines   If True, the files representing stdin, stdout, and stderr are opened in text mode with universal newline mode enabled. See the open() function for a full description.

call(args, **parms)
This function is exactly the same as Popen(), except that it simply executes the command and returns its status code instead (that is, it does not return a Popen object). This function is useful if you just want to execute a command but are not concerned with capturing its output or controlling it in other ways. The parameters have the same meaning as with Popen().

check_call(args, **parms)
The same as call() except that if the exit code is non-zero, the CalledProcessError exception is raised. This exception has the exit code stored in its returncode attribute.

The Popen object p returned by Popen() has a variety of methods and attributes that can be used for interacting with the subprocess.

p.communicate([input])
Communicates with the child process by sending the data supplied in input to the standard input of the process. Once data is sent, the method waits for the process to terminate while collecting output received on standard output and standard error. Returns a tuple (stdout, stderr) where stdout and stderr are strings. If no data is sent to the child process, input is set to None (the default).

p.kill()
Kills the subprocess by sending it a SIGKILL signal on UNIX or calling the p.terminate() method on Windows.

p.poll()
Checks to see if p has terminated. If so, the return code of the subprocess is returned. Otherwise, None is returned.

p.send_signal(signal)
Sends a signal to the subprocess. signal is a signal number as defined in the signal module. On Windows, the only supported signal is SIGTERM.
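As a short sketch of how call(), check_call(), and a few of the Popen() keyword arguments behave (this assumes a UNIX-like system with sh and the standard false command available):

```python
import subprocess

# call() simply returns the command's exit status
status = subprocess.call(["false"])
print(status)            # 1

# check_call() raises CalledProcessError on a non-zero exit code
err_code = None
try:
    subprocess.check_call(["false"])
except subprocess.CalledProcessError as e:
    err_code = e.returncode
print(err_code)          # 1

# Popen() with keyword arguments: env replaces the child's environment
# entirely, and stdout=PIPE lets communicate() capture the output
p = subprocess.Popen(["sh", "-c", "echo $GREETING"],
                     env={"GREETING": "hello"},
                     stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)               # b'hello\n'
print(p.returncode)      # 0
```

Note that communicate() returns byte strings in Python 3 unless universal_newlines is used.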
p.terminate()
Terminates the subprocess by sending it a SIGTERM signal on UNIX or calling the Win32 API TerminateProcess function on Windows.

p.wait()
Waits for p to terminate and returns the return code.

p.pid
Process ID of the child process.

p.returncode
Numeric return code of the process. If None, the process has not terminated yet. If negative, it indicates the process was terminated by a signal (UNIX).

p.stdin, p.stdout, p.stderr
These three attributes are set to open file objects whenever the corresponding I/O stream is opened as a pipe (for example, setting the stdout argument in Popen() to PIPE). These file objects are provided so that the pipe can be connected to other subprocesses. These attributes are set to None if pipes are not in use.

Examples

# Execute a basic system command like os.system()
ret = subprocess.call("ls -l", shell=True)

# Silently execute a basic system command
ret = subprocess.call("rm -f *.java", shell=True, stdout=open("/dev/null"))

# Execute a system command, but capture the output
p = subprocess.Popen("ls -l", shell=True, stdout=subprocess.PIPE)
out = p.stdout.read()

# Execute a command, but send input and receive output
p = subprocess.Popen("wc", shell=True,
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate(s)    # Send the string s to the process

# Create two subprocesses and link them together via a pipe
p1 = subprocess.Popen("ls -l", shell=True, stdout=subprocess.PIPE)
p2 = subprocess.Popen("wc", shell=True, stdin=p1.stdout,
                      stdout=subprocess.PIPE)
out = p2.stdout.read()

Notes
- As a general rule, it is better to supply the command line as a list of strings instead of a single string with a shell command (for example, ['wc', 'filename'] instead of 'wc filename'). On many systems, it is common for filenames to include funny characters and spaces (for example, the "Documents and Settings" folder on Windows). If you stick to supplying command arguments as a list, everything will work normally. If you try to form a shell command, you will have to take additional steps to make sure special characters and spaces are properly escaped.
- On Windows, pipes are opened in binary file mode. Thus, if you are reading text output from a subprocess, line endings will include the extra carriage return character ('\r\n' instead of '\n'). If this is a concern, supply the universal_newlines option to Popen().
- The subprocess module can not be used to control processes that expect to be running in a terminal or TTY. The most common example is any program that expects a user to enter a password (such as ssh, ftp, svn, and so on). To control these programs, look for third-party modules based on the popular "expect" UNIX utility.

time
The time module provides various time-related functions. In Python, time is measured as the number of seconds since the epoch. The epoch is the beginning of time (the point at which time = 0 seconds). The epoch is January 1, 1970, on UNIX and can be determined by calling time.gmtime(0) on other systems.

The following variables are defined:

accept2dyear
A Boolean value that indicates whether two-digit years are accepted. Normally this is True, but it's set to False if the environment variable $PYTHONY2K is set to a nonempty string. The value can be changed manually as well.

altzone
The time zone used during daylight saving time (DST), if applicable.

daylight
Set to a nonzero value if a DST time zone has been defined.

timezone
The local (non-DST) time zone.

tzname
A tuple containing the name of the local time zone and the name of the local daylight saving time zone (if defined).

The following functions can be used:

asctime([tuple])
Converts a tuple representing a time as returned by gmtime() or localtime() to a string of the form 'Sun Jun 20 23:21:05 1993'. If no arguments are supplied, the current time is used.

clock()
Returns the current CPU time in seconds as a floating-point number.

ctime([secs])
Converts a time expressed in seconds since the epoch to a string representing local time. ctime(secs) is the same as asctime(localtime(secs)). If secs is omitted or None, the current time is used.
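The epoch definition can be checked directly with gmtime() and the formatting functions described below; a quick sketch:

```python
import time

# The epoch: time 0 is 00:00:00 UTC on January 1, 1970
tm = time.gmtime(0)
print(tm.tm_year, tm.tm_mon, tm.tm_mday)   # 1970 1 1

# strftime() formats a struct_time; strptime() parses the string back
s = time.strftime("%Y-%m-%d %H:%M:%S", tm)
print(s)                                   # 1970-01-01 00:00:00
back = time.strptime(s, "%Y-%m-%d %H:%M:%S")
print(back.tm_year, back.tm_yday)          # 1970 1
```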
gmtime([secs])
Converts a time expressed in seconds since the epoch to a time in UTC Coordinated Universal Time (a.k.a. Greenwich Mean Time). This function returns a struct_time object with the following attributes:

Attribute   Value
tm_year     A four-digit year value (for example, 2009)
tm_mon      1-12
tm_mday     1-31
tm_hour     0-23
tm_min      0-59
tm_sec      0-61
tm_wday     0-6 (0 = Monday)
tm_yday     1-366
tm_isdst    -1, 0, 1

The tm_isdst attribute is 1 if DST is in effect, 0 if not, and -1 if no information is available. If secs is omitted or None, the current time is used. For backward compatibility, the returned struct_time object also behaves like a 9-tuple containing the preceding attribute values in the same order as listed.

localtime([secs])
Returns a struct_time object as with gmtime(), but corresponding to the local time zone. If secs is omitted or None, the current time is used.

mktime(tuple)
This function takes a struct_time object or tuple representing a time in the local time zone (in the same format as returned by localtime()) and returns a floating-point number representing the number of seconds since the epoch. An OverflowError exception is raised if the input value is not a valid time.

sleep(secs)
Puts the current process to sleep for secs seconds. secs is a floating-point number.

strftime(format [, tm])
Converts a struct_time object tm representing a time as returned by gmtime() or localtime() to a string (for backwards compatibility, tm may also be a tuple representing a time value). format is a format string in which the following format codes can be embedded:

Directive   Meaning
%a          Locale's abbreviated weekday name
%A          Locale's full weekday name
%b          Locale's abbreviated month name
%B          Locale's full month name
%c          Locale's appropriate date and time representation
%d          Day of the month as a decimal number [01-31]
%H          Hour (24-hour clock) as a decimal number [00-23]
Directive   Meaning
%I          Hour (12-hour clock) as a decimal number [01-12]
%j          Day of the year as a decimal number [001-366]
%m          Month as a decimal number [01-12]
%M          Minute as a decimal number [00-59]
%p          Locale's equivalent of either AM or PM
%S          Seconds as a decimal number [00-61]
%U          Week number of the year [00-53] (Sunday as first day)
%w          Weekday as a decimal number [0-6] (0 = Sunday)
%W          Week number of the year [00-53] (Monday as first day)
%x          Locale's appropriate date representation
%X          Locale's appropriate time representation
%y          Year without century as a decimal number [00-99]
%Y          Year with century as a decimal number
%Z          Time zone name (or no characters if no time zone exists)
%%          The % character

The format codes can include a width and precision in the same manner as used with the % operator on strings. A ValueError is raised if any of the tuple fields are out of range. If tuple is omitted, the time tuple corresponding to the current time is used.

strptime(string [, format])
Parses a string representing a time and returns a struct_time object as returned by localtime() or gmtime(). The format parameter uses the same specifiers as used by strftime() and defaults to '%a %b %d %H:%M:%S %Y'. This is the same format as produced by the ctime() function. If the string cannot be parsed, a ValueError exception is raised.

time()
Returns the current time as the number of seconds since the epoch in UTC (Coordinated Universal Time).

tzset()
Resets the time zone setting based on the value of the TZ environment variable on UNIX. For example:

os.environ['TZ'] = 'US/Mountain'
time.tzset()
os.environ['TZ'] = "CST+06CDT,M4.1.0,M10.5.0"
time.tzset()

Notes
- When two-digit years are accepted, they're converted to four-digit years according to the POSIX X/Open standard, where the values 69-99 are mapped to 1969-1999 and the values 0-68 are mapped to 2000-2068.
- The accuracy of the time functions is often much less than what might be suggested by the units in which time is represented. For example, the operating system might only update the time 50-100 times a second.

See Also: datetime

winreg
The winreg module (_winreg in Python 2) provides a low-level interface to the Windows registry. The registry is a large hierarchical tree in which each node is called a key. The children of a particular key are known as subkeys and may contain additional subkeys or values. For example, the setting of the Python sys.path variable is typically contained in the registry as follows:

\HKEY_LOCAL_MACHINE\Software\Python\PythonCore\2.6\PythonPath

In this case, Software is a subkey of HKEY_LOCAL_MACHINE, Python is a subkey of Software, and so forth. The value of the PythonPath key contains the actual path setting. Keys are accessed through open and close operations. Open keys are represented by special handles (which are wrappers around the integer handle identifiers normally used by Windows).

CloseKey(key)
Closes a previously opened registry key with handle key.

ConnectRegistry(computer_name, key)
Returns a handle to a predefined registry key on another computer. computer_name is the name of the remote machine as a string of the form r'\\computername'. If computer_name is None, the local registry is used. key is a predefined handle such as HKEY_CURRENT_USER or HKEY_USERS. Raises EnvironmentError on failure. The following list shows all HKEY_* values defined in the _winreg module:

- HKEY_CLASSES_ROOT
- HKEY_CURRENT_CONFIG
- HKEY_CURRENT_USER
- HKEY_DYN_DATA
- HKEY_LOCAL_MACHINE
- HKEY_PERFORMANCE_DATA
- HKEY_USERS

CreateKey(key, sub_key)
Creates or opens a key and returns a handle. key is a previously opened key or a predefined key defined by the HKEY_* constants. sub_key is the name of the key that will be opened or created. If key is a predefined key, sub_key may be None, in which case key is returned.
DeleteKey(key, sub_key)
Deletes sub_key. key is an open key or one of the predefined HKEY_* constants. sub_key is a string that identifies the key to delete. sub_key must not have any subkeys; otherwise, EnvironmentError is raised.

DeleteValue(key, value)
Deletes a named value from a registry key. key is an open key or one of the predefined HKEY_* constants. value is a string containing the name of the value to remove.

EnumKey(key, index)
Returns the name of a subkey by index. key is an open key or one of the predefined HKEY_* constants. index is an integer that specifies the key to retrieve. If index is out of range, an EnvironmentError is raised.

EnumValue(key, index)
Returns a value of an open key. key is an open key or a predefined HKEY_* constant. index is an integer specifying the value to retrieve. The function returns a tuple (name, data, type) in which name is the value name, data is an object holding the value data, and type is an integer that specifies the type of the value data. The following type codes are currently defined:

Code                      Description
REG_BINARY                Binary data
REG_DWORD                 32-bit number
REG_DWORD_LITTLE_ENDIAN   32-bit little-endian number
REG_DWORD_BIG_ENDIAN      32-bit number in big-endian format
REG_EXPAND_SZ             Null-terminated string with unexpanded references to environment variables
REG_LINK                  Unicode symbolic link
REG_MULTI_SZ              A sequence of null-terminated strings
REG_NONE                  No defined value type
REG_RESOURCE_LIST         A device-driver resource list
REG_SZ                    A null-terminated string

ExpandEnvironmentStrings(s)
Expands environment strings of the form %NAME% in a Unicode string.

FlushKey(key)
Writes the attributes of key to the registry, forcing changes to disk. This function should only be called if an application requires absolute certainty that registry data is stored on disk. It does not return until data is written. It is not necessary to use this function under normal circumstances.

RegLoadKey(key, sub_key, filename)
Creates a subkey and stores registration information from a file into it. key is an open key or a predefined HKEY_* constant. sub_key is a string identifying the subkey to load. filename is the name of the file from which to load data. The contents of this file must
be created with the SaveKey() function, and the calling process must have the SeRestorePrivilege privilege for this to work. If key was returned by ConnectRegistry(), filename should be a path that's relative to the remote computer.

OpenKey(key, sub_key [, res [, sam]])
Opens a key. key is an open key or an HKEY_* constant. sub_key is a string identifying the subkey to open. res is a reserved integer that must be zero (the default). sam is an integer defining the security access mask for the key. The default is KEY_READ. Here are the other possible values for sam:

- KEY_ALL_ACCESS
- KEY_CREATE_LINK
- KEY_CREATE_SUB_KEY
- KEY_ENUMERATE_SUB_KEYS
- KEY_EXECUTE
- KEY_NOTIFY
- KEY_QUERY_VALUE
- KEY_READ
- KEY_SET_VALUE
- KEY_WRITE

OpenKeyEx()
Same as OpenKey().

QueryInfoKey(key)
Returns information about a key as a tuple (num_subkeys, num_values, last_modified) in which num_subkeys is the number of subkeys, num_values is the number of values, and last_modified is a long integer containing the time of last modification. Time is measured from January 1, 1601, in units of 100 nanoseconds.

QueryValue(key, sub_key)
Returns the unnamed value for a key as a string. key is an open key or an HKEY_* constant. sub_key is the name of the subkey to use, if any. If omitted, the function returns the value associated with key instead. This function returns the data for the first value with a null name. However, the type is not returned (use QueryValueEx() instead).

QueryValueEx(key, value_name)
Returns a tuple (value, type) containing the data value and type for a key. key is an open key or HKEY_* constant. value_name is the name of the value to return. The returned type is one of the integer codes as described for the EnumValue() function.

SaveKey(key, filename)
Saves key and all its subkeys to a file. key is an open key or a predefined HKEY_* constant. filename must not already exist and should not include a filename extension. Furthermore, the caller must have backup privileges for the operation to succeed.
SetValue(key, sub_key, type, value)
Sets the value of a key. key is an open key or HKEY_* constant. sub_key is the name of the subkey with which to associate the value. type is an integer type code, currently limited to REG_SZ. value is a string containing the value data. If sub_key does not exist, it is created. key must have been opened with KEY_SET_VALUE access for this function to succeed.

SetValueEx(key, value_name, reserved, type, value)
Sets the value field of a key. key is an open key or an HKEY_* constant. value_name is the name of the value. type is an integer type code as described for the EnumValue() function. value is a string containing the new value. When the values of numeric types (for example, REG_DWORD) are being set, value is still a string containing the raw data. This string can be created using the struct module. reserved is currently ignored and can be set to anything (the value is not used).

Notes
- Functions that return a Windows HKEY object return a special registry handle object described by the class PyHKEY. This object can be converted into a Windows handle value using int(). This object can also be used with the context-management protocol to automatically close the underlying handle--for example:

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, "Spam") as key:
    statements
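Reading a registry value with these functions follows an open/query/close pattern. The following sketch is Windows-only; the key path shown is a standard one but is used here purely for illustration, and on other platforms the code simply reports that winreg is unavailable:

```python
import sys

if sys.platform == "win32":
    import winreg
    # PyHKEY handles support the context-management protocol, so the
    # key is closed automatically when the with-block exits
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as key:
        value, vtype = winreg.QueryValueEx(key, "ProductName")
        print(value, vtype == winreg.REG_SZ)
else:
    value = None
    print("winreg is only available on Windows")
```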
Threads and Concurrency

This chapter describes library modules and programming strategies for writing concurrent programs in Python. Topics include threads, message passing, multiprocessing, and coroutines. Before covering specific library modules, some basic concepts are first described.

Basic Concepts
A running program is called a process. Each process has its own system state, which includes memory, lists of open files, a program counter that keeps track of the instruction being executed, and a call stack used to hold the local variables of functions. Normally, a process executes statements one after the other in a single sequence of control flow, which is sometimes called the main thread of the process. At any given time, the program is only doing one thing.

A program can create new processes using library functions such as those found in the os or subprocess modules (e.g., os.fork(), subprocess.Popen(), etc.). However, these processes, known as subprocesses, run as completely independent entities--each with their own private system state and main thread of execution. Because a subprocess is independent, it executes concurrently with the original process. That is, the process that created the subprocess can go on to work on other things while the subprocess carries out its own work behind the scenes.

Although processes are isolated, they can communicate with each other--something known as interprocess communication (IPC). One of the most common forms of IPC is based on message passing. A message is simply a buffer of raw bytes. Primitive operations such as send() and recv() are then used to transmit or receive messages through an I/O channel such as a pipe or network socket. Another somewhat less common IPC mechanism relies upon memory-mapped regions (see the mmap module). With memory mapping, processes can create shared regions of memory. Modifications to these regions are then visible in all processes that happen to be viewing them.

Multiple processes can be used by an application if it wants to work on multiple tasks at the same time--with each process responsible for part of the processing. However, another approach for subdividing work into tasks is to use threads. A thread is similar to a process in that it has its own control flow and execution stack. However, a thread runs inside the process that created it, sharing all of the data and system resources. Threads are useful when an application wants to perform tasks concurrently, but there is potentially a large amount of system state that needs to be shared by the tasks.
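The pipe-based message passing just described can be sketched with the low-level os primitives. This is UNIX-only, since it relies on os.fork():

```python
import os

# IPC sketch: the parent sends a message to a child process through a pipe
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: close the write end, read the message, report via exit code
    os.close(w)
    data = os.read(r, 1024)
    os._exit(0 if data == b"ping" else 1)
else:
    # Parent: close the read end, send the message, and wait for the child
    os.close(r)
    os.write(w, b"ping")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    print(os.WEXITSTATUS(status))   # 0
```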
When multiple processes or threads are used, the host operating system is responsible for scheduling their work. This is done by giving each process (or thread) a small time slice and rapidly cycling between all of the active tasks--giving each a portion of the available CPU cycles. For example, if your system had 10 active processes running, the operating system would allocate approximately 1/10th of its CPU time to each process and cycle between processes in rapid succession. On systems with more than one CPU core, the operating system can schedule processes so that each CPU is kept busy, executing processes in parallel.

Writing programs that take advantage of concurrent execution is something that is intrinsically complicated. A major source of complexity concerns synchronization and access to shared data. In particular, attempts to update a data structure by multiple tasks at approximately the same time can lead to a corrupted and inconsistent program state (a problem formally known as a race condition). To fix these problems, concurrent programs must identify critical sections of code and protect them using mutual-exclusion locks and other similar synchronization primitives. For example, if different threads were trying to write data to the same file at the same time, you might use a mutual-exclusion lock to synchronize their operation so that once one of the threads starts writing, the other threads have to wait until it has finished before they are allowed to start writing. The code for this scenario typically looks like this:

write_lock = Lock()

# Critical section where writing occurs
write_lock.acquire()
f.write("Here's some data.\n")
f.write("Here's more data.\n")
write_lock.release()

There's a joke attributed to Jason Whittington that goes like this: "Why did the multithreaded chicken cross the road?" "to To other side. get the" This joke typifies the kinds of problems that arise with task synchronization and concurrent programming. If you're scratching your head saying, "I don't get it," then it might be wise to do a bit more reading before diving into the rest of this chapter.

Concurrent Programming and Python
Python supports both message passing and thread-based concurrent programming on most systems. Although most programmers tend to be familiar with the thread interface, Python threads are actually rather restricted. Although minimally thread-safe, the Python interpreter uses an internal global interpreter lock (the GIL) that only allows a single Python thread to execute at any given moment. This restricts Python programs to run on a single processor, regardless of how many CPU cores might be available on the system. Although the GIL is often a heated source of debate in the Python community, it is unlikely to be removed at any time in the foreseeable future.

The presence of the GIL has a direct impact on how many Python programmers address concurrent programming problems. If an application is mostly I/O bound, it is generally fine to use threads because extra processors aren't going to do much to help a program that spends most of its time waiting for events. For applications that involve heavy amounts of CPU processing, using threads to subdivide work doesn't provide any benefit and will make the program run slower (often much slower than you would guess). For this, you'll want to rely on subprocesses and message passing.
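The locking pattern shown earlier becomes runnable with the threading module. Here, several threads safely update a shared counter; without the lock, the final count could come out wrong:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # Critical section: the with-statement acquires the lock and
        # releases it on exit, even if an exception occurs
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)    # 40000
```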
Even when threads are used, many programmers find their scaling properties to be rather mysterious. For example, a threaded network server that works fine with 100 threads may have horrible performance if it's scaled up to 10,000 threads. As a general rule, you really don't want to be writing programs with 10,000 threads because each thread requires its own system resources and the overhead associated with thread context switching, locking, and other matters starts to become significant (not to mention the fact that all threads are constrained to run on a single CPU). To deal with this, it is somewhat common to see such applications restructured as asynchronous event-handling systems. For example, a central event loop might monitor all of the I/O sources using the select module and dispatch asynchronous events to a large collection of I/O handlers. This is the basis for library modules such as asyncore as well as popular third-party modules such as Twisted (http://twistedmatrix.com).

Looking forward, message passing is a concept that you should probably embrace for any kind of concurrent programming in Python. Even when working with threads, an often-recommended approach is to structure your application as a collection of independent threads that exchange data through message queues. This particular approach tends to be less error-prone because it greatly reduces the need to use locks and other synchronization primitives. Message passing also naturally extends into networking and distributed systems. For example, if part of a program starts out as a thread to which you send messages, that component can later be migrated to a separate process or onto a different machine by sending the messages over a network connection. The message-passing abstraction is also tied to advanced Python features such as coroutines. For example, a coroutine is a function that can receive and process messages that are sent to it. So, by embracing message passing, you will find that you can write programs that have a great deal of flexibility. The remainder of this chapter looks at different library modules for supporting concurrent programming. At the end, more detailed information on common programming idioms is provided.

multiprocessing
The multiprocessing module provides support for launching tasks in a subprocess, communicating and sharing data, and performing various forms of synchronization. The programming interface is meant to mimic the programming interface for threads in the threading module. However, unlike threads, it is important to emphasize that processes do not have any shared state. Thus, if a process modifies data, that change is local only to that process.

The features of the multiprocessing module are vast, making it one of the larger and most advanced built-in libraries. Covering every detail of the module is impossible here, but the essential parts of it along with examples will be given. Experienced programmers should be able to take the examples and expand them to larger problems.

Processes
All of the features of the multiprocessing module are focused on processes. They are described by the following class.
Process([group [, target [, name [, args [, kwargs]]]]])
A class that represents a task running in a subprocess. The arguments in the constructor should always be specified using keyword arguments. target is a callable object that will execute when the process starts, args is a tuple of positional arguments passed to target, and kwargs is a dictionary of keyword arguments passed to target. If args and kwargs are omitted, target is called with no arguments. name is a string that gives a descriptive name to the process. group is unused and is always set to None. Its presence here is simply to make the construction of a Process mimic the creation of a thread in the threading module.

An instance p of Process has the following methods:

p.is_alive()
Returns True if p is still running.

p.join([timeout])
Waits for process p to terminate. timeout specifies an optional timeout period. A process can be joined as many times as you wish, but it is an error for a process to try and join itself.

p.run()
The method that runs when the process starts. By default, this invokes target that was passed to the Process constructor. As an alternative, a process can be defined by inheriting from Process and reimplementing run().

p.start()
Starts the process. This launches the subprocess that represents the process and invokes run() in that subprocess.

p.terminate()
Forcefully terminates the process. If this is invoked, the process p is terminated immediately without performing any kind of cleanup actions. If the process created subprocesses of its own, those processes will turn into zombies. Some care is required when using this method. If p holds a lock or is involved with interprocess communication, terminating it might cause a deadlock or corrupted I/O.

A Process instance p also has the following data attributes:

p.authkey
The process' authentication key. Unless explicitly set, this is a 32-character string generated by os.urandom(). The purpose of this key is to provide security for low-level interprocess communication involving network connections. Such connections only work if both ends have the same authentication key.

p.daemon
A Boolean flag that indicates whether or not the process is daemonic. A daemonic process is automatically terminated when the Python process that created it terminates. In addition, a daemonic process is prohibited from creating new processes on its own. The value of daemon must be set before a process is started using start().
p.exitcode
The integer exit code of the process. If the process is still running, this is None. If the value is negative, a value of -N means the process was terminated by signal N.

p.name
The name of the process.

p.pid
The integer process ID of the process.

Here is an example that shows how to create and launch a function (or other callable) as a separate process:

    import multiprocessing
    import time

    def clock(interval):
        while True:
            print("The time is %s" % time.ctime())
            time.sleep(interval)

    if __name__ == '__main__':
        p = multiprocessing.Process(target=clock, args=(15,))
        p.start()

Here is an example that shows how to define this process as a class that inherits from Process:

    import multiprocessing
    import time

    class ClockProcess(multiprocessing.Process):
        def __init__(self, interval):
            multiprocessing.Process.__init__(self)
            self.interval = interval
        def run(self):
            while True:
                print("The time is %s" % time.ctime())
                time.sleep(self.interval)

    if __name__ == '__main__':
        p = ClockProcess(15)
        p.start()

In both examples, the time should be printed by the subprocess every 15 seconds. It is important to emphasize that for cross-platform portability, new processes should only be created by the main program as shown. Although this is optional on UNIX, it is required on Windows. It should also be noted that on Windows, you will probably need to run the preceding examples in a command shell (command.exe) instead of a Python IDE such as IDLE.

Interprocess Communication

Two primary forms of interprocess communication are supported by the multiprocessing module: pipes and queues. Both methods are implemented using message passing. However, the queue interface is meant to mimic the use of queues commonly used with thread programs.
Queue([maxsize])
Creates a shared process queue. maxsize is the maximum number of items allowed in the queue. If omitted, there is no size limit. The underlying queue is implemented using pipes and locks. In addition, a support thread is launched in order to feed queued data into the underlying pipe.

An instance q of Queue has the following methods:

q.cancel_join_thread()
Don't automatically join the background thread on process exit. This prevents the join_thread() method from blocking.

q.close()
Closes the queue, preventing any more data from being added to it. When this is called, the background thread will continue to write any queued data not yet written, but will shut down as soon as this is complete. This method is called automatically if q is garbage-collected. Closing a queue does not generate any kind of end-of-data signal or exception in queue consumers. For example, if a consumer is blocking on a get() operation, closing the queue in the producer does not cause the get() to return with an error.

q.empty()
Returns True if q is empty at the time of the call. If other processes or threads are being used to add queue items, be aware that the result is not reliable (e.g., new items could have been added to the queue in between the time that the result is returned and used).

q.full()
Returns True if q is full. The result is also not reliable due to threads (see empty()).

q.get([block [, timeout]])
Returns an item from q. If q is empty, blocks until a queue item becomes available. block controls the blocking behavior and is True by default. If set to False, a Queue.Empty exception (defined in the Queue library module) is raised if the queue is empty. timeout is an optional timeout to use in blocking mode. If no items become available in the specified time interval, a Queue.Empty exception is raised.

q.get_nowait()
The same as q.get(False).

q.join_thread()
Joins the queue's background thread. This is used to wait for all queue items to be consumed after q.close() has been called. This method gets called by default in all processes that are not the original
creator of q. This behavior can be disabled by calling q.cancel_join_thread().

q.put(item [, block [, timeout]])
Puts item onto the queue. If the queue is full, blocks until space becomes available. block controls the blocking behavior and is True by default. If set to False, a Queue.Full exception (defined in the Queue library module) is raised if the queue is full. timeout specifies how long to wait for space to become available in blocking mode. A Queue.Full exception is raised on timeout.
q.put_nowait(item)
The same as q.put(item, False).

q.qsize()
Returns the approximate number of items currently in the queue. The result of this function is not reliable because items may have been added or removed from the queue in between the time the result is returned and later used in a program. On some systems, this method may raise a NotImplementedError.

JoinableQueue([maxsize])
Creates a joinable shared process queue. This is just like a Queue except that the queue allows a consumer of items to notify the producer that the items have been successfully processed. The notification process is implemented using a shared semaphore and condition variable.

An instance q of JoinableQueue has the same methods as Queue, but it has the following additional methods:

q.task_done()
Used by a consumer to signal that an enqueued item returned by q.get() has been processed. A ValueError exception is raised if this is called more times than items have been removed from the queue.

q.join()
Used by a producer to block until all items placed in a queue have been processed. This blocks until task_done() is called for every item placed into the queue.

The following example shows how you set up a process that runs forever, consuming and processing items on a queue. The producer feeds items into the queue and waits for them to be processed.

    import multiprocessing

    def consumer(input_q):
        while True:
            item = input_q.get()
            # Process item
            print(item)       # Replace with useful work
            # Signal task completion
            input_q.task_done()

    def producer(sequence, output_q):
        for item in sequence:
            # Put the item on the queue
            output_q.put(item)

    # Set up
    if __name__ == '__main__':
        q = multiprocessing.JoinableQueue()
        # Launch the consumer process
        cons_p = multiprocessing.Process(target=consumer, args=(q,))
        cons_p.daemon = True
        cons_p.start()
        # Produce items. sequence represents a sequence of items to
        # be sent to the consumer. In practice, this could be the output
        # of a generator or produced in some other manner.
        sequence = [1, 2, 3, 4]
        producer(sequence, q)
        # Wait for all items to be processed
        q.join()

In this example, the consumer process is set to daemonic because it runs forever and we want it to terminate when the main program finishes (if you forget this, the program will hang). A JoinableQueue is being used so that the producer actually knows when all of the items put in the queue have been successfully processed. The join() operation ensures this; if you forget this step, the consumer will be terminated before it has had time to complete all of its work.

If desired, multiple processes can put and get items from the same queue. For example, if you wanted to have a pool of consumer processes, you could just write code like this:

    if __name__ == '__main__':
        q = multiprocessing.JoinableQueue()
        # Launch some consumer processes
        cons_p1 = multiprocessing.Process(target=consumer, args=(q,))
        cons_p1.daemon = True
        cons_p1.start()
        cons_p2 = multiprocessing.Process(target=consumer, args=(q,))
        cons_p2.daemon = True
        cons_p2.start()
        # Produce items. sequence represents a sequence of items to be
        # sent to the consumer. In practice, this could be the output
        # of a generator or produced in some other manner.
        sequence = [1, 2, 3, 4]
        producer(sequence, q)
        # Wait for all items to be processed
        q.join()

When writing code such as this, be aware that every item placed into the queue is pickled and sent to the process over a pipe or socket connection. As a general rule, it is better to send fewer large objects than many small objects.

In certain applications, a producer may want to signal consumers that no more items will be produced and that they should shut down. To do this, you should write code that uses a sentinel, a special value that indicates completion. Here is an example that illustrates this concept using None as a sentinel:

    import multiprocessing

    def consumer(input_q):
        while True:
            item = input_q.get()
            if item is None:
                break
            # Process item
            print(item)       # Replace with useful work
        # Shutdown
        print("Consumer done")

    def producer(sequence, output_q):
        for item in sequence:
            # Put the item on the queue
            output_q.put(item)

    if __name__ == '__main__':
        q = multiprocessing.Queue()
        # Launch the consumer process
        cons_p = multiprocessing.Process(target=consumer, args=(q,))
        cons_p.start()
        # Produce items
        sequence = [1, 2, 3, 4]
        producer(sequence, q)
        # Signal completion by putting the sentinel on the queue
        q.put(None)
        # Wait for the consumer process to shut down
        cons_p.join()

If you are using sentinels as shown in this example, be aware that you will need to put a sentinel on the queue for every single consumer. For example, if there were three consumer processes consuming items on the queue, the producer needs to put three sentinels on the queue to get all of the consumers to shut down.

As an alternative to using queues, a pipe can be used to perform message passing between processes.

Pipe([duplex])
Creates a pipe between processes and returns a tuple (conn1, conn2) where conn1 and conn2 are Connection objects representing the ends of the pipe. By default, the pipe is bidirectional. If duplex is set False, then conn1 can only be used for receiving and conn2 can only be used for sending. Pipe() must be called prior to creating and launching any Process objects that use the pipe.

An instance c of a Connection object returned by Pipe() has the following methods and attributes:

c.close()
Closes the connection. Called automatically if c is garbage-collected.

c.fileno()
Returns the integer file descriptor used by the connection.

c.poll([timeout])
Returns True if data is available on the connection. timeout specifies the maximum amount of time to wait. If omitted, the method returns immediately with a result. If timeout is set to None, then the operation will wait indefinitely for data to arrive.

c.recv()
Receives an object sent by c.send(). Raises EOFError if the other end of the connection has been closed and there is no more data.

c.recv_bytes([maxlength])
Receives a complete byte message sent by c.send_bytes(). maxlength specifies the maximum number of bytes to receive. If an incoming message exceeds this, an IOError is raised and no further reads can be made on the connection. Raises
EOFError if the other end of the connection has been closed and there is no more data.
c.recv_bytes_into(buffer [, offset])
Receives a complete byte message and stores it in the object buffer, which supports the writable buffer interface (e.g., a bytearray object or similar). offset specifies the byte offset into the buffer where to place the message. Returns the number of bytes received. Raises BufferTooShort if the length of the message exceeds available buffer space.

c.send(obj)
Sends an object through the connection. obj is any object that is compatible with pickle.

c.send_bytes(buffer [, offset [, size]])
Sends a buffer of byte data through the connection. buffer is any object that supports the buffer interface, offset is the byte offset into the buffer, and size is the number of bytes to send. The resulting data is sent as a single message to be received using a single call to c.recv_bytes().

Pipes can be used in a similar manner as queues. Here is an example that shows the previous producer-consumer problem implemented using pipes:

    import multiprocessing

    # Consume items on a pipe
    def consumer(pipe):
        output_p, input_p = pipe
        input_p.close()       # Close the input end of the pipe
        while True:
            try:
                item = output_p.recv()
            except EOFError:
                break
            # Process item
            print(item)       # Replace with useful work
        # Shutdown
        print("Consumer done")

    # Produce items and put on a queue. sequence is an
    # iterable representing items to be processed.
    def producer(sequence, input_p):
        for item in sequence:
            # Put the item on the queue
            input_p.send(item)

    if __name__ == '__main__':
        (output_p, input_p) = multiprocessing.Pipe()
        # Launch the consumer process
        cons_p = multiprocessing.Process(target=consumer,
                                         args=((output_p, input_p),))
        cons_p.start()
        # Close the output pipe end in the producer
        output_p.close()
        # Produce items
        sequence = [1, 2, 3, 4]
        producer(sequence, input_p)
        # Signal completion by closing the input pipe
        input_p.close()
        # Wait for the consumer process to shut down
        cons_p.join()
Great attention should be given to proper management of the pipe endpoints. If one of the ends of the pipe is not used in either the producer or consumer, it should be closed. This explains, for instance, why the output end of the pipe is closed in the producer and the input end of the pipe is closed in the consumer. If you forget one of these steps, the program may hang on the recv() operation in the consumer. Pipes are reference counted by the operating system and have to be closed in all processes to produce the EOFError exception. Thus, closing the pipe in the producer doesn't have any effect unless the consumer also closes the same end of the pipe.

Pipes can be used for bidirectional communication. This can be used to write programs that interact with a process using a request/response model typically associated with client/server computing or remote procedure call. Here is an example:

    import multiprocessing

    # A server process
    def adder(pipe):
        server_p, client_p = pipe
        client_p.close()
        while True:
            try:
                x, y = server_p.recv()
            except EOFError:
                break
            result = x + y
            server_p.send(result)
        # Shutdown
        print("Server done")

    if __name__ == '__main__':
        (server_p, client_p) = multiprocessing.Pipe()
        # Launch the server process
        adder_p = multiprocessing.Process(target=adder,
                                          args=((server_p, client_p),))
        adder_p.start()
        # Close the server pipe end in the client
        server_p.close()
        # Make some requests on the server
        client_p.send((3, 4))
        print(client_p.recv())
        client_p.send(('Hello', 'World'))
        print(client_p.recv())
        # Done. Close the pipe
        client_p.close()
        # Wait for the server process to shut down
        adder_p.join()

In this example, the adder() function runs as a server waiting for messages to arrive on its end of the pipe. When received, it performs some processing and sends the result back on the pipe. Keep in mind that send() and recv() use the pickle module to serialize objects. In the example, the server receives a tuple (x, y) as input and returns the result x + y. For more advanced applications that use remote procedure call, however, you should use a process pool as described next.
Process Pools

The following class allows you to create a pool of processes to which various kinds of data processing tasks can be submitted. The functionality provided by a pool is somewhat similar to that provided by list comprehensions and functional programming operations such as map-reduce.

Pool([numprocess [, initializer [, initargs]]])
Creates a pool of worker processes. numprocess is the number of processes to create. If omitted, the value of cpu_count() is used. initializer is a callable object that will be executed in each worker process upon startup. initargs is a tuple of arguments to pass to initializer. By default, initializer is None.

An instance p of Pool supports the following operations:

p.apply(func [, args [, kwargs]])
Executes func(*args, **kwargs) in one of the pool workers and returns the result. It is important to emphasize that this does not execute func in parallel in all pool workers. If you want func to execute concurrently with different arguments, you either have to call apply() from different threads or use apply_async().

p.apply_async(func [, args [, kwargs [, callback]]])
Executes func(*args, **kwargs) in one of the pool workers and returns the result asynchronously. The result of this method is an instance of AsyncResult which can be used to obtain the final result at a later time. callback is a callable object that accepts a single input argument. When the result of func becomes available, it is immediately passed to callback. callback should not perform any blocking operations or else it will block the reception of results in other asynchronous operations.

p.close()
Closes the process pool, preventing any further operations. If any operations are still pending, they will be completed before the worker processes terminate.

p.join()
Waits for all worker processes to exit. This can only be called after close() or terminate().

p.imap(func, iterable [, chunksize])
A version of map() that returns an iterator instead of a list of results.

p.imap_unordered(func, iterable [, chunksize])
The same as imap() except that the results are
returned in an arbitrary order based on when they are received from the worker processes.

p.map(func, iterable [, chunksize])
Applies the callable object func to all of the items in iterable and returns the result as a list. The operation is carried out in parallel by splitting iterable into chunks and farming out the work to the worker processes. chunksize specifies the number of items in each chunk. For large amounts of data, increasing the chunksize will improve performance.
p.map_async(func, iterable [, chunksize [, callback]])
The same as map() except that the result is returned asynchronously. The return value is an instance of AsyncResult that can be used to later obtain the result. callback is a callable object accepting a single argument. If supplied, callback is called with the result when it becomes available.

p.terminate()
Immediately terminates all of the worker processes without performing any cleanup or finishing any pending work. If p is garbage-collected, this is called.

The methods apply_async() and map_async() return an AsyncResult instance as a result. An instance a of AsyncResult has the following methods:

a.get([timeout])
Returns the result, waiting for it to arrive if necessary. timeout is an optional timeout. If the result does not arrive in the given time, a multiprocessing.TimeoutError exception is raised. If an exception was raised in the remote operation, it is reraised when this method is called.

a.ready()
Returns True if the call has completed.

a.successful()
Returns True if the call completed without any exceptions. An AssertionError is raised if this method is called prior to the result being ready.

a.wait([timeout])
Waits for the result to become available. timeout is an optional timeout.

The following example illustrates the use of a process pool to build a dictionary mapping filenames to SHA-512 digest values for an entire directory of files:

    import os
    import multiprocessing
    import hashlib

    # Some parameters you can tweak
    bufsize = 8192        # Read buffer size
    poolsize = 2          # Number of workers

    def compute_digest(filename):
        try:
            f = open(filename, "rb")
        except IOError:
            return None
        digest = hashlib.sha512()
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            digest.update(chunk)
        f.close()
        return filename, digest.digest()

    def build_digest_map(topdir):
        digest_pool = multiprocessing.Pool(poolsize)
        allfiles = (os.path.join(path, name)
                    for path, dirs, files in os.walk(topdir)
                    for name in files)
        digest_map = dict(digest_pool.imap_unordered(compute_digest, allfiles, 20))
        digest_pool.close()
        return digest_map

    # Try it out. Change the directory name as desired.
    if __name__ == '__main__':
        digest_map = build_digest_map("/Users/beazley/Software/Python-3.0")
        print(len(digest_map))

In the example, a sequence of pathnames for all files in a directory tree is specified using a generator expression. This sequence is then chopped up and farmed out to a process pool using the imap_unordered() function. Each pool worker computes a SHA-512 digest value for its files using the compute_digest() function. The results are sent back to the master and collected into a Python dictionary. Although it's by no means a scientific result, this example gives a noticeable speedup over a single-process solution when run on the author's dual-core Macbook.

Keep in mind that it only makes sense to use a process pool if the pool workers perform enough work to justify the extra communication overhead. As a general rule, it would not make sense to use a pool for simple calculations such as just adding two numbers together.

Shared Data and Synchronization

Normally, processes are completely isolated from each other with the only means of communication being queues or pipes. However, two objects can be used to represent shared data. Underneath the covers, these objects use shared memory (via mmap) to make access possible in multiple processes.

Value(typecode, arg1, ..., argn, lock)
Creates a ctypes object in shared memory. typecode is either a string containing a type code as used by the array module (e.g., 'i', 'd', etc.) or a type object from the ctypes module (e.g., ctypes.c_int, ctypes.c_double, etc.). All extra positional arguments arg1, arg2, ..., argn are passed to the constructor for the given type. lock is a keyword-only argument that if set to True (the default), a new lock is created to protect access to the value. If you pass in an existing lock such as a Lock or RLock instance, then that lock is used for synchronization. If v
is an instance of a shared value created by Value, then the underlying value is accessed using v.value. For example, reading v.value will get the value and assigning v.value will change the value.

RawValue(typecode, arg1, ..., argn)
The same as Value except that there is no locking.

Array(typecode, initializer, lock)
Creates a ctypes array in shared memory. typecode describes the contents of the array and has the same meaning as described for Value(). initializer is either an integer that sets the initial size of the array or a sequence of items whose values and size are used to initialize the array. lock is a keyword-only argument with the same meaning as described for Value(). If a is an instance of a shared array created by Array, then you
access its contents using the standard Python indexing, slicing, and iteration operations, each of which are synchronized by the lock. For byte strings, a will also have an a.value attribute to access the entire array as a single string.

RawArray(typecode, initializer)
The same as Array except that there is no locking. If you are writing programs that must manipulate a large number of array items all at once, the performance will be significantly better if you use this datatype along with a separate lock for synchronization (if needed).

In addition to shared values created using Value() and Array(), the multiprocessing module provides shared versions of the following synchronization primitives:

    Primitive           Description
    Lock                Mutual exclusion lock
    RLock               Reentrant mutual exclusion lock (can be acquired
                        multiple times by the same process without blocking)
    Semaphore           Semaphore
    BoundedSemaphore    Bounded semaphore
    Event               Event
    Condition           Condition variable

The behavior of these objects mimics the synchronization primitives defined in the threading module with identical names. Please refer to the threading documentation for further details.

It should be noted that with multiprocessing, it is not normally necessary to worry about low-level synchronization with locks, semaphores, or similar constructs to the same degree as with threads. In part, send() and recv() operations on pipes and put() and get() operations on queues already provide synchronization. However, shared values and locks can have uses in certain specialized settings. Here is an example that sends a Python list of floats to another process using a shared Array instead of a pipe:

    import multiprocessing

    class FloatChannel(object):
        def __init__(self, maxsize):
            self.buffer = multiprocessing.RawArray('d', maxsize)
            self.buffer_len = multiprocessing.Value('i')
            self.empty = multiprocessing.Semaphore(1)
            self.full = multiprocessing.Semaphore(0)
        def send(self, values):
            self.empty.acquire()               # Only proceed if buffer empty
            nitems = len(values)
            self.buffer_len.value = nitems     # Set the buffer size
            self.buffer[:nitems] = values      # Copy values
                                               # into the buffer
            self.full.release()                # Signal that buffer is full
        def recv(self):
            self.full.acquire()                # Only proceed if buffer full
            values = self.buffer[:self.buffer_len.value]   # Copy values
            self.empty.release()               # Signal that buffer is empty
            return values
    # Performance test. Receive a bunch of messages.
    def consume_test(count, ch):
        for n in xrange(count):
            values = ch.recv()

    # Performance test. Send a bunch of messages.
    def produce_test(count, values, ch):
        for n in xrange(count):
            ch.send(values)

    if __name__ == '__main__':
        ch = FloatChannel(100000)
        p = multiprocessing.Process(target=consume_test,
                                    args=(1000, ch))
        p.start()
        values = [float(x) for x in xrange(100000)]
        produce_test(1000, values, ch)
        print("Done")
        p.join()

Further study of this example is left to the reader. However, in a performance test on the author's machine, sending a large list of floats through the FloatChannel is substantially faster than sending the list through a Pipe (which has to pickle and unpickle all of the values).

Managed Objects

Unlike threads, processes do not support shared objects. Although you can create shared values and arrays as shown in the previous section, this doesn't work for more advanced Python objects such as dictionaries, lists, or instances of user-defined classes. The multiprocessing module does, however, provide a way to work with shared objects if they run under the control of a so-called manager. A manager is a separate subprocess where the real objects exist and which operates as a server. Other processes access the shared objects through the use of proxies that operate as clients of the manager server.

The most straightforward way to work with simple managed objects is to use the Manager() function.

Manager()
Creates a running manager server in a separate process. Returns an instance of type SyncManager which is defined in the multiprocessing.managers submodule.

An instance m of SyncManager as returned by Manager() has a series of methods for creating shared objects and returning proxies which can be used to access them. Normally, you would create a manager and use these methods to create shared objects before launching any new processes. The following methods are defined:

m.Array(typecode, sequence)
Creates a shared Array instance on the server and returns a proxy to it. See the "Shared Data and Synchronization" section for a description of the arguments.
m.BoundedSemaphore([value])
Creates a shared threading.BoundedSemaphore instance on the server and returns a proxy to it.
m.Condition([lock])
Creates a shared threading.Condition instance on the server and returns a proxy to it. lock is a proxy instance created by m.Lock() or m.RLock().

m.dict([args])
Creates a shared dict instance on the server and returns a proxy to it. The arguments to this method are the same as for the built-in dict() function.

m.Event()
Creates a shared threading.Event instance on the server and returns a proxy to it.

m.list([sequence])
Creates a shared list instance on the server and returns a proxy to it. The arguments to this method are the same as for the built-in list() function.

m.Lock()
Creates a shared threading.Lock instance on the server and returns a proxy to it.

m.Namespace()
Creates a shared namespace object on the server and returns a proxy to it. A namespace is an object that is somewhat similar to a Python module. For example, if n is a namespace proxy, you can assign and read attributes using (.) such as n.name = value or value = n.name. However, the choice of name is significant. If name starts with a letter, then that value is part of the shared object held by the manager and is accessible in all other processes. If name starts with an underscore, it is only part of the proxy object and is not shared.

m.Queue()
Creates a shared Queue.Queue object on the server and returns a proxy to it.

m.RLock()
Creates a shared threading.RLock object on the server and returns a proxy to it.

m.Semaphore([value])
Creates a shared threading.Semaphore object on the server and returns a proxy to it.

m.Value(typecode, value)
Creates a shared Value object on the server and returns a proxy to it. See the "Shared Data and Synchronization" section for a description of the arguments.

The following example shows how you would use a manager in order to create a dictionary shared between processes:

    import multiprocessing
    import time

    # Print out d whenever the passed event gets set
    def watch(d, evt):
        while True:
            evt.wait()
            print(d)
            evt.clear()
    if __name__ == '__main__':
        m = multiprocessing.Manager()
        d = m.dict()        # Create a shared dict
        evt = m.Event()     # Create a shared Event
        # Launch a process that watches the dictionary
        p = multiprocessing.Process(target=watch, args=(d, evt))
        p.daemon = True
        p.start()
        # Update the dictionary and notify the watcher
        d['foo'] = 42
        evt.set()
        time.sleep(5)
        # Update the dictionary and notify the watcher
        d['bar'] = 37
        evt.set()
        time.sleep(5)
        # Terminate the process and manager
        p.terminate()
        m.shutdown()

If you run this example, the watch() function prints out the value of d every time the passed event gets set. In the main program, a shared dictionary and event are created and manipulated in the main process. When you run this, you will see the child process printing data.

If you want to have shared objects of other types such as instances of user-defined classes, you have to create your own custom manager object. To do this, you create a class that inherits from BaseManager, which is defined in the multiprocessing.managers submodule.

managers.BaseManager([address [, authkey]])
Base class used to create custom manager servers for user-defined objects. address is an optional tuple (hostname, port) that specifies a network address for the server. If omitted, the operating system will simply assign an address corresponding to some free port number. authkey is a string that is used to authenticate clients connecting to the server. If omitted, the value of current_process().authkey is used.

If mgrclass is a class that inherits from BaseManager, the following class method is used to create methods for returning proxies to shared objects:

mgrclass.register(typeid [, callable [, proxytype [, exposed [, method_to_typeid [, create_method]]]]])
Registers a new data type with the manager class. typeid is a string that is used to name a particular kind of shared object. This string should be a valid Python identifier. callable is a callable object that creates or returns the instance to be shared. proxytype is a class that provides the implementation of the proxy objects to be used in clients. Normally, these classes are generated by
default, so this is normally set to None. exposed is a sequence of method names on the shared object that will be exposed to proxy objects. If omitted, the value of proxytype._exposed_ is used, and if that is undefined, then all public methods (all callable methods that don't start with an underscore (_)) are used. method_to_typeid is a mapping from method names to type IDs that is used to specify which methods should return their results using proxy objects. If a
method is not found in this mapping, the return value is copied and returned. If method_to_typeid is None, the value of proxytype._method_to_typeid_ is used if it is defined. create_method is a Boolean flag that specifies whether a method with the name typeid should be created in mgrclass. By default, this is True.

An instance m of a manager derived from BaseManager must be manually started to operate. The following attributes and methods are related to this:

m.address
A tuple (hostname, port) that has the address being used by the manager server.

m.connect()
Connects to a remote manager object, the address of which was given to the BaseManager constructor.

m.serve_forever()
Runs the manager server in the current process.

m.shutdown()
Shuts down a manager server launched by the m.start() method.

m.start()
Starts a separate subprocess and starts the manager server in that process.

The following example shows how to create a manager for a user-defined class:

    import multiprocessing
    from multiprocessing.managers import BaseManager

    class A(object):
        def __init__(self, value):
            self.x = value
        def __repr__(self):
            return "A(%s)" % self.x
        def getX(self):
            return self.x
        def setX(self, value):
            self.x = value
        def __iadd__(self, value):
            self.x += value
            return self

    class MyManager(BaseManager):
        pass

    MyManager.register("A", A)

    if __name__ == '__main__':
        m = MyManager()
        m.start()
        # Create a managed object
        a = m.A(37)

In this example, the last statement creates an instance of A that lives on the manager server. The variable a in the previous code is only a proxy for this instance. The behavior of this proxy is similar to (but not completely identical to) the referent, the object on the
server. First, you will find that data attributes and properties cannot be accessed. Instead, you have to use access functions:

    >>> a.x
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'AutoProxy[A]' object has no attribute 'x'
    >>> a.getX()
    37
    >>> a.setX(42)

With proxies, the repr() function returns a string representing the proxy, whereas str() returns the output of __repr__() on the referent. For example:

    >>> a
    <AutoProxy object, typeid 'A' at 0x...>
    >>> print(a)
    A(42)

Special methods and any method starting with an underscore (_) are not accessible on proxies. For example, if you tried to invoke __iadd__(), it doesn't work:

    >>> a += 1
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +=: 'AutoProxy[A]' and 'int'
    >>> a.__iadd__(1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'AutoProxy[A]' object has no attribute '__iadd__'

In more advanced applications, it is possible to customize proxies to more carefully control access. This is done by defining a class that inherits from BaseProxy, which is defined in multiprocessing.managers. The following code shows how you could make a custom proxy to the A class in the previous example that properly exposes the __iadd__() method and which uses a property to expose the x attribute:

    from multiprocessing.managers import BaseProxy

    class AProxy(BaseProxy):
        # A list of all methods exposed on the referent
        _exposed_ = ['__iadd__', 'getX', 'setX']
        # Implement the public interface of the proxy
        def __iadd__(self, value):
            self._callmethod('__iadd__', (value,))
            return self
        @property
        def x(self):
            return self._callmethod('getX', ())
        @x.setter
        def x(self, value):
            self._callmethod('setX', (value,))

    class MyManager(BaseManager):
        pass

    MyManager.register("A", A, proxytype=AProxy)

An instance proxy of a class derived from BaseProxy has the following methods:
proxy._callmethod(name [, args [, kwargs]])
Calls the method name on the proxy's referent object. name is a string with the method name, args is a tuple containing positional arguments, and kwargs is a dictionary of keyword arguments. The method name must be explicitly exposed. Normally this is done by including the name in the _exposed_ class attribute of the proxy class.

proxy._getvalue()
Returns a copy of the referent in the caller. If this call is made in a different process, the referent object is pickled, sent to the caller, and is unpickled. An exception is raised if the referent can't be pickled.

Connections

Programs that use the multiprocessing module can perform message passing with other processes running on the same machine or with processes located on remote systems. This can be useful if you want to take a program written to work on a single system and expand it to work on a computing cluster. The multiprocessing.connection submodule has functions and classes for this purpose:

connections.Client(address [, family [, authenticate [, authkey]]])
Connects to another process which must already be listening at address address. address is a tuple (hostname, port) representing a network address, a file name representing a UNIX domain socket, or a string of the form '\\servername\pipe\pipename' representing a Windows named pipe on a remote system servername (use a servername of '.' for the local machine). family is a string representing the address format and is typically one of 'AF_INET', 'AF_UNIX', or 'AF_PIPE'. If omitted, the family is inferred from the format of address. authenticate is a Boolean flag that specifies whether digest authentication is to be used. authkey is a string containing the authentication key. If omitted, then the value of current_process().authkey is used. The return value from this function is a Connection object, which was previously described in the pipes section of "Interprocess Communication."

connections.Listener([address [, family [, backlog [, authenticate [, authkey]]]]])
A class that implements a server for listening for and handling connections made
by the Client() function. The address, family, authenticate, and authkey arguments have the same meaning as for Client(). backlog is an integer corresponding to the value passed to the listen() method of sockets if the address parameter specifies a network connection. By default, backlog is 1. If address is omitted, then a default address is chosen. If both address and family are omitted, then the fastest available communications scheme on the local system is chosen. An instance l of Listener supports the following methods and attributes:

l.accept()
Accepts a new connection and returns a Connection object. Raises AuthenticationError if authentication fails.

l.address
The address that the listener is using.
l.close()
Closes the pipe or socket being used by the listener.

l.last_accepted
The address of the last client that was accepted.

Here is an example of a server program that listens for clients and implements a simple remote operation (adding):

    # Server program
    from multiprocessing.connection import Listener

    serv = Listener(('', 15000), authkey=b'12345')
    while True:
        conn = serv.accept()
        while True:
            try:
                x, y = conn.recv()
            except EOFError:
                break
            result = x + y
            conn.send(result)
        conn.close()

Here is a simple client program that connects to this server and sends some messages:

    # Client program
    from multiprocessing.connection import Client

    conn = Client(('localhost', 15000), authkey=b'12345')
    conn.send((3, 4))
    r = conn.recv()
    print(r)                       # Prints '7'
    conn.send(("Hello", "World"))
    r = conn.recv()
    print(r)                       # Prints 'HelloWorld'
    conn.close()

Miscellaneous Utility Functions

The following utility functions are also defined:

active_children()
Returns a list of Process objects for all active child processes.

cpu_count()
Returns the number of CPUs on the system if it can be determined.

current_process()
Returns the Process object for the current process.

freeze_support()
A function that should be included as the first statement of the main program in an application that will be "frozen" using various packaging tools such as py2exe. This is needed to prevent runtime errors associated with launching subprocesses in a frozen application.
get_logger()
Returns the logging object associated with the multiprocessing module, creating it if it doesn't already exist. The returned logger does not propagate messages to the root logger, has a level of logging.NOTSET, and prints all logging messages to standard error.

set_executable(executable)
Sets the name of the Python executable used to execute subprocesses. This is only defined on Windows.

General Advice on Multiprocessing

The multiprocessing module is one of the most advanced and powerful modules in the Python library. Here are some general tips for keeping your head from exploding:

- Carefully read the online documentation before building a large application. Although this section has covered the essential basics, the official documentation covers some of the more sneaky issues that can arise.
- Make sure that all data passed between processes is compatible with pickle.
- Avoid shared data and learn to love message passing and queues. With message passing, you don't have to worry so much about synchronization, locking, and other issues. It also tends to provide better scaling as the number of processes increases.
- Don't use global variables inside functions that are meant to run in separate processes. It is better to explicitly pass parameters instead.
- Try not to mix threads and multiprocessing together in the same program unless you're vastly trying to improve your job security (or to have it reduced, depending on who is doing the code review).
- Pay very careful attention to how processes get shut down. As a general rule, you will want to explicitly close processes and have a well-defined termination scheme in place, as opposed to just relying on garbage collection or having to forcefully terminate children using the terminate() operation.
- The use of managers and proxies is closely related to a variety of concepts in distributed computing (e.g., distributed objects). A good distributed computing book might be a useful reference.
- The multiprocessing module originated from a third-party library known as pyprocessing. Searching
for usage tips and information on this library may be a useful resource.
- Although this module works on Windows, you should carefully read the official documentation for a variety of subtle details. For example, to launch a new process on Windows, the multiprocessing module implements its own clone of the UNIX fork() operation, in which process state is copied to the child process over a pipe. As a general rule, this module is much more tuned to UNIX systems.
- Above all else, try to keep things as simple as possible.
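As a small illustration of the message-passing style recommended above, the following sketch links a parent and a child process with multiprocessing queues. The worker function, the doubling "work," and the chosen function names are illustrative, not part of the module's API:

```python
import multiprocessing

def worker(inq, outq):
    # Consume items until the None sentinel arrives
    while True:
        item = inq.get()
        if item is None:
            break
        outq.put(item * 2)          # Stand-in for real work

def run_demo(items):
    inq = multiprocessing.Queue()
    outq = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(inq, outq))
    p.start()
    for item in items:
        inq.put(item)
    inq.put(None)                   # Sentinel: tell the worker to shut down
    results = [outq.get(timeout=5) for _ in items]
    p.join()
    return results

if __name__ == '__main__':
    print(run_demo([1, 2, 3]))      # [2, 4, 6]
```

Because no state is shared, nothing needs to be locked; the queues provide all of the synchronization and the sentinel gives the child a well-defined termination scheme.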
threading

The threading module provides a Thread class and a variety of synchronization primitives for writing multithreaded programs.

Thread Objects

The Thread class is used to represent a separate thread of control. A new thread can be created as follows:

Thread(group=None, target=None, name=None, args=(), kwargs={})
This creates a new Thread instance. group is None and is reserved for future extensions. target is a callable object invoked by the run() method when the thread starts. By default, it's None, meaning that nothing is called. name is the thread name. By default, a unique name of the form "Thread-N" is created. args is a tuple of arguments passed to the target function. kwargs is a dictionary of keyword arguments passed to target.

A Thread instance t supports the following methods and attributes:

t.start()
Starts the thread by invoking the run() method in a separate thread of control. This method can be invoked only once.

t.run()
This method is called when the thread starts. By default, it calls the target function passed in the constructor. This method can also be redefined in subclasses of Thread.

t.join([timeout])
Waits until the thread terminates or a timeout occurs. timeout is a floating-point number specifying a timeout in seconds. A thread cannot join itself, and it's an error to join a thread before it has been started.

t.is_alive()
Returns True if the thread is alive and False otherwise. A thread is alive from the moment the start() method returns until its run() method terminates. t.isAlive() is an alias for this method in older code.

t.name
The thread name. This is a string that is used for identification only and which can be changed to a more meaningful value if desired (which may simplify debugging). In older code, t.getName() and t.setName(name) are used to manipulate the thread name.

t.ident
An integer thread identifier. If the thread has not yet started, the value is None.

t.daemon
The thread's Boolean daemonic flag. This must be set prior to calling start(), and the initial value is inherited from the daemonic status of the creating thread. The entire
Python program exits when no active non-daemon threads are left. All programs have a main
thread that represents the initial thread of control and which is not daemonic. In older code, t.setDaemon(flag) and t.isDaemon() are used to manipulate this value.

Here is an example that shows how to create and launch a function (or other callable) as a thread:

    import threading
    import time

    def clock(interval):
        while True:
            print("The time is %s" % time.ctime())
            time.sleep(interval)

    t = threading.Thread(target=clock, args=(15,))
    t.daemon = True
    t.start()

Here is an example that shows how to define the same thread as a class:

    import threading
    import time

    class ClockThread(threading.Thread):
        def __init__(self, interval):
            threading.Thread.__init__(self)
            self.daemon = True
            self.interval = interval
        def run(self):
            while True:
                print("The time is %s" % time.ctime())
                time.sleep(self.interval)

    t = ClockThread(15)
    t.start()

If you define a thread as a class and define your own __init__() method, it is critically important to call the base class constructor Thread.__init__() as shown. If you forget this, you will get a nasty error. Other than run(), it is an error to override any of the other methods already defined for Thread.

The setting of the daemon attribute in these examples is a common feature of threads that will run forever in the background. Normally, Python waits for all threads to terminate before the interpreter exits. However, for nonterminating background tasks, this behavior is often undesirable. Setting the daemon flag makes the interpreter quit immediately after the main program exits. In this case, the daemonic threads are simply destroyed.

Timer Objects

A Timer object is used to execute a function at some later time.

Timer(interval, func [, args [, kwargs]])
Creates a Timer object that runs the function func after interval seconds have elapsed. args and kwargs provide the arguments and keyword arguments passed to func. The timer does not start until the start() method is called.
A Timer object, t, has the following methods:

t.start()
Starts the timer. The function func supplied to Timer() will be executed after the specified timer interval.

t.cancel()
Cancels the timer if the function has not executed yet.

Lock Objects

A primitive lock (or mutual exclusion lock) is a synchronization primitive that's in either a "locked" or "unlocked" state. Two methods, acquire() and release(), are used to change the state of the lock. If the state is locked, attempts to acquire the lock are blocked until the lock is released. If more than one thread is waiting to acquire the lock, only one is allowed to proceed when the lock is released. The order in which waiting threads proceed is undefined.

A new Lock instance is created using the following constructor:

Lock()
Creates a new Lock object that's initially unlocked.

A Lock instance, lock, supports the following methods:

lock.acquire([blocking])
Acquires the lock, blocking until the lock is released if necessary. If blocking is supplied and set to False, the function returns immediately with a value of False if the lock could not be acquired or True if locking was successful.

lock.release()
Releases a lock. It's an error to call this method when the lock is in an unlocked state or from a different thread than the one that originally called acquire().

RLock

A reentrant lock is a synchronization primitive that's similar to a Lock object, but it can be acquired multiple times by the same thread. This allows the thread owning the lock to perform nested acquire() and release() operations. In this case, only the outermost release() operation resets the lock to its unlocked state.

A new RLock object is created using the following constructor:

RLock()
Creates a new reentrant lock object. An RLock object, rlock, supports the following methods:

rlock.acquire([blocking])
Acquires the lock, blocking until the lock is released if necessary. If no thread owns the lock, it's locked and the recursion level is set to 1. If this thread already owns the lock, the recursion level of the lock is increased by one and the function
returns immediately.
rlock.release()
Releases a lock by decrementing its recursion level. If the recursion level is zero after the decrement, the lock is reset to the unlocked state. Otherwise, the lock remains locked. This function should only be called by the thread that currently owns the lock.

Semaphore and Bounded Semaphore

A semaphore is a synchronization primitive based on a counter that's decremented by each acquire() call and incremented by each release() call. If the counter ever reaches zero, the acquire() method blocks until some other thread calls release().

Semaphore([value])
Creates a new semaphore. value is the initial value for the counter. If omitted, the counter is set to a value of 1.

A Semaphore instance, s, supports the following methods:

s.acquire([blocking])
Acquires the semaphore. If the internal counter is larger than zero on entry, this method decrements it by 1 and returns immediately. If it's zero, this method blocks until another thread calls release(). The blocking argument has the same behavior as described for Lock and RLock objects.

s.release()
Releases a semaphore by incrementing the internal counter by 1. If the counter is zero and another thread is waiting, that thread is awakened. If multiple threads are waiting, only one will be returned from its acquire() call. The order in which threads are released is not deterministic.

BoundedSemaphore([value])
Creates a new semaphore. value is the initial value for the counter. If value is omitted, the counter is set to a value of 1. A BoundedSemaphore works exactly like a Semaphore except the number of release() operations cannot exceed the number of acquire() operations.

A subtle difference between a semaphore and a mutex lock is that a semaphore can be used for signaling. For example, the acquire() and release() methods can be called from different threads to communicate between producer and consumer threads:

    produced = threading.Semaphore(0)
    consumed = threading.Semaphore(1)

    def producer():
        while True:
            consumed.acquire()
            produce_item()
            produced.release()

    def consumer():
        while True:
            produced.acquire()
            item = get_item()
            consumed.release()
The kind of signaling shown in this example is often instead carried out using condition variables, which will be described shortly.

Events

Events are used to communicate between threads. One thread signals an "event," and one or more other threads wait for it. An Event instance manages an internal flag that can be set to True with the set() method and reset to False with the clear() method. The wait() method blocks until the flag is True.

Event()
Creates a new Event instance with the internal flag set to False. An Event instance, e, supports the following methods:

e.is_set()
Returns True only if the internal flag is True. This method is called isSet() in older code.

e.set()
Sets the internal flag to True. All threads waiting for it to become True are awakened.

e.clear()
Resets the internal flag to False.

e.wait([timeout])
Blocks until the internal flag is True. If the internal flag is True on entry, this method returns immediately. Otherwise, it blocks until another thread calls set() to set the flag to True or until the optional timeout occurs. timeout is a floating-point number specifying a timeout period in seconds.

Although Event objects can be used to signal other threads, they should not be used to implement the kind of notification that is typical in producer/consumer problems. For example, you should avoid code like this:

    evt = threading.Event()

    def producer():
        while True:
            # Produce an item
            ...
            evt.set()

    def consumer():
        while True:
            # Wait for an item
            evt.wait()
            # Consume the item
            ...
            # Clear the event and wait again
            evt.clear()

This code does not work reliably because the producer might produce a new item in between the evt.wait() and evt.clear() operations. However, by clearing the event, this new item won't be seen by the consumer until the producer creates a new item. In the best case, the program will experience a minor hiccup where the processing
of an item is inexplicably delayed. In the worst case, the whole program will hang due to the loss of an event signal. For these types of problems, you are better off using condition variables.

Condition Variables

A condition variable is a synchronization primitive, built on top of another lock, that's used when a thread is interested in a particular change of state or event occurring. A typical use is a producer-consumer problem where one thread is producing data to be consumed by another thread. A new Condition instance is created using the following constructor:

Condition([lock])
Creates a new condition variable. lock is an optional Lock or RLock instance. If not supplied, a new RLock instance is created for use with the condition variable.

A condition variable, cv, supports the following methods:

cv.acquire(*args)
Acquires the underlying lock. This method calls the corresponding acquire(*args) method on the underlying lock and returns the result.

cv.release()
Releases the underlying lock. This method calls the corresponding release() method on the underlying lock.

cv.wait([timeout])
Waits until notified or until a timeout occurs. This method is called after the calling thread has already acquired the lock. When called, the underlying lock is released, and the thread goes to sleep until it's awakened by a notify() or notifyAll() call performed on the condition variable by another thread. Once awakened, the thread reacquires the lock and the method returns. timeout is a floating-point number in seconds. If this time expires, the thread is awakened, the lock reacquired, and control returned.

cv.notify([n])
Wakes up one or more threads waiting on this condition variable. This method is called only after the calling thread has acquired the lock, and it does nothing if no threads are waiting. n specifies the number of threads to awaken and defaults to 1. Awakened threads don't return from the wait() call until they can reacquire the lock.

cv.notify_all()
Wakes up all threads waiting on this condition. This method is called notifyAll() in older code.

Here is an example that
provides a template of using condition variables:

    cv = threading.Condition()

    def producer():
        while True:
            cv.acquire()
            produce_item()
            cv.notify()
            cv.release()
    def consumer():
        while True:
            cv.acquire()
            while not item_is_available():
                cv.wait()         # Wait for an item to show up
            cv.release()
            consume_item()

A subtle aspect of using condition variables is that if there are multiple threads waiting on the same condition, the notify() operation may awaken one or more of them (this behavior often depends on the underlying operating system). Because of this, there is always a possibility that a thread will awaken only to find that the condition of interest no longer holds. This explains, for instance, why a while loop is used in the consumer() function. If the thread awakens, but the produced item is already gone, it just goes back to waiting for the next signal.

Working with Locks

Great care must be taken when working with any of the locking primitives such as Lock, RLock, or Semaphore. Mismanagement of locks is a frequent source of deadlock or race conditions. Code that relies on a lock should always make sure locks get properly released even when exceptions occur. Typical code looks like this:

    try:
        lock.acquire()
        # critical section
        statements
    finally:
        lock.release()

Alternatively, all of the locks also support the context management protocol, which is a little cleaner:

    with lock:
        # critical section
        statements

In this last example, the lock is automatically acquired by the with statement and released when control flow leaves the context. Also, as a general rule, you should avoid writing code where more than one lock is acquired at any given time. For example:

    with lock_A:
        # critical section
        statements
        with lock_B:
            # critical section on B
            statements

This is usually a good way to have your application mysteriously deadlock. Although there are strategies for avoiding this (for example, hierarchical locking), you're often better off writing code that avoids this altogether.
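One form of the hierarchical locking strategy just mentioned is to always acquire multiple locks in a single fixed global order. A minimal sketch follows; ordering by id() and the helper names are purely illustrative, and a real application might assign explicit lock priorities instead:

```python
import threading

lock_A = threading.Lock()
lock_B = threading.Lock()

def acquire_in_order(*locks):
    # Acquire all locks in one fixed global order (here: by id()).
    # If every thread follows the same order, cyclic waits--and thus
    # deadlocks--cannot occur.
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(held):
    # Release in the reverse order of acquisition
    for lk in reversed(held):
        lk.release()

held = acquire_in_order(lock_A, lock_B)
try:
    pass    # critical section using both locks
finally:
    release_all(held)
```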
Thread Termination and Suspension

Threads do not have any methods for forceful termination or suspension. This omission is by design and due to the intrinsic complexity of writing threaded programs. For example, if a thread has acquired a lock, forcefully terminating or suspending it before it is able to release the lock may cause the entire application to deadlock. Moreover, it is generally not possible to simply "release all locks" on termination either because complicated thread synchronization often involves locking and unlocking operations that must be carried out in a very precise sequence to work.

If you want to support termination or suspension, you need to build these features yourself. Typically, it's done by making a thread run in a loop that periodically checks its status to see if it should terminate. For example:

    class StoppableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self._terminate = False
            self._suspend_lock = threading.Lock()
        def terminate(self):
            self._terminate = True
        def suspend(self):
            self._suspend_lock.acquire()
        def resume(self):
            self._suspend_lock.release()
        def run(self):
            while True:
                if self._terminate:
                    break
                self._suspend_lock.acquire()
                self._suspend_lock.release()
                statements

Keep in mind that to make this approach work reliably, the thread should take great care not to perform any kind of blocking I/O operation. For example, if the thread blocks waiting for data to arrive, it won't terminate until it wakes up from that operation. Because of this, you would probably want to make the implementation use timeouts, non-blocking I/O, and other advanced features to make sure that the termination check executes every so often.

Utility Functions

The following utility functions are available:

active_count()
Returns the number of currently active Thread objects.

current_thread()
Returns the Thread object corresponding to the caller's thread of control.

enumerate()
Returns a list of all currently active Thread objects.

local()
Returns a local object that allows for the storage of thread-local data. This object is
guaranteed to be unique in each thread.
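To briefly illustrate thread-local storage, each thread sees only its own copy of attributes stored on the object returned by local(). In this sketch the attribute name x, the worker function, and the results dictionary are all illustrative:

```python
import threading

data = threading.local()
results = {}

def worker(value):
    data.x = value          # Each thread gets its own, independent 'x'
    results[threading.current_thread().name] = data.x

threads = [threading.Thread(target=worker, args=(n,), name="t%d" % n)
           for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.items()))   # [('t0', 0), ('t1', 1), ('t2', 2)]
```

If data were an ordinary object instead of threading.local(), the threads would all be writing to the same x attribute and could observe one another's values.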
setprofile(func)
Sets a profile function that will be used for all threads created. func is passed to sys.setprofile() before each thread starts running.

settrace(func)
Sets a tracing function that will be used for all threads created. func is passed to sys.settrace() before each thread starts running.

stack_size([size])
Returns the stack size used when creating new threads. If an optional integer size is given, it sets the stack size to be used for creating new threads. size can be a value that is 32768 (32KB) or greater and a multiple of 4096 (4KB) for maximum portability. A ThreadError exception is raised if this operation isn't supported on the system.

The Global Interpreter Lock

The Python interpreter is protected by a lock that only allows one thread to execute at a time, even if there are multiple processors available. This severely limits the usefulness of threads in compute-intensive programs--in fact, the use of threads will often make CPU-bound programs run significantly worse than would be the case if they just sequentially carried out the same work. Thus, threads should really only be reserved for programs that are primarily concerned with I/O, such as network servers. For more compute-intensive tasks, consider using C extension modules or the multiprocessing module instead. C extensions have the option of releasing the interpreter lock and running in parallel, provided that they don't interact with the interpreter when the lock is released. The multiprocessing module farms work out to independent subprocesses that aren't restricted by the lock.

Programming with Threads

Although it is possible to write very traditional multithreaded programs in Python using various combinations of locks and synchronization primitives, there is one style of programming that is recommended over all others--and that's to try and organize multithreaded programs as a collection of independent tasks that communicate through message queues. This is described in the next section (the queue module) along with an example.

queue, Queue

The
queue module (named Queue in Python 2) implements various multiproducer, multiconsumer queues that can be used to safely exchange information between multiple threads of execution. The queue module defines three different queue classes:

Queue([maxsize])
Creates a FIFO (first-in, first-out) queue. maxsize is the maximum number of items that can be placed in the queue. If maxsize is omitted or 0, the queue size is infinite.

LifoQueue([maxsize])
Creates a LIFO (last-in, first-out) queue (also known as a stack).
PriorityQueue([maxsize])
Creates a priority queue in which items are ordered from lowest to highest priority. When working with this queue, items should be tuples of the form (priority, data) where priority is a number.

An instance q of any of the queue classes has the following methods:

q.qsize()
Returns the approximate size of the queue. Because other threads may be updating the queue, this number is not entirely reliable.

q.empty()
Returns True if the queue is empty and returns False otherwise.

q.full()
Returns True if the queue is full and returns False otherwise.

q.put(item [, block [, timeout]])
Puts item into the queue. If optional argument block is True (the default), the caller blocks until a free slot is available. Otherwise (block is False), the Full exception is raised if the queue is full. timeout supplies an optional timeout value in seconds. If a timeout occurs, the Full exception is raised.

q.put_nowait(item)
Equivalent to q.put(item, False).

q.get([block [, timeout]])
Removes and returns an item from the queue. If optional argument block is True (the default), the caller blocks until an item is available. Otherwise (block is False), the Empty exception is raised if the queue is empty. timeout supplies an optional timeout value in seconds. If a timeout occurs, the Empty exception is raised.

q.get_nowait()
Equivalent to q.get(False).

q.task_done()
Used by consumers of queued data to indicate that processing of an item has been finished. If this is used, it should be called once for every item removed from the queue.

q.join()
Blocks until all items on the queue have been removed and processed. This will only return once task_done() has been called for every item placed on the queue.

Queue Example with Threads

Multithreaded programs are often simplified with the use of queues. For example, instead of relying upon shared state that must be protected by locks, threads can be linked together using shared queues. In this model, worker threads typically operate as consumers of data. Here is an example that illustrates the concept:
    import threading
    from queue import Queue      # Use "from Queue import Queue" on Python 2

    class WorkerThread(threading.Thread):
        def __init__(self, *args, **kwargs):
            threading.Thread.__init__(self, *args, **kwargs)
            self.input_queue = Queue()
        def send(self, item):
            self.input_queue.put(item)
        def close(self):
            self.input_queue.put(None)
            self.input_queue.join()
        def run(self):
            while True:
                item = self.input_queue.get()
                if item is None:
                    break
                # Process the item (replace with useful work)
                print(item)
                self.input_queue.task_done()
            # Done. Indicate that the sentinel was received and return
            self.input_queue.task_done()
            return

    # Example use
    w = WorkerThread()
    w.start()
    w.send("hello")        # Send items to the worker (via the queue)
    w.send("world")
    w.close()

The design of this class has been chosen very carefully. First, you will notice that the programming API is a subset of the Connection objects that get created by pipes in the multiprocessing module. This allows for future expansion. For example, workers could later be migrated into a separate process without breaking the code that sends them data. Second, the programming interface allows for thread termination. The close() method places a sentinel onto the queue which, in turn, causes the thread to shut down when processed. Finally, the programming API is also almost identical to a coroutine. If the work to be performed doesn't involve any blocking operations, you could reimplement the run() method as a coroutine and dispense with threads altogether. This latter approach might run faster because there would no longer be any overhead due to thread context switching.

Coroutines and Microthreading

In certain kinds of applications, it is possible to implement cooperative user-space multithreading using a task scheduler and a collection of generators or coroutines. This is sometimes called microthreading, although the terminology varies--sometimes this is described in the context of tasklets, green threads, greenlets, etc. A common use of this technique is in programs that need to manage a large collection of open files or sockets. For example, a network server that wants to
simultaneously manage thousands of client connections. Instead of creating a thread for each connection, asynchronous I/O or polling (using the select module) is used in conjunction with a task scheduler that processes I/O events.
The underlying concept that drives this programming technique is the fact that the yield statement in a generator or coroutine function suspends the execution of the function until it is later resumed with a next() or send() operation. This makes it possible to cooperatively multitask between a set of generator functions using a scheduler loop. Here is an example that illustrates the idea:

    def foo():
        for n in range(5):
            print("I'm foo %d" % n)
            yield

    def bar():
        for n in range(10):
            print("I'm bar %d" % n)
            yield

    def spam():
        for n in range(7):
            print("I'm spam %d" % n)
            yield

    # Create and populate a task queue
    from collections import deque
    taskqueue = deque()
    taskqueue.append(foo())       # Add some tasks (generators)
    taskqueue.append(bar())
    taskqueue.append(spam())

    # Run all of the tasks
    while taskqueue:
        # Get the next task
        task = taskqueue.pop()
        try:
            # Run it to the next yield and enqueue
            next(task)
            taskqueue.appendleft(task)
        except StopIteration:
            # Task is done
            pass

It is uncommon for a program to define a series of CPU-bound coroutines and schedule them as shown. Instead, you are more likely to see this technique used with I/O-bound tasks, polling, or event handling. An advanced example showing this technique is found in the select module section of "Network Programming and Sockets."
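The send() operation mentioned above can also pass a value into a coroutine at the point where it is suspended, which is how a scheduler can deliver I/O results to a waiting task. A minimal sketch (the log list and function name are just for illustration):

```python
log = []

def receiver():
    # A coroutine: each value passed to send() appears as the
    # result of the yield expression
    while True:
        item = yield
        log.append(item)

co = receiver()
next(co)            # Prime the coroutine (advance to the first yield)
co.send("hello")
co.send("world")
print(log)          # ['hello', 'world']
```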
Network Programming and Sockets

This chapter describes the modules used to implement low-level network servers and clients. Python provides extensive network support, ranging from programming directly with sockets to working with high-level application protocols such as HTTP. To begin, a very brief (and admittedly terse) introduction to network programming is presented. Readers are advised to consult a book such as UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI by W. Richard Stevens (Prentice Hall, ISBN 0-13-490012-X) for many of the advanced details. "Internet Application Programming" describes modules related to application-level protocols.

Network Programming Basics

Python's network programming modules primarily support two Internet protocols: TCP and UDP. The TCP protocol is a reliable connection-oriented protocol used to establish a two-way communications stream between machines. UDP is a lower-level packet-based protocol (connectionless) in which machines send and receive discrete packets of information without formally establishing a connection. Unlike TCP, UDP communication is unreliable and thus inherently more complicated to manage in applications that require reliable communications. Consequently, most Internet applications utilize TCP connections.

Both network protocols are handled through a programming abstraction known as a socket. A socket is an object similar to a file that allows a program to accept incoming connections, make outgoing connections, and send and receive data. Before two machines can communicate, both must create a socket object. The machine receiving the connection (the server) must bind its socket object to a known port number. A port is a 16-bit number in the range 0-65535 that's managed by the operating system and used by clients to uniquely identify servers. Ports 0-1023 are reserved by the system and used by common network protocols. The following table shows the port assignments for a couple of common protocols (a more complete list can be found in the IANA port-number registry):
Service              Port Number
FTP-Data             20
FTP-Control          21
SSH                  22
Telnet               23
SMTP (Mail)          25
HTTP (WWW)           80
POP3                 110
IMAP                 143
HTTPS (Secure WWW)   443

The process of establishing a TCP connection involves a precise sequence of steps on both the server and client, as shown in the following figure.

[Figure: TCP connection protocol. The server calls socket(), bind(), listen(), and accept(), then waits for a connection. The client calls socket() and connect() to establish the connection and write() to send a request. The server read()s the request, processes it, and write()s a response, which the client then read()s.]

For TCP servers, the socket object used to receive connections is not the same socket used to perform subsequent communication with the client. In particular, the accept() system call returns a new socket object that's actually used for the connection. This allows a server to manage connections from a large number of clients simultaneously.
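The fact that accept() returns a new socket is easy to verify on the loopback interface within a single process. In this sketch, binding to port 0 (so the operating system assigns a free port) is a convenience not mentioned in the text above:

```python
import socket

# Server side: create, bind, and listen
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))        # Port 0: OS assigns a free port
server.listen(1)
addr = server.getsockname()          # The (host, port) actually bound

# Client side: connect to the listening socket
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)

# accept() returns a brand-new socket for this one connection;
# the original listening socket keeps waiting for other clients
conn, peer = server.accept()
is_new_socket = conn is not server
print(is_new_socket)                 # True

conn.close()
client.close()
server.close()
```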
UDP communication is performed in a similar manner, except that clients and servers don't establish a "connection" with each other, as shown in the following figure.

[Figure: UDP connection protocol. Both the server and client call socket() and bind(). The server calls recvfrom() and waits for data. The client sendto()s a request; the server processes it and sendto()s a response, which the client receives with recvfrom().]

The following example illustrates the TCP protocol with a client and server written using the socket module. In this case, the server simply returns the current time to the client as a string.

# Time server program
from socket import *
import time

s = socket(AF_INET, SOCK_STREAM)   # Create a TCP socket
s.bind(('', 8888))                 # Bind to port 8888
s.listen(5)                        # Listen, but allow no more than
                                   # 5 pending connections
while True:
    client, addr = s.accept()      # Get a connection
    print("Got a connection from %s" % str(addr))
    timestr = time.ctime(time.time()) + "\r\n"
    client.send(timestr.encode('ascii'))
    client.close()

Here's the client program:

# Time client program
from socket import *

s = socket(AF_INET, SOCK_STREAM)   # Create a TCP socket
s.connect(('localhost', 8888))     # Connect to the server
tm = s.recv(1024)                  # Receive no more than 1024 bytes
s.close()
print("The time is %s" % tm.decode('ascii'))
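The sendto()/recvfrom() exchange described above for UDP can also be demonstrated in a single process on the loopback interface. This is a hedged sketch: the port-0 binding and the "ping"/"pong" payloads are invented for illustration and are not part of the original example:

```python
import socket

# "Server": a datagram socket bound to an OS-assigned port
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 0))
addr = server.getsockname()

# "Client": no connection is established; just send a datagram
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

data, client_addr = server.recvfrom(1024)     # Wait for a datagram
server.sendto(b"pong " + data, client_addr)   # Reply to the sender

reply, _ = client.recvfrom(1024)
print(reply.decode('ascii'))                  # -> pong ping

client.close()
server.close()
```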
An example of establishing a UDP connection appears in the socket module section later in this chapter.

It is common for network protocols to exchange data in the form of text. However, great attention needs to be given to text encoding. In Python 3, all strings are Unicode. Therefore, if any kind of text string is to be sent across the network, it needs to be encoded. This is why the server is using the encode('ascii') method on the data it transmits. Likewise, when a client receives network data, that data is first received as raw unencoded bytes. If you print it out or try to process it as text, you're unlikely to get what you expected. Instead, you need to decode it first. This is why the client code is using decode('ascii') on the result.

The remainder of this chapter describes modules that are related to socket programming. The chapter "Internet Application Programming" describes higher-level modules that provide support for various Internet applications such as email and the web.

asynchat

The asynchat module simplifies the implementation of applications that implement asynchronous networking using the asyncore module. It does this by wrapping the low-level I/O functionality of asyncore with a higher-level programming interface that is designed for network protocols based on simple request/response mechanisms (for example, HTTP).

To use this module, you must define a class that inherits from async_chat. Within this class, you must define two methods: collect_incoming_data() and found_terminator(). The first method is invoked whenever data is received on the network connection. Typically, it would simply take the data and store it someplace. The found_terminator() method is called when the end of a request has been detected. For example, in HTTP, requests are terminated by a blank line.

For data output, async_chat maintains a producer FIFO queue. If you need to output data, it is simply added to this queue. Then, whenever writes are possible on the network connection, data is transparently taken from this queue.

async_chat([sock])
    Base class used to define new handlers. async_chat inherits from asyncore.dispatcher and provides the same methods. sock is a socket object that's used for communication.

An instance, a, of async_chat has the following methods in addition to those already provided by the asyncore.dispatcher base class:

a.close_when_done()
    Signals an end-of-file on the outgoing data stream by pushing None onto the producer FIFO queue. When this is reached by the writer, the channel will be closed.

a.collect_incoming_data(data)
    Called whenever data is received on the channel. data is the received data and is typically stored for later processing. This method must be implemented by the user.

a.discard_buffers()
    Discards all data held in input/output buffers and the producer FIFO queue.
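The buffering pattern behind collect_incoming_data() and found_terminator() can be sketched without any networking at all. The class below is a standalone illustration of the division of labor async_chat expects from its subclasses; LineProtocol and its feed() driver are invented names for this sketch, not part of the asynchat module:

```python
class LineProtocol:
    # Accumulate incoming chunks until a terminator is seen,
    # then dispatch one complete request -- the same callback
    # structure that async_chat subclasses implement.
    def __init__(self, terminator=b"\r\n"):
        self.terminator = terminator
        self.buffer = []         # Partial request data
        self.requests = []       # Completed requests

    def collect_incoming_data(self, data):
        self.buffer.append(data)                     # Store received data

    def found_terminator(self):
        self.requests.append(b"".join(self.buffer))  # Request complete
        self.buffer = []

    def feed(self, data):
        # Driver: split arriving bytes on the terminator and invoke
        # the two callbacks, much as the event loop would
        while True:
            i = data.find(self.terminator)
            if i < 0:
                if data:
                    self.collect_incoming_data(data)
                return
            self.collect_incoming_data(data[:i])
            self.found_terminator()
            data = data[i + len(self.terminator):]

p = LineProtocol()
p.feed(b"GET / HTTP/1.0\r\nHost: ex")   # Data may arrive in pieces
p.feed(b"ample.com\r\n")
print(p.requests)    # [b'GET / HTTP/1.0', b'Host: example.com']
```

Note that the second chunk completes a line begun in the first: the buffer list lets a request span any number of network reads.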